There is a not necessarily large, but definitely significant chance that developing machine intelligence compatible with human values may well be the single most important thing humans have ever done or will ever do, and it seems very likely that economic forces will make strong machine intelligence happen soon, even if we're not ready for it.
So I have two questions about this. Firstly, and this is probably my youthful inexperience talking (a big part of why I'm posting this here): I see so many rationalists do so much awesome work on things like social justice, social work, medicine, and all kinds of poverty-focused effective altruism, but how can it be that the ultimate fate of humanity to either thrive beyond imagination or perish utterly may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to? This sort of segues into my second question, which is: what is the most any person, more specifically I, can do for FAI? I'm still in high school, so there really isn't that much keeping me from devoting my life to helping the cause of making sure AI is friendly. What would that look like? I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines" so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here, perhaps I could try to raise awareness or lobby for legislation?
how can it be that the ultimate fate of humanity to either thrive beyond imagination or perish utterly may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to?
Well, ChaosMote already gave part of the answer, but another reason is the idea of comparative advantage. Normally I'd bring up someone like Scott Alexander/Yvain as an example (since he's repeatedly claimed he's not good at math and blogs more about politics/general rationality than about AI), but this time, you can just look at yourself. If, as you claim,
I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines" so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here, perhaps I could try to raise awareness or lobby for legislation?
then your comparative advantage lies less in theory and more in popularization. Technically, theory might be more important, but if you can net bigger gains elsewhere, then by all means you should do so. To use a (somewhat strained) ...
To address your first question: this has to do with scope insensitivity, hyperbolic discounting, and other related biases. To put it bluntly, most humans are actually pretty bad at maximizing expected utility. For example, when I first heard about x-risk, my thought process was definitely not "humanity might be wiped out - that's IMPORTANT. I need to devote energy to this." It was more along the lines of "huh, that's interesting. Tragic, even. Oh well; moving on..."
Basically, we don't care much about what happens in the distant future, especially if it isn't guaranteed to happen. We also don't care much more about humanity than we do about ourselves and those close to us. Plus, we don't really care about things that don't feel immediate. And so on. The end result is that most people's immediate problems are more important to them than x-risk, even if the latter might be by far the more essential according to utilitarian ethics.
Following on from this question, since cheap energy storage is a big obstacle to using wind/wave/solar energy, why is gravity-based energy storage not used more?
Many coasts have some cliffs, where we could build a reservoir on top of the cliff and pump up seawater to store energy. What is the fundamental problem with this? Efficiency of energy conversion when pumping? Cost of building? The space the reservoir would take (or the amount of water it could hold)?
Actually, this scheme is currently employed by utilities, albeit usually not with seawater. The technique is called pumped storage hydro (PSH), and it accounts for the vast majority of grid energy storage worldwide. Power companies use it to achieve various goals, e.g.:
- flatten out load variations (as you suggested elsewhere in this thread)
- provide "instant-on" reserve generation for voltage and frequency support
- level out the fluctuating output of intermittent energy sources such as wind and solar (as you suggested above)
Wikipedia states that the round-trip efficiency of pumped storage hydro can range between 70% and 87%, making it an economical solution in many cases.
A couple of obstacles to using pumped storage hydro are:
- Certain topographic/geographic features are needed to make PSH viable
- Social and ecological concerns
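To get a feel for the scale involved, here is a back-of-envelope sketch of the energy a clifftop reservoir could store. All the numbers are assumptions picked for illustration, not figures from any real site:

```python
# Gravitational energy stored in an elevated seawater reservoir.
# All inputs are assumed, illustrative values.
rho = 1025        # kg/m^3, density of seawater
g = 9.81          # m/s^2, gravitational acceleration
height = 100      # m, assumed cliff height
volume = 1e6      # m^3, assumed reservoir volume
round_trip = 0.8  # assumed round-trip efficiency (within the 70-87% range above)

energy_joules = rho * g * height * volume
recoverable_mwh = energy_joules * round_trip / 3.6e9  # convert J to MWh
print(round(recoverable_mwh))  # 223
```

So a million cubic metres raised 100 m stores a couple of hundred MWh after losses; useful, but it shows why large volumes and big height differences are needed.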
Why isn't sea-based solar power more of a thing? Say you have a big barge of solar panels, soaking up energy and storing it in batteries. Then once in a while a transport ship takes the full batteries to land to be used, and returns some empty batteries to the barge.
Storing energy in batteries is a net loss: even valued at retail electricity prices, the total electricity stored in the battery over its entire lifespan will not pay for the upfront cost of the battery, even if the electricity used to charge it were free.
Batteries are a generic technology. If they were useful for grid energy storage, they would be used for it already, not just useful for exotic future energy generation methods. In particular, wind power is terrible because it is erratic (and badly timed where it has trends) and would be the existing technology to most benefit from improved storage.
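One way to see the economics here is to amortize the battery's purchase price over the energy it can cycle through in its lifetime. A minimal sketch, with made-up numbers rather than figures for any real product:

```python
# Cost per kWh delivered by a battery over its whole life.
# All inputs are assumed, illustrative values.
capital_per_kwh = 300   # $ per kWh of storage capacity (assumed)
cycles = 1000           # full charge/discharge cycles before wear-out (assumed)
efficiency = 0.9        # assumed round-trip efficiency

cost_per_kwh_cycled = capital_per_kwh / (cycles * efficiency)
print(round(cost_per_kwh_cycled, 3))  # 0.333
```

Under these assumptions every kWh passed through the battery costs about 33 cents in battery wear alone, more than typical retail electricity, which is the sense in which storage loses money even when the charging electricity is free.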
Saltwater causes huge amounts of wear and tear, and weather fluctuations can completely destroy your ship. What you're basically doing is paying a ton more money per square foot of solar panel space than you would be on land, because every bit of that space needs to be attached to a ship.
I imagine in most places it'd be cheaper to buy an acre of land on the outskirts of town, or to rent an acre of otherwise unused rooftop from your local big-box store, than to build a barge with equivalent deck area; Wikipedia informs me for example that a Nimitz-class aircraft carrier has a deck area of only about six acres.
Land-based solutions also let you plug directly into the grid rather than futzing with also-expensive battery storage.
Why do we discuss the typical mind fallacy more than the atypical mind fallacy (the latter is not even an accepted term; I came up with it)?
I am far more likely to assume "I am such a special snowflake" than to assume everybody is like me. Basically, this is what the ego, the pride, the vanity in me wants to do.
Just how bad of an idea is it for someone who knows programming and wants to learn math to try to work through a mathematics textbook with proof exercises, say Rudin's Principles of Mathematical Analysis, by learning a formal proof system like Coq and using that to try to do the proof exercises?
I'm figuring: hey, no need to guess whether whatever I come up with is valid or not. Once I get it right, the proof assistant will confirm it's good. However, I have no idea how much work it'll be to get even much simpler proofs than what are expected of the textboo...
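For a taste of what the workflow feels like, here is a trivial machine-checked lemma in Lean (a proof assistant similar in spirit to Coq). Note that this one-liner is nowhere near the difficulty of a Rudin exercise, which would also require a large formalized analysis library such as mathlib:

```lean
-- A toy example of the propose-and-check workflow: the kernel verifies
-- that Nat.add_comm really does prove the stated equality.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The upside is exactly the one you describe: when the checker accepts the proof, there is no residual doubt. The downside is that formalizing analysis-level arguments typically takes far longer than writing them on paper.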
If someone reports inconsistent preferences in the Allais paradox, they're violating the axiom of independence and are vulnerable to a Dutch Book. How would you actually do that? What combination of bets should they accept that would yield a guaranteed loss for them?
How can we have Friendly AI if we humans cannot even agree about our ethical values? This is an SQ because this was probably the first problem solved - it's just so obvious - yet I cannot find it.
I have not finished the sequences yet, but they sound a bit optimistic to me - as if basically everybody is a modern utilitarian and the rest of the people just don't count. To give you the really dumbest question: what about religious folks? Is it just supposed to be a secular-values AI and they can go pound sand, or some sort of an agreement, compromise drawn wit...
So my university sends ~weekly email reminders to not walk alone in dark places, because of robberies. And recently Baltimore introduced a night-time curfew to prevent rioting.
But is there any technical reason that you can't rob people or riot in daylight? Or is it all some giant coordination game where the police work hard to enforce the law during office hours, but then they go home for some well-earned rest and relaxation, while the streets devolve into a free-for-all?
I suppose people aren't robbed "in broad daylight", when there are many people on the streets, because standers-by can help the victim, call the police, or take videos that show the robber's face.
As for rioting, the rioters would rather attack and rob a store when the store-owner isn't there to defend it or, again, call for help or take photos.
But even if that weren't so, there might be game-theoretic reasons to rob and riot at night. Suppose police (or other authorities) need to invest some amount of effort to make each hour of the day or night crime-free. They don't have enough budget to make all hours crime-free; besides, the last few hours require the most effort, because it's easier to make robbers delay their robbery by a few hours than to make them never rob at all.
So which hours should the police invest their effort in? Since robbing affects pedestrians, and rioting affects stores and shoppers, then clearly police should prioritize daylight or working hours, when there are many more people at risk, when people can't just decide to stay home because they're afraid of being robbed, and when the police themselves want to have their shifts. And once police are more active during certain hours, criminals will become less active during those hours.
Why is it such a big deal for SpaceX to land its used booster rocket on a floating platform rather than just having the booster parachute down into the ocean and then be retrieved?
Salt water is VERY UNKIND to precision metal machinery like rocket engines. Also the tank has such thin walls that chaotic wave action will destroy it.
Consider a Coke can.
When it's closed and pressurized you have a very hard time crushing it. The internal pressure is converted to a force of tension that resists deformation. Once it's been opened, you can crush it with one hand from the side. But it's much stronger along the axis of the cylinder, since the force is directed through all the material rather than deforming it inwards.
A rocket, if scaled down to the size of a coke can, has walls much thinner than a coke can's, and is much longer relative to its width. You can create great torques by hitting the sides, bending it or crushing it inwards. Imagine the force of tens of tons of water suddenly slapping onto the side of this tank as waves lap around it, unevenly across multiple parts of the tank.
Consider a rocket.
It must, with the least possible amount of mass, generate a high acceleration along its direction of motion while subtending a very small surface area in that direction of motion. This dictates that it is long and thin, and able to withstand high forces along that long axis. But every kilogram you add to its mass is one kilogram you can't get to orbit, or a couple more kilograms of fuel. You make it withstand the for...
I think part of the problem is a fundamental misunderstanding of what parachuting into the ocean does to a rocket motor. The motors are the expensive part of the first stage; I don't know exact numbers, but they are the complicated, intricate, extremely-high-precision parts that must be exactly right or everything goes boom. The tank, by comparison, is an aluminum can.
The last landing attempt failed because a rocket motor's throttle valve had a bit more static friction than it should have, and stuck open a moment too long. SpaceX's third launch attempt - the last failed launch they've had, many years ago with the Falcon 1 - failed because the motor didn't shut off instantly before stage separation, like it should have. As far as I know, people still don't know why Orbital ATK (FKA Orbital Sciences)'s last launch attempt failed, except that it was obviously an engine failure. We talk about rocket science, but honestly the theoretical aspects of rocketry aren't that complicated. Rocket engineering, though, that's a bloody nightmare. You get everything as close to perfect as you can, and sometimes it still fails catastrophically and blows away more value than most of the people reading this th...
Elon Musk recently proposed to run the whole world on solar panels + lithium ion batteries.
Is there enough lithium in the world that we can mine to build enough batteries?
Where can I list the pages that I saved with the "save" button below the post? Or how else does the save work? I seem to remember having read how it works but it seems I can't find it.
So we want FAI's values to be conducive with human values. But how are people proposing FAI will deal with the fact that humans have conflicting and inconsistent values, both within individual humans and between different humans? Do we just hope that our values when extrapolated and reasoned far enough are consistent enough to be feasibly satisfied? I feel like I might have missed the LW post where this kind of thing was discussed, or maybe I read it and was left unsatisfied with the reasoning.
What happens, for example, to the person who takes their relig...
One common way to think about utilitarianism is to say that each person has a utility function and whatever utilitarian theory you subscribe to somehow aggregates these utility functions. My question, more-or-less, is whether an aggregating function exists that says that (assuming no impact on other sentient beings) the birth of a sentient being is neutral. My other question is whether such a function exists where the birth of the being in question is neutral if and only if that sentient being would have positive utility.
EDIT: I do recall that a similar-se...
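One assumption-laden way to formalize the question, borrowed from the population-ethics literature (critical-level total utilitarianism; an illustration, not the settled answer):

```latex
W = \sum_{i \in P} (u_i - c)
```

Here P is the population, u_i is person i's lifetime utility, and c is a fixed "critical level". Adding a person with utility u, holding everyone else fixed, changes W by u - c, so a birth is neutral iff u = c. In particular, no fixed c makes every birth neutral, and with c = 0 (plain total utilitarianism) a birth is neutral exactly when the new being's utility is zero, rather than whenever it is positive.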
Has anyone ever studied the educational model of studying just one subject at a time, and does it have a name? During my last semester in college, it occurred to me that, with so many subjects competing for my time and attention at once, I couldn't dedicate myself to learning any one of them in depth, and just achieved mediocre grades in all of them. The model I had in mind went like this:
1) Embark on one, and only one, subject for a few weeks or couple of months (example: high school trigonometry);
2) Study it full-time and exhaust the textbook;
3) Take an exam...
What is the LessWrong-like answer to whether someone born a male but who identifies as female is indeed female?
The Lesswrong-like answer to whether a blue egg containing Palladium is indeed a blegg is "It depends on what your disguised query is".
If the disguised query is which pronoun you should use, I don't see any compelling reason not to use the word that the person in question prefers. If you insist on using the pronoun associated with whatever disguised query you associate with sex/gender, this is at best an example of "defecting by accident".
By the way, it is one of the best examples I've seen of quick, practical gains from reading LW: the ability to sort out problems like this.
This. After reading the Sequences, many things that seemed like "important complicated questions" before are now reclassified as "obvious confusions in thinking".
Even before reading the Sequences I was already kinda suspicious that something was wrong when long debates on such questions did not lead to meaningful answers, even though the questions contained no difficult math or experimentally expensive facts. But I couldn't transform this suspicion into an explanation of what exactly was wrong, so I didn't feel certain about it myself.
After reading Sequences, many "deep problems" became "yet another case of someone confusing a map with the territory". -- But the important thing is not merely learning that the password is "map is not the territory", but the technical details of how specifically the maps are built, and how specifically the artifacts arise on those maps.
Is there anything a non-famous, non-billionaire person can do to meaningfully impact medical research? It seems like the barriers to innovation are insurmountable for everyone with the will to try, and the very few organizations and people who might be able to surmount them aren't dedicated to it.
Is there a reason not to drink oral rehydration solution just as a part of everyday life? Maybe as a replacement for water in general, or maybe as a replacement for sports drinks? (In my case, I'd be inclined to take a bottle of it along when I go dancing.)
If this was a good idea I'd expect it to be sold in shops, and as far as I can tell it's not, but I don't know why.
It looks like a litre has about 100 calories of sugar, and half the RDA of salt, but I'm not sure how worrying that is.
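Those figures roughly check out if I assume they come from the common home recipe of about six level teaspoons of sugar and half a teaspoon of salt per litre (my guess about the source of the numbers):

```python
# Sanity-checking the quoted sugar and salt figures for 1 litre of ORS.
# Recipe amounts are assumed (common home-recipe proportions).
sugar_g = 25          # ~6 level teaspoons of sugar
salt_g = 3            # ~half a teaspoon of salt
kcal = sugar_g * 4    # sugar is about 4 kcal per gram
salt_rda_g = 6        # assumed ~6 g/day salt guideline
fraction_of_rda = salt_g / salt_rda_g
print(kcal, fraction_of_rda)  # 100 0.5
```

So roughly 100 kcal and half a day's salt per litre, which is why drinking it constantly, rather than during heavy sweating or illness, looks questionable.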
I have only taken one (undergrad microecon) course in econ - I would really appreciate it if someone more familiar with business and economics could go through my ramblings below and point out errors/make corrections in my understanding.
Picture the sell and buy sides of a business. The people on a given side compete against each other to be more favorable to the people on the other side. So more buyers => sellers capture more value. More sellers => buyers capture more value. If you are selling a commodity, you don't capture much value at all. The price tha...
Using Bayesian reasoning, what is the probability that the sun will rise tomorrow? If we assume that induction works, and that something happening previously, i.e. the sun rising before, increases the posterior probability that it will happen again, wouldn't we ultimately need some kind of "first hyperprior" to base our Bayesian updates on, for when we originally lack any data to conclude that the sun will rise tomorrow?
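This is essentially Laplace's rule of succession: start from a uniform prior over the unknown chance of a sunrise (one possible choice of that "first hyperprior"), and let each observed sunrise update it. A minimal sketch, assuming a Beta(1, 1) prior and no observed failures:

```python
from fractions import Fraction

# Laplace's rule of succession: with a uniform Beta(1, 1) prior on the
# unknown probability that the sun rises on any given day, after n
# observed sunrises and no failures, the posterior predictive
# probability of one more sunrise is (n + 1) / (n + 2).
def prob_next_sunrise(n):
    return Fraction(n + 1, n + 2)

print(prob_next_sunrise(0))      # 1/2, no data yet, just the flat prior
print(prob_next_sunrise(10000))  # 10001/10002
```

The choice of that initial flat prior is exactly the part the question is pointing at: Bayesian updating tells you how to move from a prior, not which prior to start with.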
Should people give money to beggars on the street? I heard conflicting opinions about this. Some say they just spend it on booze and cigarettes, so it would be more effective to donate that money to hostels for the homeless and similar institutions. Others say it's not a big deal and it makes them happy. What do you think?
Has any science fiction writer ever announced that he or she has given up writing in that genre because technological progress has pretty much ended?
Because if you think about it, the idea of sending someone to the moon has gone from science fiction to a brief technological reality ~ 45 years ago back to science fiction again.
Suppose I'm penniless. I borrow $1000 from the bank, go to a roulette table, and bet it all on red. If I win, I pay back the $1000 and end up with a profit. If I lose, I declare bankruptcy and never pay the money back. What's stopping people from doing this? Perhaps credit scores prevent this; if so, couldn't you just get some false documents and do this anyway?
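Just to make the arithmetic of the scheme explicit (assuming an American wheel with 18 red pockets out of 38, and that default really costs nothing, which is the dubious part):

```python
from fractions import Fraction

# Expected profit of "borrow $1000, bet it all on red, default on a loss".
# Assumes an American roulette wheel and a costless default.
p_win = Fraction(18, 38)
stake = 1000
profit_if_win = stake    # winnings of 2 * 1000, minus the 1000 loan repaid
profit_if_lose = 0       # walk away via bankruptcy (assumed costless)
ev = p_win * profit_if_win + (1 - p_win) * profit_if_lose
print(float(ev))  # about 473.68
```

The expected value is positive for the gambler only because the downside lands entirely on the lender, which is precisely why lenders use credit checks, collateral, and fraud prosecution to close the loophole.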
If people obtain 70% of their information through vision, and might allocate more significance to negative feedback than to positive, why do lecturers never cover their eyes?
How do we determine our "hyper-hyper-hyper-hyper-hyperpriors"? Before updating our priors however many times, is there any way to calculate the probability of something before we have any data to support any conclusion?
A public transportation dilemma: to get to the nearest subway station, which is on the same boulevard as my apartment building a good few blocks away, I have to take a bus. A bus trip to the subway station is short, about 2 or 3 minutes, but buses come at irregular times. I might find one already there, with its doors open, when I arrive at the bus stop, or I might wait 15-20 minutes for one to come. If I were to walk to my destination, the trip would take about 10 to 15 minutes.
When I'm in a hurry, I usually head for the bus stop and hope a bus c...
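Under the numbers in the post, a quick expected-time comparison (assuming the wait is uniform between 0 and 20 minutes, which is my simplifying assumption):

```python
# Expected door-to-door times for bus vs walking, using midpoints of the
# ranges given in the post; the uniform-wait assumption is mine.
expected_wait = (0 + 20) / 2   # minutes, average of a uniform 0-20 wait
bus_ride = 2.5                 # minutes, midpoint of 2-3
walk = 12.5                    # minutes, midpoint of 10-15
expected_bus_trip = expected_wait + bus_ride
print(expected_bus_trip, walk)  # 12.5 12.5
```

Under these assumptions the two options tie in expectation, but walking has far less variance, which counts in its favor when you're in a hurry and a bad draw matters more than the average.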
I can't figure out why induction is considered an axiom. It seems like it should follow from other axioms.
I did some googling and the answers were all something like "if you don't assume it, then you can't prove that all natural numbers can be reached by repeatedly applying S(x) from 0." But why can't you just assume that as an axiom, instead of induction?
You could argue that it doesn't matter because they are equivalent, and that seems to be what people said. But it doesn't feel like that. I think you could express that in first order logic, but...
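For reference, first-order induction is an axiom schema, one instance for each formula phi(n):

```latex
\bigl(\varphi(0) \,\land\, \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\bigr) \;\rightarrow\; \forall n\,\varphi(n)
```

The alternative axiom you suggest, "every natural number is reachable from 0 by finitely many applications of S", quantifies over "finitely many iterations", which first-order logic cannot express directly; that is why it can't simply replace the schema. In full second-order logic, where you can quantify over properties, the two formulations do become equivalent.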
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.