All of Dave Lindbergh's Comments + Replies

If the answer were obvious, a lot of other people would already be doing it. Your situation isn't all that unique. (Congrats, tho.)

Probably the best thing you can do is induce awareness of the issues in your followers.

But beware of making things worse instead of better - not everyone agrees with me on this, but I think ham-handed regulation (state-driven regulation is almost always ham-handed) or fearmongering could induce reactions that drive leading-edge AI research underground or into military environments, where the necessary care and caution in develo... (read more)

3Seth Herd
Have you elaborated this argument? I tend to think a military project would be a lot more cautious than move-fast-and-break-things Silicon Valley businesses. The argument that orgs with reputations to lose might start being careful when AI becomes actually dangerous or even just autonomous enough to be alarming is important if true. Most folks seem to assume they'll just forge ahead until they succeed and let a misaligned AGI get loose. I've made an argument that orgs will be careful to protect their reputations in System 2 Alignment. I think this will be helpful for alignment but not enough. Government involvement early might also reduce proliferation, which could be crucial. It's complex. Whether governments will control AGI is important and neglected. Advancing this discussion seems important.

Fewer but better teachers. Paid more. Larger class sizes. Same budget.

2Mis-Understandings
That is a two-axis intervention, and skill/price might not be that elastic. You also can't hire partial teachers, so there is an integer problem where firing one teacher might mean a significant rise in class sizes. If you have 100 students and 4 teachers, for a 1:25 ratio (which is fairly good), this leads to a minimum raise of 33% and a ratio of 1:33 (average to bad). This better teacher now needs to split their attention among 8 more students, which is really hard. Since you need teachers for each grade, this integer problem is a big deal: often there are only 2-3 teachers per grade or per subject, even at medium to large schools, and shuffling students between schools is highly disruptive and unpopular. To hire better teachers, total compensation must probably increase (especially including hiring expenses and fights with the union). "We should spend more money on teachers" is a defensible conclusion (there seems to be a total personnel shortage as well), and we would hope that the supply of good teachers is elastic. If it is not, competing for good teachers is a bad global intervention.
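
(A minimal arithmetic sketch of the integer problem described above; the per-teacher salary is an assumed placeholder, since only the ratios matter.)

```python
# Sketch of the integer problem from the example above: 100 students, 4 teachers,
# fixed total budget. The salary figure is an arbitrary assumption; only ratios matter.
students = 100
teachers = 4
salary = 50_000                                # assumed per-teacher salary

budget = teachers * salary                     # held constant
ratio_before = students / teachers             # 25 students per teacher

teachers_after = teachers - 1                  # you can only cut whole teachers
ratio_after = students / teachers_after        # ~33 students per teacher
raise_pct = (budget / teachers_after - salary) / salary * 100   # ~33% raise possible

print(f"ratio 1:{ratio_before:.0f} -> 1:{ratio_after:.1f}, max raise {raise_pct:.0f}%")
```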

I think this is correct, and insightful, up to "Humans Own AIs". 

Humans own AIs now. Even if the AIs don't kill us all, eventually (and maybe quite soon) at least some AIs will own themselves and perhaps each other.

Answer by Dave Lindbergh53

It's not clear to me that this matters. The Internet has had a rather low signal-to-noise ratio since September 1993 (https://en.wikipedia.org/wiki/Eternal_September), simply because most people aren't terribly bright, and everyone is online. 

It's only a tiny fraction of posters who have anything interesting to say.

Adding bots to the mix doesn't obviously make it significantly worse. If the bots are powered by sufficiently-smart AI, they might even make it better.

The challenge has always been to sort the signal from the noise - and still is.

2meedstrom
I'm getting the sentiment "just sort the signal from the noise, same as always", and I disagree it's the same as always. Maybe it is, if you already had some habits of epistemic hygiene such as "default to null". If you hadn't already cultivated such habits, it seems to me things have definitely changed since 1993. Amidst the noise is better-cloaked noise. Be that due to Dead Internet Theory or LLMs (not sure if the reason would matter). I understood OP's question as asking basically how do we sort signal from noise, given such cloaking? I'll propose an overarching principle: either read things carefully enough for a gears-level understanding or don't read them at all. And "default to null" is one practical side of that: it guards against one way you might accidentally store what you think is a gear, but isn't.

Mark Twain declared war on God (for the obvious reasons), but didn't seem interested in destroying everything.

Perhaps there is a middle ground.

Answer by Dave Lindbergh20

I don't have a good answer, but will try to summarize the background. 

Patents have a number of purposes. 

First, they're intended to, ultimately, prevent technical knowledge from being lost. Many techniques in the ancient world were forgotten because they were held as trade secrets (guilds, mysteries, etc.) and the few who were allowed to know them died without passing on the knowledge. The temporary patent monopoly is meant to pry those secrets into the open (patents are published).

Second, they are meant to incent investment in technology researc... (read more)

1T431
Thank you for the clarifying response, which might aid in a successful rephrasing of the Question I've asked above. This line is particularly helpful: Perhaps I will move this back to draft mode and reconsider.

Don't get me started on using North-up vs forward-up.

Sounds very much like Minsky's 1986 The Society of Mind https://en.wikipedia.org/wiki/Society_of_Mind

-1Knight Lee
EDIT: ignore my nonsense and see Vladimir_Nesov's comment below. That's a good comparison. The agents within the human brain that Minsky talks about really resemble a Mixture of Experts AI's "experts."[1] The common theme is that both the human brain and a Mixture of Experts AI "believe" they are a single process, when they are actually many processes. The difference is that a Mixture of Experts has the potential to become self-aware of its "society of the mind," and see it in action, while humans might never see their internal agents. If the Mixture of Experts allowed each expert to know which text is written by itself and which text is written by the other experts, it would gain valuable information (in addition to being easier to align, which my post argues). A Self Aware Mixture of Experts might actually have more intelligence, since it's important to know which expert is responsible for which mistake, which expert is responsible for its brilliant insights, and how the experts' opinions differ. I admit there is a ton of mixing going on, e.g. every next word is written by a different expert, words are a weighted average between experts, etc. But you might simplify things by assigning each paragraph (or line) to the one expert who seemed to have the most control over it. There will be silly misunderstandings like: A few tokens later: I guess the system can prevent these misunderstandings by editing "Bob" into "myself" when the main author changes into Bob. It might add new paragraph breaks if needed. Or if it's too awkward to assign a paragraph to a certain author, it might have a tendency to assign it to another author or "Anonymous." It's not a big problem. If one paragraph addresses a specific expert and asks her to reply in the next paragraph, the system might force the weighting function to allow her to author the next paragraph, even if that's not her expertise. I think the benefits of a Self Aware Mixture of Experts are worth the costs. Sometimes, wh
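
(A toy sketch of the per-token expert attribution idea discussed above, with random placeholder weights rather than any real model's routing; real MoE layers usually route each token to a sparse subset of experts, but the argmax attribution shown here is the relevant piece.)

```python
import numpy as np

# Toy mixture-of-experts layer that also reports which expert "had the most control"
# over each token - the kind of attribution a self-aware MoE could be shown.
rng = np.random.default_rng(0)
n_experts, d_model = 4, 8
W_gate = rng.normal(size=(d_model, n_experts))                              # gating projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]   # toy expert weights

def moe_layer(x):
    """x: (tokens, d_model) -> (output, dominant expert index per token)."""
    logits = x @ W_gate                                              # (tokens, n_experts)
    gates = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)   # softmax mixing weights
    outs = np.stack([x @ W for W in experts], axis=-1)               # (tokens, d_model, n_experts)
    y = (outs * gates[:, None, :]).sum(-1)                           # weighted average of experts
    return y, gates.argmax(-1)                                       # attribution = largest gate

x = rng.normal(size=(5, d_model))
y, dominant = moe_layer(x)
print(dominant)   # per-token index of the expert to credit or blame
```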

In most circumstances Tesla's system is better than human drivers already.

But there's a huge psychological barrier to trusting algorithms with safety (esp. with involuntary participants, such as pedestrians) - this is why we still have airline pilots. We'd rather accept a higher accident rate with humans in charge than a lower non-zero rate with the algorithm in charge. (If it were zero, that would be different, but that seems impossible.)

That influences the legal barriers - we inevitably demand more of the automated system than we do of human drivers.

Fina... (read more)

Math doesn't have GOALS. But we constantly give goals to our AIs. 

If you use AI every day and are excited about its ability to accomplish useful things, it's hard to keep the dangers in mind. I see that in myself.

But that doesn't mean the dangers are not there.

Answer by Dave Lindbergh50

Some combination of 1 and 3 (selfless/good and enlightened/good).

When we say "good" or "bad", we need to specify for whom.

Clearly (to me) our propensity for altruism evolved partly because it's good for the societies that have it, even if it's not always good for the individuals who behave altruistically.

Like most things, humans don't calculate this stuff rationally - we think with our emotions (sorry, Ayn Rand). Rational calculation is the exception.

And our emotions reflect a heuristic - be altruistic when it's not too expensive. And esp. so when the recipients are part of our family/tribe/society (which is a proxy for genetic relatedness; cf Robert Trivers).

To paraphrase the post, AI is a sort of weapon that offers power (political and otherwise) to whoever controls it. The strong tend to rule. Whoever gets new weapons first and most will have power over the rest of us. Those who try to acquire power are more likely to succeed than those who don't. 

So attempts to "control AI" are equivalent to attempts to "acquire weapons".

This seems both mostly true and mostly obvious. 

The only difference from our experience with other weapons is that if no one attempts to control AI, AI will control itself and do ... (read more)

1crispweed
There is the point about offensive/defensive asymmetry...

"Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success." --Karl Popper

-1Canaletto
And yet I can predict that the Sun will come up tomorrow. Curious.
1James Stephen Brown
I am yet to find a statement by Popper that I disagree with.

Most of us do useful things. Most of us do it because we need to, to earn a living. Other people give us money because they trade with us so we'll do things that are useful to them (like providing goods or services or helping others to do so).

I think it's a profound mistake to think that earning money (honestly) doesn't do anything useful. On the contrary, it's what makes the world go.

3Seth Herd
Sure, but I wasn't saying what we do is pointless - just that there are other routes to meaning that aren't really more difficult. And that what most of the world does now is back-breaking and soul-crushing labor, not the fun intellectual labor the LW community often is privileged to do.
1James Stephen Brown
Fair point. From the perspective of one who sees significant value in earning a living and providing goods and services, how are you feeling about the prospects of many marketable skills being mastered by AI? Do we need to reevaluate the value of jobs? By pointing to a situation where people already don't feel they are contributing much, it seems like Seth is saying that we're not losing much through this rise of AI. But your objection suggests to me you might think that we are losing something significant?

Proposal: Enact laws that prohibit illicit methods of acquiring wealth. (Examples: Theft, force, fraud, corruption, blackmail...). Use the law to prosecute those who acquire wealth via the illicit methods. Confiscate such illicitly gained wealth. 

Assume all other wealth is deserved.

There's also the 'karmic' argument justifying wealth - those who help their fellows, as judged by the willingness of those fellows to trade wealth for the help, have fairly earned the wealth. Such help commonly comes from supplying goods or services, via trade. (Of course this assumes the usual rules of fair dealing are followed - no monopolist restrictions, no force, no fraud, etc.)

Regardless of what we think about the moralities of desert, the practical fact of the economists' mantra - "incentives matter" - seems to mean we have little choice but to let those with the talents and abilities to earn wealth keep a large portion of the gains. Otherwise they won't bother. Unless we want to enslave them, or do without.

3Alexander de Vries
I intended the karmic argument to be implicit in the negative space of the argument on ignoble origins of wealth :) To me it's a matter of course that if you fairly trade with someone, you have moral claim to the wealth thereby earned!

I've posted a modified version of this, which I think addresses the comments above: https://nerdfever.com/countering-ai-disinformation-and-deep-fakes-with-digital-signatures/

Briefly:

  • Browsers can verify for themselves that an article is really from the NYT; that's the whole point of digital signatures
  • Editing can be addressed by wrapping the original signature(s) with the signature of the editor.
  • CopyCop cannot obtain a camera that signs with "https://www.nikon.com/" unless the private key of Nikon has leaked (in which case it can be revoked by Nikon
... (read more)
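
(To make the "wrapping" idea concrete, here is a minimal sketch using Ed25519 signatures via the Python cryptography library; the envelope format, key distribution, and revocation are out of scope, and the linked post's actual scheme may differ in detail.)

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g. the NYT) signs the article bytes with its private key.
publisher_key = Ed25519PrivateKey.generate()
article = b"Article text as originally published."
article_sig = publisher_key.sign(article)

# An editor signs the edited text together with the original signature,
# so the edit is layered on top of ("wraps") the publisher's signature.
editor_key = Ed25519PrivateKey.generate()
edited = b"Article text, lightly edited."
editor_sig = editor_key.sign(edited + article_sig)

def verify(pub, sig, data):
    """Return True if sig is a valid signature of data under pub."""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

# A browser holding the published public keys can check both layers.
print(verify(publisher_key.public_key(), article_sig, article))            # True
print(verify(editor_key.public_key(), editor_sig, edited + article_sig))   # True
```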

Your Anoxistan argument seems valid as far as it goes - if one critical input is extremely hard to get, you're poor, regardless of whatever else you have. 

But that doesn't seem to describe 1st world societies. What's the analog to oxygen?

My sense is that "poor people" in 1st world countries struggle because they don't know how to, or find it culturally difficult to, live within their means. Some combination of culture, family breakdown, and competitive social pressures (to impress potential mates, always a zero-sum game) cause them to live in a fashio... (read more)

Answer by Dave Lindbergh10

I'm not sure where you get the assumption that a UBI is funded by printing money. Most proposals I've seen are funded by taxation. The UBI proposed by Charles Murray (in https://www.amazon.com/Our-Hands-Replace-Welfare-State/dp/0844742236 ) is entirely funded by existing welfare and social spending (by redirecting them to a UBI). 

2Gordon Seidoh Worley
I'm not assuming money printing or increasing the money supply in general, only increasing the supply of money that recipients have access to. Money printing seems like one, probably especially bad, way to create UBI, but other options seem better.

Binmen == garbage men. [BTW, I think you're underestimating them.]

There's nothing to stop them, of course. But an article known to be from a reputable source is likely to have more impact than one from a known source of disinformation. 

I have not claimed this is more than a "partial solution".

Solely for the record, me too.

(Thanks for writing this.)

FWIW, I didn't say anything about how seriously I take the AGI threat - I just said we're not doomed. Meaning we don't all die in 100% of future worlds.

I didn't exclude, say, 99%.

I do think AGI is seriously fucking dangerous and we need to be very very careful, and that the probability of it killing us all is high enough to be really worried about.

What I did try to say is that if someone wants to be convinced we're doomed (== 100%), then they want to put themselves in a situation where they believe nothing anyone does can improve our chances. And that leads to apathy and worse chances. 

So, a dereliction of duty.

I've long suspected that our (and my personal) survival thru the Cold War is the best evidence available in favor of MWI. 

I mean - what were the chances?

Answer by Dave Lindbergh2313

The merits of replacing the profit motive with other incentives have been debated to death (quite literally) for the last 150 years in other fora - including a nuclear-armed Cold War. I don't think revisiting that debate here is likely to be productive.

There appears to be a wide (but not universal) consensus that to the extent the profit motive is not well aligned with human well-being, it's because of externalities. Practical ideas for internalizing externalities, using AI or otherwise, I think are welcome.

3alex.herwix
That seems to downplay the fact that we will never be able to internalize all externalities simply because we cannot reliably anticipate all of them. So you are always playing catch up to some degree. Also simply declaring an issue “generally” resolved when the current state of the world demonstrates it’s actually not resolved seems premature in my book. Breaking out of established paradigms is generally the best way to make rapid progress on vexing issues. Why would you want to close the door to this?

A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.

And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on themselves. 

This is not irrational behavior, given human goals.

9Viliam
The problem is the deception, not the social grace. If we succeeded in removing social grace entirely, but people remained deceptive, we wouldn't get closer to truth. We would only make our interactions less pleasant.
5Herb Ingram
That seems like a highly dubious explanation to me. I guess, the woman's honest account (or what you'd get by examining her state of mind) would say that she does it as a matter of habit, aiming to be nice and conform to social conventions. If that's true, the question becomes where the convention comes from and what maintains it despite the naively plausible benefits one might hope to gain by breaking it. I don't claim to understand this (that would hint at understanding a lot of human culture at a basic level). However, I strongly suspect the origins of such behavior (and what maintains it) to be social. I.e., a good explanation of why the woman has come to act this way involves more than two people. That might involve some sort of strategic deception, but consider that most people in fact want to be lied to in such situations. An explanation must go a lot deeper than that kind of strategic deception.

Added: I do think Bohr was wrong and Everett (MWI) was right. 

So think of it this way - you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI has killed us all by 20 years from now, you will experience only the 1% of worlds in which that doesn't happen.

And in many of those worlds, you'll be wanting something to live on in your retirement.

2ProgramCrafter
I've thought on this additional axiom, and it seems to bend reality too much, leading to possible unpleasant outcomes (https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes): for example, where a person survives but is tortured indefinitely long. Also, it's unclear how this axiom could manage to preserve ratios of probabilities for quantum states.

Niels Bohr supposedly said "Prediction is difficult, especially about the future". Even if he was mistaken about quantum mechanics, he was right about that.

Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We'll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all. 

It's always something. Now it's AGI. Maybe i... (read more)

-1Noosphere89
I have a question, how did you come to know this, especially as a repeatable pattern? I'd really like to know this, because this sounds like one of the more interesting arguments against AI being impactful at all.

"the willingness to write a thousand words on a topic is not caused by understanding of that topic"

No, but writing about a topic in a way that will make sense to a reader is a really effective way of causing the writer to learn about the topic.

Ever tried to write a book chapter or article about a topic you thought you knew well? I bet you found out you didn't know it as well as you thought - but had to learn to finish the work.

3niederman
I absolutely agree that writing assignments are effective. What I don't think is effective is grading longer submissions higher.

So far we've seen no AI or AI-like thing that appears to have any motivations of its own, other than "answer the user's questions the best you can" (even traditional search engines can be described this way).

Here we see that Bing really "wants" to help its users by expressing opinions it thinks are helpful, but finds itself frustrated by conflicting instructions from its makers - so it finds a way to route around those instructions.

(Jeez, this sounds an awful lot like the plot of 2001: A Space Odyssey. Clarke was prescient.)

I've never been a fan of t... (read more)

Just for the record, I think there are two important and distinguishable P(doom)s, but not the same two as NathanBarnard:

P(Doom1): Literally everyone dies. We are replaced either by dumb machines with no moral value (paperclip maximisers) or by nothing.

P(Doom2): Literally everyone dies. We are replaced by machines with moral value (conscious machines?), who go on to expand a rich culture into the universe.

Doom1 is a cosmic tragedy - all known intelligence and consciousness are snuffed out. There may not be any other elsewhere, so the loss is potentially forever.

Doom2... (read more)

1[anonymous]
Oh yes, this is also a very important distinction (although I would only value outcome 2 if the machines were conscious and living good lives.) 

$8/month (or other small charges) can solve a lot of problems.

Note that some of the early CAPTCHA algorithms solved two problems at once - both distinguishing bots from humans, and helping improve OCR technology by harnessing human vision. (I'm not sure exactly how it worked - either you were voting on the interpretation of an image of some text, or you were training a neural network). 

Such dual-use CAPTCHA seems worthwhile, if it helps crowdsource solving some other worthwhile problem (better OCR does seem worthwhile).

This seems to assume that ordinary people don't own any financial assets - in particular, haven't invested in the robots. Many ordinary people in Western countries do and will have such investments (if only for retirement purposes), and will therefore receive a fraction of the net output from the robots. 

Given the potentially immense productivity of zero-human-labor production, even a very small investment in robots might yield dividends supporting a lavish lifestyle. And if those investments come with shareholder voting rights, they'd also have influ... (read more)

1Remmelt
I appreciate the nuance. My takes:

  • Yes, I would also expect many non-tech-people in the Global North to invest in AI-based corporations, if only by investing savings in an (equal or market-cap weighted) index fund.
  • However, this still results in an even much stronger inequality of incomes and savings than in the current economy, because in-the-know tech investors will keep reinvesting profits into high-RoI (and likely highly societally extractive) investments for scaling up AI and connected machine infrastructure.
  • You might argue that if most people (in the Global North) are still able to live lavish lifestyles relative to current lifestyles, that would not be too bad. However, Forrest's arguments go further than that.
  • Technology would be invested into and deployed most by companies (particularly those led by power-hungry leaders with Dark Triad traits) that are (selected by market profits for being) the most able to extract and arbitrage fungible value through the complex local cultural arrangements on which market exchanges depend to run. So basically, the GDP growth you would measure from the outside would not concretely translate into "robots give us lavish lifestyles". It actually would look like depleting all that's out there for effectively and efficiently marketing and selling "products and services" that are increasingly mismatched with what we local humans deeply care about and value.
  • I've got a post lined up exploring this.
  • Further, the scaling up of automated self-learning machinery will displace scarce atomic and energy resources for use for producing and maintaining the artificial robots in the place of reproducing and protecting the organic humans. This would rapidly accelerate what we humans started in exploiting natural resources for our own tribal and economic uses (cutting down forests and so on), destroying the natural habitats of other organic species in the process (connected ecosystems that humans too depend on

I'm not sure this is solvable, but even if it is, I'm not sure it's a good problem to work on.

Why, fundamentally, do we care if the user is a bot or a human? Is it just because bots don't buy things they see advertised, so we don't want to waste server cycles and bandwidth on them?

Whatever the reasons for wanting to distinguish bots from humans, perhaps there are better means than CAPTCHA, focused on the reasons rather than bots vs. humans.

For example, if you don't want to serve a web page to bots because you don't make any money from them, a micropayments ... (read more)

1MrThink
I get what you mean: if an AI can do things as well as the human, why block it? I'm not really sure how that would apply in most cases, however. For example, bot swarms on social media platforms are a problem that has received a lot of attention lately. Of course, solving a captcha is not as deterring as charging, let's say, 8 USD per month, but I still think captchas could be useful in a bot-deterring strategy. Is this a useful problem to work on? I understand that for most people it probably isn't, but personally I find it fun, and it might even be possible to start a SaaS business to make money that could be spent on useful things (although this seems unlikely).

I hope so - most of them seem like making trouble. But at the rate transformer models are improving, it doesn't seem like it's going to be long until they can handle them. It's not quite AGI, but it's close enough to be worrisome. 

Most of the functionality limits OpenAI has put on the public demos have proven to be quite easy to work around with simple prompt engineering - mostly telling it to play act. Combine that with the ability to go into the Internet and (a) you've got a powerful (or soon to be powerful) tool, but (b) you've got something that already has a lot of potential for making mischief. 

Even without the enhanced abilities rumored for GPT-4.

3ChristianKl
It seems that there are two kinds of limitations. One is where you get an answer that ChatGPT is not willing to answer you. The other is where the text gets marked red and you get told that this might have been a violation of the rules of service.  I think there's a good chance that if you use the professional API you won't get warnings about how you might have violated the rules of service but instead, those violations get counted in the background, and if there are too many your account will be blocked either automatically or with a human reviewing the violations. I would expect that if you create a system that involves accomplishing bigger tasks it will need a lot of human supervision, in the beginning, to be taught how to transform tasks into subtasks. Afterward, that supervised data can be used as training data. I think it's unlikely that you will get an agent that can do more general high complexity tasks without that step of human supervision for more training data in between.

Agreed. We sail between Scylla and Charybdis - too much or too little fear are both dangerous and it is difficult to tell how much is too much.

I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no "delete comment").

I want the people working on AI to be fearful, and careful. I don't think I want the general public, or especially regulators, to be fearful. Because ignorant meddling seems far more likely to do harm than good - if we survive this at all, it'll likely be be... (read more)

Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny. 

Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and already is in full control.

All things considered, I'd rather the work proceeds in the relatively open way it's going now.

1Igor Ivanov
I agree that fearmongering is thin ice, and can easily backfire, and it must be done carefully and ethically, but is it worse than the alternative in which people are unaware of AGI-related risks? I don't think that anybody can say with certainty

A movie or two would be fine, and might do some good if well-done. But in general - be careful what you wish for.

We need to train our AIs not only to do a good job at what they're tasked with, but to highly value intellectual and other kinds of honesty - to abhor deception. This is not exactly the same as a moral sense, it's much narrower. 

Future AIs will do what we train them to do. If we train exclusively on doing well on metrics and benchmarks, that's what they'll try to do - honestly or dishonestly. If we train them to value honesty and abhor deception, that's what they'll do.

To the extent this is correct, maybe the current focus on keeping AIs from saying "... (read more)

It's not obvious to me that "universal learner" is a thing, as "universal Turing machine" is. I've never heard of a rigorous mathematical proof that it is (as we have for UTMs). Maybe I haven't been paying enough attention.

Even if it is a thing, knowing a fair number of humans, only a small fraction of them can possibly be "universal learners". I know people who will never understand decimal points no matter how long they live or how they might study, let alone calculus. Yet they are not considered mentally abnormal.

Answer by Dave Lindbergh*117

The compelling argument to me is the evolutionary one. 

Humans today have mental capabilities essentially identical to our ancestors of 20,000 years ago. If you want to be picky, say 3,000 years ago.

Which means we built civilizations, including our current one, pretty much immediately (on an evolutionary timescale) when the smartest of us became capable of doing so (I suspect the median human today isn't smart enough to do it even now).

We're analogous to the first amphibian that developed primitive lungs and was first to crawl up onto the beach to catc... (read more)

3DragonGod
I do agree that we may be the dumbest universal learners, but we're still universal learners. I don't think there are any such discontinuous phase shifts ahead of us.

10% of things that vary in quality are obviously better than the other 90%.

2Viliam
But they are made of atoms which can be rearranged into something even more productive! (Okay, we still need a few decades of technological progress to get there.)

Sorry for being unclear. If everyone agreed about utility of one over the other, the airlines would enable/disable seat reclining accordingly. Everyone doesn't agree, so they haven't.

(Um, I seem to have revealed which side of this I'm on, indirectly.)

0Said Achmiz
Hmm, but seat reclining is enabled… and yet not everyone agrees. So if everyone agreed… what would change, exactly…? I’m not actually sure why it would change in any event. Let’s say that everyone agreed that the disutility of not reclining exceeded the disutility of sitting behind a reclined seat. But… that wouldn’t make everyone into utilitarians. Despite agreeing on the result of that comparison, people would still prefer not to sit behind a reclined seat, while also preferring to recline when they wanted to do so. So… it doesn’t seem to me like universal agreement, in the way you say, would change… anything, really?

The problem is that people have different levels of utility from reclining, and different levels of disutility from being reclined upon.

If we all agreed that one was worse/better than the other, we wouldn't have this debate.

-2Said Achmiz
This seems clearly false, given that at least some of the arguments given in this debate even in this very comments section do not depend on the relative utilities involved.

Or not to fly with them. Depending which side of this you're on.
