
Comment author: Lumifer 22 June 2017 04:44:09PM 0 points

Has it been falsified? That is, empirically shown not to hold for large populations (as opposed to individual counter-examples)?

Comment author: Kaj_Sotala 22 June 2017 07:35:03PM 2 points

That's what the quote I posted said; the individual counter-examples are one thing, but the main thing is the complete lack of evidence for it.

Comment author: ChristianKl 22 June 2017 12:10:11PM 1 point

Currently, there's competition over who has the best cloud among Google, IBM, Amazon, Oracle and Microsoft. It seems that those companies believe that a successful cloud platform is one with APIs that can easily be used for a wide variety of use cases.

I think this kind of AI research is equivalent to AGI research.

Facebook's internal AI research is broad enough that they pursued Go as a toy problem, similar to how DeepMind did. After DeepMind's success, it didn't take Tencent Holding long to debut an engine that's on par with professional players, even though it isn't yet at AlphaGo's level.

Apple has money lying around. It knows that Siri underperforms at the moment, so it makes total sense to invest money into long-term AI/AGI research. Strategically, Apple doesn't want to be in a situation where Google's DeepMind/Google Brain initiatives continue to put its assistant well ahead of Apple's.

Samsung wants Bixby to be a success and not be outperformed by competing assistants. Samsung also needs AI in a variety of other fields, from military tech to Internet of Things applications.

Bridgewater Associates is working on its AI/human hybrid to replace Ray Dalio. Using humans as subroutines might mean that the result gets dangerous much faster.

Palantir wants US military money for various analysis tasks. Given that it's a broad spectrum of tasks, it pays to have quite general AI capabilities. The US military wants to buy AI. With the CIA now in the Amazon cloud, Palantir wants to stay competitive and not lose projects to Amazon, and that requires it to do basic research.

Salesforce has the money, and it will need to do a lot of AI to keep up with the times.

I think Baidu and Alibaba will face pressures similar to those on Google and Amazon. Both need to invest in basic AI, and they have the capability to do so.

Given the possible consequences of AGI for geopolitical power, I think it's very likely that the Chinese Government has an AGI project.

Comment author: Kaj_Sotala 22 June 2017 04:26:38PM * 2 points

Okay, you have a much broader definition of what counts as AGI research, then. I usually interpret the term to mean only research that has making AGI as an explicit objective, especially since most researchers would (IME) disagree with "APIs that can easily be used for a wide variety of use cases" being equivalent to AGI research.

Comment author: komponisto 22 June 2017 10:42:46AM * 0 points

Basically, Maslow's hierarchy of needs is a myth, and everyone would be better off forgetting about it entirely.

Not necessarily; it depends on what one's default or alternative theory would be. Let's be Bayesian, after all.

As I interpret it, "Maslow's hierarchy of needs" is little more than the claim that people's goals depend on their internal sense of security and status (in addition to whatever else they might depend on).

When I speak about it, I'm usually talking about something like a spectrum of exogenous vs. endogenous motivation: at one end you have someone being chased by a wild animal (thus maximally influenced by the environment), and at the other, the Nietzschean "superhuman" who lives only according to their own values, rather than channeling or being a tool of anyone or anything else (thus minimally influenced by the environment in some sense, although obviously everything is ultimately a product of some external force).

Comment author: Kaj_Sotala 22 June 2017 04:24:16PM 3 points

Not necessarily; it depends on what one's default or alternative theory would be.

Self-determination theory is the standard alternative I usually point to; it also incorporates the spectrum of exogenous vs. endogenous motivation, which I don't think the hierarchy of needs as usually conceived does.

Comment author: Lumifer 22 June 2017 02:34:10PM 0 points

Maslow's hierarchy of needs is a myth

It's certainly not a myth, because it's a theory (or a hypothesis) which actually exists. Its weak forms are rather obvious, famished poets notwithstanding. Psychology is not physics and should not pretend to be physics; it deals in weak generalizations and fuzzy conclusions. Maslow's hierarchy should not be thought of as an iron law which applies everywhere to everyone -- it's merely a framework for thinking about needs.

Comment author: Kaj_Sotala 22 June 2017 04:19:24PM * 1 point

Psychology is not physics and should not pretend to be physics; it deals in weak generalizations and fuzzy conclusions.

Sure, but we're talking about a theory that isn't even accepted as a psychological theory: psychologists themselves have examined it, decided there was no reason to believe in it, and moved on.

Comment author: ChristianKl 19 June 2017 08:34:42PM * 1 point

Ten large companies seems to be an understatement.

Off the top of my head:
1. Baidu
2. Alibaba
3. Salesforce
4. Facebook
5. Amazon
6. Palantir
7. IBM
8. Google
9. Apple
10. Samsung
11. Microsoft
12. Bridgewater Associates
13. Infosys
14. The Chinese Government
15. Toyota
16. Tencent Holding
17. Oracle

Comment author: Kaj_Sotala 22 June 2017 06:23:08AM 1 point

Are you saying that all of those are working on AGI? That would be enormously surprising to me.

Comment author: Wei_Dai 21 June 2017 06:18:53AM 0 points

Artistic pursuits may be "upper-class", but they are not unproductive. They serve to keep the upper classes practiced in physical cognition, counteracting a tendency to shift entirely into social modes of cognition (gossip and status-signaling games) as one ascends the social ladder.

I'm having trouble understanding this. Why do artistic pursuits constitute practice in physical cognition as opposed to social cognition? It seems obvious to me that artistic pursuits are (among other things) a type of status signaling, so I'm confused why you're contrasting the two. Please explain?

With this level of resources (distributed in whatever way within a portfolio of financial, social, and intellectual capital), there is no excuse for conceiving oneself at any level below 4 of the Maslow hierarchy. Probably 5, really.

(Aside from not being sure how valid the Maslow hierarchy is) I agree with this. But I don't see art/music/dance classes as a particularly good way to prepare most kids to fulfill their level 4 and 5 needs, mostly because there is too much competition from other parents pushing their kids into artistic pursuits. The amount of talent, time, and effort needed to achieve recognition or a feeling of accomplishment seems too high, compared to other possible pursuits.

Comment author: Kaj_Sotala 22 June 2017 06:18:24AM * 2 points

Aside from not being sure how valid the Maslow hierarchy is

Basically, Maslow's hierarchy of needs is a myth, and everyone would be better off forgetting about it entirely.

... critics point to dozens of counter-examples. What about the famished poet? Or the person who withdraws from society to become a hermit? Or the mountaineer who disregards safety in his determination to reach the summit?

Muddying things slightly, Maslow said that for some people, needs may appear in a different order or be absent altogether. Moreover, people felt a mix of needs from different levels at any one time, but they varied in degree.

There is a further problem with Maslow's work. Margie Lachman, a psychologist who works in the same office as Maslow at his old university, Brandeis in Massachusetts, admits that her predecessor offered no empirical evidence for his theory. "He wanted to have the grand theory, the grand ideas - and he wanted someone else to put it to the hardcore scientific test," she says. "It never quite materialised."

However, after Maslow's death in 1970, researchers did undertake a more detailed investigation, with attitude-based surveys and field studies testing out the Hierarchy of Needs.

"When you analyse them, the five needs just don't drop out," says Hodgkinson. "The actual structure of motivation doesn't fit the theory. And that led to a lot of discussion and debate, and new theories evolved as a consequence."

Comment author: Wei_Dai 21 June 2017 07:49:36AM 2 points

teach a programming class to some kids in my area

Interesting, how do you motivate the kids to want to learn?

YMMV, but I haven't seen much progress happening as a result of boredom. As a child I was in this situation and spent most of my time pointlessly reading fiction.

Reading fiction hardly seems pointless, compared to other pursuits a parent might push a child into. It develops vocabulary and reading comprehension (helpful when you later want to read non-fiction), general knowledge and social abilities, and can lead to other interests. I got interested in crypto and the Singularity from reading Vernor Vinge, and philosophy in part from reading Greg Egan.

It seems like boredom as a strategy requires a lot of time and patience, even when it succeeds. I wasn't that serious about programming (despite learning the basics as a kid) until I got into crypto and decided that writing an open source crypto library would be a good way to help push towards a positive Singularity, and that only happened in college after I read Vinge's A Fire Upon the Deep.

At school level I'm not sure, but I feel that my verbal abilities are low because I never did anything like debating in my teens.

Your verbal abilities don't seem low to me (at least in writing). Maybe low compared to Eliezer, but then he is just off the charts.

I'm worried that competitive debating trains for the wrong things (e.g., using arguments as soldiers). ChristianKl's suggestion of drama lessons doesn't seem like it would increase verbal abilities more than, say, reading, but I'd be interested if anyone has evidence to offer about that. I'll probably have to do some research to see what other activities are good for increasing verbal skills.

Comment author: Kaj_Sotala 22 June 2017 06:12:05AM * 1 point

ChristianKl's suggestion of drama lessons doesn't seem like it would increase verbal abilities more than, say, reading,

Reading, writing, speaking and listening are all somewhat distinct skills; my guess would be that if you wanted to optimize verbal abilities, you'd want to encourage all four. Drama lessons sound like they would help with the speaking and listening skills in a way that reading doesn't.

Also, I'm under the impression that if you have four interrelated skills A-D, then even if you were only interested in optimizing A, spending some time on each of B-D also lets you learn A better. I don't have a formal cite for that, but at least this article discusses it.

Comment author: komponisto 21 June 2017 11:44:28AM 0 points

That sort of subject is inherently implicit in the kind of decision-theoretic questions that MIRI-style AI research involves. More generally, when one is thinking about astronomical-scale questions, and aggregating utilities, and so on, it is a matter of course that cosmically bad outcomes are as much of a theoretical possibility as cosmically good outcomes.

Now, the idea that one might need to specifically think about the bad outcomes, in the sense that preventing them might require strategies separate from those required for achieving good outcomes, may depend on additional assumptions that haven't been conventional wisdom here.

Comment author: Kaj_Sotala 21 June 2017 11:53:49AM 0 points

Now, the idea that one might need to specifically think about the bad outcomes, in the sense that preventing them might require strategies separate from those required for achieving good outcomes, may depend on additional assumptions that haven't been conventional wisdom here.

Right, I took this idea to be one of the main contributions of the article, and assumed that this was one of the reasons why cousin_it felt it was important and novel.

Comment author: komponisto 21 June 2017 04:04:10AM * 0 points

Decision theory (which includes the study of risks of that sort) has long been a core component of AI-alignment research.

Comment author: Kaj_Sotala 21 June 2017 09:07:00AM 0 points

That doesn't seem to refute or change what Alex said?

Comment author: lifelonglearner 21 June 2017 12:36:05AM 1 point

Thanks for voicing this sentiment I had upon reading the original comment. My impression was that negative utilitarian viewpoints / things of this sort had been trending for far longer than cousin_it's comment might suggest.

Comment author: Kaj_Sotala 21 June 2017 09:04:01AM 0 points

The article isn't specifically negative utilitarian, though - even classical utilitarians would agree that having astronomical amounts of suffering is a bad thing. Nor do you have to be a utilitarian in the first place to think it would be bad: as the article itself notes, pretty much all major value systems probably agree on s-risks being a major Bad Thing:

All plausible value systems agree that suffering, all else being equal, is undesirable. That is, everyone agrees that we have reasons to avoid suffering. S-risks are risks of massive suffering, so I hope you agree that it’s good to prevent s-risks.
