Algon

Comments

Algon*20

EDIT 2: Did you mean that there are advantages to having both courage and caution, so you can't have a machine that has maximal courage and maximal caution? That's true, but you can probably still make Pareto improvements over humans in terms of courage and caution.

Would changing "increase" to "optimize" fix your objection? Also, I don't see how your first paragraph contradicts the first quoted sentence. 

Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.

I don't know how the second sentence leads to the first. Why should a decrease in -X lead to less success? Moreover, claims of mathematical impossibility are often overstated.
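To spell out a toy version of what I mean (my own illustration, so the specific functional form is just an assumption): suppose success depends on courage $c$ and caution $k$ via something like

$$S(c, k) = \min(c, k).$$

Both traits matter here, and taking $X = c$, the quantity $-X = -c$ does fall whenever $c$ rises, yet $S$ never falls. The impossibility would only bite if one trait that matters were literally the negation of another, i.e. $k = -c$. Courage and caution aren't related like that, so jointly raising both (a Pareto improvement) is still possible.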

As for the paragraph after, it seems to assume that current traits are on some sort of Pareto frontier of economic fitness (and, perhaps, to assume adequate equilibria). But I don't see why that'd be true. Like, I know of people who are more diligent than me, more intelligent, have lower discount rates, etc. And they are indeed successful. EDIT: AFAICT, there are a tonne of frictions and barriers, which weaken the force of the economic argument I think you're making here.

 

Algon42

That said, “nice to most people but terrible to a few” is an archetype that exists.

Honestly, this is close to my default expectation. I don't expect everyone to be terrible to a few people, but I do expect there to be some class of people I'd be nice to that they'd be pretty nasty towards. 

Algon20

It’s kind of like there is this thing, ‘intelligence.’ It’s basically fungible, as it asymptotes quickly at close to human level, so it won’t be a differentiator.

I don't think he ever suggests this, though he does suggest we'll be in a pretty slow takeoff world.

Algon8-5

Consistently give terrible strategic takes, so people learn not to defer to you.

Algon30

Yeah! It's much more in-depth than our article. We were thinking we should rewrite ours to give a quick rundown of EY's and then link to it.

Algon20

: ) You probably meant to direct your thanks to the authors, like @JanB.

Algon51

A lot of the ideas you mention here remind me of stuff I've learnt from the blog commoncog, albeit in a business expertise context. I think you'd enjoy reading it, which is why I mentioned it.

Algon42

Presumably, you have this self-image for a reason. What load-bearing work is it doing? What are you protecting against? What forces are making this the equilibrium strategy? Once you understand that, you'll have a better shot at changing the equilibrium to something you prefer. If you don't know how to get answers to those questions, perhaps focus on the felt sense of being special.

Gently hold a stance of curiosity as to why you believe these things; give your subconscious room and it will float up answers by itself. Do this for perhaps a minute or so. It can feel like nothing is coming for a while, and for a while nothing will come, and then all of a sudden a thought floats into view. Don't rush to close your stance, or protest against the answers you're getting.

Algon30

Yep, that sounds sensible. I sometimes use Consumer Reports as part of my usual method for buying something in product class X. My usual method is: 
1) Check what's recommended on forums/subreddits that care about the quality of X. 
2) Compare the rating distribution of an instance of X to that of other members of X. 
3) Check high-quality reviews. This either requires finding someone you trust to do this, or looking at things like Consumer Reports. 
 
