All of aaq's Comments + Replies

1a -> Broadly agree. "Weaker" is an interesting word to pick here; I'm not sure whether an anarcho-primitivist society would be considered weaker or stronger than a communist one systemically. Maybe it depends on timescale. Of course, if this were the only size lever we had to move x-risk up and down, we'd be in a tough position - but I don't think anyone takes that view seriously.

1b -> Logically true, but I do see strong reason to think short term x-risk is mostly anthropogenic. That's why we're all here.

2 -> I do agree it would probably take a w... (read more)

Pattern
This could mean a few different things. What did you mean by it? (Specifically, "That's why we're all here.")

Why is it a stretch?

Dagon
It's hard to quantify the resource or define how it reduces with use or how it's replenished.  This makes it an imperfect match for the TotC analogy.

AI development is a tragedy of the commons

Per Wikipedia:

In economic science, the tragedy of the commons is a situation in which individual users, who have open access to a resource unhampered by shared social structures or formal rules that govern access and use, act independently according to their own self-interest and, contrary to the common good of all users, cause depletion of the resource through their uncoordinated action.

The usual example of a TotC is a fishing pond: Everyone wants to fish as much as possible, but fish are not infinite, and if... (read more)

Dagon
With a little stretch, EVERY coordination problem is a tragedy of the commons. It's only a matter of identifying the resource that is limited but has uncontrolled consumption. In this case, it IS a stretch to think of "evil-AGI-free world" as a resource that's being consumed. And it doesn't really lead to solutions - many TotC problems can be addressed by defining property rights and figuring out who has the authority/ability to exclude uses in order to protect the long-term value of the resource.

Would AGI still be an x-risk under communism?

1-bit verdict

Yes.

2-bit verdict

Absolutely, yes.

Explanation

An artificial general intelligence (AGI) is a computer program that can perform at least as well as an average human being across a wide variety of tasks. The concept is closely linked to that of a general superintelligence, which can perform better than even the best human being across a wide variety of tasks.

There are reasons to believe most, perhaps almost all, general superintelligences would end up causing human extinction. AI safety is ... (read more)

Pattern
1. It seems like: 'Weaker Econ system' -> less human-made x-risk with high development cost. (Natural pandemics can occur, so whether they would be difficult to make isn't clear.) That's not to say that overall x-risk is lower - if a meteor hits and wipes out Earth's entire population, then not being on other worlds is also an issue.
2. There is no reason to suspect such black markets wouldn't have just as strong a profit motive to create stronger and stronger AGIs. This seems surprising - getting to the level of 'we're working on AI' takes a while.
3. I'd have guessed you'd mention 'communism' creating AGI. (These markets keep popping up! What should we do about them? We could allocate stuff using an AI...) There's deterrence-oriented legislation?

Towards a #1-flavored answer, a Hansonian fine-insured bounty system seems like it might scale well for enforcing cooperation against AI research.

https://www.overcomingbias.com/2018/01/privately-enforced-punished-crime.html

OP here, talking from an older account because it was easier to log into on mobile.

Kill: I never said anything about killing them. Prisoners like this don't pose any immediate threat to anyone, and indeed are probably very skilled white-collar workers who could earn a lot of money even behind bars. No reason you couldn't just throw them into a minimum-security jail in Sweden or something and keep an eye on their Internet activity.

McCarthyism: Communism didn't take over in the US. That provides, if anything, weak evidence that these kinds of policies can work... (read more)

[This comment is no longer endorsed by its author]
Jalex Stark
I don't see it. Literally how would I defend myself? Someone who doesn't like me tells you that I'm doing AI research. What questions do you ask them before investigating me? What questions do you ask me? Are there any answers I can give that meaningfully prove that I never did any such research (without you ransacking my house and destroying my computers)?

Re q2: If you set up the bounty, then other people can use it to target whoever they want. Other people might have plenty of reasons to target alignment-oriented researchers. Alignment-oriented researchers are a more extreme / weird group of people than AI researchers at large, so I expect there to be more optimization pressure per target trying to target them. (Jail / neutralize / kill / whatever you want to call it.)

I don't think Goodhart is to blame here, per se. You are giving out a tool that preferentially favors offense over defense (something of an asymmetric weapon). Making the criteria coarser gives more power to those who want to abuse it, not less.

I really don't empathize with an intuition that this would be effective at causing differential progress of alignment over capability. Much like McCarthyism, the first-order effect is terrorism (especially in adjacent communities, but also everywhere), and the intended impact is a hard-to-measure second-order effect. (Remember, you need to slow down AI progress more than you slow down AI alignment progress, and that is hard to measure.) Eliezer recently pointed out that the reference class of "do something crazy and immoral because it might have good second-order effects" tends to underperform pretty badly on those second-order effects.

Metcalfe's (revised!) law states that the value of a communications network of n users grows at about n log n.

I frequently give my friends the advice that they should aim to become pretty good at 2 synergistic disciplines (CS and EE for me, for example), but I have wondered in the past why I don't give them the advice to become okay at 4 or 5 synergistic disciplines instead.

It just struck me these ideas might be connected in some way, but I am having trouble figuring out exactly how.
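One toy model of the connection (entirely made up, just to have something concrete): treat each discipline as a node, assume a fixed study budget split evenly across disciplines, let synergy between them scale like the revised Metcalfe law, and let depth per discipline shrink as the budget is divided. The function and numbers below are mine, purely for illustration.

```python
import math

def skill_network_value(n_disciplines, total_study_hours=10000, depth_exponent=1.0):
    """Toy model: value of knowing n disciplines under a fixed study budget.

    Assumes synergy between disciplines scales like the revised Metcalfe
    law (n log n), while depth per discipline shrinks as the budget splits.
    """
    depth = total_study_hours / n_disciplines          # hours per discipline
    synergy = n_disciplines * math.log(n_disciplines)  # revised-Metcalfe term
    return depth ** depth_exponent * synergy

for n in range(2, 6):
    print(n, round(skill_network_value(n)))
```

With depth_exponent = 1 the whole thing collapses to total_hours × log(n), which always favors more disciplines; push the exponent above 1 (superlinear returns to depth) and it tips back toward a couple of deep specialties. So maybe the question is just which regime real skills live in.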

Try to think about this in terms of expected value. In your specific example, they do score more, but this is probabilistic thinking, so we want to think about it in terms of the long-run trend.

Suppose we no longer know what the answer is, and you are genuinely 50/50 on it being either A or B. This is what you truly believe; you don't think there's a chance in hell it's C. If you sit there and ask yourself, "Maybe I should do a 50-25-25 split, just in case", you're going to immediately realize "Wait, that's moronic. ... (read more)
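Here's a quick sketch of that long-run argument (my numbers, and assuming the quiz uses a logarithmic scoring rule - any proper scoring rule gives the same qualitative answer):

```python
import math

def expected_log_score(true_probs, reported_probs):
    """Expected log score of a report, averaged over the outcomes you believe possible."""
    return sum(p * math.log(q) for p, q in zip(true_probs, reported_probs) if p > 0)

truth  = (0.5, 0.5, 0.0)    # genuinely 50/50 between A and B, no chance of C
honest = (0.5, 0.5, 0.0)
hedged = (0.5, 0.25, 0.25)  # the "just in case" split

print(expected_log_score(truth, honest))  # about -0.693
print(expected_log_score(truth, hedged))  # about -1.040
```

Reporting your true beliefs wins by about 0.35 nats per question in expectation - that's the long-run trend.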

Bucky
I think all of this is also true of a scoring rule based on only the probability placed on the correct answer? In the end you'd still expect to win, but this takes longer (requires more questions) under a rule which includes probabilities on incorrect answers - it's just adding noise to the results.

I disagree with your first point; I consider the 50:25:25:0 thing to be the point. It's hard to swallow, because admitting ignorance rather than appearing falsely confident always is - but that's why it makes for such a good value to train.

Bucky
But if my genuine confidence levels are 50:50:0:0, it seems unfair that I score less than someone whose genuine confidence levels are 50:25:25:0 - we both put the same probability on the correct answer, so why do they score more?
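To make the complaint concrete (a sketch with made-up numbers, assuming a Brier-style rule that penalizes probability placed on every option): suppose the answer turns out to be A.

```python
def brier_penalty(reported, correct_index):
    """Brier-style penalty: squared error summed over every option (lower is better)."""
    return sum((q - (1.0 if i == correct_index else 0.0)) ** 2
               for i, q in enumerate(reported))

me   = (0.5, 0.5, 0.0, 0.0)    # genuine beliefs 50:50:0:0
them = (0.5, 0.25, 0.25, 0.0)  # genuine beliefs 50:25:25:0

print(brier_penalty(me, 0))    # 0.5
print(brier_penalty(them, 0))  # 0.375 - better, despite equal weight on A
```

Both reports put 0.5 on the correct answer, yet the rule still separates them by how the leftover mass is spread across the wrong options.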

Agreed on the difference. Different subcultures, I think, all try to push different narratives about how they are significantly different from other subcultures; they are in competition with other subcultures for brain-space. On that observation, my prior that rationalist content is importantly different from other subcultures' in that regard is low.

I suppose my real point in writing this is to advise against a sort of subcultural Fear Of Being Ordinary -- rationalism doesn't have to be qualitatively different from other subcultures to be valuable. For people under its umbrella, it can be very valuable, for reasons that have almost nothing to do with the quirks of the subculture itself.

Raemon
Nod. I do agree with that. 

This actually seems like a really, really good idea. Thanks!

Great post! Simple and useful. For spaced-repetition junkies in the crowd, I made a small Anki deck from this post to help me retain the basics.

27 cards: https://ankiweb.net/shared/info/187030147

You could normalize the scoring rule back to 1, so that should be fine.

Scattered thoughts on how the rationalist movement has helped me:

On the topic of rationalist self-improvement, I would like to raise the point that simply feeling as though there's a community of people who get me and that I can access when I want to has been hugely beneficial to my sense of happiness and belonging in the world.

That generates a lot of hedons for me, which then on occasion allow me to "afford" doing other things I wouldn't otherwise, like spend a little more time studying mathematics or running through Anki flashcards. ... (read more)

ChristianKl
Which post from Ozy do you mean?
paul ince
I have spent many years unintentionally dumbing myself down by not exercising my brain sufficiently. This place is somewhere I can come and flex a bit of mental muscle and get a bit of a dopamine reward for grasping a new concept or reading about how someone else worked their way through a problem, and I am really glad it exists. The HPMOR series was especially useful for becoming more rational, and since reading it my peers have noticed a change in the way I discuss difficult topics. I really enjoy recognising when the tools I've learnt here help me in my day-to-day stuff. In saying all that, I feel I'm a rare 'right of centre' member here, but because you are all rational it's not such a big deal. Rational people are so much nicer to talk to, eh!
Raemon
This certainly seems important (I do think this is a key value the community provides). But it is importantly different from "the rationality content of the community is directly helpful for people-in-general." If it were just "people who get you", this wouldn't obviously be more or differently important than other random subcultures.
Viliam
Yeah, similar here. The existence of people with values similar to mine is emotionally comforting, and they also give good advice.

When I stop to think of people I support who I would peg as "extreme in words, moderate in actions", I think I feel a sense of overall safety that might be relevant here.

Let's say I'm in a fierce, conquering mood. I can put my weight behind their extremism, and feel powerful. I'm Making A Difference, going forth and reshaping the world a little closer to utopia.

When I'm in a defeatist mood, where nothing makes sense and I feel utterly hopeless, I can *also* get behind the extremism -- but it's in a different light, now. I... (read more)

Causality seems to be a property that we can infer in the Democritan atoms and how they interact with one another. But when you start reasoning with abstractions, rather than with the interactions of the atoms directly, you lose information in the compression, which makes causality in the interactions of abstractions with one another harder to infer from watching them.

I don't yet have a stronger argument than that; this is a fairly new topic of interest to me.

ErickBall
But we have no problem observing causality in nature as well as in man-made environments. It seems like human culture has not so much made the world friendly to human concepts of causality as built up a standard set of human-friendly abstractions that are selected for their ability to fit causal models onto a complex world. There are lots of parts of the world where causality exists but is not observable through abstractions (e.g. butterfly effects). We generally ignore these.

I would picture them as rectangles and count. Like, 2x3 would look like

xxx

xxx

in my head, and for small numbers I could use the size of it to feel whether I was close. I remember doing really well with ratios and fractions and stuff for that reason.

For larger numbers, like 8x8, I would often subdivide into smaller squares (like 4x4 or 2x2), and count those. Then it would be easy to subdivide the larger one and repeat-add. I would often get a sour taste if the answer just "popped" into my head and I would actively fight against it, so I think there... (read more)
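For concreteness, the 8x8 subdivision works out as (my rendering of the arithmetic):

$$8 \times 8 = 4 \times (4 \times 4) = 4 \times 16 = 64$$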

Agreed. I'm a big fan of spaced repetition systems now, even though I have a long way to go towards consistently using them.

Viliam
By the way, I am usually a big fan of (the motte of) constructionism. Specifically with math, it makes me angry how often kids just memorize things without thinking, either because there is not enough time, or because the teachers do not understand it themselves. But somehow, humans have the habit of taking good and reasonable ideas, making strawman versions of them, and presenting them as the true thing. (I suppose this is about signalling -- the more extreme version of X you believe, the more respected you are in the crowd that chose X as an applause light, even if that version does not make sense anymore. It's no longer about making sense, but about showing loyalty.) "Hey, if understanding things is good, then 100% of understanding with 0% of anything else (remembering, practice, etc.) must be the best thing ever, am I right?"
Answer by aaq

For your specific situation, may I recommend curling up with Visual Complex Analysis for a few hours? 😊 http://pipad.org/tmp/Needham.visual-complex-analysis.pdf

On a more general note, I find that anyone who says they "learned it from first principles" is usually putting on airs. It's an odd intellectual purity norm that I think is unfortunately very common among the mathematically- and philosophically-minded.

As evolved chimpanzees, we are excellent at seeing a few examples of something and then understanding the more general abstrac... (read more)

I always like seeing someone else on LessWrong who's as interested in the transformative potential of SRS as I am. 🙂

Sadly, I don't have any research to back up my claims. Just personal experiences, as an engineering student with a secondary love of computer science and fixing knowledge-gaps. Take this with several grains of salt -- it's not exactly wild, speculative theory, but it's not completely unfounded thinking either.

I'm going to focus on the specific case you mentioned because I'm not smart enough to generalize ... (read more)

Answer by aaq

Set up for Success: Insights from 'Naive Set Theory'

I very much doubt anyone else will care much about this post, so I will give my reasoning.

Please vote before you read my reasoning. :)

  • This is the only post I've ever read that actually convinced me to do something with substantial effort, that is, actually read Naive Set Theory. I really, really wanted to practice kata on sets before I attempted a math minor and I still look back on that as the best 3 weeks of last summer.
  • Reading NST the way I did taught me a lot about how not to read a ma
... (read more)

To generalize this heuristic a bit, and to really highlight where its weaknesses lie: an ethical argument that you should make a significant change to your lifestyle should be backed up more strongly in proportion to the size of that change.

For example, to most people, the GWWC 10% pledge is a mildly extraordinary claim. (My parents actually yelled at me when I donated my Christmas money at 17.) But I think it does meet our bar of evidence: giving 10% of income is usually no great hardship if you plan for it, and the arguments that the various EAs put forward for it are often quite strong.

Where this heuristic breaks down is left as an exercise for the reader. :)

Thanks! :)

I think the approach is different for me, but maybe other people leverage gratitude as a way to fight their negative thoughts, closer to the way you imply.

I'm very wary of this post for being so vague and not linking to an argument, but I'll throw my two cents in. :)

The future will not have a firm concept of individuals.

I see two ways to interpret this:

  1. You could see it as individuals being uploaded to some giant distributed AI - individual human minds coalescing into one big super-intelligence, or being replaced by one; or
  2. Having so many individuals that the entire idea of worrying about 1 person, when you have 100 billion people per planet per quadrant or whatever, becomes laughable.

The common threa... (read more)

Hello from Boston. I've been reading LW since some point this summer. I like it a lot.

I'm an engineering student and willing to learn whatever it takes for me to tackle world problems like poverty, hunger and transmissible diseases. But for now I'm focusing my efforts on my degree.

I'm still a student, so I don't think I'd be able to take this sort of job. But consider my volunteer application sent. You guys are doing important work! -Andrew Quinn