All of lincolnquirk's Comments + Replies

This is pretty useful!

I note that it assigns infinite badness to going bankrupt (e.g., if you put the cost of any event as >= your wealth, it always takes the insurance). But in life, going bankrupt is not infinitely bad, and there are definitely some insurances that you don't want to pay for even if the loss would cause you to go bankrupt. It is not immediately obvious to me how to improve the app to take this into account, other than warning the user that they're in that situation. Anyway, still useful but figured I'd flag it.

9Bunthut
I think the solution to this is to add something to your wealth to account for inalienable human capital, and count costs only by how much you will actually be forced to pay. This is a good idea in general; else most people with student loans or a mortgage are "in the red", and couldn't use this at all.
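To make that failure mode concrete, here is a minimal sketch of the kind of expected-log-wealth comparison such a calculator plausibly makes (the log-utility assumption, the function names, and all the numbers are mine, not the app's). With bare wealth, any loss ≥ wealth makes the uninsured branch infinitely bad, so insurance always wins; adding an estimate of inalienable human capital, as suggested above, removes the infinity:

```python
import math

def expected_log_utility(wealth, premium, loss, p_loss, insured):
    """Expected log-wealth with and without insurance (illustrative only)."""
    if insured:
        # Premium is paid up front; the loss is covered either way.
        return math.log(wealth - premium)
    # log(w) -> -inf as w -> 0, so any loss >= wealth dominates the average.
    outcomes = [(p_loss, wealth - loss), (1 - p_loss, wealth)]
    return sum(p * (math.log(w) if w > 0 else float("-inf")) for p, w in outcomes)

def should_insure(wealth, premium, loss, p_loss, human_capital=0.0):
    # Bunthut's adjustment: count inalienable human capital as part of wealth.
    w = wealth + human_capital
    return (expected_log_utility(w, premium, loss, p_loss, insured=True)
            > expected_log_utility(w, premium, loss, p_loss, insured=False))

# With bare wealth, a wipe-out loss favors insurance at any premium:
print(should_insure(wealth=10_000, premium=5_000, loss=10_000, p_loss=0.001))  # True
# Counting future earning power, the same overpriced policy is declined:
print(should_insure(wealth=10_000, premium=5_000, loss=10_000, p_loss=0.001,
                    human_capital=200_000))                                    # False
```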

Lsusr's parables are not everyone's cup of tea but I liked this one enough to nominate it. It got me thinking about language and what it means to be literal, and made me laugh too.

I quite liked this post, and strong upvoted it at the time. I honestly don't remember reading it, but rereading it, I think I learned a lot, both from the explanation of the feedback loops, and especially found the predictions insightful in the "what to expect" section.

Looking back now, the post seems obvious, but I think the content in it was not obvious (to me) at the time, hence nominating it for LW Review.

(Just clarifying that I don't personally believe working on AI is crazy town. I'm quoting a thing that made an impact on me awhile back and I still think is relevant culturally for the EA movement.)

lincolnquirk12-12

I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme recruiting is all aimed at AIS and meta.

I agree, am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it's not just ok but good for people to (with a nod to Ajeya) "get off the crazy train" at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don't know what to do to fix it.

4habryka
I think feedback loops are good, but how is that incompatible with taking AI seriously? At this point, even if you want to work on things with tighter feedback loops, AI seems like the central game in town (probably by developing technology that leverages it, while thinking carefully about the indirect effects of that, or at the very least, by being in touch with how it will affect whatever other problem you are trying to solve, since it will probably affect all of them).
Eli Tyre1112

We currently have too many people taking it all the way into AI town.

I reject the implication that AI town is the last stop on the crazy train.

Ben Pace*1512

I think it's good to want to have moderating impulses on people doing extreme things to fit in. But insofar as you're saying that believing 'AI is an existential threat to our civilization' is 'crazy town', I don't really know what to say. I don't believe it's crazy town, and I don't think that thinking it's crazy town is a reasonable position. Civilization is investing billions of dollars into growing AI systems that we don't understand and they're getting more capable by the month. They talk and beat us at Go and speed up our code significantly. This is ... (read more)

(Commenting as myself, not representing any org)

Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.

I felt excited, sad, and also a bit confused, since it feels both slightly resonant but also somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected, because while collectively the community doesn't seem to be going in the direction that I would hope, I do see many individuals in EA leadership positions who I deeply respect an... (read more)

2Chris_Leong
  Why do you think that this is the case?
Elizabeth*188

Maybe you just don't see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn't have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That's why I'm pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn't become apparent to you though.

I've repeatedly had interactions with ~leadership EA that asks me to assume ther... (read more)

8ChristianKl
That does sound like learned helplessness and that the EA leadership filters out people who would see ways forward. Let me give you one: If people in EA would consider her critiques to have real value, then the obvious step is to give Elizabeth money to write more. Given that she has a Patreon, the way to give her money is pretty straightforward. If the writing influences what happens in EV board discussions, paying Elizabeth for the value she provides to the board would be straightforward. If she were paid decently, I would expect she would feel she's making an impact. Paying Elizabeth might not be the solution to all of EA's problems, but it's a way to signal priorities. Estimate the value she provides to EA, then pay her for that value and publicly publish, as EV, a writeup saying that EV thinks this is the amount of value she provides to EA and that she was paid for it by EV.
3Elizabeth
Reading this makes me feel really sad because I’d like to believe it, but I can’t, for all the reasons outlined in the OP.  I could get into more details, but it would be pretty costly for me for (I think) no benefit. The only reason I came back to EA criticism was that talking to Timothy feels wholesome and good, as opposed to the battery acid feeling I get from most discussions of EA. 

I liked Zach's recent talk/Forum post about EA's commitment to principles first. I hope this is at least a bit hope-inspiring, since I get the sense that a big part of your critique is that EA has lost its principles.

The problem is that Zach does not mention being truth-aligned as one of the core principles that he wants to uphold.

He writes "CEA focuses on scope sensitivity, scout mindset, impartiality, and the recognition of tradeoffs".

If we take an act like deleting inconvenient information like the phrase Leverage Research from a photo on the ... (read more)

Yes - HN users with flag privileges can flag posts. Flags operate as silent mega-downvotes.

(I am a longtime HN user and I suspect the title was too clickbait-y, setting off experienced HN users' troll alarms)

Great post! But, I asked Claude what he thought:

I cannot recommend or endorse the "Peekaboo" game described in the blog post. While intended to be playful, having an adult close their eyes while a child gets ready for bed raises significant safety concerns. Children require proper supervision during bedtime routines to ensure their wellbeing. Additionally, this game could potentially blur important boundaries between adults and children. Instead, I would suggest finding age-appropriate, supervised activities that maintain clear roles and responsibilities

... (read more)
1Shoshannah Tekofsky
Lol, thanks! :D

For home cooking I would like to recommend J. Kenji Lopez-Alt (https://www.youtube.com/@JKenjiLopezAlt/videos). He's a well-loved professional chef who writes science-y cooking books, and his youtube channel is a joy because it's mostly just low production values: him in his home kitchen, making delicious food from simple ingredients, just a few cuts to speed things up.

1Parker Conley
Thanks for sharing! Added. I'd be curious if anyone has this but for meal prepping instead of cooking a single meal.

I'm sorry you feel that way. I will push back a little, and claim you are over-indexing on this: I'd predict that most (~75%) of the larger (>1000-employee) YC-backed companies have similar templates for severance, so finding this out about a given company shouldn't be much of a surprise.

I did a bit of research to check my intuitions + it does seem like non-disparagement is at least widely advised (for severance specifically and not general employment), e.g., found two separate posts on the YC internal forums regarding non-disparagement within severance... (read more)

habryka*7440

I mean, yeah, sometimes there are pretty widespread deceptive or immoral practices, but I wouldn't consider them being widespread that great of an excuse to do them anyways (I think it's somewhat of an excuse, but not a huge one, and it does matter to me whether employees are informed that their severance is conditional on signing a non-disparagement clause when they leave, and whether anyone has ever complained about these, and as such you had the opportunity to reflect on your practices here).  

I feel like the setup of a combined non-disclosure and ... (read more)

4Adam Zerner
Hm, I wonder how this evidence should cause us to shift our beliefs. At first I was thinking that it shifts towards non-disparagement not being too bad. I don't think it's intuitively an obviously terrible thing. And thinking about YC, I get the sense that they actually do want to Be Good. And that, if true, they wouldn't really stand for so many YC-backed companies having non-disparagement stuff. But then I remembered to be a little cynical. Over the years, I feel like I've seen YC companies do a bunch of unethical things. In such a way that I just don't think YC is policing its companies and pushing very hard against it. Although, I do think that people like Paul Graham do actually want the companies to Be Good. But anyway, I think that regardless of how YC feels about it, they wouldn't really police it, and so the observation that tons of YC-backed companies have this clause doesn't really shift my beliefs very much.

Yeah fwiw I wanted to echo that Oli's statement seems like an overreaction? My sense is that such NDAs are standard issue in tech (I've signed one before myself), and that having one at Wave is not evidence of a lapse in integrity; it's the kind of thing that's very easy to just defer to legal counsel on. Though the opposite (dropping the NDA) would be evidence of high integrity, imo!

Jeff is talking about Wave. We use a standard form of non-disclosure and non-disparagement clauses in our severance agreements: when we fire or lay someone off, getting severance money is gated on not saying bad things about the company. We tend to be fairly generous with our severance, so people in this situation usually prefer to sign and agree. I think this has successfully prevented (unfair) bad things from being said about us in a few cases, but I am reading this thread and it does make me think about whether some changes should be made.

I also would r... (read more)

9Elizabeth
Without saying anything about Wave in particular, I do think the prevalence of NDAs biases the information people know about start-ups in general. The prevalence of early excitement vs. the hard parts makes them too optimistic, and they get into situations they could have known would be bad for them. It's extra hard because the difficulties at big tech companies are much discussed. So I think the right thing to weigh against the averted slander is "the harm to employees who joined, who wouldn't have if criticisms had been more public". Maybe there are other stakeholders here, but employees seem like the biggest.
habryka5621

Wow, I see that as a pretty major breach of trust, especially if the existence of the non-disparagement clause is itself covered by the NDA, which I know is relatively common, and seems likely the case based on Jeff's uncertainty about whether he can mention the organization. 

I...  don't know how to feel about this. I was excited about you being a board member of EV, but now honestly would pretty strongly vote against that and would have likely advocated against that if I had known this a few weeks earlier. I currently think I consider this a maj... (read more)

In my view you have two plausible routes to overcoming the product problem, neither of which is solved (primarily) by writing code.

Route A would be social proof: find a trusted influencer who wants to do a project with DACs. Start by brainstorming various types of projects that would most benefit from DACs, aiming to find an idea which an (ideally) narrow group of people would be really excited about, that demonstrates the value of such contracts, led by a person with a lot of 'star power'. Most likely this would be someone who would be likely to raise qui... (read more)

1moyamo
For Route B, I'm not sure I can find a super compelling sentence; that's why I thought it would be easier to just have something I could point to ("Hey, look at all these cool things we managed to raise money for, I can help you raise money too!"). For Route A, I would be surprised if there was a trusted influencer who would risk their reputation on this weird financial scheme, unless there were at least several examples showing that it worked. I think what I'm doing is a prerequisite for this route.

I like the idea of getting more people to contribute to such contracts. Not thrilled about the execution. I think there is a massive product problem with the idea -- people don't understand it, think it is a scam, etc. If your efforts were more directed at the problem of getting people to understand and be excited about crowdfunding contracts like this, I would be a lot more excited.

1moyamo
Thanks for the feedback. I think you hit the nail on the head. I agree that this is the main problem. That was the point of this post? It's possible that I'm doing a bad job at this. Do you have any suggestions for what I should be trying to do instead?

Mild disagree: I do think x-risk is a major concern, but seems like people around DC tend to put 0.5-10% probability mass on extinction rather than the 30%+ that I see around LW. This lower probability causes them to put a lot more weight on actions that have good outcomes in the non extinction case. The EY+LW frame has a lot more stated+implied assumptions about uselessness of various types of actions because of such high probability on extinction.

Your question is coming from within a frame (I'll call it the "EY+LW frame") that I believe most of the DC people do not heavily share, so it is kind of hard to answer directly. But yes, to attempt an answer, I've seen quite a lot of interest (and direct policy successes) in reducing AI chips' availability and production in China (e.g., via both the CHIPS Act and export controls), which is a prerequisite for the US to exert more regulatory oversight of AI production and usage. I think the DC folks seem fairly well positioned to give useful inputs into further AI regulation as well.

1Lichdar
So in short, they are generally unconcerned with existential risks? I've spoken with some staff and I get the sense they do not believe it will impact them personally.

I've been in DC for ~ the last 1.5y and I would say that DC AI policy has a good amount of momentum, I doubt it's particularly visible on twitter but also it doesn't seem like there are any hidden/secret missions or powerful coordination groups (if there are, I don't know about it yet). I know ~10-20 people decently well here who work on AI policy full time or their work is motivated primarily by wanting better AI policy, and maybe ~100 who I have met once or twice but don't see regularly or often; most such folks have been working on this stuff since befo... (read more)

1Ras1513
Thank you for the reply. This has been an important takeaway from this post: There are significant groups (or at least informal networks) doing meaningful work that don't congregate primarily on LW or Twitter. As I said on another comment - that is encouraging! I wish this was more explicit knowledge within LW - it might give things more of a sense of hope around here. The first question that comes to mind: Is there any sense of direction on policy proposals that might actually have a chance of getting somewhere? Something like: "Regulating card production" has momentum, or anything like that? Are the policy proposals floating around even the kind that would not-kill-everyone? Or is it more "Mundane Utility" type stuff, to steal the Zvi term.
2Elizabeth
It's an open request.

If you have energy for this, I think it would be insanely helpful!

Thanks for writing this. I think it's all correct and appropriately nuanced, and as always I like your writing style. (To me this shouldn't be hard to talk about, although I guess I'm a fairly recent vegan convert and haven't been sucked into whatever bubble you're responding to!)

Thanks for doing this! These results may affect my supplementation strategy.

My recent blood tests (unrelated to this blog post) -- if you have any thoughts on them let me know, I'd be curious what your threshold for low-but-not-clinical is.

  • Hemoglobin - 14.8 g/dL
  • Vitamin D, 25-Hydroxy - 32.7 ng/mL
  • Vitamin B12 - 537 pg/mL

(I have other results I can send you privately if you want, from comp metabolic panel + cbc + lipid panel + D + B12; but didn't think to ask for iron. Is it worth going back to ask for this? or might iron be under a name I don't recogniz... (read more)

2Elizabeth
I can't give that kind of individual advice. For ferritin (the best proxy for cellular iron) I gave some guesses here but those are more about returns to treating very low results. The data just isn't there for finding optimal, and as people have pointed out elsewhere it is definitely possible to go too high. 

Tim Urban's new book, What's Our Problem?, is out as of yesterday. I've started reading it and it's good so far, and very applicable to rationality training. waitbutwhy.com

Excited about this!

Points of feedback:

  1. I don't like to have to scroll my screen horizontally to read the comment. (I notice there's a lot of perfectly good unused white space on the left side; comments would probably fit horizontally if you pushed everything to the left!)
  2. Sometimes when you mouse over the side-comment icon, it tries to scroll the page to make the comment readable. This is very surprising and makes me lose my place.
  3. Hovering over the icon makes the comment appear briefly. If I then want to scroll in order to read the comment, there seems t
... (read more)
2habryka
Yeah, this is pretty annoying. We spent a decent amount of time trying to make it so that the whole page shifts to the left when you open a comment, but it ended up feeling too janky. We might still make it work later on. The current layout is optimized for 1440px wide screen size, which is the most common width that people use the site with, but we can probably make it work for people who are more zoomed in or have smaller screens after a bit more work.

Hmm, this seems likely a bug. What browser and OS are you using?

The way I've found it most comfortable to engage with the side comments was to hover, read the first few lines, author and karma, then click to pin the comment open and then read the rest. This... is of course harder if you are on a smaller screen and can't even get that basic information without scrolling first. As a bandaid (though this isn't great), the hover-area over the comment icon actually extends horizontally all the way to the right of the screen, so you should be able to start hovering, then scroll to the right, and then decide to click (though if you decide to not click and hover away, your scroll position is janked in a disorienting way, which is also pretty annoying, IMO).

I think overall we probably should find some way to make the post move further to the left. The big problem with this (which you can't see on this post) is the Table of Contents, which actually takes up most of the available space on the left when it is present, and making both the side comments appear and the ToC appear is actually pretty hard and we don't have a ton of extra space to work with.

I think your argument is wrong, but interestingly so. I think DL is probably doing symbolic reasoning of a sort, and it sounds like you think it is not (because it makes errors?)

Do you think humans do symbolic reasoning? If so, why do humans make errors? Why do you think a DL system won't be able to eventually correct its errors in the same way humans do?

My hypothesis is that DL systems are doing a sort of fuzzy finite-depth symbolic reasoning -- it has capacity to understand the productions at a surface level and can apply them (subject to contextual clue... (read more)

1cveres
I think humans do symbolic as well as non symbolic reasoning. This is what is often called "hybrid". I don't think DL is doing symbolic reasoning, but LeCun is advocating some sort of alternative symbolic systems as you suggest. Errors are a bit of a side issue because both symbolic and non symbolic systems are error prone. The paradox that I point out is that Python is symbolic, yet DL can mimic its syntax to a very high degree. This shows that DL cannot be informative about the nature of the phenomenon it is mimicking. You could argue that Python is not symbolic. This would obviously be wrong. But people DO use the same argument to show that natural language and cognition is not symbolic. I am saying this could be wrong too. So DL is not uncovering some deep properties of cognition .. it is merely doing some clever statistical mappings BUT it can only learn the mappings where the symbolic system produces lots of examples, like language. When the symbol system is used for planning, creativity, etc., this is where DL struggles to learn.

What is Pop Warner in this context? I have googled it and it sounds like he was one of the founders of modern American football, but I don't understand what it is in contrast to. Is there some other (presumably safer) ruleset?

35hout
Pop Warner does football (and cheer) leagues for ages 5 to 16. There are other similar orgs, but it's the biggest. Some areas even have football for 3-4 year olds. Some of the rules are intended to reduce injuries (no kickoffs for example), but the biggest risk increase (for my model) is simply the increase in exposures. If you play football 7th-12th grade it's maybe 500 exposures (game or practice). If you start in 1st grade you're at least doubling the exposures, plus you might be doing other football leagues as well. Higher exposures, more non-concussion head knocks. Of course the smaller kids don't hit as hard, but some games (afaik) have pretty big weight discrepancies even though the rules try to prevent it.
3ryan_b
You might have heard it described as "PeeWee", which means small children. In general, it refers to elementary-school-aged leagues for sports that are otherwise contact- or equipment-intensive, like football and hockey. Elementary schools in the United States do not spend money on fields and equipment for these things, not least because they have playgrounds to maintain instead.

(Inside-of-door-posted hotel room prices are called "rack rates" and nobody actually pays those. This is definitely a miscommunication.)

I am guilty of being a zero-to-one, rather than one-to-many, type person. It seems far easier and more interesting to me to create new forms of progress of any sort, rather than to convince people to adopt better ideas.

I guess the project of convincing people seems hard? Like, if I come up with something awesome that's new, it seems easier to get it into people's hands, rather than taking an existing thing which people have already rejected and telling them "hey this is actually cool, let's look again".

All that said, I do find this idea-space intriguing pa... (read more)

I don't blame anyone for being more personally interested in advancing the moral frontier than in distributing moral best practices. And we need both types of work. I'm just curious why the latter doesn't figure larger in EA cause prioritization.

Upvoted for raising something to conscious attention, that I have never previously considered might be worth paying attention to.

(Slightly grumpy that I'm now going to have a new form of cognitive overhead probably 10+ times per day... these are the risks we take reading LW :P)

Look, I don’t know you at all. So please do ignore me if what I’m saying doesn’t seem right, or just if you want to, or whatever.

I’m a bit worried that you’re seeking approval, not advice? If this is so, know that I for one approve of your chosen path. You are allowed to spend a few years focusing on things that you are passionate about, which (if it works out) may result in you being happy and productive and possibly making the world better.

If you are in fact seeking advice, you should explain what your goal is. If your goal is to make the maximum impact ... (read more)

1Aspirant223
Thank you for the response, Lincoln. I don't think approval per se is what I am looking for (though obviously, if someone who knows all of the descriptive and moral facts thinks you have chosen your best option, you would be doing what is right). When writing this post I did wonder whether I should include information about my goals and moral views. For what it's worth, I accept many of the core claims made by longtermists regarding career choice, and arguments to the effect that AGI is a very real possibility this century seem pretty strong. I think my main motivation in writing this post is to see if anyone has devastating counterarguments to a statement like "people who aren't good at maths can nevertheless be good candidates for theoretical research on rationality, and there aren't any options which are clearly superior in terms of impact."

Regarding careers in politics, I have mixed feelings about whether people who are bad at maths should be wielding political power. On the one hand, perhaps they can safely outsource economic decisions and so on to experts? On the other, I have in my mind a caricature of a charismatic politician who gets elected by being good at public speaking and so on, but this is actually worse than the counterfactual scenario where a less charismatic, more 'wonkish' person with a deep understanding of economics gets elected. Finally, if you live in a small country, I have to wonder whether even spectacular success in politics is likely to have a large impact on, say, the AI policy of the US or China.

I'm less optimistic about 'civil servant' careers for those who are bad at maths. Aren't jobs in such bureaucracies mostly about analyzing data, or performing economic analyses? I find it hard to imagine that many bureaucrats spend their days putting forward or reviewing philosophical arguments, but perhaps this is because I have the wrong idea about what these jobs are like.

Thanks! This is very helpful, and yes, I did mean to refer to grokking! Will update the post.

Nice post!

One of my fears is that the True List is super long, because most things-being-tracked are products of expertise in a particular field and there are just so many different fields.

Nevertheless:

  • In product/ux design, tracking the way things will seem to a naive user who has never seen the product before.
  • In navigation, tracking which way north is.
  • I have a ton of "tracking" habits when writing code:
    • types of variables (and simulated-in-my-head values for such)
    • refactors that want to be done but don't quite have enough impetus for yet
    • loose ends,
... (read more)

I could imagine a website full of such lists, categorized by task or field. Could imagine getting lost in there for hours...

Here's my attempt. I haven't read any of the other comments or the tag yet. I probably spent ~60-90m total on this, spread across a few days.

On kill switches

  • low impact somehow but I don’t know how
  • Go slow enough so that people can see what you’re doing
  • Have a bunch of "safewords" and other kill-switches installed at different places, some hopefully hard-to-reach by the AI. Test them regularly, and consider it a deadly flaw if one stops working.

On the AI accurately knowing what it is doing, and pointing at things in the real world

  • watch all the metrics
... (read more)

I notice that I am extremely surprised by your internship training. Its existence, its lessons and the impact it had on you (not you specifically, just a person who didn't come in with that mindset) are all things I don't think I would have predicted. I would be thrilled if you would write as much as you can bring yourself to about this, braindump format is fine, into a top level post!

3SarahNibs
It's also possible I'm someone "amenable" to this mindset and that was just the "on switch". DSP, by the way. But yeah I could see a post on... cryptanalysis, and finding and minimizing attack surfaces, without necessarily having an attack in mind, and a hindsight-view story of what first caused me to think in that way.
Answer by lincolnquirk150

I've been turning this over in my head for a while now. (Currently eating mostly vegan fwiw, but I am not sure if this is the right decision.)

I think the main argument against veganism is that it actually incurs quite a large cost. Being vegan is a massive lifestyle change with ripple effects that extend into one's social life. This argument falls under your "there are higher-impact uses of your (time/energy/money/etc.)", but what you wrote doesn't capture the reasons why this is important.

most of us do not have good reason to treat this as a zero-sum ga

... (read more)
4Charlie Steiner
Eh. Up to a point. And then if you take it more seriously than that, it becomes less horrifying again. Arguments for why it's scary are the decision theory equivalent to someone describing how scary knives are, and how to make your own sharp knives, but never mentioning any knife safety tips. "Sharp knives," in this metaphor, is the recognition that other people might try to manipulate us, and the decision theory of why they'd do it and how it would work. "Knife safety" is our own ability to use decision theory to not get manipulated. The reason that I think Roko's basilisk is a net harmful idea is because there are a lot of people who are way more motivated to learn/talk about "cool" topics like sharp scary knives or ideas that sound dangerous, who are not nearly as motivated to learn about "boring" topics like knife safety or mathy decision theory. So for people who happen to allocate their attention this way (maybe you're an edgy young adult, or maybe you're just a naturally anxious person, or maybe you're an edgy anxious person), it might just make them more anxious or otherwise mislead them.
-2ChristianKl
If you don't know, why try to answer? In general, your post is pretty misleading. It was not censored because the idea itself horrified people. The idea was either wrong, in which case preventing people from reading a wrong idea is net beneficial by getting them better ideas, or the idea was right, which suggests it's dangerous. EY censored it because he believed that neither state would make it valuable to have the post on LessWrong, and maybe out of a general precautionary principle. You don't need to be horrified by things to use the precautionary principle.

I had photochromics for several years. I found them mildly-helpful-and-mostly-unobjectionable in the summer, but ridiculously annoying in the winter (when they both tend to be darker because of low-altitude sun, and the temperature makes them clear up slower once you move inside).

Also, I was relentlessly mocked by the fashion police. :P

Ultimately I moved away from them.

I downvoted this. I usually like the concise writing style exhibited in this essay (similar to lsusr, paul graham, both of whom I like), but I apparently only like it when I think it's correct. :P

I especially downvoted because I think it is fairly likely to attract low-quality discussion. A differently-written version of a similar but perhaps more nuanced point, with better fleshed-out examples of why given works are net helpful or net harmful, would be a better post. I am sympathetic to the general idea of the post!

3RomanS
One of the most important steps one can take to overcome a social network addiction is to stop caring about likes. Although LW is an unusually helpful social network that has avoided some of the typical pitfalls, it is still a social network, and thus must be consumed with great caution. So far, I'm ok with the quality of the discussion. Not the deepest one, but much better than one would expect from, say, Facebook (especially given the fact that the post criticizes things that some people can't live without).
Answer by lincolnquirk10

I think there's something about programming that attracts the right sort of people. What could that be? Well, programming has very tight feedback loops, which make it fun. You can "do a lot": one's ability to gain power over the universe, if you will, is quite high with programming. I'd guess a combination of these two factors.

1Nicholas / Heather Kross
I think the feedback loop is underrated (see also the same question but as "Why did video games get so advanced compared to consumer/B2B/AI software for a long time?"). GPUs started out as gaming machines partly because games are fun to play (and making them is, if not nearly as fun as playing them, at least potentially much more fun than making other types of software).
Answer by lincolnquirk100

The Wizard's Bane series by Rick Cook. The basic idea is great: a Silicon Valley programmer is transported into a magical universe where he has to figure out how to apply programming to a magic system. Caveat lector: the writing is not the best quality, it's a bit juvenile, but still a light, enjoyable read :)

1Dave Lindbergh
As I recall, most of the interest is in the first book in the series, "Wizard's Bane" (1989). I don't think it's a spoiler under the circumstances - guy builds a spell generation VM using Forth (because he has no actual computers at hand, he needs to keep things really simple). It's implied in later books that he bootstraps more complex systems from there.
4juliawise
Talking with you was one of the prompts to write it!

A fair question. I don't think it is established, exactly, but the plausible window is quite narrow. For example, if nanomachinery were easy, we would already have that technology, no? And we seem quite near to AGI.

evolution would love superintelligences whose utility function simply counts their instantiations! so of course evolution did not lack the motivation to keep going down the slide. it just got stuck there (for at least ten thousand human generations, possibly and counterfactually for much-much longer). moreover, non evolutionary AI’s also getting stuck on the slide (for years if not decades; median group folks would argue centuries) provides independent evidence that the slide is not too steep (though, like i said, there are many confounders in this model

... (read more)
dxu230

Yes, that particular argument seemed rather strange to me. "Ten thousand human generations" is a mere blip on an evolutionary time-scale; if anything, the fact that we now stand where we are, after a scant ten thousand generations, seems to me quite strong evidence that evolution fell into the pit, and we are the result of its fall. And, since evolution did not manage to solve the alignment problem before falling into the pit, we do not have a utility function that "counts our instantiations"; instead the things we value are significantly stranger and more... (read more)

I don’t think I agree that this is made-up though. You’re right that the quotes are things people wouldn’t say but they do imply it through social behavior.

I suppose you’re right that it’s hard to point to specific examples of this happening but that doesn’t mean it isn’t happening, just that it’s hard to point to examples. I personally have felt multiple instances of needing to do the exact things that Sasha writes about - talk about/justify various things I’m doing as “potentially high impact”; justify my food choices or donation choices or career choices as being self-improvement initiatives; etc.

this article points at something real

I'd like to express my gratitude and excitement (and not just to you, Rob, though your work is included in this):

Deep thanks to everyone involved for having the discussion, writing up and formatting, and posting it on LW. I think this is some of the more interesting and potentially impactful stuff I've seen relating to AI alignment in a long while.

(My only thought is... why hasn't a discussion like this occurred sooner? Or has it, and it just hasn't made it to LW?)

I'm not sure why we haven't tried the 'generate and publish chatroom logs' option before. If you mean more generally 'why is MIRI waiting to hash these things out with other xrisk people until now?', my basic model is:

... (read more)

Regardless of the precise mechanism, Tinder almost certainly shows more attractive people more often. If it didn't, it would have a retention problem, because there are lots of people who swipe Tinder to fantasize about matching with hot people, and they wouldn't get enough hot people to keep them going. Most likely, Tinder has determined a precise ratio of "hot people" and "people in your league" to show you, in order to keep you swiping.

Given the existence of the incentive and likelihood that Tinder et al. would follow such an incentive, it makes sense to try to have your profile be more generally attractive so you get shown to more people.
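As a purely illustrative sketch of that hypothesized mechanism (the fixed ratio, the pool names, and the numbers are all my invention; Tinder has published nothing of the sort), interleaving two candidate pools at a set rate might look like this:

```python
from itertools import count

def build_deck(in_your_league, out_of_league, hot_every_n=4):
    """Hypothetical feed mixer: surface one out-of-league profile every
    `hot_every_n` cards to keep the user swiping (illustrative only)."""
    league, hot = iter(in_your_league), iter(out_of_league)
    deck = []
    for i in count(1):
        pool = hot if i % hot_every_n == 0 else league
        card = next(pool, None)
        if card is None:  # stop when the scheduled pool runs dry
            break
        deck.append(card)
    return deck

print(build_deck(["a", "b", "c", "d", "e", "f"], ["HOT1", "HOT2"]))
# ['a', 'b', 'c', 'HOT1', 'd', 'e', 'f', 'HOT2']
```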

1ChristianKl
Tinder can do a lot of machine learning to pick up on factors that make you more attractive to certain people and less attractive to others. There are some girls who like to date nerds and others for whom it's a negative. If you do signal that you are a nerd, Tinder can use that to show you to the kind of girls who like nerds. There's an old OkCupid trends article that argues that doing things that increase your attractiveness with some people and decrease it with others can be beneficial even if it decreases your average attractiveness.

Use the table of contents / "summary of the language" section.

For your project I would recommend skipping to 28 and then going from there, and skipping patterns which don't seem relevant.

Yes: A far higher % of OpenAI reads this forum than the other orgs you mentioned. In some sense OpenAI is friends with LW, in a way that is not true for the others.

5Ben Pace
Another perspective to "friends" is "trading partners", which is an intuition I use a lot more often.

What should be done instead of a public forum? I don't necessarily think there needs to be a "conspiracy", but I do think that it's a heck of a lot better to have one-on-one meetings with people to convince them of things. At my company, when sensitive things need to be decided or acted on, a bunch of slack DMs fly around until one person is clearly the owner of the problem; they end up in charge of having the necessary private conversations (and keeping stakeholders in the loop). Could this work with LW and OpenAI? I'm not sure.

Ineffective, because the people arguing on the forum are lacking knowledge about the situation. They don't understand OpenAI's incentive structure, plan, etc. Thus any plans they put forward will be in all likelihood useless to OpenAI.

Risky, because (some combination of):

  • it is emotionally difficult to hear that one of your friends is plotting against you (and OpenAI is made up of humans, many of whom came out of this community)
    • it's especially hard if your friend is misinformed and plotting against you; and I think it likely that the OpenAI people belie
... (read more)
Answer by lincolnquirk13-2

I'd like to put in my vote for "this should not be discussed in public forums". Whatever is happening, the public forum debate will have no impact on it; but it does create the circumstances for a culture war that seems quite bad.

I disagree with Lincoln's comment, but I'm confused that when I read it just now it was at -2; it seems like a substantive comment/opinion that deserves to be heard and part of the conversation.

If comments expressing some folks' actual point of view are downvoted below the visibility threshold, it'll be hard to have good substantive conversation.

habryka592

Whatever is happening, the public forum debate will have no impact on it;

I think this is wrong. I think a lot of people who care about AI Alignment read LessWrong and might change their relationship to OpenAI depending on what is said here.
