All of DusanDNesic's Comments + Replies

Oh, I didn't notice, but yeah, just a link to it, not the whole text!

2Ruby
Was there the text of the post in the email or just a link to it?

Quick thoughts, not fully fledged, sorry.

Maybe it depends on the precise way you see the human take-over, but some benefits of Stalin over Clippy include:

Humans have to sleep, have biological functions, and need to be validated and loved, etc., which is useful for everyone else.

Humans also have limited lifespans, and their progeny has a decent random chance of wanting things to go well for everyone.

Humans are mortal and possess one body which can be harmed if need be, making them more likely to cooperate with other humans.

I mean, humans with strong AGIs under their control might function as if they don't need sleep, might become immortal, will probably build up superhuman protections from assassination, etc.

A crux I have on the point about disincentivising developers from developing parts of their own land - how common is this? In my own country, the answer is - not at all, almost all development comes from the government building infrastructure, schools, etc. and developers buy land near where they know the government will build a metro line or whatever to leech off the benefits. Is the situation in the US that developers often buy big plots of cheap land and develop them with roads, hospitals, schools, to benefit from the rise in value of all the other land?

I think this view is quite US-centric, as in fact most countries in the world do not include mineral rights with land ownership (and yet minerals are explored everywhere, not just the US, meaning imo that the profit motive is alive and well even when you need to buy licences on top of the land; it's just priced in differently). From Claude:

In a relatively small number of countries, private landowners own mineral rights (including oil) under their property. The United States is the most notable example, where private mineral rights are common through the concept of

... (read more)

Excellent article, and helpful for introducing vocabulary that lets me think through things I was trying to understand. Perhaps it should be cross-posted to the EA Forum?

Future wars are about to look very silly.

I'm very sad I cannot attend at that time, but I am hyped about this and believe it to be valuable, so I am writing this endorsement as a signal to others. I've also recommended this to some of my friends, but alas a UK visa is hard to get on such short notice. When you run it in Serbia, we'll have more folks from the eastern bloc represented ;)

1Sahil
Thank you, Dusan! Next time there will be more notice, and also a more refined workshop!

I think an important thing here is:

A random person gets selected for office. Maybe they need to move to the capital city, but their friends are still "back home." Once they serve their term, they will most likely want to come back to their community. So lobbying needs to be able to pay to get you out of your community, break all your bonds and all that, during your short stint in power. Currently, politicians slowly come to power, and their social clique is used to being lobbied, getting rich, and selling out ideals.

This would cut down on corruption a lot ... (read more)

Apologies, typo in the original - I do think it's not charity to not increase publicity; the post was missing a "not". Your response still clarified your position, but I do disagree - common courtesy is not the same as charity, and expecting it is not unreasonable. I feel like not publishing our private conversation (whether you're a journalist or not) falls under common courtesy or normal behaviour rather than "charity". Standing more than one centimeter away from you when talking is not charity just because it's technically legal - it's a normal and polit... (read more)

1gb
I feel like this falls into the fallacy of overgeneralization. "Normal" according to whom? Not journalists, apparently. It's (almost by definition) not unreasonable to expect common courtesy, it's just that people's definitions of what common courtesy even is vary widely. Journalists evidently don't think they're denying you common courtesy when they behave the way most journalists behave. This is an interesting pushback, but I feel the same reply works here: failing to respect someone's personal space is not inherently wrong, but it will be circumstantially wrong most of the time because it tends to do much more harm (i.e. annoy people) than good.

I feel like if someone internalized "treat every conversation with people I don't know as if they may post it super publicly - and all of this is fair game", we would lose a lot of commons, and your quality of life and discourse would go down. I don't think it's "charity" to [EDIT: not] increase the level of publicity of a conversation, whether digital or in person. I think drawing a parallel with in-person conversation is especially enlightening - imagine we were having a conversation in a room with CCTV (you're aware it's recorded, but believe it to be private). Me taking that recording and playing it on local news is not just "uncharitable" - it's wrong in a way which degrades trust.

1gb
Neither do I: as I said, I actually think it's charity NOT to increase the level of publicity. And people are indeed charitable most of the time. I just think that, if you live your life expecting charity at every instance, you're in for a lot of disappointment, because even though most people are charitable most of the time, there's still going to be a lot of instances in which they won't be charitable. The OP seems to be taking charity for granted, and then complaining about a couple of instances in which it didn't happen. I think it's better to do the opposite: not to expect charity, and then be grateful when it does happen. I don't think it's inherently wrong. It may still be (and in most cases will be) circumstantially wrong, in the sense that it does much more damage to others (including, as you mention, by collaborating to degrade public trust) than it does good to anyone (yourself included).

Amazing recommendation which I very much enjoyed, thanks for sharing!

Amazing write-up, thank you for the transparency and thorough work of documenting your impact.

Answer by DusanDNesic

[Epistemic status: somewhat informed speculation] TLDR: I do not believe China was a major threat source, and a recession makes it slightly less likely they will become one. Conventional wars are more likely to happen, and their effect on AI development is uncertain.


I generally do not think China is as big of a threat in the AGI race as some others (notably Aschenbrenner) think. I think for AGI to be first developed in China, several factors need to be true: China has more centralized compute available than other countries, open models are near the frontier but n... (read more)

I agree with the spirit of what you are saying, but I want to register a desire for "long timelines" to mean ">50 years" or "after 2100". In public discourse, hearing Yann LeCun say something like "I have long timelines, by which I mean, no crazy event in the next 5 years" - it's simply not what people think when they think long timelines, outside of the AI sphere.

Hi! Thanks for the kind words and for sharing your thought process so clearly! I am also quite happy to see discussions on PIBBSS' mission and place in the alignment ecosystem, as we have been rethinking PIBBSS' outbound comms since the introduction of the board and executive team.

Regarding the application selection process:

Currently (scroll down to see stages 1-4), it comes down to having a group of people who understand PIBBSS (in addition to the Board, this would be alumni, mentors, and people who have worked with PIBBSS extensively before) looking ... (read more)

A "Short-term Honesty Sacrifice", "Hypocrisy Gambit", something like that?

1Mateusz Bagiński
It's better but still not quite. When you play on two levels, sometimes the best strategy involves a pair of (level 1 and 2) substrategies that are seemingly opposites of each other. I don't think there's anything hypocritical about that. Similarly, hedging is not hypocrisy.

There's also something like "just the right amount of friction" which enables true love to happen without being sabotaged by existing factors. There are things which cause relationship-breaking kind of issues, such as permanent long distance, disagreement on how many kids to have and when and how to raise them, how to earn and spend money, religion and morals, work/life balance stuff, and physical attraction. Then there's the fun kind of friction where you can grow from each other or enjoy your differences - things would be bland without these. There's als... (read more)

Thank you for the great write-up. It's the kind of thing I believe and act upon but said in a much clearer way than I could, and that to me has enormous value. I especially appreciate the nuance in the downsides of the view, not too strong nor too weak in my view. And I also love the point of "yeah, maybe it doesn't work for perfect agents with infinite compute in a vacuum, but maybe that's not what'll happen, and it works great for regular bounded agents such as myself n=1 and that's maybe enough?" Anyhow, thank you for writing up what feels like an important piece of wisdom.

2JenniferRM
You're welcome! I'm glad you found it useful.

I had no idea, thanks for sharing! My mother-in-law was a GP in a public hospital in Kamchatka, and she's super against homeopathy, so I assumed things there are like things here in Serbia (some private "doctors" deal with homeopathy but no one else). Your comment does explain a thing which I didn't understand, which is why in Russia I saw so much homeopathy sold in packaging very similar to regular medicine.

To answer things which Raymond did not: it is hard for me to say who has the agenda which you think has good chances of solving alignment. I'd encourage you to reach out to people who pass your bar, perhaps more frequently than you do, and establish those connections. Your limits on no audio or video do make it hard to participate in something like the PIBBSS Fellowship, but it is perhaps worth taking a shot at it or others. See if people whose ideas you like are mentoring in some programs - getting to work with them in structured ways may be easier than otherwise.

Love it! As a DM and parent (albeit of a 1-year-old), reading this really made me smile and think through all the things I have in the house that I can design games around :) Thank you for the write-up!

1Shoshannah Tekofsky
Aw glad to hear it! That brought a smile to my face! :D

This sounds a bit like davidad's agenda in ARIA, except you also limit the AI to only writing provable mathematical solutions to mathematical questions to begin with. In general, I would say that you need possibly better feedback loops than that, possibly by writing more on LW, or consulting with more people, or joining a fellowship or other programs.

1[anonymous]
[deleted]

To add to the anecdata, I've heard it advised (like Raemon below) and started using it occasionally. It has been good for me, although not transformative - possibly I come from a different baseline of how important the change is; I don't apologise constantly, but as I've learned, it used to be more than I should.

Hmm, but that has a trade-off with not showing up as suspect on an X-ray. So maybe a mix of approaches makes it quite expensive to smuggle drugs and thus limits supply/raises the price/drops consumption.

If all that is lost could be defined, it would, by definition, not be lost once the definition is expanded that much.

There is this video: https://youtu.be/OfgVQKy0lIQ on why Asian parents don't say "I love you" to their kids, and it analyzes how the same word in different languages has different meanings. I would also add - to different people as well. So whatever you classify is always missing something in the gaps. It's the issue of legibilizing (in Seeing Like a State terms) - in trying to define it, you restrict it to only those things.

A lot of the meaning ... (read more)

1SpectrumDT
As I see it, the video is compatible with my claim. Aini argues that "I love you" is a useful emotional signal in many situations, which I agree with in my OP.  Aini also argues around 19-21 minutes in for clearer communication. Her example is that saying "I love you" is in some situations clearer communication than giving someone a platter of fruit. I agree, and I further argue that there are situations where it is better to be even clearer than that.
1SpectrumDT
Thanks. I will get around to watching that video later. These examples do not seem to support your conclusion. If I can already laugh at a joke, then analyzing the humour and its neuro-psychology does not diminish that. I can still laugh at the next joke just as well as I could before. Nothing is lost. I can avoid the term love as much as possible, and I can still experience all the feelings of companionship, compassion, and attraction as before. The muddled thinking did not create those experiences, and cleaning up the muddled thinking does not ruin the experiences.  (I do suffer from a kind of anhedonia, but I had that long before I started to dissect concepts such as love.) Am I missing something in your argument? I do not understand what I am supposed to do with this. I apologize for my harsh tone in the following, but to be honest, to me this comes off more like a humblebrag than an attempt to explain or advise. Maybe you are unusually talented at intimate communication and/or were lucky to find a partner who is unusually talented at intimate communication. Or maybe you did some specific non-book-based self-improvement work to learn this - in which case, why not say something about that?  This comes off as if I entered a discussion about poverty and said: "I speak from perspective of someone with a stable career and 0 financial troubles, none of which came from attempts to overcome glass ceilings or discrimination or other systemic issues."

I'm not sure - in dissecting the frog, something is lost while knowledge is gained. If you do not see how analysis of things can sometimes (not always!) diminish them, then that may be the crux. I agree with Wbrom above - some things in human experience are irreducible, and sometimes trying to get to a more atomic level means that you lose a lot in the process, in the gaps between the categories.

1SpectrumDT
Could I please get you to elaborate on what you think gets lost when I replace love with more well-defined terms? I can think of one thing. It is a kind of emotional attachment to an idea due for cultural/memetic reasons. People are brought up to think that love is something super-important and valuable, even if they do not understand what it is. In this way, the term love can have a strong emotional effect. It communicates less actual meaning than a more specific term, but it communicates more emotion. I think I covered it in my OP when I conceded that the term can be useful when I am trying to convey emotion rather than any detailed information: Is that what you have in mind?

This sounds like a case of the Rule of Equal and Opposite Advice: https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/ I'm sure for some people more honesty would be harmful, but it does sound like the caveats here make it clear when not to use it. I agree more with the questions Tsvi raises in the other thread than with "this is awful advice". I can imagine that you are a person for whom more honesty is bad, although if you followed the caveats above it would be imo quite rare to do it wrong. I think the authors do a good job of outlining many cases where it goes wrong.

Is a lot of the effect not "people who read ACX trust Scott Alexander"? Like, the survey selects for the most "passionate" readers, those willing to donate their free time to Scott for research with ~nothing in return. Him publicly stating on his platform "I am now much less certain of X" is likely to make that group of people less certain of X?

Great post Anna, thanks for writing - it makes for good thinking.

It reminds me of The Use and Abuse of Witchdoctors for Life by Sam[]zdat, in the Uruk Series (which I highly recommend). To summarize, our modern way of thinking denies us the benefits of being able to rally around ideas that would get us to better equilibria. By looking at the priest calling for spending time in devoted prayer with other community members and asking, "What for?" we end up losing the benefits of community, quiet time, and meditation. While we are closer to truth (in territory... (read more)

I assume Jan 1st 2025 is the natural day for a sequel :D

Finding reliable sources is 99% of the battle, and I have yet to find one which would for sure pass the "too good to check" test: https://www.astralcodexten.com/p/too-good-to-check-a-play-in-three

Some people on this website get that for some topics, acoup blog does that for history, etc, but it's really rare, and mostly you end up with "listen to Radio Liberty and Pravda and figure out the truth if you can."

On the style side, I agree with other commenters that you have selected something where even after all the reading I am severely not convinced your ... (read more)

-2Lyrongolem
Completely fair. Maybe I should share a few then? I find Money & Macro (an economics youtuber with a PhD in the field) to be a highly reliable source capable of informed and nuanced reporting. Here is, for instance, his take on the Argentine dollarization plan, which I found much more comprehensive than most media sources: Argentina's Radical Plan to End Inflation, Explained - YouTube. In terms of Ukraine reporting, I rely pretty heavily on Perun, who likewise provides very informative takes with a high emphasis on research and prevalent defense theories: All Bling, no Basics - Why Ukraine has embarrassed the Russian Military (youtube.com). See here, for instance, for his initial reaction to the invasion, and predictions of many of the war's original dynamics (acute manpower shortages on the part of Russia, effects of graft and corruption, a close match of capabilities and a tendency to devolve towards a longer war). I consider these sources highly reliable, based on their ability to make concrete, verifiable predictions, steer clear of political biases, and provide coherent worldview models. Would you like to check them out and provide your thoughts? Maybe a good idea. It depends on whether I can muster the energy for a separate edit, and if I can find a good relevant example. Do you have any suggestions in that regard? I know that unless I stumble across something very good I'm unlikely to make an edit.

On phone, don't know how to format block quotes but: My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with "My counterarguments were bullshit, did you catch it?".

This was exactly what I did, such a missed opportunity!!

I also agree with other things you said, and to contribute a useful phrase, your response to BS: "is to notice when I don't know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In... (read more)

2jimmy
The difference between what I strive for (and would advocate) and "epistemic learned helplessness" is that it's not helpless. I do trust myself to figure out the answers to these kinds of things when I need to -- or at least, to be able to come to a perspective that is worth contending with. The solution I'm pointing at is simply humility. If you pretend that you know things you don't know, you're setting yourself up for failure. If you don't wanna say "I dunno, maybe" and can't say "Definitely not, and here's why" (or "That's irrelevant and here's why" or "Probably not, and here's why I suspect this despite not having dived into the details"), then you were committing arrogance by getting into a "debate" in the first place. Easier said than done, of course.
-4Lyrongolem
Very nice! Now... here's the catch. Some of my arguments relied on dark arts techniques. Others very much don't. I can support a generally valid claim with an invalid or weak argument. I can do the same with an obviously invalid claim. Can you tell me what specifically I did? No status points for partially correct answers! Now, regarding learned helplessness. Yes, it's similar, though I'd put in an important caveat. I consider discerning reliable sources and trusting them to be a rational decision, so I wouldn't go as far as calling the whole ordeal of finding what is true a lost cause. But then in general I'm taking a similar position as Scott.  edit: oops, my bad, this was meant to be a response to above, I saw this pop up in the message feed without context

Thank you for this - this is not a book I would generally pick up in my limited reading time, but this has clarified a lot of terms and thinking around probabilities!

My experience is much like this (for context, I've spoken about AIS to the general public, online but mostly offline, to audiences from students to politicians). The more poetic, but also effective and intuitive, way to call this out (while sacrificing some accuracy, but I think not too much) is: "we GROW AI". It puts it in a category with genetic engineering and pharmaceuticals fairly neatly, and shows the difference between PowerPoint and ChatGPT in how they are made and why we don't know how it works. It is also more intuitive compared to "black box", which is a more technical term and not widely known.

Hello Gabriel! We plan to run this group ~3 times a year, so you should be able to apply for next round, around January/February, which would start in Feb/March. (not confirmed, just estimates).

Other comments did a great job of thoughtfully critiquing the content, but I must say that I also highly enjoyed the style, along with the light touch of Russian character in the writing.

Thanks Daniel! Most talks should be available soon (except the ones we do not have permission to post)

Even for humans - are my nails me? Once clipped, are they me? Is my phone me? I feel like my phone is more me than my hair, for example. Is my child me, are my memes me, is my country me, etc etc... There are many reasons why agent boundaries are problematic, and that problem continues in AI Safety research.

I agree, but AIS jobs are usually fairly remote-friendly (unlike many corporate jobs) and the culture is better than in most universities that I've worked with, so they have many non-wage perks. The question is, can people in cheap cost-of-living places find such highly paid work? In Eastern Europe, usually no - there are other people willing/able to work for less, so all wages are low; cost of living correlates with wages in that sense too. So giving generous salaries to experts who are in/are willing to relocate to lower cost-of-living places is cost-effective, i... (read more)

Perhaps not all of them are in the Bay Area/London? 150k per year can buy you three top professors from Eastern European universities to work for you full time, and be happy about it. Sure, other jobs pay more, but when unconstrained from living in an expensive city, these grants actually go quite far. (We're toying with ideas of opening research hubs outside of the most expensive hubs in the world, exactly for that reason.)

3porby
It can indeed go far in lower cost of living areas- if the average salary is brought down by a bunch of willing and highly effective cheap talent, that would be perfectly fine and good. (And I endorse hubs in cheaper areas! I might have moved to SF, if not for it being... SF.) I do still worry about practical competitiveness in this case, though. For reference, housing in the DFW area is 3-5x cheaper than the bay area, so 150k/year buys you quite a bit of luxury... but you can find work for even more than that. A lot more, depending on specialty and experience. If we model researchers as simple economic agents, offers need to compete with other offers, not just the cost of living. Those top professors might have reasons to not take higher paying (in terms of real pay vs. cost of living) industry jobs. Maybe they don't want to move internationally, maybe they've got family, maybe they like the autonomy their current position has, maybe they believe in the cause sufficiently that they view the pay cut as a form of charity, and so on. In terms of funding strategy, though, I wouldn't want to rely on people accepting dramatically lower rates than they can demand.
7Thomas Larsen
Fwiw I'm pretty confident that if a top professor wanted funding at 50k/year to do AI Safety stuff they would get immediately funded, and that the bottleneck is that people in this reference class aren't applying to do this.  There's also relevant mentorship/management bottlenecks in this, so funding them to do their own research is generally a lot less overall costly than if it also required oversight.  (written quickly, sorry if unclear)

For those interested, PIBBSS is happening again in 2023, see more details here in LessWrong format, or on our website, if you want to apply.

Hello Ishan! This is lovely work, thank you for doing it!

Quick question - we (EA Serbia) are translating AGISF (2023) into Serbian (and making it readable to speakers of many related languages). Do I have your permission to translate your summary, to be used as key notes for the facilitators in the region, or for students after completing the course? We would obviously give credit to you and would link to this post as the original. We would not need to start now (possibly mid-February or so), and we would wait for the 2023 version to be up to date with th... (read more)

5markov
Hey Dusan! Yes, of course, you have permission to translate these summaries. It's awesome that you are doing that! Thanks for your suggestion. Yeah, this comment serves as blanket permission for anyone who wants to translate to freely do so.

I have not read it, but it seems useful to come with that knowledge! :)

Thanks, the topic arose from the discussion we had last time on biorisks. If you have topics you want to explore, bring them to the meeting to suggest for January!