All of arisAlexis's Comments + Replies

No. 2 is much more important than academic ML researchers, who make up the majority of those surveyed. When someone delivers a product, is the only one building it, and tells you X, you should believe X unless there is a super strong argument to the contrary, and there just isn't one.

I am kind of baffled about why this effort would end up at 0 votes or even be downvoted. Is the message, "the world needs to know, or we should discuss it", wrong, or is it the styling of the document?

Good point. I do not personally think that knowing there is a possibility you will die, without being able to do anything to reverse course, adds any value. Unless you mean a worldwide social revolt against all nations to stop the AI labs?

But how do we get this message across? It reinforces the point of my article that not enough is being done, and only in obscure LW forums.

But you are comparing the epochs before and after the Turing test was passed. Isn't that relevant? The Turing test was/is unanimously considered an inflection point, and arguably most experts think we already passed it in 2023.

Why would something that you yourself admit is temporary ("for now") matter on an exponential curve? It's like saying it's OK to go out for drinks on March 5, 2020. OK, sure, but 10 days later it wasn't. An argument must hold for a very long period of time, or it's better left unsaid. And that is the best argument for why we should be cautious: a) we don't know for sure, and b) things change extremely fast.

1kwiat.dev
Because the same argument could have been made earlier on the "exponential curve". I don't think we should have paused AI (or, more broadly, CS) in the '50s, and I don't think we should do it now.

Isn't German very easy for Dutch people to learn? Most I have met became quite fluent very fast due to the similarities.

1Lucg
Yeah, I wish. I'm even from the region that has a lot of German influence in its dialect (as if Dutch itself wasn't German-like enough), but my innate comprehension is at the level where, in high school, I once looked through the list of words to study before a multiple-choice test, thought "yeah, this looks doable, I'd know most of these and thus pass the test", and then went on to score worse than random on the actual test, as the only student to do so. (If only that teacher could see me living in Germany now; we'd have a good laugh.) And that's not even considering that the language has fifty words for "the", and likewise for adjective endings, which will take forever to become automatic in my head.

I didn't have much trouble with French in school (not that I remember any of it by now), and evidently English is no problem either. It's just German. I'm listening to German podcasts, posting in German subreddits (using tools like DeepL Write or, before that existed, just DeepL to translate my text back into English and see if it comes out right), speaking to the neighbors in German, reading German books, and recently started chatting with some coworkers in German: the whole shebang. It has been five years and my German is functional now, but only barely...

Ah, look on the bright side: I got the opportunity to work here in English so far and can allow myself this time to get that foundation going. And at least I seem to have Zusammenschreibung down to about native level, because it works exactly the same as in Dutch. (I think this is an objectively useful feature, one that English kinda has, but it takes them decades to decide that "life saver" or "web site" are really just one word and needn't use a space that also, ambiguously, separates things that are not part of the noun. So German has that going for it!)

Generalizing, though, I cannot imagine there exist many languages easier than German for a Dutch native (Afrikaans, maybe English because it's so simple and Germanic-ish, and perhaps something like Swedish? So th

I do agree, and I admire you, because you learned very different languages, which is more difficult. I stayed within the Germanic-Latin tree of languages.

1Jonathan Sheehy
<3

Exaggerated, of course; I'm not sure how to write the pronunciation. Words like graag (gladly) could be written in English as "hraah", for example. Or krachtig (strong) could be "krahtih". Lots of repetition of these sounds.

1Lucg
Aha! Interesting that you see the Dutch g sound as an English h; to me, the English h is... I can see some resemblance, but at the same time it seems as different as the vowel in "seems" and the one in "says". (Then again, perhaps someone from a language without an a-as-in-"says" sound would approximate it with e-as-in-"seems".) I indeed won't say Dutch is a great language to the ears; it's neutral to me. It's the language I'm verbally best in due to nativeness, but I'd just as soon it got replaced with something more widely spoken! Much more practical for everyone. Thanks for writing this guide, by the way; I expect these practical tips are going to help me with German :)

Sure thing, you can contact me. In Greek we use several Spanish words in their original form, and I was wondering: is that because we had many shipping ties with Spain, and was it the marineros who brought them? Words like timón, barca, galleta, etc.

I actually find my Latin American friends easier to understand; they do use some unfamiliar words, but they speak much slower, especially compared to the Andalusians.

Very interesting. I suspect the effect would be much greater for Mediterranean languages, because Esperanto itself is Latin-based.

Yes, I didn't even know the difference :) I thought "tap" was only for pub beer! Totally disconnected from the exams, where you only dealt with essays.

I think the fact that we have driven species extinct is a binary outcome that supports my argument. Why would it be about the count of how many? The fact alone says that we can be exterminated.

3TAG
The issue is what is likeliest, not what is possible.

I am really cautious about saying that there are only two things on this list I am not doing, and I got a weird feeling when I ticked most of the boxes. Has anybody else had this feeling? (OK, I don't use pens and I don't have a consistent mentor.)

Can you explain why sub-existential is the most likely, given that humans have driven thousands of animal species extinct? Not semi-extinct; we made them 100% extinct.

3TAG
That argument doesn't work well in its own terms: we have extinguished far fewer species than we have not.
9TAG
It's not two things, risk versus safety; it's three things: existential risk versus sub-existential risk versus no risk. Sub-existential risk is the most likely on priors.

Great article. I hope you realize your startup research/idea. One comment: I think the salaries derail the whole budget plan. As far as I know from the startup world I have been involved in, founders make big sacrifices to get their thing going, in return for a big equity stake in the startup they believe will someday become a unicorn.

How about texting vs. calling? Pros/cons? I frequently text people from my past, but I find calling a bit more awkward/invasive.

We just "hope" that we will get first something that is dangerous but cannot outpower everyone, just trick some and then the rest will stop it. In your scenario, we are screwed yes. That's what this forum is about isn't it ;)

Regardless of content, I would say that I, along with (I suspect) the majority of people, have a natural aversion to titles starting with "No." It is confrontational and shows that the author has a strong conviction about something that is clearly not binary, and wants to shove the negative word in your face right from the start. I would urge everyone to refrain from titles like that.

2Igor Ivanov
Thanks. I got a bit clickbaity in the title.

Has anyone seen MI7? I guess Tom is not the most popular guy in this forum, but the storyline of a rogue AI as presented there (within the limits of a Mission: Impossible blockbuster) sounds not only plausible but also like a great story for bringing awareness of the dangers to crowds. It depicts the inability of governments to stop the AI (although obviously it will be stopped in the upcoming movie), their eagerness to control it in order to rule over the world while the AI just wants to bring chaos (or does it have an ultimate goal?), and also how some humans will align with and obey it, even if that leads them to their own doom too. Thoughts?

Can you explain your calculations? Isn't cryo around 50k right now?

2Going Durden
It's in the ballpark of 50k. I support a family of 4 on 10k a year, round-ish. I can save about 1k-2k a year, if we live on a very, very tight budget. It would thus take me a century to pay for cryonics just for my immediate family, if the prices do not fall quickly enough.
6mruwnik
Which is a bit over 3 years of saving up every penny of the average wage where I live. If you subtract the average rent and starvation rations from that income, you're up to 5.5 years. The first info I could find on Google (from 2018) claims the average person here saves around $100 monthly, which gives you over 40 years of saving. And this is only for one person; if you have multiple children, an SO, etc., that starts ballooning quickly. This is in a country which, while not yet classified as developed, is almost there (Poland). 50k is a lot for pretty much most of the world. It's the cost of a not-very-nice flat (i.e. middling location, or bad condition) here.
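Spelling out the arithmetic behind the two estimates above, using only the figures the commenters give (a rough check, not their exact words):

* ~50k per person × 4 family members ≈ 200k total; at 1k-2k saved per year, that is 100-200 years, hence "a century".
* $100 saved per month = $1,200 per year; 50,000 / 1,200 ≈ 42 years, hence "over 40 years of saving" for a single person.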

A psychologist trained in feelings of remorse and loss could help you. I know you know, but sometimes it's good to be told.

Still, you can try to persuade them, so that you do not feel remorse afterwards.

Are you taking any steps to preserve your mother's data? Can you explain how?

1Ilio
I don't, for complicated ethical reasons, but if I did, my present choices would be:

* a frozen DNA sample (or the corresponding genomic sequence)
* a detailed biography, from interviewing her and anyone still available who can testify to how she was perceived at any point in her development
* any available data from participating in various fMRI, EEG, MEG, NIRS & behavioral experiments in academia (it doesn't matter that it's academia, but in practice it's the only place where you can get paid to participate instead of paying 500/hour; just make sure they agree to share their data with the interested participants)

In the long run a detailed ECoG is probably the way to go, but I agree with Musk that it will likely take a surgeon robot to decrease the costs and risks.

"I suspect that AGI is decades away at minimum". But can you talk more about this? I mean if I say something against the general scientific consensus which is a bit blurry right now but certainly most of the signatories of the latest statements do not think it's that far away, I would need to think myself to be at least at the level of Bengio, Hinton or at least Andrew Ng. How can someone that is not remotely as accomplished as all the labs producing the AI we talk about can speculate contrary to their consensus? I am really curious. 

Another example w...

After Hinton's and Bengio's articles, which I consider a moment in history, I struggle to understand how most people in tech dismiss them. If Einstein had written an article about the dangers of nuclear weapons in 1939, you wouldn't have had people without a physics background saying "nah, I don't understand how such a powerful explosion could happen". Hacker News is supposed to be *the* place for developers, startups, and such, and you can see comments there that drive me to despair. They range from "alarmism is boring" to "I have programmed MySQL databases and I know tech, and this can't happen". I wonder how much I should update my view on the intelligence and biases of humans right now.

I think the Stoics (Seneca's letters, Meditations) talk a lot about how to live in the moment while awaiting probable death. The classic psychology book The Denial of Death would also be relevant. I guess The Myth of Sisyphus would be relevant too, but I haven't read it yet. The Metamorphosis of Prime Intellect is also a very interesting book, dealing with mortality being preferable to immortality and so on.

I think there is an important paragraph missing from this post about books related to Stoicism, existential philosophy, etc.

1DivineMango
Any books/resources on existentialism/absurdism you'd recommend? It seemed like a lot of the alignment positions had enough of that flavor to screen off the primary sources which I found less approachable/directly relevant. Though it does seem like a good idea to directly name that there is an entire section of philosophy dedicated to living in an uncaring universe and making your own meaning.

But sometimes something happens in the world, and your "best man always fun forever" friends can't seem to understand reality. They think it happened because God wanted it this way, or because there is a worldwide conspiracy of Jews. Then you feel really alone.

2Viliam
That sucks. I lost a good friend like this. He discovered religion, and... I hoped we could just "agree to disagree" on this topic, and talk about the many things we still had in common. Instead, he was even more annoying than all other religious people, because he assumed that he knew exactly how I think (he kept saying he also used to think the same way before he found Jesus), so he could show me the way. And he couldn't stop bringing up the topic. He became completely insufferable; we stopped interacting entirely.

(For the record, I do have a few friends who are religious or have other beliefs I don't share. The trick is, they are not trying to convert me. We discuss other things. Heck, we can even discuss religion, if they accept that I am only doing it the same way I would discuss Tolkien.)

When this happens, it's time to find new friends. Or maybe pay more attention to old low-intensity friends; sometimes the opposite thing happens and you find out that, as you grew up, you came to have more in common.
1Gesild Muka
Not necessarily a bad thing. This has happened to me a few times with childhood friends, especially in our 20s, and we've usually reconnected as beliefs have changed or we consciously decided that any disagreements we have simply make for good conversation.

The Metamorphosis of Prime Intellect is an excellent book.

What is the duration of P(doom)? 

What do people mean by that metric? Is it x-risk for this century? Forever? For the next 10 years? Until we figure out AGI, or after AGI, on the road to superintelligence?

To me these are fundamentally different, because P(doom) forever must be much higher than P(doom) over the next 10-20 years. Or is it implied that surviving the next period means we have figured out alignment eternally, for all the next generations of AIs? It's confusing.

2JBlack
It does seem likely to me that a large fraction of all "doom from unaligned AGI" comes relatively soon after the first AGI that is better at improving AGI than humans are. I tend to think of it as a question having multiple bundles of scenarios:

1. AGI is actually not something we can do. Even in timelines where we advance in such technology for a long time, we only get systems that are not as smart as us in the ways that matter for control of the future. Alignment is irrelevant, and P(doom) is approximately 0.
2. Alignment turns out to be relatively easy and reliable. The only risk comes from AGI arriving before anyone has a chance to find the easy and safe solution. Where the first AGIs are aligned, they can quite safely self-improve and remain aligned. With their capabilities they can easily spot and deal with the few unaligned AGIs as they come up, before they become a problem. P(doom) is relatively low and stays low.
3. Alignment is difficult, but it turns out that once you've solved it, it's solved. You can scale up the same principles to any level of capability. P(doom by year X) goes higher than in scenario 2, due to the reduced chance of solving alignment before powerful AGI, but then plateaus rapidly in the same way.
4. Alignment is both difficult and risky. AGIs that self-improve by orders of magnitude face new alignment problems, and so the most highly capable AGIs are much more likely to be misaligned with humanity than less capable ones. P(doom by year X) keeps increasing for every year in which AGI plausibly exists, though the remaining probability mass shifts more and more heavily toward worlds in which civilization never develops AGI.
5. Alignment is essentially impossible. If we get superhuman AGIs at all, almost certainly one of the earliest kills everyone one way or another. P(doom by year X) goes quickly toward 1 for every possible future in which AGI plausibly exists.

Only in scenario 4 do you see a steady increase in P(doom) over long time spans, and even that bundle
2Vladimir_Nesov
I think this is an important equivocation (direct alignment vs. transitive alignment). If the first AGIs, such as LLMs, turn out to be aligned at least in the sense of keeping humanity safe, that by itself doesn't exempt them from the reach of Moloch. The reason alignment is hard is that it might take longer to figure out than developing misaligned AGIs, and this doesn't automatically stop applying when the researchers are themselves aligned AGIs. While AGI-assisted (or more likely, AGI-led) alignment research is faster than human-led alignment research, so is AGI capability research. Thus it's possible that P(first AGIs are misaligned) is low, that is, the first AGIs are directly aligned, while P(doom) is still high: the first AGIs may fail to protect themselves (and by extension humanity) from the future misaligned AGIs they develop (they are not transitively aligned, same as most humans), because they failed to establish the strong coordination norms required to prevent deployment of dangerous misaligned AGIs anywhere in the world.

At the same time, this is not about the timespan: as soon as the first AGIs develop nanotech, they are going to operate on many orders of magnitude more custom hardware, which will increase both the serial speed and the scale of available computation to the point where everything related to settling into an alignment security equilibrium happens within a very short span of physical time. It might take the first AGIs a couple of years to get there (if they manage to restrain themselves and not build a misaligned AGI even earlier), but then in a few weeks it will all get settled, one way or the other.
2the gears to ascension
I think it's an all-of-time metric over a variable with expected decay baked into the dynamics. A windowing function on the probability might make sense to discuss; there are some solid P(doom) queries on Manifold Markets, for example.
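One way to make the windowing idea concrete is a minimal survival-model sketch (my own illustration of the point above, not something from the thread), assuming doom is the first arrival of a process with a time-varying hazard rate $h(t)$:

$$P(\text{doom by time } T) = 1 - \exp\left(-\int_0^T h(t)\,dt\right)$$

"Decay baked into the dynamics" then corresponds to $h(t)$ shrinking toward 0 after some point (say, once an alignment equilibrium is reached), so the all-time value $P(\text{doom ever}) = 1 - \exp(-\int_0^\infty h(t)\,dt)$ can stay well below 1 while still exceeding the windowed $P(\text{doom by } T)$ for every finite $T$.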

Thank you. This is the kind of post I wanted to write when I posted "the burden of knowing" a few days ago, but I was not thinking rationally at that moment.