All of artemium's Comments + Replies

Answer by artemium
10

The answer surely depends mostly on what his impact will be on AI developments, both through his influence on the policy of the new administration and what he does with xAI. While I understand that his political actions might be mind-killing (to say the least) to many of his former fans, I would much prefer a scenario where Elon has infuriating politics but a positive impact on solving alignment over one with the opposite outcome.

2mikbp
I'd agree. But he certainly does not even seem to be trying anymore to have a positive impact on solving alignment, no?
artemium
338

A new open-source model has been announced by the Chinese lab DeepSeek: DeepSeek-V3. It reportedly outperforms both Sonnet 3.5 and GPT-4o on most tasks and is almost certainly the most capable fully open-source model to date.

Beyond the implications of open-sourcing a model of this caliber, I was surprised to learn that they trained it using only 2,000 H800 GPUs! This suggests that, with an exceptionally competent team of researchers, it’s possible to overcome computational limitations.

Here are two potential implications:

  1. Sanctioning China may not be effecti
... (read more)

DeepSeek-V3 is a MoE model with 37B active parameters trained for 15T tokens, so at 400 tokens per parameter it's very overtrained and could've been smarter with similar compute if hyperparameters were compute optimal. It's probably the largest model known to be trained in FP8, it extracts 1.4x more compute per H800 than most models trained in BF16 get from an H100, for about 6e24 FLOPs total[1], about as much as Llama-3-70B. And it activates 8 routed experts per token (out of 256 total routed experts), which a Feb 2024 paper[2] suggests to be a directiona... (read more)
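As a sanity check, the "400 tokens per parameter" figure follows directly from the quoted parameter and token counts. A rough sketch (my own arithmetic, not from the comment; the 20-tokens-per-parameter compute-optimal ratio is the usual Chinchilla-style approximation):

```python
# Tokens per active parameter for DeepSeek-V3, using the figures quoted above.
active_params = 37e9   # 37B active parameters (MoE)
train_tokens = 15e12   # 15T training tokens

tokens_per_param = train_tokens / active_params
print(f"{tokens_per_param:.0f} tokens per active parameter")  # ~405, i.e. "400 tokens per parameter"

# Chinchilla-style compute-optimal training is commonly approximated as
# ~20 tokens per parameter, so ~400 is heavily overtrained, as the comment notes.
print(f"~{tokens_per_param / 20:.0f}x past the compute-optimal ratio")
```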

Perhaps Randolph Carter was right about losing access to dreamlands after your twenties:

When Randolph Carter was thirty he lost the key of the gate of dreams. Prior to that time he had made up for the prosiness of life by nightly excursions to strange and ancient cities beyond space, and lovely, unbelievable garden lands across ethereal seas; but as middle age hardened upon him he felt these liberties slipping away little by little, until at last he was cut off altogether. No more could his galleys sail up the river Oukranos past the gilded spires of Thran... (read more)

4avturchin
Yes, I know them and read their blog.  I am now 51 and can remember dreams now only if I take B6. 

I still think it will be hard to defend against determined and competent adversaries committed to sabotaging collective epistemics. I wonder if prediction markets could be utilised somehow?

Answer by artemium
61

I am not sure the 2000 dotcom market crash is the best example of a "fizzle". The Internet Revolution was a correct hypothesis at the time; it's just that the 1999 startups were slightly ahead of their time and the tech fundamentals were not yet ready to support it, so the market was forced to correct expectations. Once the tech fundamentals (internet speeds, software stacks, web infrastructure, the number of people online, online payments, online ad business models, etc.) became ready in the mid-2000s, the Web 2.0 revolution happened and tech companies became giants... (read more)

2Nate Showell
I picked the dotcom bust as an example precisely because it was temporary. The scenarios I'm asking about are ones in which a drop in investment occurs and timelines turn out to be longer than most people expect, but where TAI is still developed eventually. I asked my question because I wanted to know how people would adjust to timelines lengthening.

My dark horse bet is on a third country trying desperately to catch up to the US/China just as they are close to reaching an agreement on slowing down progress. Most likely: France.

Why so? My understanding is that if AGI arrives in 2026, it will be based on the current paradigm of training increasingly large LLMs on massive clusters of advanced GPUs. Given that the US has banned selling advanced GPUs to China, how do you expect them to catch up that soon?

1Akram Choudhary
Yeah, I don't get it either. From what I can tell, the best Chinese labs aren't even as good as the second-tier American labs. The only way I see it happening is if the CCP actively tries to steal it.

To add to this point, the author in question is infamous for having doxxed Scott Alexander and written a hit piece on the rationalist community before.

https://slatestarcodex.com/2020/09/11/update-on-my-situation/

 

0Blueberry
This is completely false, as well as irrelevant. * he did not "doxx" Scott. He was going to reveal Scott's full name in a news article about him without permission, which is not by any means doxxing, it's news reporting. News is important and news has a right to reveal the full names of public figures. * this didn't happen, because Scott got the NYT to wait until he was ready before doing so. * the article on rationalism isn't a "hit piece" even if it contains some things you don't like. I thought it was fair and balanced. * none of this is relevant, and it's silly to hold a grudge against a reporter for an article you don't like from years ago when what's more important is this current article about AI risk.
7trevor
I don't think that this specific comment is a very productive way to go about things here. Journalists count as elites in democracies, and they can't publicly apologize when they make a mistake because that embarrasses the paper, so if they ever change their mind about something (especially something really big and important) then their only recourse is to write positive articles to try to make up for the negative article they originally wrote. I'm not sure I agree with Razied on the whole "senpai noticed me" thing. I agree that it's important to wake up to that dynamic, which is silly; articles like these don't seem to have a track record of vastly increasing the number of alignment researchers, whereas mid-2010s publications like HPMOR and Superintelligence do (and those phenomena may have failed to replicate in the 2020s, with WWOTF and planecrash). But there's tons of factors at play here that even I'm not aware of, like people at EA university groups being able to show these articles to mathematicians unfamiliar with AI safety, or orgs citing them in publications, which is the kind of thing that determines the net value of these articles.

I was also born in a former socialist country, Yugoslavia, which was notable for the prevalence of worker-managed firms in its economy. This made it somewhat unique among socialist countries, which generally used a more centralized approach with state ownership of entire industries.

While this is somewhat different from worker-owned cooperatives in modern market economies, it does offer a useful data point. The general conclusion is that such firms work a bit better than a typical state-owned firm, but are still significantly worse in their economic performance co... (read more)

I also agree about not promoting political content on LW, but I would love to read your writing on some other platform if possible.

Viliam
110

I do not post on other platforms (besides a very infrequent blog on Java game development). My commenting online is mostly Less Wrong and ACX, occasionally Hacker News.

I actually do not think I have much useful to say on the topic other than what I already wrote here; this was a dump of everything that was on my mind. I could generate some more text about what pro-Russian people in my country actually believe (a mixture of Putin admiration and conspiracy theories about our local politicians), but in the end you would see that this comment was the 20% of the ... (read more)

If it reaches that point, the goal for Russia would not be to win but to ensure the other side loses too, and this outcome might be preferable (to them) to a humiliating conventional defeat that might permanently end Russian sovereignty. In the end, the West has far more to lose than Russia, the stakes aren't that high for us, and they know it.

No. I think everything else is in crappy shape because the nuclear arsenal was always a priority for the Russian defense industry, and most of the money and resources went there. I've noticed that the meme "perhaps Russian nukes don't work" is getting increasingly popular, which could have pretty bad consequences if it spreads and emboldens escalation.

It is like being incentivized to play Russian roulette because you heard the bullets were made in a country that produces some other crappy products.

2Richard121
The main reason for everything being in a crappy state is almost certainly (>90%) widespread corruption. Everyone who can is creaming off a little bit, leaving very little for the actual matériel and training. So shoddy materials, poor to no training, missing equipment, components and spares. That said, while it is very likely that the Russian nuclear arsenal is in an extremely poor state, and I'd possibly go as high as 50/50 that their ICBMs could launch but cannot be aimed (as that takes expensive components that are easy to steal/not deliver and hide that fact), missing the target by a hundred miles or more is basically irrelevant in the "ending the world" stakes. A 'tactical' device doesn't need much in the way of aiming, and on the assumption that it does in fact contain nuclear material there's not a huge civilian difference between it exploding 'as designed' or "just" fizzling. If only the initiator went off, the weapon disintegrated during launch/firing, or the weapon/aircraft was shot down, it would still spread radioactive material over a wide area. While that wouldn't be the "shock and awe" of a mushroom cloud, it's still pretty devastating to normal life.

Looks awesome! Maybe there could be an extended UI that tracks recent research papers (sorta like I did here) or SOTA achievements. But maybe that would ruin the smooth minimalism of the page.

You can also play around with open-source versions that offer surprisingly comparable capabilities to OpenAI's models.

Here is GPT-J-6B from EleutherAI, which you can use without any hassle: https://6b.eleuther.ai/

They also released a newer 20B model (GPT-NeoX-20B), but I think you need to log in to use it: https://www.goose.ai/playground

I think there could be a steelman case for why this post is LW-relevant (or at least possible variants of the post could be). If this Canadian precedent becomes widely adopted in the West, everyone should probably do some practical preparation to ensure the security of their finances.

P.S.: I live in Sweden, which is an almost completely cashless society, so a similar type of government action would be disastrous.

7Brendan Long
I agree that that information would be useful, but I'd expect a post to be written differently to fit with LessWrong's frontpage content (more practical advice, less political discussion). I say this as someone who thinks this post is good and interesting, but didn't upvote because I think this kind of content would distract from what LessWrong is good at.

You can add the Black Death to the list. A popular theory is that the disease killed so many people (around 1/3 of Europe's population) that the few remaining workers could negotiate higher wages, which made labor-saving innovations more desirable and planted the seeds of industrial development.

 

This is a very underrated newsletter; thank you for writing it. The events at KrioRus are kind of crazy. I cannot imagine a business where it is more essential to convince customers of long-run robustness than cryonics, and yet... ouch.

Also, Russia has deployed Peresvet lasers, which blind the American satellites used to observe nuclear missiles.

I thought Peresvet was more of a tactical weapon?

https://en.wikipedia.org/wiki/Peresvet_(laser_weapon) 

Are there any updates on the nuclear-powered missile Burevestnik?

7avturchin
The real purpose of Peresvet was published only 2 days ago: https://lenta.ru/news/2021/12/02/peresvet/ The events at KrioRus are beyond crazy, but it looks like more people have learned about cryonics than before. Wired is preparing a piece, and even Netflix has shown interest (according to rumours) in making a TV series about the story. So people are still coming for cryopreservation, and there were a few last fall at Valeria's facility. The Zirkon hypersonic missile was tested a couple of times recently, but there are no updates on Burevestnik.

Even worse, that kind of move would just convince the competitors that AGI is far more feasible, and incentivize them to speed up their efforts while sacrificing safety.

If blocking Huawei failed to work a couple of years ago under an unusually pugnacious American presidency, I doubt this kind of move would work in a future where the Chinese technological base will probably be stronger.

In a funny way, even if someone is stuck in a Goodhart trap while doing language models, it is probably better to Goodhart performance on Winograd Schemas than to just add parameters.

I am not an expert in ML, but based on some conversations I was following, I heard WuDao's LAMBADA score (an important performance measure for language models) is significantly lower than GPT-3's. I guess the number of parameters isn't everything.

2[anonymous]
I don't really know a lot about performance metrics for language models. Is there a good reason for believing that LAMBADA scores should be comparable for different languages?

Strong upvote for a healthy dose of bro humor which isn't that common on LW.  We need more "people I want to have a beer with" represented in our community :D.

That's interesting. Can you elaborate?

2ioannes
It's a big topic and I don't have a great articulation for it yet.  Some scattered points: * Generally higher model uncertainty than I used to have * Idealism now seems as plausible as materialism * Panpsychism seems plausible / not-crazy, and consciousness matters a lot * The many-worlds interpretation seems plausible. If physics is many-worlds, we may not be able to ever escape from ethical parochialism * Not about metaphysics, but I've been growing less confused about my motivations such that ethical considerations no longer feel as fraught. (I used to identify my ethics with my view of my self-worth such that acting ethically seemed super important)

None: None of the above; TAI is created probably in the USA and what Asia thinks isn't directly relevant. I say there's a 40% chance of this.

I would say it might still be relevant in this case. For example, given some game-theoretic interpretations, China might conclude that a nuclear first strike is a rational move if the US creates the first TAI and they suspect it will give their enemies an unbeatable advantage. An Asian AI risk hub might successfully convince the Chinese leadership not to do that if they have information that the US TAI is built in a way that would prevent its use solely in the interest of its country of origin.

Not sure about anti-gay laws in Singapore, but from what I gathered from recent trends, the LGBT situation is starting to improve there and in East Asia in general.

OTOH, anti-drug attitudes are still super strong (for example, you can still get the death penalty for dealing harder drugs), so I presume it's an even bigger deal-breaker given the number of people who experiment with drugs in the broader rationalist community.

Not to mention some pretty brutal anti-drug laws.

What would be the consequence for Russia's nuclear strategy of Belarus joining the Western military alliance? Let's say that in the near future Belarus joins NATO and gives the US a free hand in installing any offensive or defensive (ABM) nuclear weapon system on Belarusian territory. Would this dramatically increase the Russian fear of a successful nuclear first strike by the US?

4avturchin
The US could put the same capabilities in Estonia or Ukraine now, so not much change in nuclear strategy here. However, Russia has an important long-distance communication center with nuclear submarines in Belarus. Also, the Kaliningrad district will be much more vulnerable, as will export-import routes. In case of a ground invasion, Belarus is also located strategically, and both Napoleon and Hitler quickly advanced through Minsk in the direction of Moscow. The biggest problem for Putin is that if Lukashenko falls, he will be next. So he is not interested in his demise, but he wants to make Lukashenko as weak as possible and then annex Belarus. He tried to do it last year, when he hoped to become president of a new country consisting of Belarus and Russia. Lukashenko said no, and Putin had to use his plan B: the change of constitution to remain in power after 2024.
Answer by artemium
10

Excellent question! I was thinking about it myself lately, especially after the GPT-3 release. IMHO, it is really hard to say, as it is not clear which commercial entity will bring us over the finish line, or whether there will be an investment opportunity at the right moment. It's also quite possible that even the first company to do it might bungle its advantage, and investing there might be the wrong move (this seems to be a common pattern in the history of technology).

My idea is just to play it safe and save as much money as possible until there is a clear example... (read more)

We haven’t managed to eliminate romantic travails

Ah! Then it isn't a utopia by my definition :-) .

Love it. It is almost like an anti-Black Mirror episode where the humans are actually non-stupid.

0Pattern
Do you consider eliminating that one thing a sufficient condition?

Amazing post!

It would be useful to mention examples of contemporary ideas that could be analogues of heliocentrism in its time. I would suggest String Theory as one possible candidate. The part where the Geocentrist challenges the Heliocentrist to provide some proof while the Heliocentrist desperately tries to explain away the lack of experimental evidence kind of reminds me of debates between string theorists and their sceptics. (This doesn't mean String Theory is true; there just seems to be a similar state of uncertainty.)

This is great. Thanks for posting it. I will try to use this example and see if I can find some people who would be willing to do the same. Do you know of any new remote group that is recruiting members?

0ChristianKl
Over time there were multiple attempts at creating online-LW meetups that failed. On the other hand, the LW study hall is a success in providing an experience of people working together on their goals. Did you already check it out?
0Viliam
There seems to be a Less Wrong Slack. Maybe we could try making a new (private?) channel there. With rules like "to participate here, you have to regularly post what you actually do"; or perhaps uploading selfies of weightlifting and eating tofu (and polyamorous orgies)?
0b4yes
No.

This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically allergic to authoritarian structures, despite strong historical evidence that this kind of organisation can be quite effective.

I would consider joining myself, but given my location that isn't an option.

I do think that in order to build a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; the rules play a smaller part.

As long as the project is based on voluntary participation, I don't see why anyone should find it controversial. I wish you all the best.

1Duncan Sabien (Deactivated)
Thanks for the support. Hope to funnel some interesting data back to the rest of the world.

fixed.

2Elo
second suggestion: subheadings * facebook * transhumanism * others

We would first have to agree on what "cutting the enemy" would actually mean. I think the liberal response would be keeping our society inclusive, secular, and multicultural at all costs. If that is the case, then avoiding certain failure modes, like becoming an intolerant militaristic society or starting unnecessary wars, could be considered successful cuts against potentially worse world-states.

Now, that is the liberal perspective; there are alternatives, of course.

7Richard_Kennaway
Nobody who says "at all costs" means "at all costs". It's a way of avoiding a discussion of what costs are worth paying and what paying them will look like.

I don't think we should worry about this specific scenario. Any society advanced enough to develop mind-uploading technology would have an excellent understanding of the brain, consciousness, and the structure of thought. In these circumstances, retributive punishment would seem totally useless, as they could just change properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing, though, as America seems to be quite obsessed with retribution. I absolutely refuse to believe... (read more)

One possibility is to implement a design which makes the agent strongly sensitive to negative utility when it invests more time and resources in unnecessary actions after it has, with high enough probability, achieved its original goal.

In the paperclip example: wasting time and resources to build more paperclips, or building more sensors/cameras to analyze the result, should create enough negative utility for the agent compared to alternative actions.
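As a toy illustration of this idea (entirely my own sketch; the function name, threshold, and penalty rate are illustrative assumptions, not a worked-out agent design): once the agent's estimated probability of goal completion passes a threshold, further resource expenditure is charged a growing penalty.

```python
def penalized_utility(base_utility: float,
                      p_goal_achieved: float,
                      extra_resources_spent: float,
                      threshold: float = 0.95,
                      penalty_rate: float = 10.0) -> float:
    """Utility with a penalty for spending resources past near-certain success."""
    if p_goal_achieved < threshold:
        # Still working toward the goal: no penalty on resource use.
        return base_utility
    # Past the threshold, each unit of extra resources costs penalty_rate utility.
    return base_utility - penalty_rate * extra_resources_spent

# Before the goal is (probably) achieved, spending carries no penalty:
print(penalized_utility(100.0, 0.5, extra_resources_spent=3.0))   # 100.0
# After, building more paperclips/sensors becomes net-negative:
print(penalized_utility(100.0, 0.99, extra_resources_spent=3.0))  # 70.0
```

Note that this is only the single-agent case; as the reply below this comment points out, a design like this says nothing about subagents the agent might create without the restriction.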

0Stuart_Armstrong
This has problems with the creation of subagents: http://lesswrong.com/lw/lur/detecting_agents_and_subagents/ You can use a few resources to create subagents without that restriction.

I recently became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about practical aspects.

  • Is there any realistic scenario in which we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural, and economic factors?

  • Would it be better if, instead of trying to convert billions of people to

... (read more)
0[anonymous]
1) hardly, but then again, what is the minimum % of world population do you expect to be convincable? It doesn't have to be everybody. 2) what are the minuses of this technology? Illegal trade in real meat will thrive, for example, and the animals would live in even worse conditions. 3) I think poverty might contribute to meat consumption, if we're speaking about not starving people but, say, large families with minimal income. Meat makes making nutritious soups easy.
artemium
230

An interesting talk at the Boao Forum: Elon Musk, Bill Gates, and Robin Li (Baidu CEO). They talk about superintelligence at around the 17:00 mark.

https://www.youtube.com/watch?v=NG0ZjUfOBUs&feature=youtu.be&t=17m

  • Elon is critical of Andrew Ng's remark that 'we should worry about AI like we should worry about Mars overpopulation' ("I know something about Mars" LOL)

  • Bill Gates mentioned Nick Bostrom and his book 'Superintelligence'. He seems to have read the book. Cool.

  • Later, Robin Li mentions China Brain projects, which appears to be Chinese

... (read more)

Bill Gates mentioned Nick Bostrom and his book 'Superintelligence'. He seems to have read the book. Cool.

He not only mentions it. He recommends it to a room of influential people.

I never thought of that, but that's a great question. We have a similar problem in Croatian, as AI would be translated as 'Umjetna Inteligencija' (UI). I think we can also use the suggested title "From Algorithms to Zombies" once someone decides to make a Croatian/Serbian/Bosnian translation.

One thing that might help, from my experience, is to remove any food from your surroundings that could tempt you. I myself have only fruit, milk, and cereal in my kitchen and basically nothing else. While I could easily go to the supermarket or order food, the fact that I would need to take some additional action is enough for me to avoid doing it. You can use laziness to your advantage.

One of the reasons is that a lot of LW members are really involved in FAI issues and strongly believe that if they manage to succeed in building a "good" AI, most earthly problems will be solved in a very short time. Bostrom said something like: we can postpone solving complicated philosophical issues until after we have solved the AI ethics issue.

Agreed. AI boxing is a horrible idea for testing AI safety. Putting the AI in some kind of virtual sandbox where you can watch its behavior is a much better option, as long as you can make sure that the AGI won't become aware that it is boxed in.

1Vaniver
1. What's the difference between the AI's text output channel and you observing the virtual sandbox? 2. Is it possible to ensure that the AI won't realize that it is boxed in? 3. Is it possible to ensure that, if the AI does realize that it is boxed in, we will be able to realize that it realizes that? As I understand it, the main point of the AI Box experiment was not whether or not humans are good gatekeepers, but that people who don't understand why it would be enticing to let an AI out of the box haven't fully engaged with the issue. But even how to correctly do a virtual sandbox for an AGI is a hard problem that requires serious attention.

Hmm, I still think there is an incentive to behave well. Good, cooperative behavior is always more useful than being untrustworthy and cruel to other entities. There might be some exceptions, though (the simulators want a conflict situation for entertainment purposes or some other reason).

3tailcalled
Well, yeah, you should still be good to your friends and other presumably real people. However, there would be no point in, say, trying to save people from the holocaust, since the simulators wouldn't let actual people get tortured and burnt.

I had exactly the same idea!

It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, as I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-) .

One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.

5kingmaker
I admit that it serves my ego suitably to imagine that I am the only conscious human, and a world full of shallow-AI's was created just for me ;-)

Exactly. Also, there are a great number of possibilities that even the smartest person could not imagine, but a powerful Superintelligence could.

2Bugmaster
I think that, if you want to discuss the notion of the Superintelligence in any kind of a rational way, it is useful to make a distinction between "the Superintelligence can do things we can't" and "the Superintelligence is literally omnipotent". If the latter is true, then any meaningful discussion of it is impossible -- for the same set of reasons that meaningful discussion of the omni-everything Christian god is impossible.

I stopped reading after the first few insults about excrement... I'm not sure where you were going with that. If that was part of some strategy, I'm not sure how you thought it would work.

Agreed. Hopefully I'm not the only one who thinks the AGI's game in this example was quite disappointing. But anyway, I was never convinced that AI boxing is a good idea, as it would be impossible for any human to correctly analyze the intentions of an SI based on this kind of test.

There is an additional benefit to breaks during computer work: they help reduce strain on your eyes. Staring at a computer screen for too long reduces your blink rate and may cause eye problems in the future.

A lot of people who work in programming (including myself) have a dry-eye condition.

There are good Chrome apps which can help you with this, and most of them allow you to customize breaks around your schedule.

Yeah, I know there are other filters behind us; I just found it a funny coincidence that someone shared this Bostrom article while I was in the middle of a Facebook discussion about the Great Filter.

But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

OK, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage, like making it more readable on mobile devices? Every time I read LW on the tram on my way to work, I go insane trying to hit the super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think the LW website is in general a bit outdated in terms of both design and functionality, but I presume this is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.

artemium
-10

Horrible news!!! Organic molecules have just been found on Mars. It appears that the Great Filter is ahead of us.

0Lumifer
Anything other than methane, which is the simplest organic molecule there is (CH4) and, as far as I remember, has been detected in interstellar gas...?
4Ander
This only means that the great filter is not due to the difficulty of creating organic compounds. (In fact, creating organic compounds was already very low on a list of things that might be the cause of an early filter, or maybe even already effectively ruled out.). It still could be a step between this and where we are now. For example, it could be the creation of Eukaryotes. Or it could be intelligence. Or other things.
1polymathwannabe
What's horrible about that?
9bramflakes
We've known for some time that Titan has plenty of organic molecules.
2JoshuaZ
In addition to the point made by ChristianKl that this may be non-biological, the origin of life is not the only possible Great Filter aspect which could be in our past. Other candidates for example include the rise of multicellular life, the development of complex brain systems (this one is not that likely since it seems to have developed multiple times in different lineages), the development of fire and the development of language.
4ChristianKl
The article says: There's also matter exchange from Earth to Mars that could have brought life that originated on Earth to Mars.

I think your post was interesting, so why the downvote? I'm new here and just trying to understand the karma system. Any particular reason?

5ChristianKl
The post argues that a single instance proves that lack of tolerance holds back the singularity. That's a stupid argument; the kind of argument people make if they operate in the mental domain of politics and suddenly throw out their standards for rational reasoning. It is also quite naive in thinking that having the singularity now would be a good thing. Given that we don't know how to build FAI at the moment, having the singularity now might mean the obliteration of the human race.
0RowanE
It was already downvoted when I saw it so I didn't give it the most charitable reading, I thought it amounted to little more than a political cheer and not something that belongs here.
2TheOtherDave
I don't know, but a pattern I've noticed lately is that posts that can be understood as "soldiers for the progressive side" will often get two or three downvotes pretty quickly, and then get upvoted back to zero over the next few days. (If they are otherwise interesting they typically get lots more upvotes.) I suspect that pattern is relevant here.
Load More