Scrolling down this almost stream-of-consciousness post against my better judgement, unable to look away, perfectly mimicked scrolling social media. I am sure you did not intend it, but I really liked that aspect.
Loads of good ideas in here. Generally I think modelling the alphabet agencies is much more important than discussion on LW implies. Clown attack is a great term, although I'm not entirely sure how much the personal-prevention layer of things really helps the AI safety community, because the nature of clown attacks seems to be that of a blunt tool you can apply to the public at large to discredit groups. So what primarily matters is the vulnerability of the public to these clown attacks, which is much harder to change.
Wow, this really pulls together a lot of disparate ideas I've had at various times about the topic but wouldn't have summarised nearly as well. A note on the psyops point: if UAP are Non Human Intelligence, then we should still expect (a lot of) disinformation on the topic. Not just as a matter of natural overlap, but as a matter of incentive: it is reasonable to assume that muddying the waters with disinfo, making everyone out to be a 'crackpot' that Stephen Hawking can dismiss, would be a viable strategy for covering up the reality. Real issues are used...
Your post implies/states that would be a kind of straightforward explanation, but I'm not sure it would be. For one, the ideas that ball lightning is not only much more common than previously thought (which it would need to be to also explain UFOs) but also has a hallucination component would both be quite startling if true.
Secondly, there are aspects ball lightning cannot explain. What are we to make of the recent addition of 'USOs', for instance? Unidentified Submerged Objects have consistently been part of this recent narrative, sometimes having bee...
It is a bizarre situation, but I think I disagree with you about the most likely prosaic explanation. Increasingly, especially with the latest events, the psyop explanation seems relatively better than the 'politicians are just fools' one. The reason being that politicians with higher clearances (and so more data) than us are making stronger and stronger public commitments to taking UAP=ET seriously. That suggests to me there is a credible combination of evidence and people that has led them there. Further, the claims being made are so extreme...
But is it necessarily unlikely that they would be screwing with us if they existed? That's something I don't like about the bigfoot comparison: it's obviously laughable that large apes are evading camera detection at every turn, but for aliens, presumably evading detection would be trivial. We know that they would have the means, so that only leaves the question of their motive for doing so. I also don't necessarily agree with the assumption that our commercial sensor tech is good enough to detect hypothetical aliens. Try filming a drone from a distance with your phone. It will...
Given real aliens, they would need to either have capped tech or be actively trolling to explain even low-quality observations or pieces of craft. Nonintervention laws and incorrigible global anti-high-tech supervision constraining the aliens are somewhat plausible; coordinated trolling less so.
So Grusch is another one of these Pentagon UAP investigatory program guys, which means he is claiming people have come to him from the compartmentalised Special Access Programs claiming they have recovered craft. That is important because, unless he is saying somewhere that he personally witnessed these craft, it is perfectly possible he fully believes his claim and is telling the truth in that, yes, someone has come to him with these claims. Unfortunately I suspect whoever these firsthand sources are will be shrouded entirely in classified red tape. I agree at ...
The best evidence that addresses both your claims would probably come from the military, since they have both state-of-the-art sensors and reliable witnesses. The recent surge in UFO coverage is almost all related to branches of the military (mostly the Navy?), so the simple explanation is that it's classified to varying degrees. My understanding is that there is the publicly released stuff, which is somewhat underwhelming; then some evidence Congress and the like has seen during briefings; and then probably more hush-hush stuff above that, reserved for non-civilians. The member...
I should have clarified a bit: I was using the term 'military industrial complex' to try to home in on the much more technocratic underbelly of the American Defence/Intelligence community and its private contractors. I don't have any special knowledge of the area, so forgive me, but essentially DARPA and the like, or any agency with a large black budget.
Whatever they are doing does not need to have any connection to whatever the public-facing government says in press briefings. It is perfectly possible that right now a priority for some of these agencies ...
Why is there so little mention of the potential role of the military-industrial complex in developing AGI, rather than a public AI lab? The money is available, as are the will and the history (ARPANET was the precursor to the internet). I am vaguely aware there isn't much to suggest the MIC is on the cutting edge of AI, but there wouldn't be if it were all black-budget projects. If that is the case, it presumably implies a very difficult situation, because the broader alignment community would have no idea when crucial thresholds were being crossed.
Disclaimer: I am myself a newer user, having joined last year.
I think trying to change downvoting norms and behaviours could help a lot here and save you some workload on the moderation end. Generally, poor-quality posters will leave if you ignore and downvote them. Recently there has been an uptick in these posts, and of the ones I have seen, many are upvoted and engaged with. To me, that says users here are too hesitant to downvote. Of course, that raises the questions of how to encourage downvoting and whether doing so is undesirable because it will broadly repel many new users, some of whom will not be "bad". Overall, though, I think encouraging existing users to downvote should help keep the well-kept garden.
No, that was just a joke Lex was making. I don't know the exact timestamps, but in most of the instances where he was questioned on his own positions or estimations of the situation, Lex seemed uncomfortable to me, including the alien civilisation example. At one point I recall actually switching to the video, and Lex had his head in his hands, which, body language wise, seems a pretty universally desperate pose.
There were definitely parts where I thought Lex seemed uncomfortable, not just limited to specific concepts but also when questions got turned around a bit towards what he thought. Lex started podcasting very much in the Joe Rogan sphere of influence, to the extent that I think he uses a similar style, one that is very open and lets the other person speak/have a platform, but perhaps at the cost of being a bit wishy-washy. Nevertheless, it's a huge podcast with a lot of reach.
This is why I don't place much confidence in projections about how the population will be affected by TAI from people like Sam Altman either. You have to consider that they are very likely to be completely out of touch with the average person, and so have absolutely terrible intuitions about how ordinary people respond to anything, let alone about the long-term implications TAI holds for them. If you got some normal people together and made sure they took the proposition of TAI and everything it entails (such as widespread joblessness) seriously, I suspect you would encounter a lot more fear/apprehension around the kinds of behaviours and ways of living it is going to produce.
I think what stands out to me the most is big tech/big money now getting seriously involved. That has a lot of potential for acceleration just because of the funding implications. I frequent some financial/stock websites and have noticed AI becoming not just a major buzzword, but even the subject of sentiments along the lines of 'AI could boost productivity and offset a potential recession in the near future'. The rapid release of LLMs seems to have jump-started public interest in AI; what remains to be seen is how that interest manifests. I am per...
I think it may be necessary to accept that, at first, there may need to be a stage of general AI wariness within public opinion before AI Safety and specific facets of the topic are explored. In a sense, the public has not yet fully digested 'AI is a serious risk', or perhaps even 'AI will be transformative to human life in the relatively near future'. I don't think it is very likely that phase can simply be skipped, and it will probably be useful to get as many people as possible broadly on topic before the more specific messaging, because if they are...
I had never heard of Standpoint Epistemology prior to this post, but I have encountered plenty of thinking that seems similar to what it espouses. One thing I cannot figure out at all is how this functionally differs from surveying a specific demographic on an issue. How, exactly, is whatever this is more useful? In fact, to me it seems likely to be functionally worse, in that as a survey its sample size is small and there is absolutely no control group; as someone else pointed out, we don't get any sense of what any other group responds with given the same questi...
I'm not sure about 75%, but it is an interesting subject, and I do think the consensus view is slightly too sceptical. I don't have any expertise, but one thing that always sticks out to me as decreasing the likelihood of bigfoot's existence is the lack of remains. OK, I buy that encounters could be rare enough that none have been captured since the advent of the smartphone. But where are the skeletons? Is part of the claim that they might have some type of burial grounds? Very remote territory they stick to without exception?
I don't think AI in the long run will be comparable to events like the industrial revolution (or anything, historically), because AI will be less tool-like and more agent-like in my view. That is a situation without any historical precedent. The famous investor Ray Dalio made a point along the lines that recessions, bubbles, etc. (but really any rare-ish economic event) are incredibly hard to model because the length of time the economic system has existed is actually relatively short, so we don't have that large a sample size. That point can b...
Transformative AI will demand a rethink of the entire economic system. The world economy is based on an underlying assumption that most humans are essentially capable of being productive in some real way that generates value. Once that assumption is eroded (and my intuition is that it will only take a surprisingly small percentage of people being rendered unproductive), some form of redistribution will probably be required. Rather than designing this system in such a way that 'basic' needs are provided/paid for, I think a percentage of output from AI gains sho...
To me it seems like the current zeitgeist is just not up to addressing this question without being almost entirely captured by bad-faith actors on both sides, thereby causing some non-trivial social unrest. There might be some positive gain to be had from changes in policy depending on the reality of genetic differences in IQ; however, policymakers would have to be much more nuanced and capable than they appear to be. Even if this were possible, it would have to be weighed against the social aspects.
Your last point seems to agree with point 7e becoming reality, where the US govt essentially allows existing big tech companies to pursue AI within whatever 'acceptable' confines it thinks of at the time. In that case, how much AI might be slowed is entirely dependent on how tight a leash they keep them on. I think that scenario is actually quite likely, given I am sure there is considerable overlap between US alphabet agencies and sectors of big tech.
I like this post a lot, partially because I think it is an underdiscussed area, and partially because it expands beyond the obvious semiconductor-type companies. One thing I would add is that, with almost no technological advancement, existing and soon-to-exist LLMs might make investing in social media (and internet-adjacent) companies much more volatile. This is because, as far as I can see, the bot problem for these companies should only become worse and worse as AI can more perfectly mimic a real user. This could lead to a kind of dead internet scenario where r...
Assuming this were the case, wouldn't it actually imply slightly more optimistic long-term odds for humanity? A world where AI development actually resembles something like natural evolution and (maybe) throws up red flags that generate interest in solving alignment would be good, no?
I worry that the strategies we might scrounge up to avoid them will be of the sort that are very unlikely to generalise once the superintelligence risks do eventually rear their heads.
Ok, sure, but extra resources and attention are still better than none.
- I’m somewhat surprised that I haven’t seen more vigorous commercialization of language models and commercial applications that seem to reliably add real value beyond novelty; this is some update toward thinking that language models are less impressive than they seemed to me, or that it’s harder to translate from a capable model into economic impact than I believed.
Minor point here, but I think this has less to do with the potential commercial utility of LLMs and more to do with the reticence of large tech companies to publicly release an LLM that poses a si...
State-level actors don't want rapid disruption to the worldwide socioeconomic order.