All of memeticimagery's Comments + Replies

State-level actors don't want rapid disruption of the worldwide socioeconomic order.

Would slightly better remote work tech lead to a complete overturn of the world labor market?

Does the complete overturn of the world labor market strike you as something various people/institutions in those countries would want? Inertia, in this context, is surely the desired state.

6cousin_it
I thought employers (and more generally the elite, who are net buyers of labor) would be happy with a remote work revolution. But they don't seem to be, hence my confusion.

I suspect a lot of people here think that an (unusually powerful) human technocracy is the least we have to worry about.

2JoeTheUser
I believe you are correct about the feelings of a lot of LessWrong. I find it very worrisome that the LessWrong perspective considers a pure AI takeover as something that needs to be separated from either the degradation of human self-reliance capacities or an enhanced-human takeover. It seems to me that these factors should instead be considered together.
1PhilosophicalSoul
Sure. I think in an Eliezer reality what we'll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I've hinted at. Once it's out on the ocean, though, the AI will do its own thing. In the interim before it learns to do that, though, I think there will be space for manipulation.

Scrolling down this almost stream-of-consciousness post against my better judgement, unable to look away, perfectly mimicked scrolling social media. I am sure you did not intend it, but I really liked that aspect.

Loads of good ideas in here; generally I think modelling the alphabet agencies is much more important than discussion on LW implies. Clown attack is a great term, although I'm not entirely sure how much the personal-prevention layer of things really helps the AI safety community, because clown attacks seem like a blunt tool you can apply to the public at large to discredit groups. So what primarily matters is the public's vulnerability to these clown attacks, which is much harder to change.

3trevor
Yes, the whole issue is that you need to see the full picture in order to understand the seriousness of the situation. For example, screen refresh rate manipulation is useless without eyetracking, and eyetracking is useless without refresh rate manipulation, but when combined together, they can become incredibly powerful targeted cognition inhibitors (e.g. giving people barely-noticeable eyestrain every time that a targeted concept is on their screen). I encountered lots of people who were aware of the power of A/B testing, but what makes A/B testing truly formidable is that AI can be used to combine it with other things, because AI's information processing capabilities can automate many tasks that previously bottlenecked human psychological research.

Wow, this really pulls together a lot of disparate ideas I've had at various times about the topic, but wouldn't have summarised nearly as well. A note on the psyops point: if UAP are Non-Human Intelligence, then we should still expect (a lot of) disinformation on the topic. Not just as a matter of natural overlap, but as a matter of incentive: it is reasonable to assume that muddying the waters with disinfo, and making everyone out to be a 'crackpot' that Stephen Hawking can dismiss, would be a viable strategy for covering up the reality. Real issues are used... (read more)

2Lord Dreadwar
Thanks! Very much agreed re. psyops, particularly given the context of the Cold War. (American IC actors were initially concerned that UFOs were Soviet psyops or secret technology, while Soviet IC actors were initially concerned that UFOs were American psyops or secret technology.)

Your post implies/states that would be a kind of straightforward explanation, but I'm not sure it would be. For one, the idea that ball lightning is not only much more common than previously thought (which it would need to be to also explain UFOs) but also has a hallucination component would be quite startling if true.

Secondly, there are aspects ball lightning cannot explain. What are we to make of the recent addition of "USOs", for instance? Unidentified Submerged Objects have consistently been part of this recent narrative, sometimes having bee... (read more)

3Lord Dreadwar
The suggestion isn't exactly ball lightning, but similar classes of phenomena (including things like the well-attested Hessdalen lights), possibly triggered by seismological and meteorite activity. The hallucination aspect is based on modulated magnetic fields allegedly producing abduction-like psychedelic experiences in Canadian medical studies. I agree this explanation doesn't account for USOs (including the infamous Nimitz UAP, which was allegedly recorded travelling underwater at implausible speeds via sonar), physical trace evidence of alleged UAP landings (e.g. the Zamora case), and other aspects, and seems like an attempt at rationalising away awkward evidence for exotic (read: extraterrestrial) UAP. Nonetheless, natural atmospheric plasma phenomena do represent a plausible explanation for many UAP, particularly atmospheric lights performing instantaneous accelerations and other erratic maneuvers. Metallic appearances can't be ruled out, either; there are reports of metallic and opaque/black ball lightning.

It is a bizarre situation, but I think I disagree with you about the most likely prosaic explanation. Increasingly, especially with the latest events, the psyop explanation seems a relatively better explanation than the 'politicians are just fools' one. The reason is that politicians with higher clearances (and so more data) than us are making stronger and stronger public commitments to taking UAP=ET seriously. That suggests to me there is a credible combination of evidence and people that has led them there. Further, the claims being made are so extreme... (read more)

But is it necessarily unlikely that they would be screwing with us if they existed? That's something I don't like about the bigfoot comparison: it's obviously laughable that large apes are evading camera detection at every turn, but with aliens, presumably it would be trivial to do so. We know that they would have the means, so that only leaves the question of why they would do this. I also don't necessarily agree with the assumption that our commercial sensor tech is good enough to detect hypothetical aliens. Try filming a drone from a distance with your phone. It will... (read more)

3Going Durden
As for Bigfoot: while I don't believe it exists, I think it's the wrong way to think of it as avoiding cameras. The more reasonable explanation is that cameras avoid the places where it could possibly live. Bigfoot, Sasquatch, Yeti, and similar Apemen are almost always reported to live in remote wilderness, specifically the north of the USA, Canada, Russia, China, and of course the Himalayas. It seems like we should be able to spot them, until you realize that the northern wilderness belt that stretches from Alaska to Greenland, and then around Eurasia and back to Alaska, is astonishingly big, and almost completely empty of humans. We are talking about a strip of wilderness that has about the same surface area as the Moon, and the possible population of Bigfeet would likely be smaller than the population of chimps in Africa. If every researcher interested in finding Bigfoot went to explore the Big North with all the state-of-the-art equipment they could carry, and they spread evenly to cover maximum area, they would not only not find Bigfoot, but not find each other, due to enormous distances through impassable woodland and mountains.
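To put rough numbers on that last claim, here is a back-of-the-envelope sketch. The area uses the comment's own Moon comparison (the Moon's surface area is about 3.8e7 km²); the number of searchers is a made-up assumption:

```python
import math

# Back-of-the-envelope: how far apart would searchers be if spread evenly?
area_km2 = 3.8e7      # ~ the Moon's surface area, per the comment's comparison
searchers = 10_000    # assumed number of Bigfoot hunters (pure guess)
km2_each = area_km2 / searchers
spacing_km = math.sqrt(km2_each)  # side length of the square each searcher covers
print(f"{km2_each:.0f} km^2 each, ~{spacing_km:.0f} km to the nearest colleague")
# -> 3800 km^2 each, ~62 km to the nearest colleague
```

Even with an implausibly large search party, each searcher would be tens of kilometres from the next, through terrain with no roads.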

Given real aliens, they would need to either have capped tech or be actively trolling to explain even low-quality observations or pieces of craft. Nonintervention laws and incorrigible global anti-high-tech supervision constraining aliens are somewhat plausible; coordinated trolling less so.

5dynomight
I don't think I have any argument that it's unlikely aliens are screwing with us—I just feel it is, personally. I definitely don't assume our sensors are good enough to detect aliens. I'm specifically arguing we aren't detecting alien aircraft, not that alien aircraft aren't here. That sounds like a silly distinction, but I'd genuinely give much higher probability to "there are totally undetected alien aircraft on earth" than "we are detecting glimpses of alien aircraft on earth." Regarding your last point, I totally agree those things wouldn't explain the weird claims we get from intelligence-connected people. (Except indirectly—e.g. rumors spread more easily when people think something is possible for other reasons.) I think that our full set of observations are hard to explain without aliens! That is, I think P[everything | no aliens] is low. I just think P[everything | aliens] is even lower.
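In odds form, the comparison being made is just the standard Bayes identity (supplied here for clarity; it is not part of the original comment):

$$\frac{P[\text{aliens} \mid E]}{P[\text{no aliens} \mid E]} = \frac{P[\text{aliens}]}{P[\text{no aliens}]} \cdot \frac{P[E \mid \text{aliens}]}{P[E \mid \text{no aliens}]}$$

Both likelihoods can be tiny; what moves the posterior is only their ratio (and the prior).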

So Grusch is another one of these Pentagon UAP investigatory program guys, which means he is claiming people have come to him from the compartmentalised Special Access Programs claiming they have recovered craft. That is important because, unless he says somewhere that he personally witnessed these craft, it is perfectly possible he fully believes his claim and is telling the truth in the sense that yes, someone has come to him with these claims. Unfortunately I suspect whoever these first-hand sources are will be shrouded entirely in classified red tape. I agree at ... (read more)

The best evidence that addresses both your claims would probably come from the military, since they have both state-of-the-art sensors and reliable witnesses. The recent surge in UFO coverage is almost all related to branches of the military (mostly Navy?), so the simple explanation is: it's classified to varying degrees. My understanding is that there is the publicly released stuff, which is somewhat underwhelming, then some evidence Congress and the like has seen during briefings, and then probably more hush-hush stuff above that for non-civilians. The member... (read more)

2mako yass
A little smidge of insight about what kinds of things they discuss behind closed doors can be seen here. Former AATIP guy Lou Elizondo (a government worker who was responsible for collecting and investigating reports, for a time) says he's seen some wild stuff that wasn't released, but idk whether he's making it up or what.

I should have clarified a bit: I was using the term 'military industrial complex' to try to narrow in on the much more technocratic underbelly of the American Defence/Intelligence community and private contractors. I don't have any special knowledge of the area, so forgive me, but essentially I mean DARPA and the like, or any agency with a large black budget.

Whatever they are doing does not need to have any connection to whatever the public-facing government says in press briefings. It is perfectly possible that right now a priority for some of these agencies ... (read more)

Why is there so little mention of the potential role of the military industrial complex in developing AGI, rather than a public AI lab? The money is available, the will, the history (ARPANET was the precursor to the internet). I am vaguely aware there isn't much to suggest the MIC is on the cutting edge of AI, but there wouldn't be if it were all black-budget projects. If that is the case, it presumably implies a very difficult situation, because the broader alignment community would have no idea when crucial thresholds were being crossed.

2Vladimir_Nesov
I'm guessing the state of government's attitude at the moment might be characterized by the recent White House press briefing question, where a reporter, quoting Yudkowsky, asked about concerns that "literally everyone on Earth will die", and got a reception similar to what you'd expect if he asked about UFOs or bigfoot, just coated in political boilerplate. "But thank you Peter, thank you for the drama," "On a little more serious topic..." The other journalists were unsuccessfully struggling to restrain their laughter. The Overton window might be getting there, but it's not there yet, and it's unclear if it gets there before AGI is deployed. It's sad the question didn't mention the AI Impacts survey result, which I think is the most legible two-sentence argument at the moment.

Disclaimer: I myself am a newer user from last year.

I think trying to change downvoting norms and behaviours could help a lot here and save you some workload on the moderation end. Generally, poor-quality posters will leave if you ignore and downvote them. Recently, there has been an uptick in these posts, and of the ones I have seen, many are upvoted and engaged with. To me, that says users here are too hesitant to downvote. Of course, that raises the question of how to do that, and whether doing so is undesirable because it will broadly repel many new users, some of whom will not be "bad". Overall, though, I think encouraging existing users to downvote should help keep the well-kept garden.

0Legionnaire
I think more downvoting being the solution depends on the goals. If our goal is only to maintain the current quality, that seems like a solution. If the goal is to grow in users and quality, I think diverting people to a real-time discussion location like Discord could be more effective. E.g. a new user coming to this site might not have any idea a particular article exists that they should read before writing and posting their 3-page thesis on why AI will/won't be great, only to have their work downvoted (it is insulting and off-putting to be downvoted), and in the end we may miss out on persuading/gaining people. In a chat, a quick back-and-forth could steer them in the right direction right off the bat.

No, that was just a joke Lex was making. I don't know the exact timestamps, but in most of the instances where he was questioned on his own positions or estimations of the situation, Lex seemed uncomfortable to me, including the alien civilisation example. At one point I recall actually switching to the video, and Lex had his head in his hands, which, body-language-wise, seems pretty universally a desperate pose.

There were definitely parts where I thought Lex seemed uncomfortable, not just limited to specific concepts but when questions got turned around a bit towards what he thought. Lex started podcasting very much in the Joe Rogan sphere of influence, to the extent that I think he uses a similar style, which is very open and lets the other person speak/have a platform, but perhaps at the cost of being a bit wishy-washy. Nevertheless, it's a huge podcast with a lot of reach.

1Οἰφαισλής Τύραννος
Like when at 1:03:31 he suggested that he was a robot trying to play human characters? Those kinds of words make me think that there is something extremely worrisome and wrong with him.

This is why I don't place much confidence in projections about how the population will be affected by TAI from people like Sam Altman either. You have to consider they are very likely to be completely out of touch with the average person, and so have absolutely terrible intuitions about how they respond to anything, let alone forecasting long-term implications for them stemming from TAI. If you get some normal people together and make sure they take the proposition of TAI and everything it entails (such as widespread joblessness) seriously, I suspect you would encounter a lot more fear/apprehension around the kinds of behaviours and ways of living that is going to produce.

I think what stands out to me the most is big tech/big money now getting involved seriously. That has a lot of potential for acceleration just because of funding implications. I frequent some financial/stock websites and have noticed AI become not just a major buzzword, but even the subject of sentiments along the lines of 'AI could boost productivity and offset a potential recession in the near future'. The rapid release of LLM models seems to have jump-started public interest in AI; what remains to be seen is how that interest manifests. I am per... (read more)

1Igor Ivanov
I think we are entering a black swan, and it's hard to predict anything.

I think it may be necessary to accept that at first, there may need to be a stage of general AI wariness within public opinion before AI Safety and specific facets of the topic are explored. In a sense, the public has not yet fully digested 'AI is a serious risk', or perhaps even 'AI will be transformative to human life in the relatively near-term future'. I don't think that is a phase that can simply be skipped, and it will probably be useful to get as many people broadly on topic before the more specific messaging, because if they are... (read more)

I had never heard of Standpoint epistemology prior to this post, but have encountered plenty of thinking that seems similar to what it espouses. One thing I cannot figure out at all is how this functionally differs from surveying a specific demographic on an issue. How, exactly, is whatever this is more useful? In fact, to me it seems likely to be functionally worse, in that compared to a survey the sample size is small and there is absolutely no control group; as someone else pointed out, we don't get any sense of what any other group responds with given the same questi... (read more)

5tailcalled
I would definitely find it interesting to survey people-in-general too. However, that seems quite difficult. First of all, the site I'm using to survey people mainly has people from the USA and Britain. Secondly, most people don't speak any languages that I speak, so I cannot design the questions for them myself, nor can I interpret their answers myself. It would also be a much bigger project, as I would need to put even more effort into understanding their local cultures in order to ask relevant questions.

A control group is mainly relevant if one is interested in differences, e.g. if one wants to know how Caucasian-American problems differ from African-American problems. That may very well be a topic of interest, but I think African-American problems are also interesting in and of themselves.

The necessary sample size to understand something depends heavily on the amount of variance in that thing and the precision to which you want to map out the variance. For instance, if you want to estimate the mean value μ for a variable with standard deviation σ, then typically the accuracy (standard error) of the estimate μ̂ will be proportional to σ/√N. If σ is low - that is, if there is broad agreement, where there is a lesson that matches with all of the different narratives - then the needed sample size to get a small standard error is also low. This seemed to happen to a great degree in this survey: while the exact content of the different participants' responses differed a lot from participant to participant, the updates to my beliefs that seemed to be suggested by their experiences didn't differ hugely. So I feel relatively safe making those updates. (That said, ideally I should still cover the various biases I mentioned at the end of my post.)

As for "surveying a specific demographic on an issue", do you mean something like opinion polls? Opinion polls tend to use questions that are more rigid/less open-ended, have lower information content, and focus on more "processe... (read more)
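For readers who want to check the σ/√N claim numerically, here is a minimal Python sketch (my own illustration, with arbitrary numbers; nothing here comes from the survey itself):

```python
import random, statistics

# The standard error of a sample mean shrinks like sigma / sqrt(N).
sigma = 2.0  # assumed population standard deviation
for n in [10, 100, 1000]:
    # Repeatedly draw samples of size n and see how much the mean estimate varies.
    means = [statistics.fmean(random.gauss(0, sigma) for _ in range(n))
             for _ in range(2000)]
    observed_se = statistics.stdev(means)
    predicted_se = sigma / n ** 0.5
    print(f"N={n:4d}  observed SE={observed_se:.3f}  sigma/sqrt(N)={predicted_se:.3f}")
```

The observed spread of the estimates tracks σ/√N closely, which is why a low σ (broad agreement) permits a small sample.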

I'm not sure about 75%, but it is an interesting subject, and I do think the consensus view is slightly too sceptical. I don't have any expertise, but one thing that always sticks out to me as decreasing the likelihood of Bigfoot's existence is the lack of remains. OK, I buy that encounters could be rare enough that there hasn't been one since the advent of the smartphone. But where are the skeletons? Is part of the claim that they might have some type of burial grounds? Very remote territory they stick to without exceptions?

1leerylizard
It's discussed in the Reddit comments, if you want more details, but briefly: a rare species with a long life might leave on the order of ~100 dead a year. If each corpse has, say, a 1e-5 chance (a low but still plausible number) of being found by a person, then it could take a while. I don't know of any claim that they would take care of their dead, but I don't see that as implausible.
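Spelling out the expected-value arithmetic with the comment's own illustrative numbers:

$$\mathbb{E}[\text{finds per year}] = 100 \times 10^{-5} = 10^{-3}$$

i.e. roughly one recovered corpse per thousand years, so a total absence of remains in the smartphone era is unsurprising under these assumptions.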

I don't think AI in the long run will be comparable to events like the industrial revolution (or anything, historically), because AI will be less tool-like and more agent-like in my view. That is not a situation that has any historical precedent. A famous investor, Ray Dalio, made a point along the lines that recessions, bubbles, etc. (but really any rare-ish economic event) are incredibly hard to model, because the length of time the economic system has existed is actually relatively short, so we don't have that large a sample size. That point can b... (read more)

1ponkaloupe
it’s not clear to me that this distinction is real, or would matter even if it is real. from my perspective, looking up, i am an agent within the company i work within. from the employer’s perspective, looking down, i am a tool to drive revenue. this relation exists all the way up through to the C-suite, and then the hedge funds and retirement fund managers, and back around to the employees who own those funds. in our capitalist system of ownership every agent is also someone else’s tool. our economic systems have weathered all these events. if your point is that AI is of the same class as bubbles/recessions, then shouldn’t the takeaway be that our economic systems can handle it — just expecting it to be as painful as any other economic swing? i suppose i probably just don’t understand what you mean when you speak of “rethinking” the economic system. that sounds like a revolutionary change, whereas for the dominant economic systems today, looking back i can trace what is more of an evolutionary path from the dawn of cities/trade up to the present day. the only time i can say we’ve “rethought” our economic system is when various countries tried to pivot from their established distributed system to a centrally managed system of production more or less “overnight”.

Transformative AI will demand a rethink of the entire economic system. The world economy is based on an underlying assumption that most humans are essentially capable of being productive in some real way that generates value. Once that concept is eroded, and my intuition is that it will only take a surprisingly small percentage of people being rendered unproductive, some form of redistribution will probably be required. Rather than designing this system in such a way that 'basic' needs are provided/paid for, I think a percentage of output from AI gains sho... (read more)

1ponkaloupe
i'm generally receptive to the idea that our economic systems could be changed significantly for the better, but looking historically i don't think any of this "demands" a rethink of the dominant economic system in play today. it will mutate in the same patchwork way it has ever since the invention of the printing press (an early device that turned a previously scarce resource into an abundant one). it somehow made it through both the explosive decrease in energy scarcity of the industrial revolution and the explosive decrease in information scarcity of the past 50 years. i think there's some argument here that the periods which experience outsized economic growth are largely those same periods which experienced rapid decrease in scarcity of some underlying resource, though i don't have the data to properly claim that. on the other hand, the industrial revolution while successful in economic terms had some pretty terrible social consequences at the time: generally poor living and working conditions for large segments of the population. the response to this was largely social and political: unions, regulation. the actual economic system has proven itself to be robust to these kind of changes and also faster at responding to them than the social/political systems, so i think the more appropriate focus is on the latter: are our social and political systems of today up to the task of handling another rapid decrease in scarcity?

Cultural differences explain racial differences a lot better than genetics, at least for now.

Where is the evidence for this? I am not really well versed in this topic, but am under the impression that if this were true, it would be heavily promoted/reported on.

1mruwnik
Isn't that the whole point behind all the stories about poor children ending up at Harvard? About increasing spending on schools in more disadvantaged areas? The problem with finding scientific evidence for this is that the whole topic is radioactive. Whatever your results are, you'll be called an SJW, a racist, woke, or accused of wanting to reintroduce eugenics. Much safer and easier to just choose a different subject for your grant proposal.
6mruwnik
Well, race is a very ill-defined term from a genetic point of view. How would you define "white"? "Black" covers vastly more, genetically speaking, than all the other races put together. Which is to be expected, since humans originated in Africa. Where your ancestors are from is important in that it will make you more or less susceptible to various things (e.g. Africans are more resistant to malaria, Scandinavians are more resistant to HIV, pastoral people can tolerate lactose), but these tend to be single genes, or at most a few. In the case of features that are regulated by multiple genes, it's spread out a lot more, with lots of genes giving small boosts.

In the case of intelligence, it's a bit like asking whether there's a homosexuality or speech gene. The simple answer is no; the deeper answer is (as always in biology) "it depends". For example, in the case of speech, there is a family in England with a genetic disorder that results in them not being able to speak. This doesn't mean that FOXP2 is responsible for speech. It just means that it's a critical element, the lack of which will break the speech system.

Intelligence is (most likely) similar. There are lots (hundreds or thousands) of genes correlated with intelligence. These genes are spread all over the gene pool, which has always (except on remote islands, e.g. Tasmania) been mixing itself around. If there were a single gene for intelligence, I'm pretty sure it would spread really fast, unless it was a recent (e.g. 1000 years) mutation. Unless it was somehow intrinsically connected to superficial characteristics like toenail size, beard length, or skull shape, the intelligence gene would spread to other places without changing how the recipients look.

It's like eyesight. Having good eyes is very useful. So you'd expect most people to have similar levels of sight (sans various defects and with a normalish distribution) because natural selection will be pushing up to the Pareto limit. Intelligence is... (read more)
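As a toy illustration of the "lots of genes giving small boosts" point (my own sketch, with made-up numbers, not anything from the comment): summing many small additive effects produces the familiar bell curve.

```python
import random, statistics

# Toy model: a trait influenced by many loci, each giving a small additive boost.
n_genes = 1000   # assumed number of relevant loci
effect = 0.1     # assumed boost per "beneficial" variant
population = []
for _ in range(10_000):
    # Each individual carries each variant independently with probability 0.5.
    trait = sum(effect for _ in range(n_genes) if random.random() < 0.5)
    population.append(trait)
print(f"mean={statistics.fmean(population):.1f}, sd={statistics.stdev(population):.2f}")
# -> roughly mean=50.0, sd=1.58: a narrow, normal-ish spread, with no single
#    variant able to move an individual far from the population mean.
```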

To me it seems like the current zeitgeist is just not up to addressing this question without being almost entirely captured by bad-faith actors on both sides, and therefore causing some non-trivial social unrest. There might be some positive gain to be had from changes in policies, depending on the reality of genetic differences in IQ; however, policy makers would have to be much more nuanced and capable than they appear to be. Even if this were possible, it would have to be weighed against the social aspects.

Your last point seems like it agrees with point 7e becoming reality, where the US govt essentially allows existing big tech companies to pursue AI within certain 'acceptable' confines they think of at the time. In that case, how much AI might be slowed is entirely dependent on how tight a leash they keep them on. I think that scenario is actually quite likely, given I am sure there is considerable overlap between US alphabet agencies and sectors of big tech.

I like this post a lot, partially because I think it is an underdiscussed area, partially because it expands beyond the obvious semiconductor-type companies. One thing I would add is that, with almost no technology advancement, existing and soon-to-exist LLMs might make investing in social media (and internet-adjacent) companies much more volatile. This is because, as far as I can see, the bot problem for these companies should only become worse and worse as AI can more perfectly mimic a real user. This could lead to a kind of dead internet scenario where r... (read more)

Assuming this was the case, wouldn't it actually imply slightly more optimistic long term odds for humanity? A world where AI development actually resembles something like natural evolution and (maybe) throws up red flags that generate interest in solving alignment would be good, no?

I worry that the strategies we might scrounge up to avoid them will be of the sort that are very unlikely to generalise once the superintelligence risks do eventually rear their heads.

OK, sure, but extra resources and attention are still better than none.

2Artaxerxes
Yes, I do expect that if we don't get wiped out, maybe we'll get somewhat bigger "warning shots" that humanity may be likely to pay more attention to. I don't know how much that actually moves the needle, though. This isn't obvious to me; it might make things harder. Like how Elon Musk read Superintelligence and started developing concerns about AI risk, but the result was that he founded OpenAI and gave it a billion dollars to play with, regarding which I think you could make an argument that doing so accelerated timelines and reduced our chances of avoiding negative outcomes.
  • I’m somewhat surprised that I haven’t seen more vigorous commercialization of language models and commercial applications that seem to reliably add real value beyond novelty; this is some update toward thinking that language models are less impressive than they seemed to me, or that it’s harder to translate from a capable model into economic impact than I believed.

Minor point here, but I think this has less to do with the potential commercial utility of LLMs and more to do with the reticence of large tech companies to publicly release an LLM that poses a si... (read more)