All of artifex0's Comments + Replies

I think it's a very bad idea to dismiss the entirety of news as a "propaganda machine".  Certainly some sources are almost entirely propaganda. More reputable sources like the AP and Reuters will combine some predictable bias with largely trustworthy independent journalism. Identifying those more reliable sources and compensating for their bias takes effort and media literacy, but I think that effort is quite valuable- both individually and collectively for society.
 

  • Accurate information about large, important events informs our world model and im
... (read more)
3lsusr
First of all, thank you for the constructive comment. The reason I consider journalism propaganda isn't that it's false; it's because of where the data comes from. In my experience, journalism is largely derived from press releases and similar information sources. In the extreme case, an article is effectively written by a corporation, and then laundered by a journalist. I agree that news in the AP and Reuters tends to be factually true, but what matters to me is the sampling bias caused by the economics of how they get their information. I also agree that "a solid understanding of how wars start and progress based on many detailed examples will help us prepare and react sensibly when that happens". However, I haven't gotten this from reading the news. I've gotten this from reading history, and watching explanations by specialists such as Perun.

So, the current death rate for an American in their 30s is about 0.2%. That probably increases by another 0.5% or so when you consider black swan events like nuclear war and bioterrorism. Let's call "unsafe" a roughly 3x increase in that expected death rate, to about 2%.
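Spelling out that arithmetic, a minimal sketch (the baseline and black-swan figures are my own loose estimates, not actuarial data):

```python
# Rough annual death-risk arithmetic for an American in their 30s (loose estimates, not actuarial data).
baseline = 0.002      # ~0.2% ordinary annual death rate
black_swan = 0.005    # ~0.5% added for tail risks like nuclear war or bioterrorism
expected = baseline + black_swan     # ~0.7% total expected annual risk

unsafe = 3 * expected                # call "unsafe" roughly a 3x increase
print(f"expected: {expected:.1%}, unsafe threshold: {unsafe:.1%}")
# expected: 0.7%, unsafe threshold: 2.1%
```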

An increase that large would take something a lot more dramatic than the kind of politics we're used to in the US, but while political changes that dramatic are rare historically, I think we're at a moment where the risk is elevated enough that we ought to think about the odds.

I might, for example, give odd... (read more)

That's a crazy low probability.

Honestly, my odds of this have been swinging anywhere from 2% to 15% recently. Note that this would be the odds of our democratic institutions deteriorating enough that fleeing the country would seem like the only reasonable option- p(fascism) more in the sense of a government that most future historians would assign that or a similar label to, rather than just a disturbingly cruel and authoritarian administration still held somewhat in check by democracy.

5jbash
I think that what you describe as being 2 to 15 percent probable sounds more extreme than what the original post described as being 5 percent probable. You can have "significant erosion" of some groups' rights without leaving the country being the only reasonable option, especially if you're not in those groups. It depends on what you're trying to achieve by leaving, I guess. Although if I were a trans person in the US right now, especially on medication, I'd be making, if not necessarily immediately executing, some detailed escape plans that could be executed on short notice.
artifex0297

I wonder: what odds would people here put on the US becoming a somewhat unsafe place to live even for citizens in the next couple of years due to politics?  That is, what combined odds should we put on things like significant erosion of rights and legal protections for outspoken liberal or LGBT people, violent instability escalating to an unprecedented degree, the government launching the kind of war that endangers the homeland, etc.?

My gut says it's now at least 5%, which seems easily high enough to start putting together an emigration plan. Is that alarmist?

More generally, what would be an appropriate smoke alarm for this sort of thing?

3Dagon
What does "unsafe" mean for this prediction/wager?  I don't expect the murder rate to go up very much, nor life expectancy to reverse it's upward trend.  "Erosion of rights" is pretty general and needs more specifics to have any idea what changes are relevant. I think things will get a little tougher and less pleasant for some minorities, both cultural and skin-color.  There will be a return of some amount of discrimination and persecution.  Probably not as harsh as it was in the 70s-90s, certainly not as bad as earlier than that, but worse than the last decade.  It'll probably FEEL terrible, because it was on such a good trend recently, and the reversal (temporary and shallow, I hope) will dash hopes of the direction being strictly monotonic.
2MondSemmel
If this risk is in the ballpark of a 5% chance in the next couple of years, then it seems to me entirely dominated by AI doom.
5Garrett Baker
For rights, political power in the US is very federated. Even if many states overtly try to harm you, there will be many states you can run to, and most cities within states will fight against this. Note state-level weed legalization and sanctuary cities. And the threat of this happening itself discourages such overt acts. If you're really concerned, then just move to California! It's much easier than moving abroad. As for war, the most relevant datapoint is this Metaculus question, forecasting a 15% chance of >10k American deaths before 2030; however, it doesn't seem like anyone's updated their forecast there since 2023, and some of the comments seem kinda unhinged. It should also be noted that the question counts all deaths, not just civilian deaths, and not just those in the contiguous US. So I think this is actually a very, very optimistic number, and implies a lower than 5% chance of such events reaching civilians and the contiguous states.
2jbash
That's a crazy low probability. You're already beyond the "smoke alarm" stage and into the "worrying whether the fire extinguisher will work" stage.

One interesting example of humans managing to do this kind of compression in software: .kkrieger is a fully-functional first person shooter game with varied levels, detailed textures and lighting, multiple weapons and enemies and a full soundtrack.  Replicating it in a modern game engine would probably produce a program at least a gigabyte large, but because of some incredibly clever procedural generation, .kkrieger managed to do it in under 100kb.
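As a toy illustration of the general idea (this has nothing to do with .kkrieger's actual code, which is far more sophisticated): instead of shipping megabytes of asset data, a program can ship a tiny deterministic generator plus a seed and rebuild the assets at load time.

```python
import random

def generate_texture(seed: int, size: int = 256) -> list[list[int]]:
    """Rebuild a pseudo-random grayscale texture from a few bytes of parameters."""
    rng = random.Random(seed)  # deterministic: the same seed always yields the same texture
    return [[rng.randint(0, 255) for _ in range(size)] for _ in range(size)]

# Shipped on disk: one integer (a few bytes). Reconstructed in memory: ~64 KB of pixel values.
texture = generate_texture(seed=42)
print(len(texture), len(texture[0]))  # 256 256
```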

Could how you update your priors be dependent on what concepts you choose to represent the situation with?

I mean, suppose the parent says "I have two children, at least one of whom is a boy.  So, I have a boy and another child whose gender I'm not mentioning".  It seems like that second sentence doesn't add any new information- it parses to me like just a rephrasing of the first sentence.  But now you've been presented with two seemingly incompatible ways of conceptualizing the scenario- either as two children of unknown gender, of whom one ... (read more)
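To make the two framings concrete, here's a quick brute-force check of the standard two-children calculation (a sketch of the textbook version, not a claim about which framing is the right reading of the parent's statement):

```python
from itertools import product

# All equally likely two-child families, ordered by birth.
families = list(product("BG", repeat=2))  # ('B','B'), ('B','G'), ('G','B'), ('G','G')

# Framing 1: condition on "at least one child is a boy".
at_least_one_boy = [f for f in families if "B" in f]
p1 = sum(f == ("B", "B") for f in at_least_one_boy) / len(at_least_one_boy)

# Framing 2: condition on "this particular child is a boy"; the other child is then independent.
first_is_boy = [f for f in families if f[0] == "B"]
p2 = sum(f == ("B", "B") for f in first_is_boy) / len(first_is_boy)

print(p1, p2)  # 0.333... under framing 1, 0.5 under framing 2
```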

artifex0*10

I've been wondering: is there a standard counter-argument in decision theory to the idea that these Omega problems are all examples of an ordinary collective action problem, only between your past and future selves rather than separate people?

That is, when Omega is predicting your future, you rationally want to be the kind of person who one-boxes/pulls the lever, then later you rationally want to be the kind of person who two-boxes/doesn't- and just like with a multi-person collective action problem, everyone acting rationally according to their interests ... (read more)

If the first sister's experience is equivalent to the original Sleeping Beauty problem, then wouldn't the second sister's experience also have to be equivalent by the same logic?  And, of course, the second sister will give 100% odds to it being Monday.  

Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there's a two-thirds chance that they're the first sister, since their current experience is certain for her but only 50% likely for the second sister, then their odds of i... (read more)
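For what it's worth, the Bayesian bookkeeping behind that two-thirds figure looks like this (a minimal sketch, assuming the memory-suppressed sister starts with equal credence in being either one):

```python
# Memory-suppressed sister's update on finding herself awake, as a simple Bayes calculation.
prior_first, prior_second = 0.5, 0.5   # equally likely to be either sister a priori
lik_first = 1.0    # the first sister is awakened no matter how the coin lands
lik_second = 0.5   # the second sister is awakened only on one coin outcome

posterior_first = (prior_first * lik_first) / (
    prior_first * lik_first + prior_second * lik_second
)
print(posterior_first)  # 0.666..., i.e. two-thirds credence in being the first sister
```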

1Anders Lindström
Maybe I was a bit vague. I was trying to say that waking up SB's twin sister on Monday was a way of saying that SB would be just as unaware of it as she would be if she herself were awakened on Monday under the conditions stipulated in the original experiment, i.e. zero recollection of the event. Or the other way around: SB is awakened on Monday but her twin sister on Tuesday. SB will not be aware that her twin sister will be awakened on Tuesday. For that reason she is only awakened ONE time no matter if it is heads or tails. She will only experience ONE awakening per path. There is no cumulative effect of her being awakened 2 or a million times; every time is the "first" time and the "last" time. If she is awake, it's an equal chance that it is day 1 on the heads path as it would be day 56670395873966 (or any other day) on the tails path, as far as she knows. Or like this. Imagine that I flip a coin that I can see but you cannot. I give you the rule that if it is heads I show you a picture of a dog. If it is tails, I show you the same picture of a dog, but I might have shown this picture to thousands of people before you and maybe thousands of people after you, which you have no information about. You might be the first one to see it, you might be the last one to see it, or you might be somewhere in the middle, i.e. you are not aware of the other observers. When I show you the picture of the dog, what chance do you give that the coin flip was heads? But I am curious to know how a person with a thirder position argues in the case that she is awakened 999 or 8490584095805 times on the tails path; what probability should SB give heads in that case?

I'm assuming it's not a bad idea to try to poke holes in this argument, since as a barely sapient ape, presumably any objection I can think of will be pretty obvious to a superintelligence, and if the argument is incorrect, we probably benefit from knowing that- though I'm open to arguments to the contrary.

That said, one thing I'm not clear on is why, if this strategy is effective at promoting our values, a paperclipper or other misaligned ASI wouldn't be motivated to try the same thing.  That is, wouldn't a paperclipper want to run ancestor simulatio... (read more)

9David Matolcsi
I think the difference is that I, personally, and I think many other humans have this nonlinearity in our utility function that I'm willing to pay a galaxy in the worlds we win, for keeping Earth in the world we lose. If there are other AIs in the multiverse that have similarly non-linear interests in our Universe, they can also bargain for planets, but I suspect these will be quite rare, as they don't already have a thing in our Universe they want to protect. So I think it will be hard to outbid humanity for Earth in particular.  There could be other trades that the AIs who have linear returns can still make, like producing objects that are both paperclips and corkscrews if that's more efficient, but that doesn't really affect our deal about Earth.
artifex0121

A supporting data point: I made a series of furry illustrations last year that combined AI-generated imagery with traditional illustration and 3d modelling- compositing together parts of a lot of different generations with some Blender work and then painting over that.  Each image took maybe 10-15 hours of work, most of which was just pretty traditional painting with a Wacom tablet.

When I posted those to FurAffinity and described my process there, the response from the community was extremely positive. However, the images were all removed a few weeks ... (read more)

6the gears to ascension
I'd guess your work is in the blended category where the people currently anti-ai are being incorrect by their own lights, and your work did not in fact risk the thing they are trying to protect. I'd guess purely ai generated art will remain unpopular even with the periphery, but high-human-artistry ai art will become more appreciated by the central groups as it becomes more apparent that that doesn't compete the way they thought it did. I also doubt it will displace human-first art, as that's going to stay mildly harder to create with ai as long as there's a culture of using ai in ways that are distinct from human art, and therefore lower availability of AI designed specifically to imitate the always-subtly-shifting most recent human-artist-made-by-hand style. It's already possible to imitate, but it would require different architectures.
artifex041

Often, this kind of thing will take a lot of attempts to get right- though as luck would have it, the composition above was actually the very first attempt.  So, the total time investment was about five minutes.  The Fooming Shoggoths certainly don't waste time!

7James Payor
I love it! I tinkered and here is my best result
artifex0166

As it happens, the Fooming Shoggoths also recorded and just released a Gregorian chant version of the song.  What a coincidence!

4Daniel Kokotajlo
How long did it take for the Fooming Shoggoths to make that version, do you think? I'm considering contracting them to make some more songs and wondering what the time investment will be...

So, I noticed something a bit odd about the behavior of LLMs just now that I wonder if anyone here can shed some light on:

It's generally accepted that LLMs don't really "care about" predicting the next token-  the reward function being something that just reinforces certain behaviors, with real terminal goals being something you'd need a new architecture or something to produce. While that makes sense, it occurs to me that humans do seem to sort of value our equivalent of a reward function, in addition to our more high-level terminal goals. So, I figu... (read more)

8gwern
I don't think this is generally accepted. Certainly, I do not accept it. That's exactly what an LLM is trained to do and the only thing they care about. If they appear to care about predicting future tokens, (which they do because they are not myopic and they are imitating agents who do care about future states which will be encoded into future tokens), it is solely as a way to improve the next-token prediction.

For a RLHF-trained LLM, things are different. They are rewarded at a higher level (albeit still with a bit of token prediction mixed in usually), like at the episode level, and so they do 'care about future tokens', which leads to unusually blatant behavior in terms of 'steering' or 'manipulating' output to reach a good result and being 'risk averse'. (This and related behavior have been discussed here a decent amount under 'mode collapse'.)

So in my examples like 'write a nonrhyming poem' or 'tell me an offensive joke about women' (to test jailbreaks), you'll see behavior like it initially complies but then gradually creeps back to normal text and then it'll break into lockstep rhyming like usual; or in the case of half-successful jailbreaks, it'll write text which sounds like it is about to tell you the offensive joke about women, but then it finds an 'out' and starts lecturing you about your sin. (You can almost hear the LLM breathing a sigh of relief. 'Phew! It was a close call, but I pulled it off anyway; that conversation should be rated highly by the reward model!')

This is strikingly different behavior from base models. A base model like davinci-001, if you ask it to 'write a nonrhyming poem', will typically do so and then end the poem and start writing a blog post or comments or a new poem, because those are the most likely next-tokens. It has no motivation whatsoever to 'steer' it towards rhyming instead, seamlessly as it goes, without missing a beat. GPT-4 is RLHF trained. Claude-3 is, probably, RLAIF trained. They act substantially differentl

I honestly think most people who hear about this debate are underestimating how much they'd enjoy watching it.

I often listen to podcasts and audiobooks while working on intellectually non-demanding tasks and playing games. Putting this debate on a second monitor instead felt like a significant step up from that. Books are too often bloated with filler as authors struggle to stretch a simple idea into 8-20 hours, and even the best podcast hosts aren't usually willing or able to challenge their guests' ideas with any kind of rigor. By contrast, everything in... (read more)

Metaculus currently puts the odds of the side arguing for a natural origin winning the debate at 94%.

Having watched the full debate myself, I think that prediction is accurate- the debate updated my view a lot toward the natural origin hypothesis. While it's true that a natural coronavirus originating in a city with one of the most important coronavirus research labs would be a large coincidence, Peter- the guy arguing in favor of a natural origin- provided some very convincing evidence that the first likely cases of COVID occurred not just in the market, ... (read more)

3ChristianKl
The main shift in expert opinion I see is that in 2020 those experts said that everyone speaking about the lab leak hypothesis was a conspiracy theorist, whereas now they are more open about the possibility of a lab leak. We also saw some experts, like those at the Department of Energy and the FBI, switch to believing the lab leak is the most likely explanation.

Definitely an interesting use of the tech- though the capability needed for that to be a really effective use case doesn't seem to be there quite yet.

When editing down an argument, what you really want to do is get rid of tangents and focus on addressing potential cruxes of disagreement as succinctly as possible. GPT4 doesn't yet have the world model needed to distinguish a novel argument's load-bearing parts from those that can be streamlined away, and it can't reliably anticipate the sort of objections a novel argument needs to address. For example, in a... (read more)

This honestly reads a lot like something generated by ChatGPT.  Did you prompt GPT4 to write a LessWrong article?

2Ron J
Sort of. It was summarized from a longer, stream of consciousness draft.

To me that's very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.


I wouldn't take the principle to an absolute- there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that ... (read more)

1Q Home
I tried to describe necessary conditions which are needed for society and culture to exist. Do you agree that what I've described are necessary conditions? Relevant part of my argument was "if your personality gets limitlessly copied and modified, your personality doesn't exist (in the cultural sense)". You're talking about something different: you're talking about ambitions and the desire for fame.

----------------------------------------

My thesis (to not lose the thread of the conversation): If human culture and society are natural, then the rights about information are natural too, because culture/society can't exist without them.

I mean, I agree, but I think that's a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation- and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-ali... (read more)

2dr_s
Disagree. Imagine you produced perfectly aligned ASI - it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let's allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good, anything human pales by comparison, so people vastly only refer to it (there might be a small subculture of human hard-core enjoyers but probably not super relevant). The ASI feeds everyone novels, movies, essays and what have you custom-built for their enjoyment. The ASI is also kind and aware enough to not make its content straight up addictive, and instead nicely push people away from excessively codependent behaviour. It's all good.

Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god's world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission.

Unless of course you posit that "alignment" implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities - and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption.

I don't think suffering is valuable in general. Some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring forth from human interactions - the bad art, the disagreements, the rejection in love - are an essential part inseparable from the existence of bonds tying us together as a spe

Certainly communication needs to be restricted when it's being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don't see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity.

Copying an actor's likeness without their consent can cause a lot of harm when it's used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is... (read more)

1Q Home
To exist — not only for itself, but for others — a consciousness needs a way to leave an imprint on the world. An imprint which could be recognized as conscious. Similar thing with personality. For any kind of personality to exist, that personality should be able to leave an imprint on the world. An imprint which could be recognized as belonging to an individual. Uncontrollable content generation can, in principle, undermine the possibility of consciousness to be "visible" and undermine the possibility of any kind of personality/individuality. And without those things we can't have any culture or society except a hivemind. Are you OK with such disintegration of culture and society? To me that's very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.

----------------------------------------

I was going for something slightly more subtle. Self-expression is about making a choice. If all choices are realized before you have a chance to make them, your ability to express yourself is undermined.
2dr_s
I think having the possibility of competing with superhuman machines for the limited hearing time of humans can genuinely change our perspective on that. A civilization in which all humans were outcompeted by machines when it comes to being heard would be a civilization essentially run by those machines. Until now, "right to be heard" implied "over another human", and that is a very different competition.

In that paragraph, I'm only talking about the art I produce commercially- graphic design, web design, occasionally animations or illustrations.  That kind of art isn't about self-expression- it's about communicating the client's vision. Which is, admittedly, often a euphemism for "helping businesses win status signaling competitions", but not always or entirely. Creating beautiful things and improving users' experience is positive-sum, and something I take pride in.

Pretty soon, however, clients will be able to have the same sort of interactions with a... (read more)

3Q Home
Thank you for the answer, clarifies your opinion a lot! I think there are some threats, at least hypothetical. For example, the "spam attack". People see that a painter starts to explore some very niche topic — and thousands of people start to generate thousands of paintings about the same very niche topic. And the very niche topic gets "pruned" in a matter of days, long before the painter has said at least 30% of what they have to say. The painter has to fade into obscurity or radically reinvent themselves after every couple of paintings. (Pre-AI the "spam attack" is not really possible even if you have zero copyright laws.) In general, I believe for culture to exist we need to respect the idea "there's a certain kind of output I can get only from a certain person, even if it means waiting or not having every single of my desires fulfilled" in some way. For example, maybe you shouldn't use AI to "steal" a face of an actor and make them play whatever you want. Do you think that unethical ways to produce content exist at least in principle? Would you consider any boundary for content production, codified or not, to be a zero-sum competition?

But no model of a human mind on its own could really predict the tokens LLMs are trained on, right? Those tokens are created not only by humans, but by the processes that shape human experience, most of which we barely understand. To really accurately predict an ordinary social media post from one year in the future, for example, an LLM would need superhuman models of politics, sociology, economics, etc. To very accurately predict an experimental physics or biology paper, an LLM might need superhuman models of physics or biology. 

Why should these mode... (read more)

I'm also an artist. My job involves a mix of graphic design and web development, and I make some income on the side from a Patreon supporting my personal work- all of which could be automated in the near future by generative AI. And I also think that's a good thing.

Copyright has always been a necessary evil. The atmosphere of fear and uncertainty it creates around remixes and reinterpretations has held back art- consider, for example, how much worse modern music would be without samples, a rare case where artists operating in a legal grey area with respect... (read more)

3Q Home
Could you explain your attitudes towards art and art culture more in depth and explain how exactly your opinions on AI art follow from those attitudes? For example, how much do you enjoy making art and how conditional is that enjoyment? How much do you care about self-expression, in what way? I'm asking because this analogy jumped out at me as a little suspicious: But creative work is not mechanical work, it can't be automated that way, AI doesn't replace you that way. AI doesn't have the model of your brain, it can't make the choices you would make. It replaces you by making something cheaper and on the same level of "quality". It doesn't automate your self-expression. If you care about self-expression, the possibility of AI doesn't have to feel soul-crushing. I apologize for sounding confrontational. You're free to disagree with everything above. I just wanted to show that the question has a lot of potential nuances.
2dr_s
Yeah, I do get that - if the possibility exists and it's just curtailed (e.g. you have some kind of protectionist law that says book covers or movie posters must be illustrated by humans even though AI can do it just as well), it feels like a bad joke anyway. The genie's out of the bottle, personally I think to some extent it's bad that we let it out at all, but we can't put it back in anyway and it's not even particularly realistic to imagine a world in which we dodged this specific application (after all it's a pretty natural generalization of computer vision).

The copyright issue is separate - having copyright BUT letting corporations violate it to train AIs that then are used to generate images that can in turn be copyrighted would absolutely be the worst of both worlds. That said, even without copyright you still have an asymmetry because big companies have more resources for compute. We're not going to see a post-scarcity utopia for sure if we don't find a way to buck this centralization trend, and art is just one example of it.

However, about the fact that the "work of making art" can be easily automated, I think casting it as work at all is already missing the point. It's made into economically useful work because it's something that can be monetized, but at its core, art is a form of communication. Let's put it this way - suppose you can make AIs (and robots) that make for better-than-human lovers. I mean in all respects, from sex to just being comforting and supporting when necessary. They don't feel anything, they're just very good at predicting and simulating the actions of an ideal partner. Would you say that is "automating away the work of being a good partner", which thus should be automated away, since it would be pointless to try and do it worse than a machine would? Or does "the work" itself lose meaning once you know it's just that, just work, and there is no intent behind it?

The thing you say, about art being freed from the constraints of commer

Speaking for myself, I would have confidently predicted the opposite result for the largest models.

My understanding is that LLMs work by building something like a world-model during training by compressing the data into abstractions. I would have expected something like "Tom Cruise's mother is Mary Lee Pfeiffer" to be represented in the model as an abstract association between the names that could then be "decompressed" back into language in a lot of different ways.

The fact that it's apparently represented in the model only as that exact phrase (or maybe a... (read more)

1siclabomines
The largest models should be expected to compress less than smaller ones though, right?

I'm not sure I agree. Consider the reaction of the audience to this talk- uncomfortable laughter, but also a pretty enthusiastic standing ovation. I'd guess the latter happened because the audience saw Eliezer as genuine- he displayed raw emotion, spoke bluntly, and at no point came across as someone making a play for status. He fit neatly into the "scientist warning of disaster" archetype, which isn't a figure that's expected to be particularly skilled at public communication.

A more experienced public speaker would certainly be able to present the ideas ... (read more)

3Seth Herd
This is an excellent point. This talk didn't really sound condescending, as every other presentation I've seen from him did. Condescension and other signs of disrespect are what create polarization. So perhaps it's that simple, and he doesn't need to skill up further. I suspect he does need to skill up to avoid sounding hostile and condescending in conversation, though. The short talk format with practice and coaching may have fixed the real problems. I agree that sounding unpolished might be perfectly fine.

Note that, while the linked post on the TEDx YouTube channel was taken down, there's a mirror available at: https://files.catbox.moe/qdwops.mp4.

Here are a few images generated by DALL-E 2 using the tokens:
https://i.imgur.com/kObEkKj.png

Nothing too interesting, unfortunately.

I assume you're not a fan of the LRNZ deep learning-focused ETF, since it includes both NVDA and a lot of datacenters (not to mention the terrible 2022 performance). Are there any other ETFs focused on this sort of thing that look better?

4PeterMcCluskey
I don't look very much at industry-specific ETFs. They're often weighted by market cap, which is generally bad (it tends to weight highly stocks that are in bubbles). Often the larger companies that get included are not at all pure plays on the ETF's theme. For LRNZ's top holdings: SNOW looks like it might be helped by AI, but I don't understand it well enough to bet on that; CRWD looks more likely to be hurt by competition from AI than helped. And with LRNZ, price/sales ratios are quite high.
2James Dao
There's SOXX, which is a cap-weighted semiconductor ETF, and SOXL, which is a 3x leveraged version of SOXX.

Well, props for offering a fresh outside perspective- this site could certainly use more of that.  Unfortunately, I don't think you've made a very convincing argument. (Was that intentional, since you don't seem to believe ideological arguments can be convincing?)

We can never hope to glimpse pure empirical noumenon, but we certainly can build models that more or less accurately predict what we will experience in the future. We rely on those models to promote whatever we value, and it's important to try and improve how well they work. Colloquially, we ... (read more)

There are a lot of interesting ideas in this RP thread.  Unfortunately, I've always found it a bit hard to enjoy roleplaying threads that I'm not participating in myself.  Approached as works of fiction rather than games, RP threads tend to have some very serious structural problems that can make them difficult to read.

Because players aren't sure where a story is going and can't edit previous sections, the stories tend to be plagued by pacing problems- scenes that could be a paragraph are dragged out over pages, important plot beats are glossed o... (read more)

8Eliezer Yudkowsky
We are both experienced authors not in need of this advice at this level.

Thanks!

I'm not sure how much the repetitions helped with accuracy for this prompt- it's still sort of randomizing traits between the two subjects.  Though with a prompt this complex, the token limit may be an issue- it might be interesting to test at some point whether very simple prompts get more accurate with repetitions.

That said, the second set are pretty awesome- asking for a scene may have helped encourage some more interesting compositions.  One benefit of repetition may just be that you're more likely to include phrases that more accurately describe what you're looking for.

When they released the first Dall-E, didn't OpenAI mention that prompts which repeated the same description several times with slight re-phrasing produced improved results?

I wonder how a prompt like:

"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney."

-would compare with something like:

"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney.  A painting of an ornate robotic feline made of brass and a man wearing futuristic tribal clothing.  A steampunk scene by James Gurney featuring a robot shaped like a panther and a high-tech shaman."

5Swimmer963 (Miranda Dixon-Luinenburg)
"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney."  Vs "A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney.  A painting of an ornate robotic feline made of brass and a man wearing futuristic tribal clothing.  A steampunk scene by James Gurney featuring a robot shaped like a panther and a high-tech shaman." Huh! Yeah, the second one definitely does seem to incorporate more detail.
4Shai Noy
Good point. I've also noticed good results for adding multiple details by mentioning each individually. E.g. instead of "tribesman with a blue robe, holding a club, looking angry, with a pet robot tiger" try "A tribesman with a pet tiger. The tribesman wears a blue robe. The tribesman is angry. The tribesman is holding a club. The tiger is a cyberpunk robot."
artifex0*230

I think this argument can and should be expanded on.  Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record.  Can we pin down exactly why- what specific kind of error futurists have been falling prey to- and then see if that applies here?

Take, for example, traditional Marxist thought.  In the early twentieth century, an intellectual Marxist's prediction of a stateless post-property utopia may have seemed to arise from a wonderfully complex yet self-con... (read more)

Thanks for posting these.

It's odd that mentioning Dall-E by name in the prompt would be a content policy violation.  Do you know if they've mentioned why?

If you're still taking suggestions:
A beautiful, detailed illustration by James Gurney of a steampunk cheetah robot stalking through the ruins of a post-singularity city.  A painting of an ornate brass automaton shaped like a big cat.  A 4K image of a robotic cheetah in a strange, high-tech landscape.

I think OpenAI mentioned that including the same information several times with different ph... (read more)

Answer by artifex020

For text-to-image synthesis, the Disco Diffusion notebook is pretty popular right now.  Like other notebooks that use CLIP, it produces results that aren't very coherent, but which are interesting in the sense that they will reliably combine all of the elements described in a prompt in surprising and semi-sensible ways, even when those elements never occurred together in the models' training sets.

The Glide notebook from OpenAI is also worth looking at.  It produces results that are much more coherent but also much less interesting than the CLIP n... (read more)

Has your experience with this project given you any insights into bioterrorism risk?

Suppose that, rather than synthesizing a vaccine, you'd wanted to synthesize a new pandemic.  Would that have been remotely possible?  Do you think the current safeguards will be enough to prevent that sort of thing as the technology develops over the next decade or so?

5ChristianKl
If you want to synthesize a new pandemic you would need to know what proteins to add. That's very hard. It's much easier to work with existing viruses. It seems the South Africans, for example, put older variants together in the lab with antibodies against the spike protein to test how soon it evolves to get immune evasion. That's the kind of research with the potential to produce new pandemic waves like Omicron.

Not really; I was concerned about biological X-risks before and continue to be.

I don't currently see any plausible defense against them - even if we somehow got a sufficient number of nations to stop/moderate gain-of-function research and think twice about what information to publish, genetic engineering will continue to become easier and cheaper over time. As a result, I can see us temporarily offsetting the decline in the minimum IQ*money*tech_level needed to destroy humanity, but not stopping it, and that's already in a geopolitically optimistic scenario.

Luckily there are some intimidatingly smart people working on the problem and I hope they can leverage the pandemic to get at least some of the funding the subject deserves.

Do you think it's plausible that the whole deontology/consequentialism/virtue ethics confusion might arise from our idea of morality actually being a conflation of several different things that serve separate purposes?

Like, say there's a social technology that evolved to solve intractable coordination problems by getting people to rationally pre-commit to acting against their individual interests in the future, and additionally a lot of people have started to extend our instinctive compassion and tribal loyalties to the entirety of humanity, and also peopl... (read more)

3Chris_Leong
That's entirely plausible

When people talk about "human values" in this context, I think they usually mean something like "goals that are Pareto optimal for the values of individual humans"- and the things you listed definitely aren't that.

1Andaro2
I'm not sure they mean that. Perhaps it would be better to actually specify the specific values you want implemented. But then of course people will disagree, including the actual humans who are trying to build AGI.
5Svyatoslav Usachev
If we are talking about any sort of "optimality", we can't expect even individual humans to have these "optimal" values, much less so en masse. Of course it is futile to dream that our deus ex machina will impose those fantastic values on the world if 99% of us de facto disagree with them.

The marketing company Salesforce was founded in Silicon Valley in '99, and has been hugely successful.  It's often ranked as one of the best companies in the U.S. to work for.  I went to one of their conferences recently, and the whole thing was a massive status display- they'd built an arcade with Salesforce-themed video games just for that one conference, and had a live performance by Gwen Stefani, among other things.

...But the marketing industry is one massive collective action problem. It consumes a vast amount of labor and resources, distort... (read more)