So, the current death rate for an American in their 30s is about 0.2%. That probably increases another 0.5% or so when you consider black swan events like nuclear war and bioterrorism. Let's call "unsafe" a ~3x increase in that expected death rate to 2%.
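To spell out that arithmetic (just restating the numbers above):

```python
baseline = 0.002    # ~0.2% annual death rate for an American in their 30s
black_swan = 0.005  # rough allowance for nuclear war, bioterrorism, and similar tail risks
expected = baseline + black_swan

print(round(expected, 3))      # 0.007 -> ~0.7% expected annual death rate
print(round(3 * expected, 3))  # 0.021 -> roughly the 2% "unsafe" threshold
```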
An increase that large would take something a lot more dramatic than the kind of politics we're used to in the US, but while political changes that dramatic are rare historically, I think we're at a moment where the risk is elevated enough that we ought to think about the odds.
I might, for example, give odd...
That's a crazy low probability.
Honestly, my odds of this have been swinging anywhere from 2% to 15% recently. Note that this would be the odds of our democratic institutions deteriorating enough that fleeing the country would seem like the only reasonable option- p(fascism) more in the sense of a government that most future historians would assign that or a similar label to, rather than just a disturbingly cruel and authoritarian administration still held somewhat in check by democracy.
I wonder: what odds would people here put on the US becoming a somewhat unsafe place to live even for citizens in the next couple of years due to politics? That is, what combined odds should we put on things like significant erosion of rights and legal protections for outspoken liberal or LGBT people, violent instability escalating to an unprecedented degree, the government launching the kind of war that endangers the homeland, etc.?
My gut says it's now at least 5%, which seems easily high enough to start putting together an emigration plan. Is that alarmist?
More generally, what would be an appropriate smoke alarm for this sort of thing?
One interesting example of humans managing to do this kind of compression in software: .kkrieger is a fully functional first-person shooter with varied levels, detailed textures and lighting, multiple weapons and enemies, and a full soundtrack. Replicating it in a modern game engine would probably produce a program at least a gigabyte in size, but thanks to some incredibly clever procedural generation, .kkrieger managed to do it in under 100kb.
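To make the idea concrete, here's a toy sketch (nothing like .kkrieger's actual code) of why procedural assets compress so well: instead of shipping the pixels, you ship a tiny generator plus its parameters.

```python
import zlib
import numpy as np

def generate_texture(seed, size=512):
    """Build a texture from a seed by layering a few sine gratings."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size))
    tex = np.zeros((size, size))
    for _ in range(8):
        fx, fy, phase = rng.uniform(1, 40, 3)
        tex += np.sin(2 * np.pi * (fx * x + fy * y) + phase)
    tex -= tex.min()
    return (255 * tex / tex.max()).astype(np.uint8)

tex = generate_texture(seed=42)
raw = tex.tobytes()
print(len(raw))                 # 262144 bytes of raw pixel data (512 x 512)
print(len(zlib.compress(raw)))  # generic compression of the pixels, for comparison
# The procedural representation is just the seed plus the generator code above-
# a few hundred bytes that "decompress" into the full texture on demand.
```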
Could how you update your priors be dependent on what concepts you choose to represent the situation with?
I mean, suppose the parent says "I have two children, at least one of whom is a boy. So, I have a boy and another child whose gender I'm not mentioning". It seems like that second sentence doesn't add any new information- it parses to me like just a rephrasing of the first sentence. But now you've been presented with two seemingly incompatible ways of conceptualizing the scenario- either as two children of unknown gender, of whom one ...
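A quick Monte Carlo sketch of the two conceptualizations (assuming independent 50/50 genders), just to show they really do give different answers:

```python
import random

trials = 100_000
families = [(random.choice("BG"), random.choice("BG")) for _ in range(trials)]

# Framing 1: "at least one of the two children is a boy"
cond1 = [f for f in families if "B" in f]
print(sum(f == ("B", "B") for f in cond1) / len(cond1))  # ~1/3

# Framing 2: "this particular child is a boy" (the other's gender unmentioned)
cond2 = [f for f in families if f[0] == "B"]
print(sum(f == ("B", "B") for f in cond2) / len(cond2))  # ~1/2
```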
I've been wondering: is there a standard counter-argument in decision theory to the idea that these Omega problems are all examples of an ordinary collective action problem, only between your past and future selves rather than separate people?
That is, when Omega is predicting your future, you rationally want to be the kind of person who one-boxes/pulls the lever, then later you rationally want to be the kind of person who two-boxes/doesn't- and just like with a multi-person collective action problem, everyone acting rationally according to their interests ...
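For what it's worth, here's the toy expected-value calculation I have in mind- payoffs are the standard illustrative ones, and the predictor accuracy is just an assumption:

```python
def expected_payoff(one_box: bool, p_correct: float = 0.99) -> float:
    # Box A always contains $1,000; Box B contains $1,000,000 iff Omega predicted one-boxing.
    if one_box:
        return p_correct * 1_000_000
    return p_correct * 1_000 + (1 - p_correct) * (1_000_000 + 1_000)

print(expected_payoff(one_box=True))   # ~990,000: the disposition your earlier self wants to commit to
print(expected_payoff(one_box=False))  # ~11,000: yet ex post, taking both boxes always adds $1,000
```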
If the first sister's experience is equivalent to the original Sleeping Beauty problem, then wouldn't the second sister's experience also have to be equivalent by the same logic? And, of course, the second sister will give 100% odds to it being Monday.
Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there's a two-thirds chance that they're the first sister, since their current experience is certain for her but only 50% likely for the second sister, then their odds of i...
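A quick sanity check on that two-thirds figure- a toy simulation assuming an equal prior over which sister you are, and that the second sister's waking happens only half the time:

```python
import random

trials = 100_000
first, total = 0, 0
for _ in range(trials):
    i_am_first = random.random() < 0.5            # memory-suppressed: equal prior over sisters
    woken = i_am_first or random.random() < 0.5   # certain for the first sister, 50% for the second
    if woken:
        total += 1
        first += i_am_first

print(first / total)  # ~0.667
```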
I'm assuming it's not a bad idea to try to poke holes in this argument, since as a barely sapient ape, presumably any objection I can think of will be pretty obvious to a superintelligence, and if the argument is incorrect, we probably benefit from knowing that- though I'm open to arguments to the contrary.
That said, one thing I'm not clear on is why, if this strategy is effective at promoting our values, a paperclipper or other misaligned ASI wouldn't be motivated to try the same thing. That is, wouldn't a paperclipper want to run ancestor simulatio...
A supporting data point: I made a series of furry illustrations last year that combined AI-generated imagery with traditional illustration and 3d modelling- compositing together parts of a lot of different generations with some Blender work and then painting over that. Each image took maybe 10-15 hours of work, most of which was just pretty traditional painting with a Wacom tablet.
When I posted those to FurAffinity and described my process there, the response from the community was extremely positive. However, the images were all removed a few weeks ...
Often, this kind of thing will take a lot of attempts to get right- though as luck would have it, the composition above was actually the very first attempt. So, the total time investment was about five minutes. The Fooming Shoggoths certainly don't waste time!
As it happens, the Fooming Shoggoths also recorded and just released a Gregorian chant version of the song. What a coincidence!
So, I noticed something a bit odd about the behavior of LLMs just now that I wonder if anyone here can shed some light on:
It's generally accepted that LLMs don't really "care about" predicting the next token- the reward function just reinforces certain behaviors, and real terminal goals would presumably require a new architecture to produce. While that makes sense, it occurs to me that humans do seem to sort of value our equivalent of a reward function, in addition to our more high-level terminal goals. So, I figu...
I honestly think most people who hear about this debate are underestimating how much they'd enjoy watching it.
I often listen to podcasts and audiobooks while working on intellectually non-demanding tasks and playing games. Putting this debate on a second monitor instead felt like a significant step up from that. Books are too often bloated with filler as authors struggle to stretch a simple idea into 8-20 hours, and even the best podcast hosts aren't usually willing or able to challenge their guests' ideas with any kind of rigor. By contrast, everything in...
Metaculus currently puts the odds of the side arguing for a natural origin winning the debate at 94%.
Having watched the full debate myself, I think that prediction is accurate- the debate updated my view a lot toward the natural origin hypothesis. While it's true that a natural coronavirus originating in a city with one of the most important coronavirus research labs would be a large coincidence, Peter- the guy arguing in favor of a natural origin- provided some very convincing evidence that the first likely cases of COVID occurred not just in the market, ...
Definitely an interesting use of the tech- though the capability needed for that to be a really effective use case doesn't seem to be there quite yet.
When editing down an argument, what you really want to do is get rid of tangents and focus on addressing potential cruxes of disagreement as succinctly as possible. GPT4 doesn't yet have the world model needed to distinguish a novel argument's load-bearing parts from those that can be streamlined away, and it can't reliably anticipate the sort of objections a novel argument needs to address. For example, in a...
This honestly reads a lot like something generated by ChatGPT. Did you prompt GPT4 to write a LessWrong article?
To me that's very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.
I wouldn't take the principle to an absolute- there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that ...
I mean, I agree, but I think that's a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation- and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-ali...
Certainly communication needs to be restricted when it's being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don't see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity.
Copying an actor's likeness without their consent can cause a lot of harm when it's used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is...
In that paragraph, I'm only talking about the art I produce commercially- graphic design, web design, occasionally animations or illustrations. That kind of art isn't about self-expression- it's about communicating the client's vision. Which is, admittedly, often a euphemism for "helping businesses win status signaling competitions", but not always or entirely. Creating beautiful things and improving users' experience is positive-sum, and something I take pride in.
Pretty soon, however, clients will be able to have the same sort of interactions with a...
But no model of a human mind on its own could really predict the tokens LLMs are trained on, right? Those tokens are created not only by humans, but by the processes that shape human experience, most of which we barely understand. To really accurately predict an ordinary social media post from one year in the future, for example, an LLM would need superhuman models of politics, sociology, economics, etc. To very accurately predict an experimental physics or biology paper, an LLM might need superhuman models of physics or biology.
Why should these mode...
I'm also an artist. My job involves a mix of graphic design and web development, and I make some income on the side from a Patreon supporting my personal work- all of which could be automated in the near future by generative AI. And I also think that's a good thing.
Copyright has always been a necessary evil. The atmosphere of fear and uncertainty it creates around remixes and reinterpretations has held back art- consider, for example, how much worse modern music would be without samples, a rare case where artists operating in a legal grey area with respect...
Speaking for myself, I would have confidently predicted the opposite result for the largest models.
My understanding is that LLMs build something like a world-model during training by compressing the data into abstractions. I would have expected something like "Tom Cruise's mother is Mary Lee Pfeiffer" to be represented in the model as an abstract association between the names that could then be "decompressed" back into language in a lot of different ways.
The fact that it's apparently represented in the model only as that exact phrase (or maybe a...
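If anyone wants to poke at this locally, here's a rough sketch of how one might probe the asymmetry- using GPT-2 purely as a stand-in, so expect much noisier output than from the large models the result was actually reported for:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

forward = generator("Tom Cruise's mother is", max_new_tokens=8,
                    do_sample=True, num_return_sequences=3)
reverse = generator("Mary Lee Pfeiffer's son is", max_new_tokens=8,
                    do_sample=True, num_return_sequences=3)

for out in forward + reverse:
    print(out["generated_text"])
```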
I'm not sure I agree. Consider the reaction of the audience to this talk- uncomfortable laughter, but also a pretty enthusiastic standing ovation. I'd guess the latter happened because the audience saw Eliezer as genuine- he displayed raw emotion, spoke bluntly, and at no point came across as someone making a play for status. He fit neatly into the "scientist warning of disaster" archetype, which isn't a figure that's expected to be particularly skilled at public communication.
A more experienced public speaker would certainly be able to present the ideas ...
Note that, while the linked post on the TEDx YouTube channel was taken down, there's a mirror available at: https://files.catbox.moe/qdwops.mp4.
Here are a few images generated by DALL-E 2 using the tokens:
https://i.imgur.com/kObEkKj.png
Nothing too interesting, unfortunately.
I assume you're not a fan of the LRNZ deep learning-focused ETF, since it includes both NVDA and a lot of datacenters (not to mention the terrible 2022 performance). Are there any other ETFs focused on this sort of thing that look better?
Well, props for offering a fresh outside perspective- this site could certainly use more of that. Unfortunately, I don't think you've made a very convincing argument. (Was that intentional, since you don't seem to believe ideological arguments can be convincing?)
We can never hope to glimpse pure empirical noumenon, but we certainly can build models that more or less accurately predict what we will experience in the future. We rely on those models to promote whatever we value, and it's important to try and improve how well they work. Colloquially, we ...
There are a lot of interesting ideas in this RP thread. Unfortunately, I've always found it a bit hard to enjoy roleplaying threads that I'm not participating in myself. Approached as works of fiction rather than games, RP threads tend to have some very serious structural problems that can make them difficult to read.
Because players aren't sure where a story is going and can't edit previous sections, the stories tend to be plagued by pacing problems- scenes that could be a paragraph are dragged out over pages, important plot beats are glossed o...
Thanks!
I'm not sure the repetitions helped much with accuracy for this prompt- it's still sort of randomizing traits between the two subjects. Though with a prompt this complex, the token limit may be an issue- it might be interesting to test at some point whether very simple prompts get more accurate with repetitions.
That said, the second set are pretty awesome- asking for a scene may have helped encourage some more interesting compositions. One benefit of repetition may just be that you're more likely to include phrases that more accurately describe what you're looking for.
When they released the first Dall-E, didn't OpenAI mention that prompts which repeated the same description several times with slight re-phrasing produced improved results?
I wonder how a prompt like:
"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney."
-would compare with something like:
"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney. A painting of an ornate robotic feline made of brass and a man wearing futuristic tribal clothing. A steampunk scene by James Gurney featuring a robot shaped like a panther and a high-tech shaman."
I think this argument can and should be expanded on. Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record. Can we pin down exactly why- what specific kind of error futurists have been falling prey to- and then see if that applies here?
Take, for example, traditional Marxist thought. In the early twentieth century, an intellectual Marxist's prediction of a stateless post-property utopia may have seemed to arise from a wonderfully complex yet self-con...
Thanks for posting these.
It's odd that mentioning Dall-E by name in the prompt would be a content policy violation. Do you know if they've mentioned why?
If you're still taking suggestions:
A beautiful, detailed illustration by James Gurney of a steampunk cheetah robot stalking through the ruins of a post-singularity city. A painting of an ornate brass automaton shaped like a big cat. A 4K image of a robotic cheetah in a strange, high-tech landscape.
I think OpenAI mentioned that including the same information several times with different ph...
For text-to-image synthesis, the Disco Diffusion notebook is pretty popular right now. Like other notebooks that use CLIP, it produces results that aren't very coherent, but which are interesting in the sense that they will reliably combine all of the elements described in a prompt in surprising and semi-sensible ways, even when those elements never occurred together in the models' training sets.
The Glide notebook from OpenAI is also worth looking at. It produces results that are much more coherent but also much less interesting than the CLIP n...
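For anyone curious what "uses CLIP" means mechanically, here's a minimal sketch of the scoring step those notebooks build their guidance loops on (the file name is just a placeholder):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("candidate.png")  # placeholder: an in-progress generation
prompts = ["a steampunk cheetah robot in a ruined city", "a bowl of fruit"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image

# Higher logit = better prompt/image match; CLIP-guided notebooks repeatedly nudge
# the image in whatever direction increases this score for the target prompt.
print(logits.softmax(dim=-1))
```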
Has your experience with this project given you any insights into bioterrorism risk?
Suppose that, rather than synthesizing a vaccine, you'd wanted to synthesize a new pandemic. Would that have been remotely possible? Do you think the current safeguards will be enough to prevent that sort of thing as the technology develops over the next decade or so?
Not really- I was concerned about biological X-risks before and continue to be.
I don't currently see any plausible defense against them - even if we somehow got a sufficient number of nations to stop or moderate gain-of-function research and to think twice about what information they publish, genetic engineering will continue to become easier and cheaper over time. As a result, I can see us temporarily offsetting the decline in the minimum IQ*money*tech_level needed to destroy humanity, but not stopping it- and that's already assuming a geopolitically optimistic scenario.
Luckily there are some intimidatingly smart people working on the problem and I hope they can leverage the pandemic to get at least some of the funding the subject deserves.
Do you think it's plausible that the whole deontology/consequentialism/virtue ethics confusion might arise from our idea of morality actually being a conflation of several different things that serve separate purposes?
Like, say there's a social technology that evolved to solve intractable coordination problems by getting people to rationally pre-commit to acting against their individual interests in the future, and additionally a lot of people have started to extend our instinctive compassion and tribal loyalties to the entirety of humanity, and also peopl...
When people talk about "human values" in this context, I think they usually mean something like "goals that are Pareto optimal for the values of individual humans"- and the things you listed definitely aren't that.
The marketing company Salesforce was founded in Silicon Valley in '99, and has been hugely successful. It's often ranked as one of the best companies in the U.S. to work for. I went to one of their conferences recently, and the whole thing was a massive status display- they'd built an arcade with Salesforce-themed video games just for that one conference, and had a live performance by Gwen Stefani, among other things.
...But the marketing industry is one massive collective action problem. It consumes a vast amount of labor and resources, distort...
I think it's a very bad idea to dismiss the entirety of news as a "propaganda machine". Certainly some sources are almost entirely propaganda. More reputable sources like the AP and Reuters will combine some predictable bias with largely trustworthy independent journalism. Identifying those more reliable sources and compensating for their bias takes effort and media literacy, but I think that effort is quite valuable- both individually and collectively for society.
- Accurate information about large, important events informs our world model and im