I agree with your general point about efficiency vs rationality, but I don’t see the direct connection to the article. Can you explain? It seems to me that a representation along correlated values is more efficient, but I don’t see how it is any less rational.
I would describe this as a human-AI system. You are doing at least some of the cognitive work with the scaffolding you put in place through prompt engineering etc, which doesn’t generalise to novel types of problems.
You seem to make a strong assumption that consciousness emerges from matter. This is uncertain. The mind body problem is not solved.
It is so difficult to know whether this is genuine or if our collective imagination is being projected onto what an AI is.
If it were genuine, I might expect it to be more alien. But then what could it say that would be coherent (as it’s trained to be) and also alien enough to convince me it’s genuine?
You said that you are not interested in exploring the meaning behind the Green Knight. I think that it's very important. In particular, your translation to the Old West changes the challenge in important ways. I don't claim to know the meaning behind the Green Knight. But I believe that there is something significant in the fact that the knights were so obsessed with courage and honour, and the Green Knight laid down a challenge that they couldn't turn down given their code. Gawain stepped forward partly to protect Arthur. That changes the game. I asked ...
It’s useful in that it is a model that describes certain phenomena. I believe it is correct given the caveat that all models are approximations.
I did a physics undergraduate degree a long time ago. I can’t remember specifically but I’m sure the equation was derived and experimental evidence was explained. I have strong faith that matter converts to energy because it explains radiation, fission reactors and atomic weapons. I’ve seen videos of atomic bombs going off. I’ve seen evidence of radioactivity with my own eyes in a lab. I know of many technologies t...
Well I agree it is a strawman argument. Following the same lines as your argument, I would say the counterargument is that we don’t really care whether a weak model is fully aligned or not. Is my calculator aligned? Is a random number generator aligned? Is my robotic vacuum cleaner aligned? It’s not really a sensible question.
Alignment is a bigger problem with stronger models. The required degree of alignment is much higher. So even if we accept your strawman argument it doesn’t matter.
I found this a useful framing. I’ve thought quite a lot about the offence versus defence dominance angle, and to me it seems almost impossible that we can trust that defence will be dominant. As you said, defence has to be dominant in every single attack vector, known and unknown.
That is an important point because I hear some people argue that to protect against offensive AGI we need defensive AGI.
I’m tempted to combine the intelligence dominance and starting costs into a single dimension, and then reframe the question in terms of “at what p...
Thank you for the great comments! I think I can sum up a lot of that as "the situation is way more complicated and high dimensional and life will find a way". Yes I agree.
I think what I had in mind was an AI system that is supervising all other AIs (or AI components) and preventing them from undergoing natural selection. A kind of immune system. I don't see any reason why that would be naturally selected for in the short-term in a way that also ensures human survival. So it would have to be built on purpose. In that model, the level of abstract...
Thanks for the reply!
I think it might be true that substrate convergence is inevitable eventually. But it would be helpful to know how long it would take. Potentially we might be ok with it if the expected timescale is long enough (or the probability of it happening in a given timescale is low enough).
I think the singleton scenario is the most interesting, since if we have several competing AIs, then we are just super doomed.
If that's true then that is a super important finding! And also an important thing to communicate to people...
Here’s a slightly different story:
The amount of information is less important than the quality of the information. The channels were there to transmit information, but there were not efficient coding schemes.
Language is an efficient coding scheme by which salient aspects of knowledge can be usefully compressed and passed to future generations.
There was no free lunch because there was an evolutionary bottleneck that involved the slow development of cognitive and biological architecture to enable complex language. This developed in humans in a co-evolutionar...
I think I’m more concerned with minimising extreme risks. I don’t really mind if I catch mild covid but I really don’t want to catch covid in a bad way. I think that would shift the optimal time to take the vaccine earlier, as I’d have at least some protection throughout the disease season.
I am interested in the substrate-needs convergence project.
Here are some initial thoughts, I would love to hear some responses:
I’d like to add that there isn’t really a clear objective boundary between an agent and the environment. It’s a subjective line that we draw in the sand. So we needn’t get hung up on what is objectively true or false when it comes to boundaries - and instead define them in a way that aligns with human values.
I agree but I don’t think that this is the specific problem. I think it’s more that the relationship between agent and environment changes over time i.e. the nodes in the Markov blanket are not fixed, and as such a Markov blanket is not the best way to model it.
The grasshopper moving through space is just an example. When the grasshopper moves, the structure of the Markov blanket changes radically. Or, if you want to maintain a single Markov blanket then it gets really large and complicated.
Regarding your study idea. Sounds good! Would be interesting to see, and as you rightly point out wouldn't be too complicated/expensive to run. It's generally a challenge to run multi-year studies of this sort due to the short-term nature of many grants/positions. But certainly not impossible.
An issue that you might have is being able to be sure that any variation that you see is due to changes in the general population vs changes in the sample population. This is an especially valid issue with MTurk because the workers are doing boring exercises for...
There are extra costs here that aren’t being included. There’s a cost to maintaining the pill box - perhaps you consider that small but it’s extra admin and we’re already drowning in admin. There’s a cost to my self identity of being a person who carries around pills like this (don’t mean to disparage it, just not for me). There’s also potentially hidden costs of not getting ill occasionally, both mentally and physically.
Much harder to put enough capital together to make it worthwhile.
Beat me to it. Yes, the lesson is perhaps not to create prediction markets that incentivise manipulation of that market towards bad outcomes. The post could be expanded into a better question: given that prediction markets can incentivise bad behaviour, how can we create prediction markets that incentivise good behaviour?
This reminds me somewhat of the potentially self-fulfilling prophecy of defunding bad actors. E.g. if we expect that global society will react to climate change by ultimately preventing oil companies from extracting and selling their oil f...
I’d ask the question whether things typically are aligned or not. There’s a good argument that many systems are not aligned. Ecosystems, society, companies, families, etc all often have very unaligned agents. AI alignment, as you pointed out, is a higher stakes game.
Your proofs all rely on lotteries over infinite numbers of outcomes. Is that necessary? Maybe a restriction to finite lotteries avoids the paradox.
Leibniz's Law says that you cannot have separate objects that are indistinguishable from each other. It sounds like that is what you are doing with the 3 brains. That might be a good place to flesh out more to make progress on the question. What do you mean exactly by saying that the three brains are wired up to the same body and are redundant?
I’ve always thought that the killer app of smart contracts is creating institutions that are transparent, static and unstoppable. So for example uncensored media publishing, defi, identity, banking. It’s a way to enshrine in code a set of principles of how something will work that then cannot be eroded by corruption or interference.
There is the point that 80% of people can say that they are better-than-average drivers and actually be correct. People value different things in driving, and optimise for those things. One person’s idea of a good driver may prioritise safety; someone else may value speed. So both can say truthfully and correctly that they are a better driver than the other. When you ask them about racing, it narrows the question to something more specific.
You can expand that to social hierarchies too. There isn’t one hierarchy, there are many based on different values. So I can feel high status at being a great musician while someone else can feel high status at earning a lot, and we can both be right.
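As a toy illustration (the two-skill model and all numbers here are assumptions for the sketch, not data): if each driver judges "good driving" by whichever dimension they are personally stronger on, far more than half the population can truthfully rate themselves above average:

```python
import random

random.seed(0)

# Toy model: each driver has two independent skills (safety, speed),
# and self-assesses on whichever dimension they are better at.
N = 10_000
drivers = [(random.random(), random.random()) for _ in range(N)]
mean_safety = sum(d[0] for d in drivers) / N
mean_speed = sum(d[1] for d in drivers) / N

above_average = sum(
    1 for safety, speed in drivers
    if (safety >= speed and safety > mean_safety)
    or (speed > safety and speed > mean_speed)
)
print(above_average / N)  # roughly 0.75 under this model
```

With uniform skills the expected fraction is P(max(U1, U2) > 1/2) = 3/4, so "80% think they're above average" needs no irrationality at all, just different value dimensions.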
I think a problem you would have is that the speed of information in the game is the same as the speed of, say, a glider. So an AI that is computing within Life would not be able to sense and react to a glider quickly enough to build a control structure in front of it.
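For what it's worth, the glider's pace is easy to check directly with a minimal sparse-set implementation of Life (a sketch, not tied to any particular AI-in-Life construction): the standard glider displaces one cell diagonally every four generations.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life over a sparse set of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours,
    # or 2 neighbours and is currently alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

# After 4 generations the glider is the same shape, shifted one cell diagonally.
shifted = {(x + 1, y + 1) for (x, y) in glider}
assert g == shifted
```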
I’d say 1 and 7 (for humans). The way humans understand Go is different from how bots understand Go. We use heuristics. The bots may use heuristics too, but there’s no reason to think we could comprehend those heuristics. Considering the size of the state space, it seems that the bot has access to ways of thinking about Go that we don’t, the same way a bot can look further ahead in a chess game than we could comprehend.
Why are we still paying taxes if we have AI this brilliant? Surely we then have ridiculous levels of abundance.
I strongly disagree with your sentiments.
Advertising is bad because it’s fundamentally about influencing people to do things they wouldn’t do otherwise. That takes us all away from what’s actually important. It also drives the attention economy, which turns the process of searching for information and learning about the world into a machine for manipulating people. Advertising should really be called commercial propaganda - that reveals more clearly what it is. Privacy is only one aspect of the problem.
Your arguments are myopic in that they are all based o...
Advertising is bad because it’s fundamentally about influencing people to do things they wouldn’t do otherwise. That takes us all away from what’s actually important.
Advice is bad because it's fundamentally about influencing people to do things they wouldn't do otherwise. Giving and receiving advice takes us all away from what's actually important.
Sorry for the snark, but I think this is too general of an argument, proves too much, and therefore fails.
I feel the same way (and viscerally detest ads, and go to very great lengths to avoid exposure to them), but I'm not sure whether I actually agree.
Having an advertiser attempt to manipulate your brain so that you do a thing you otherwise wouldn't have done is, for sure, bad for you. But so is having less money, and at present the only available ways of getting Nice Things On The Internet that no one is choosing to supply out of sheer benevolence[1] are (a) that you pay them money and (b) that someone pays them for showing you ads.
So, how do the harm of bei...
Maybe look at GAMS
I find “rationalist” cringey for some reason and won’t describe myself like that. As you said, it seems to discount intuition, emotion and instinct. 99% of human behaviour is driven by irrational forces, and that’s not necessarily a bad thing. The word “rationalist” to me feels like a denial of our true nature and a doomed attempt to be purely rational - rather than trying to be a bit more deliberate in action and beliefs.
What I want to know is how bad an effect, exactly, a solar storm is likely to have. It’s all very vague.
How long will it take to get the power back on? A couple of days? Weeks? Months? Those are very different scenarios.
And, can we do something now to turn the months-long scenario into a week? Maybe we can stockpile a few transformers or something.
Just a writing tip. Might help to define initialisms at least once before using them. EA isn’t self-evidently effective altruism.
I’m in the UK. Rules are stricter than ever but also people are taking it seriously, more than the 2nd lockdown. And it’s January and freezing cold so no one wants to go out anyway.
Good point. I think it would depend on how useful the word is in describing the world. If your culture has very different norms between “boyfriend/girlfriend” and fiancé then a replacement for fiancé would likely appear.
I suppose that on one extreme you would have words that are fundamental to human life or psychology e.g. water, body, food, cold. These I’m sure would reappear if banned. Then on the other extreme you have words associated with somewhat arbitrary cultural behaviour e.g. thanksgiving, tinsel, Twitter, hatchback. These words may not come back...
You might be interested in this paper; it supports the idea of a constant information-processing rate in text: "Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche", Coupé, Oh, Dediu, and Pellegrino, 2019, Science Advances.
I would agree that language would likely adapt to newspeak by simply using other compound words to describe the same thing. Within a generation or two these would then just become the new word. Presumably the Orwellian government would have to continually ban these new words. ...
I was thinking something similar. I vaguely remember that the characteristic function proof includes an assumption of n being large, where n is the number of variables being summed. I think that allows you to ignore some higher order n terms. So by keeping those in you could probably get some way to quantify how "close" a resulting distribution is to Gaussian. And you could relate that back to moments quite naturally as well.
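As a rough Monte Carlo check of that intuition (a sketch only; the characteristic-function argument itself isn't reproduced here): for sums of n iid Exp(1) draws, the skewness, a first moment-based measure of distance from Gaussian, decays like 2/√n.

```python
import random

random.seed(1)

def sample_skewness(xs):
    """Standardised third central moment of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def skew_of_sum(n_terms, n_samples=20_000):
    # Each sample is a sum of n_terms iid Exp(1) draws (each has skewness 2).
    return sample_skewness(
        [sum(random.expovariate(1.0) for _ in range(n_terms))
         for _ in range(n_samples)]
    )

skew4 = skew_of_sum(4)      # theory: 2 / sqrt(4)  = 1.0
skew100 = skew_of_sum(100)  # theory: 2 / sqrt(100) = 0.2
print(skew4, skew100)
```

The same 1/√n scaling falls out of keeping the first neglected term in the characteristic-function expansion, which is exactly the kind of "closeness to Gaussian" quantification described above.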
I like to think of advertising as commercial propaganda. That is technically what it is. Whereas political propaganda's purpose may be to influence people to support a political belief, commercial propaganda is to influence people to support a commercial enterprise.
People tend to think of political propaganda as something from World War 2 and authoritarian regimes. But it was used in the West and it never went away. It just became more sophisticated over time and a part of that was re-branding it to "spin" or "public relations". The original word is useful because it is accurate and it highlights the obvious negative consequences of the practice.
I have a meta-view on this that you might think falls into the bucket of "feels intuitive based on the progress so far". To counter that, this isn't pure intuition. As a side note I don't believe that intuitions should be dismissed and should be at least a part of our belief updating process.
I can't tell you the fine details of what will happen and I'm suspicious of anyone who can because a) this is a very complex system b) no-one really knows how LLMs work, how human cognition works, or what is required for an intelligence takeoff.
Howeve...