We don't have "aligned AGI". We have neither "AGI" nor an "aligned" system. We have sophisticated human-output simulators that lack the generality to produce effective agentic behavior when looped, but which also don't follow human intentions with the reliability you'd want from a super-powerful system (which, fortunately, they aren't).
Thank you for the article. I think these "small" impacts are important to talk about. If one frames the question as "the impact of machines that think for humans", that impact isn't going to be a binary of just "good stuff" versus "takes over and destroys humanity". There are intermediate situations, like the decay of humans' ability to think critically, that are significant, not just in themselves but for their further impacts; i.e., if everyone is dependent on Google for their opinions, how does that bear on the prospect of AI taking over entirely?
I don’t think “people have made choices that mattered” is a sufficient criterion for showing the existence of agency. IMO, to have something like agency, you have to have an ongoing situation roughly like this:
Goals ↔ Actions ↔ States-of-the-world.
Some entity needs to have ongoing goals they are able to modify as they go along acting in the world, and their actions also need to have an effect on the world. Agency is a complex and intuitive thing, so I assume some would ask for more than this before saying a thing has agency. But I think this is one reasonable requirement.
Agency in a limited scope would be something like a non-profit that has a plan for helping the homeless, tries to implement it, discovers problems with the plan, and comes up with a new plan that inherently involves modifying their concept of “helping the homeless”.
By this criterion, tiny decisions with big consequences aren’t evidence of agency. I think that’s fairly intuitive. Having agency is subjectively something like “being at cause” rather than “being at effect”, and that's an ongoing thing, not a one-time thing.
This is an interesting question, even though I'd want to reframe it to answer it. I see the question as a reasonable response to the standard refrain in science, "correlation does not imply causation." That is, "well, what does imply causation, huh?" is a natural response to that. And here, I think scientists tend to reply with either crickets or "you cannot prove causation, what are you talking about?"
Those responses don't seem satisfying. I'm not a scientist, though I've "worked in science" occasionally, and I have at times tried to come up with a real answer to this "what does prove causation" question. As a first step, I'd note that science does not "prove" things but merely finds more and more plausible models. The more substantial answer, however, is that the plausible models are a combination of the existing scientific models, common-sense understandings of the world, and data.
A standard (negative) example is the case where someone finds a correlation between stock prices and sunspots. It's basically not going to be pursued as a causal theory, because no one has a plausible reason why the two things should be related. Data isn't enough; you need a reason the data matter. This is often also expressed as "extraordinary claims require extraordinary evidence" (which also isn't explained enough, as far as I can tell).
Basically, this is saying that natural science's idea of causation rests on one big materialistic model of the world, rather than on scientists chasing data sets and finding correlations between them (among other things, the world is full of data, and given some data set, if you search far enough, you'll find another one with a spurious correlation to it). Still, the opposite idea, that science is just about finding correlations in data, is quite common. Classical "logical positivism" is often simplified this way, notably.
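To make the "search far enough and you'll find a spurious correlation" point concrete, here's a minimal sketch (my own illustration with synthetic data, not anyone's real dataset): generate one random series, then scan a pile of unrelated random series for the best match.

```python
# Minimal illustration: search enough unrelated series and a "strong"
# correlation turns up by chance, with no causal story behind it.
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical "target" series, e.g. 50 monthly observations of anything.
target = np.cumsum(rng.normal(size=50))

# Scan 10,000 unrelated random walks for the best correlation with it.
best_r = 0.0
for _ in range(10_000):
    candidate = np.cumsum(rng.normal(size=50))
    r = np.corrcoef(target, candidate)[0, 1]
    if abs(r) > abs(best_r):
        best_r = r

print(f"best correlation found among unrelated series: r = {best_r:.2f}")
# Typically reports |r| around 0.9 or higher -- purely spurious.
```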
Moreover, this is about the "hard" sciences: physics, chemistry, biology, etc. Experimental psychology is much more about chasing correlations, and I'd say that's why much of it amounts to bald pseudoscience.
I tried to create an account and the process didn't seem to work.
I believe you are correct about the feelings of a lot of LessWrong. I find it very worrisome that the LessWrong perspective treats a pure AI takeover as something that needs to be separated from the degradation of humans' self-reliance capacities and from an enhanced-human takeover. It seems to me that these factors should instead be considered together.
The consensus goals strongly need rethinking, imo. This is a clear and fairly simple start at such an effort. Challenging the basics matters.
Actually, things that are effectively prediction markets - options, futures and other "derivative" contracts - are entirely mainstream for larger businesses (huge amounts of money are involved). It is quite easy and common to bet on the price of oil by purchasing an option to buy it at some future time, for example.
The only things that aren't mainstream are the ones labeled "prediction markets", and that is because they focus on questions people are curious about rather than things that a lot of money rides on (like oil prices or interest rates).
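To make the oil example concrete, here's a minimal sketch of the payoff of such a bet (the strike, premium, and prices are made-up numbers, not market data):

```python
def call_payoff(spot_at_expiry: float, strike: float, premium: float) -> float:
    """Profit per barrel from buying a call option: the right, but not the
    obligation, to buy at the strike price at expiry."""
    return max(spot_at_expiry - strike, 0.0) - premium

# A "bet" that oil rises above $80/barrel, placed by paying a $5 premium today.
for spot in (70.0, 80.0, 95.0):
    profit = call_payoff(spot, strike=80.0, premium=5.0)
    print(f"oil at ${spot:.0f} at expiry: profit {profit:+.2f} per barrel")
# oil at $70: -5.00  (the bet loses only the premium)
# oil at $80: -5.00
# oil at $95: +10.00 (the bet pays off)
```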
But, can't you just query the reasoner at each point for what a good action would be?
What I'd expect (which may or may not be similar to Nate!'s approach) is that the reasoner has prepared one plan (or a few plans). Despite being vastly intelligent, it doesn't have the resources to scan all the world's possible outcomes and compare their goodness. It can give you the results of acting on the primary (and maybe a few secondary) goals, and perhaps the immediate results of doing nothing or of other immediate options.
It seems to me that Nate! (as quoted above about chess) is making the very cogent (imo) point that even a highly, superhumanly competent entity acting on the real, vastly complicated world isn't going to be an exact oracle; it isn't going to have access to exact probabilities of things, or probabilities of probabilities of outcomes, and so forth. It will know the probabilities of some things, certainly, but for many other results it can only pursue a strategy deemed good based on much more indirect processes. And this is because an exact calculation of the outcomes of the world in question tends to blow up far beyond any computing power physically available in the foreseeable future.
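For a rough sense of why the calculation blows up, take the chess example quoted above (ballpark figures only, my own back-of-the-envelope arithmetic):

```python
# Exhaustive evaluation of even a toy domain like chess is already far
# beyond any physically available computing power.
branching_factor = 30   # legal moves per position, roughly
plies = 80              # a typical game length (40 moves per side)

lines = branching_factor ** plies
print(f"~10^{len(str(lines)) - 1} complete lines to evaluate")   # ~10^118
# The real world branches far more finely than a chess board, so "scan all
# outcomes and compare their goodness" is not an available strategy.
```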
As Charlie Stein notes, this is wrong, and I'd add that it's wrong on several levels, and it's a bit rude to challenge someone else's understanding in this context.
An LLM outputting "Dogs are cute" is outputting expected human output in context. The context could be "talk like a sociopath trying to fool someone into thinking you're nice", and there you have one way the thing could "simulate lying". Moreover, add a loop to (hypothetically) make the thing "agentic" and you can have hidden states of whatever sort. Further, an LLM outputting a given "belief" isn't going to reliably "act on" or "follow" that belief, and so an LLM isn't even aligned with its own output.