The program could identify where it has the lowest certainty of what the person would say or do, and directly ask the person to fill in those gaps. I wonder what the psychological impact of working with a program in this way would be. It seems like the program would likely discover inconsistencies and uncertainties in the actual person and force them to confront those, which could potentially be beneficial or detrimental depending on the circumstances.
If I noticed my coffee mug turning into a slinky, my first assumption would not be that I was in a simulation, but that I was lucid dreaming. I would react by attempting to reproduce whatever led to the glitch, and exploit it to recreationally violate the usual laws of physics, because that's a novel and fun thing to do when one finds it temporarily possible. This category of reaction, which I suspect I'm not alone in having, would certainly make life more interesting for whoever was running the simulation.
The program could identify where it has the lowest certainty of what the person would say or do, and directly ask the person to fill in those gaps.
...assuming the model's certainty model is itself accurate[1]. And that the resulting information is actually useful to the model.
(As an obvious example of the latter: the model's prediction of me rolling a d20[2] and announcing the result will have low confidence, but knowing the actual roll isn't particularly useful to the model...)
See also e.g. many adversarial attacks against computer vision systems, where the predictor predicts extremely confidently[3] that the perturbed apple is actually an ostrich.
or e.g. loading up random.org, if you feel a d20 isn't sufficiently random.
e.g. this classic attack https://openai.com/blog/multimodal-neurons/ where an 85.6% confidence that an apple is an apple turns into a 99.7% confidence that the same apple, with a handwritten label reading 'iPod', is an iPod.
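The "ask the person where the model is least certain" idea is essentially uncertainty-sampling active learning: score each candidate question by the entropy of the model's predicted answer distribution and query the highest-scoring one. A minimal sketch (the function names and the toy candidate set are hypothetical, not from any real system):

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution
    given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def most_uncertain(predictions):
    """Given {question: predicted answer distribution}, return the
    question whose distribution has the highest entropy -- i.e. the
    one the model would gain the most from simply asking about."""
    return max(predictions, key=lambda q: entropy(predictions[q]))

# Toy example: confident about the morning drink, clueless about a die.
predictions = {
    "morning drink?": {"coffee": 0.95, "tea": 0.05},
    "d20 roll?": {str(n): 1 / 20 for n in range(1, 21)},
}
print(most_uncertain(predictions))  # -> "d20 roll?"
```

Note that the selector picks the d20 question, which is exactly the footnote's objection: maximal uncertainty is not the same thing as maximal usefulness, so a real system would need some extra value-of-information term on top of raw entropy.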
Not answering your question about what could be created today, but if you're interested in this topic, some of Greg Egan's stories develop a similar concept.
In Schild's Ladder he introduces the Qusp, a quantum computer implanted in the brain at birth that records all brain activity in order to train itself to perfectly predict the person's thoughts. Once the Qusp can perfectly simulate the person's thoughts, it is effectively interchangeable with the individual. At that point these future humans switch their mental processes over to the Qusp, with all the benefits a computer has over a brain (including avoiding quantum branching).
What's your definition of accuracy?
Something that predicts I go through the same Saturday routine as usual, and then at the end of it I suddenly explode, is arguably fairly accurate, in the sense of predicting me well most of the time.
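The objection can be made concrete: average per-step accuracy hides a rare catastrophic miss, while a severity-weighted error surfaces it. A toy illustration (the numbers and the cost scale are made up):

```python
# A week of 168 hourly predictions: right 167 times, but the single
# miss is "suddenly explodes" rather than "slightly late for lunch".
predicted = ["routine"] * 168
actual = ["routine"] * 167 + ["explodes"]

per_step_accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(per_step_accuracy)  # 167/168 ~ 0.994: "fairly accurate" on average

# A severity-weighted error makes the same predictor look very different.
severity = {"routine": 1, "explodes": 1000}  # hypothetical cost scale
weighted_error = sum(severity[a] for p, a in zip(predicted, actual) if p != a)
print(weighted_error)  # 1000: the one miss dominates
```

Which of the two numbers counts as "accuracy" is precisely the definitional question being asked.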
The first such programs would only predict a few common activities. Less common activities would require more software to predict. Deep Learning requires many specialized sub-programs working together in a hierarchy.
The first such programs would only predict a few common activities. Less common activities would require more software to predict.
This requires that human activities are enumerable ahead of time, no?
The question is very simple: could you train a powerful computer program to act like an accurate copy of yourself?
Some LW members may have already tried this, but it would require extremely powerful software. Even if it were only text-based, GPT-3 wouldn't be enough. Maybe GPT-13.
If such a program could generate a perfect copy of someone's written responses to any query, a Super Turing Test, the original person would arguably no longer need to exist. That would be the whole point: such a program would be like a "mind backup", a solution to the problem of death.
This would take something like "Deeper Learning" software, requiring huge amounts of data. For starters, it could record its subject's digital activities. Then it might try to predict sleeping and working habits. To get more data it should take the form of an operating system, or virtual assistant, or digital mind extension.
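A first pass at "predict sleeping and working habits" could be far cruder than deep learning: a first-order Markov model over a logged stream of activities, predicting whichever activity most often followed the current one. Everything below (the log format and the activity labels) is invented for illustration:

```python
from collections import Counter, defaultdict

def train(log):
    """Count activity transitions in a chronological activity log."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(log, log[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Most frequent activity observed to follow `current`,
    or None if `current` was never seen in the log."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# Hypothetical log recorded from a subject's digital activity.
log = ["wake", "email", "work", "lunch", "work", "email", "sleep",
       "wake", "email", "work", "lunch", "work", "browse", "sleep"]

model = train(log)
print(predict_next(model, "wake"))   # -> "email"
print(predict_next(model, "lunch"))  # -> "work"
```

A model this simple can only ever predict activities it has already seen in the log, which is exactly the "common activities first" limitation: anything novel requires more machinery.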
In my case, it could analyze all my old drawings, sorted by date and theme, with description and genre tags added. It could look up style data from the comic books my art is inspired by. Then it would start making similar art of its own. It could do the same thing even more easily with my fan fiction and other written texts. Obviously there would be no demand for any of this, but it would be possible.
Even more advanced, a human subject might start wearing a "sensor hat" or other clothing to allow the software to perceive everything they do. Then it would start to predict what they WILL do and experience.
An important function would be knowing what data to ignore. It should not try to predict everything in the subject's environment, like whatever appears on screens or the actions of other people. It could predict common themes like colors and shapes in programs and web layouts etc. These same patterns already exist in your brain.
Such imitator software might only render a low bandwidth or text-based description of the simulated person's behavior. That wouldn't matter at all, as long as it's all-encompassing.
One application of such research would be software designed to understand what people are doing at any given time. One problem with such research would be temporary glitches: if your coffee mug suddenly dissolved into a slinky, you'd know you were actually the software simulation. That sort of thing is obviously many decades away.
The real question is: what is the most predictive mind imitator software that could be created today?