Hi, I am a tutor learning about this and other A.I. technologies, and how they can be made safely accessible to students without compromising their privacy, safety, skill sets, and processing needs (especially students with special needs). Happy to chat with real people about this, not a bot. P.s. the link no longer works.
I take it this is y'all? https://twitter.com/lisa_flourish https://www.facebook.com/Flourishtuition/
seems cool. what are your plans for how you'd like to try it out? keep in mind that chatgpt still has a high error rate, and the kids will need to learn, above all else, to use it as a hint but never to trust anything it says. which... might be quite difficult.
"P.s. the link no longer works."
You mean the link to The Diamond Age? I've fixed it. Thanks for the catch.
We are getting started. I think ChatGPT has massive potential as a core engine powering the education toolkit of the future, combined, of course, with advances in other areas. The problem should be thought of more as building a successful product like the MacBook when you already have an Intel processor and associated components.
Soon in 2023.
Well, yes, that's where we have to go, or one of those Japanese robotic dog toys, or even a human. The Japanese are into it, deep. But I do think the issues are engineering issues, not fundamental scientific ones. This is more like the Apollo project than like coming up with a cure for cancer. See the section on Robots and Humans just around the corner.
Of course we already have sophisticated robot toys and companion robots for, e.g., older people, but I have no direct experience with any of these. Tutoring is quite different.
Here’s a dialog I had yesterday with ChatGPT:
I take that as provisional evidence that it can craft its dialog to a child’s level. If I were to ask it to speak to a 10-year-old, would it be able to do so accurately? I don’t know; I didn’t think to ask. What I’m getting at is whether it just knows the difference between regular output and simple output, or whether it is more sophisticated than that. And, if so, how is it able to make the gradations? These questions are worth checking out.
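One quick way to check would be to pose the same question with different target audiences in the system prompt and compare the registers of the answers. Here’s a minimal sketch of that experiment, assuming the older (pre-1.0) OpenAI Python client; the model name, question, and prompts are just placeholders for whatever one actually uses:

```python
# A small experiment: ask the same question at different reading levels
# and compare how the model grades its register. Assumes the openai
# Python package (pre-1.0 interface) and an API key in OPENAI_API_KEY.
import openai

QUESTION = "Why is the sky blue?"  # placeholder question

def ask_at_level(audience: str) -> str:
    """Ask QUESTION, instructing the model to address a given audience."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder; use whichever model is available
        messages=[
            {"role": "system",
             "content": f"You are a patient tutor. Explain things to {audience}."},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

for audience in ["a three-year-old", "a ten-year-old", "a college student"]:
    print(f"--- {audience} ---")
    print(ask_at_level(audience))
```

Comparing the three outputs side by side would at least tell us whether the gradations are real or whether the model only knows "simple" versus "regular."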
Then there is the question of input and output. Three-year-olds can’t read or type. But I suspect current voice recognition and text-to-voice output is adequate. Moreover, such a tutor could call on a wealth of videos at YouTube, Vimeo, and other sources.
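In principle the loop is simple: speech-to-text in, LLM in the middle, text-to-speech out. Here is a rough sketch of that loop using two common Python libraries (speech_recognition and pyttsx3); the ask_tutor function is a hypothetical stand-in for whatever language model one plugs in:

```python
# A rough sketch of a voice loop for a pre-literate child:
# microphone -> speech-to-text -> LLM -> text-to-speech -> speaker.
# Uses the speech_recognition and pyttsx3 packages; ask_tutor is a
# hypothetical placeholder for the language model powering the tutor.
import speech_recognition as sr
import pyttsx3

def ask_tutor(text: str) -> str:
    """Stub: send the child's utterance to an LLM and return its reply."""
    return "That's a great question! Let's think about it together."

recognizer = sr.Recognizer()
voice = pyttsx3.init()

while True:
    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source)
    try:
        heard = recognizer.recognize_google(audio)  # one of several recognizers
    except sr.UnknownValueError:
        continue  # couldn't make out the speech; keep listening
    reply = ask_tutor(heard)
    voice.say(reply)
    voice.runAndWait()
```

The hard part, of course, is not the plumbing but what goes on inside ask_tutor, and whether it is safe to put in front of a child.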
There are problems, of course. ChatGPT doesn’t have any sense of ground truth and tends to hallucinate, a term of art. It’s still got the sorts of problems that Gary Marcus, among many others, talks about. And it’s not politically house-broken. Those problems are being worked on. I have no idea how long it will take to make this technology safe for children.
But I also note that I don’t think those problems will ever be solved completely. Even when Marcus and others have succeeded in integrating symbolic tech with deep learning tech, even when Eric Jang’s robots have had a decade or three to collect detailed data through interacting with the world, there will be more to do. The world is messy and complex. It does not lend itself to being parceled out into neat categories. Any technology with the power to meet and move around in that world must itself be messy and complex. I conclude that the process of aligning AIs with human values – which are hardly coherent among themselves and which vary across cultures – will be never-ending.
With that in mind, who’s working on hooking up LLMs with speech input and output, with access to videos, and with the capacity to interact with children of all ages as well as with adults? If no one is, no university, no corporate R & D lab, then someone needs to get started. Note that I'm particularly interested in tutors for young children. As a target, think of young Sutan in this video.
For inspiration, read Neal Stephenson’s The Diamond Age: Or, A Young Lady's Illustrated Primer.