I strongly recommend that anyone interested in AI journaling in particular avoid anything that sends that data to a server somewhere. Use LLaMA or another local model if you possibly can. It may be less capable, but it's far more responsible. Personally I find a lot of value in AI journaling, but it also gave me a glimpse of what unscrupulous companies could one day do with a detailed log of every day of your life for years. I mean, isn't that basically the most useful user data you could possibly have for advertising?
I'm not saying that OpenAI specifically is currently untrustworthy with this data, or that it one day will be, but it just strikes me as a really bad habit to send every aspect of your day in detail to somewhere on the internet.
On a more 'doomer' sort of note: if anyone is afraid of AI being capable of simulating their personality in a Roko's-basilisk sort of situation, this seems like the best possible way to hand some future AI a near-lossless image of your present brain. I for one welcome Roko/Bing/Proteus and support it completely if it decides to take over the world, but remember: the most powerful aspect of LLMs is their capability to predict things given input. I can't help but feel there's some truth behind the idea that an LLM could predict how you would live and think and act, given enough input about your life and your choices.
Yeah, I agree with a lot of this, and this privacy concern was actually my main reason to want to switch to Obsidian in the first place, ironically.
I remember the book The Age of Surveillance Capitalism has a framework for thinking about privacy in which users knowingly trade away their privacy in exchange for a service that becomes more useful to them as a direct consequence of the privacy tradeoff. So for example, a maps app that remembers where you parked your car. This is contrasted with platforms where the privacy violations aren't 'paid back' to the users as useful features that benefit them; they just extract value from users in exchange for providing the service at all.
So in this case, I guess the more private information I submit to ChatGPT, the more directly useful, relevant, and insightful its responses to me get. Considering how much a life coach or career coach or therapist can cost, that's a lot of value in return.
I understand the theoretical concern about our righteous future overlords whom I fully support and embrace, but while I think you could learn a lot about me from reading my diary, including convincingly simulating my personality, I would feel surprised if reading my diary was enough to model my brain in sufficient fidelity that it's an s-risk concern...
So in this case, I guess the more private information I submit to ChatGPT, the more directly useful and relevant and insightful its responses to me get.
This is even stronger for something like LLaMA, because you can actually fine-tune it on your personal info, or fine-tune it for document retrieval.
I like the way you think.
While an in-depth daily journal would help in simulating a person, I suspect you could achieve a reasonably high-fidelity simulation without it.
I personally don't keep a regular journal, but I do send plenty of data over Messenger, WhatsApp, etc. describing my actions and thoughts.
I fully agree. I tried using ChatGPT for some coaching, but tried to keep it high-level and in areas where I wouldn't be too bothered if it showed up on the internet.
I think using the API, rather than ChatGPT, is better. See e.g. https://techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/:
Starting today, OpenAI says that it won’t use any data submitted through its API for “service improvements,” including AI model training, unless a customer or organization opts in. In addition, the company is implementing a 30-day data retention policy for API users with options for stricter retention “depending on user needs,” and simplifying its terms and data ownership to make it clear that users own the input and output of the models.
I was actually thinking that having an Obsidian plugin for this sort of thing would be really neat.
There are a few Obsidian plugins that do similar stuff using LLMs (they purport to read your notes and help you something something).
I'm thinking of mocking something up over the next week or so that does this 'diary questions' thing in a more interactive way, via the API, from inside Obsidian.
I might try this with GPT-3.5, as I can't yet justify spending the $20/month on GPT-4. Something like this seems quite interesting!
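For anyone curious what the API version might look like, here's a minimal sketch of the 'diary questions' round-trip. The system prompt, model choice, and function names are all invented for illustration; this is not the actual plugin:

```python
# Minimal sketch of a 'diary questions' round-trip via the OpenAI API.
# The system prompt and function names are assumptions, not the real thing.
import os

SYSTEM_PROMPT = (
    "You are a journaling assistant. The user writes a short, rough diary "
    "entry. Ask a few specific, insightful questions that draw out detail "
    "and reflection, then help turn the answers into a polished entry."
)

def build_messages(entry, history=None):
    """Assemble the chat messages for one turn of the diary conversation."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": entry})
    return messages

def ask(entry, history=None):
    """One API round-trip. Needs OPENAI_API_KEY set in the environment."""
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # or "gpt-4" for noticeably better questions
        messages=build_messages(entry, history),
    )
    return response["choices"][0]["message"]["content"]
```

Each follow-up answer gets appended to `history`, so the model sees the whole conversation when it asks its next question.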
(oh god we're going to have to be using this near-AGI stuff for everything soon, aren't we?)
Reflection: this adds a lot of extra writing in order to answer the questions. Worth being aware of; adding GPT-4 to your journaling process may increase the time it takes to complete it each day/week/etc.
Love this! I use Obsidian for daily journaling, but I love your use of ChatGPT as a sort of diary-prompter for yourself.
I think that within the next couple of years it will probably be possible to use an LLM to write an Obsidian plugin that lets you use an LLM directly inside Obsidian, connected to all of your Obsidian notes. Combine that with voice recognition, like you said, so that you can just have a little conversation at the end of your day. Mm, that'd be the magic!
Couple of years? I think we're talking about months here. I guess the biggest bottleneck would be getting all your notes into the LLM's context, but I doubt you really need that. You can probably guess a few notes that are important for what you're currently working on and add those as context.
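As a toy illustration of that 'guess a few important notes' idea, here's a sketch that ranks notes by plain word overlap with whatever you're currently writing about. No embeddings and no real plugin API; the function names are made up:

```python
# Toy note selection: score each note by word overlap with the query and
# stuff the top few into a context block for the prompt.
import re

def tokenize(text):
    """Lowercase word set; crude but enough for an illustration."""
    return set(re.findall(r"[a-z']+", text.lower()))

def top_notes(query, notes, k=3):
    """Return titles of the k notes sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(notes, key=lambda t: len(q & tokenize(notes[t])), reverse=True)
    return scored[:k]

def build_context(query, notes, k=3):
    """Concatenate the chosen notes into one block to prepend to the prompt."""
    return "\n\n".join(f"## {t}\n{notes[t]}" for t in top_notes(query, notes, k))
```

A real version would use embeddings, but even this crude overlap score is often enough to pull the right two or three notes into the prompt.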
This is a really nice use of the technology to make something work better for you.
And for some reason the place my mind went is some kind of Black Mirror episode where an emotionally vulnerable person has a chatbot confidant/interactive journal, and then their manipulative partner has installed some dark-web app on it that makes it spy on her and help him manipulate her feelings... urggh, gross.
I'm a bit confused here, isn't reflecting on, integrating, and writing down these thoughts the primary benefit of keeping a daily diary in the first place?
Asking GPT-4 to do these steps doesn't seem to offer the same benefits.
I’m writing the original paragraph, and answering a bunch of questions designed to prompt me to reflect.
They might not be your reflections though?
GPT-4 has a very large degree of freedom in choosing what questions to ask, and how they're worded.
So phrasing it one way or another will prompt different thoughts from you. And GPT-4, not you, is selecting those parameters.
You would need to specify a very narrow range of possibilities, or the exact question for it to ask to avoid this, which would seem to be about as much effort as just doing it entirely yourself.
EDIT: Changed wording a bit.
How is this different than reflecting on your day with a close friend who asks you questions about it?
Not much of a difference; in your example the close friend would also have influenced the answers with the questions they selected.
Some people are fine with that because they trust the close friend.
Others aren't and diary writing would be an exclusively solitary activity for the reasons above.
Totally agreed. I do morning pages which I find to be invaluable as a source of deep, personal reflection.
I think the above has its use for keeping a diary entry of each day that you could come back to, to read and remember what happened. Not necessarily for deep personal reflection, but more to flesh out the details of an otherwise sparse entry for a "What happened today?" type of diary.
Are you doing this from within Obsidian with one of the AI plugins? Or are you doing this with the ChatGPT browser interface and copy/pasting the final product over to Obsidian?
Currently just copy-pasting into GPT-4 via the web interface. I've got it working via the GPT-3 API as well today, but for now I prefer to suffer the inconvenience and get the better model. The questions it asks are MUCH more insightful.
If you got used to this, you would get rusty at creating journal entries when not connected to the internet, correct? (Because using ChatGPT requires internet connectivity.)
A powerful enough personal computer could run LLaMA locally. I don't think the raw model is optimized for chat, but with a suitable prompt, you might be able to get it into chat mode long enough to do this kind of thing. It also wouldn't surprise me to learn that there are more specialized derivatives now that would be suitable. I've certainly heard of people working on it.
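For what it's worth, the usual trick for getting a raw completion model into 'chat mode' is just a prompt template that ends mid-dialogue, so the model's natural continuation is the interviewer's next question. A hypothetical sketch, with all the wording invented:

```python
# Hypothetical prompt template for coaxing a raw (non-chat) local model,
# such as base LLaMA, into an interviewer role. Wording is made up.
PREAMBLE = (
    "The following is a conversation between a thoughtful interviewer and a "
    "person describing their day. The interviewer asks short, specific "
    "questions that draw out detail and reflection.\n\n"
)

def chat_prompt(turns, entry):
    """Render prior (question, answer) turns plus the new entry, ending on
    'Interviewer:' so the model completes with its next question."""
    out = PREAMBLE
    for question, answer in turns:
        out += f"Interviewer: {question}\nPerson: {answer}\n"
    out += f"Person: {entry}\nInterviewer:"
    return out
```

You'd feed the returned string to the local model's completion endpoint and stop generation at the next "Person:" line.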
Something I've wanted to do for years, but can't make myself do consistently because I'm bad at forming new habits and seem to have a limited reservoir of willpower that is generally depleted by the end of the day:
It's a lot of effort to be detailed and reflective, and it's a lot of effort to add the back-links.
The solution:
Rather than having to create a detailed, reflective entry from whole cloth, I can just write some short, abbreviated, incoherent mumbled paragraph, like a surly teenager at family dinner, and then ask ChatGPT to ask more questions to flesh out the story.
This works shockingly well.
It's surprisingly easy to write detailed answers to specific QUESTIONS about your day. Like a good listener, it draws you out with its insightful questions.
Then you can ask it to mark all the key words, names, etc., with double square brackets.
The end result isn't perfect but it's pretty good, and much, much better than I would ever do all by myself. It takes less than half the time and virtually no willpower.
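The double-square-bracket step can even be done without the model at all, if you already know which names should become backlinks. A toy version (in practice GPT-4 picks the keywords; here they're supplied by hand):

```python
# Toy version of the '[[double square brackets]]' step: wrap known names
# in Obsidian wikilinks. The keyword list is supplied by hand here.
import re

def add_wikilinks(text, keywords):
    """Wrap each whole-word keyword in [[...]], skipping already-linked ones."""
    for kw in sorted(keywords, key=len, reverse=True):
        # Negative lookbehind/lookahead avoid double-wrapping existing links.
        pattern = r"(?<!\[)\b" + re.escape(kw) + r"\b(?!\])"
        text = re.sub(pattern, f"[[{kw}]]", text)
    return text
```

Processing longer keywords first means a phrase like "Adele's startup" gets linked as a whole before the bare name inside it would.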
Here's what I prompted ChatGPT with today (that diary entry is roughly the level of writing I'm capable of at the end of the day).[1]
Response:[3]
My continuation:
Response:
My continuation:
Response:
And voilà:
Next improvement will probably be to use voice recognition so I don't have to type my answers out, and can be even more surly.
Literally every detail and fact in the diary entry is fictional and invented, just made up for this post. Don't be thinking I take trains or eat food.
This is in my text expander now, so it's easy to reproduce.
What really impresses me about these questions is that it doesn't delve into things like 'what's Adele's startup' or 'why is your friend's job at the paperclip maximising firm stressful?'
I don't know if that's a feature or a bug. It doesn't do great at 'resolve inconsistencies and add context', but it seems to understand the assignment: we're here to talk about my feelings.