I have kept extensive running notes on all the projects I’ve worked on since March 2020 in an outliner application. Yesterday, I exported all the notes belonging to one project - that’s 15k words over 18 months - and asked an LLM to pretend it was a senior software engineer and evaluate my work. I saw my work in a light entirely removed from the voice in my head telling me I’m not doing enough. I also asked the LLM for critical feedback, and it gave me some actionable insights.
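If you wanted to script this rather than paste everything into a chat window, a minimal sketch might look like the following. The notes path, model name, and prompt wording are all placeholders, and it assumes the OpenAI Python client; adapt it to whatever export format and provider you actually use.

```python
import os
from openai import OpenAI

# Placeholder path to the plain-text export of one project's notes from the outliner.
NOTES_PATH = "project-notes.txt"

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

with open(NOTES_PATH, encoding="utf-8") as f:
    notes = f.read()

# Ask the model to act as a senior engineer, evaluate the work, then give critical feedback.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap in whichever model you prefer
    messages=[
        {
            "role": "system",
            "content": (
                "You are a senior software engineer reviewing another engineer's "
                "running project notes. Evaluate the work described, then give "
                "critical, actionable feedback."
            ),
        },
        {"role": "user", "content": notes},
    ],
)

print(response.choices[0].message.content)
```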
Using LLMs to summarise emails, reports, and other text isn’t a good practice. These summaries can be quite lossy for short texts, and I will have a hard time trusting that they haven’t left out something important. But I am going to feed LLMs a lot more of my own work and ask them to spot patterns I might miss.
LLMs are a good tool for approximating the ability to synthesise events and abstract an outcome from them, an ability my own brain seems to lack.