3 Comments
juluc:

What I find most important is to carefully build rich context at the beginning of a session. I often record myself for 5-10 minutes, transcribe these thought traces, and use them as rich contextual input.

I'm not just transferring the goal "research market X", but making the model aware of its persona, the meta-context (e.g., you are the systems architect and we're building a system that needs this market research to integrate into part Y), and all the artifacts it will need. I think of these artifacts - repos, documents, research papers - as "packages" that I import, referencing them from the very first audio recording. Opus is INSANE at connecting the dots here (you probably know better than I do why - would love to hear your best guess.)

In other words, I create a context-rich waterfall: instead of dumping all context at once, I carefully curate it until I reach a state where the model is fully aligned. Then, if my task calls for it, I let it generate a sample workflow (usually editing my prompt three times to sample different approaches and selecting the best one). After that, I prompt it to execute.
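If it helps to make this concrete, the loop is roughly the following (a minimal Python sketch; the send() function, stage names, and file paths are all illustrative - in practice I just copy-paste between chat windows):

```python
from pathlib import Path

# Hypothetical stand-in for whatever chat interface you use; I literally
# copy-paste between AI systems, so here it's a manual loop.
def send(prompt: str) -> str:
    print(f"--- prompt ---\n{prompt[:300]}\n")
    return input("paste model reply > ")

# Curated stages, fed one at a time instead of dumped all at once.
CONTEXT_STAGES = [
    ("persona", "You are the systems architect on project Y..."),
    ("meta-context", "We need market research on X to integrate into part Y..."),
    ("artifacts", Path("notes/transcribed_audio.txt")),  # the 5-10 min thought trace
]

for name, payload in CONTEXT_STAGES:
    text = payload.read_text() if isinstance(payload, Path) else payload
    send(f"[context package: {name}]\n{text}")
    # Check alignment before adding more context.
    send("Before we continue: summarize your understanding so far in 3 bullets.")

send("Now generate a sample workflow for the task.")
```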

Most of the time I'm just copy-pasting between AI systems, occasionally checking implementations and watching for drift. When the context window fills up, I "export" the current state with nuanced variations:

- Export state (general)
- Export state focusing on the high-level goal and role
- Export state focusing on the mental model of the task

Then I start a new session, building this cascading context again. Works like magic.
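The export prompts themselves are just templates I reuse; roughly like this (wording approximate; I tweak them per project):

```python
# The three "export state" variants as prompt templates (illustrative).
EXPORT_PROMPTS = {
    "general": (
        "Export the current state of our work: goals, decisions made, "
        "open questions, and next steps, so a fresh session can continue."
    ),
    "goal_and_role": (
        "Export the current state, focusing on the high-level goal and "
        "your role/persona, so a fresh session knows who it is and why."
    ),
    "mental_model": (
        "Export the current state, focusing on your mental model of the "
        "task: assumptions, constraints, and how the pieces fit together."
    ),
}

# Paste one at the end of a session, save the reply, and feed it back
# as the first "package" of the next session's waterfall.
print(EXPORT_PROMPTS["general"])
```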

So if future models can handle this calibration themselves (and I just approve) and communicate with other system parts using a systematic approach (honestly, it would probably already work with the right scaffold), then my only role would be to pre-define critical checkpoints and tell the model "notify me when you complete X" or "I need to review output Y." I'd leverage my domain knowledge to identify what needs human validation, then check and steer as needed.
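In scaffold terms, the part I'd keep for myself could look something like this (purely hypothetical structure, not a real framework):

```python
# Pre-defined checkpoints: the model works through steps on its own,
# but pauses for human review at the steps I flagged up front.
CHECKPOINTS = {"market_research_done", "draft_report_ready"}

def run_step(step: str) -> str:
    return f"(model output for {step})"  # stand-in for actual execution

for step in ["collect_sources", "market_research_done", "draft_report_ready"]:
    output = run_step(step)
    if step in CHECKPOINTS:
        print(f"NOTIFY: human review needed at '{step}':\n{output}")
        if input("approve and continue? [y/n] > ").strip().lower() != "y":
            break  # step in and steer instead of letting drift compound
```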

Right now, I'm the glue & bottleneck.

Worth noting: I work as a solo operator, which gives me the flexibility to rapidly iterate on these workflows and experiment with different approaches. This kind of human-in-the-loop orchestration might face different challenges in larger organizational contexts, but the principles should transfer (?)

Nathan Lambert:

You're way ahead of the curve. It's awesome that this works. I honestly should hone my AI use further by, e.g., giving it the context of all my papers and blog posts when talking through research ideas. I bet I'm effectively leaving a model generation's worth of abilities on the table.

"Interconnects AI GPT" coming to ChatGPT soon (lol)

juluc:

Thanks Nathan! :) Just 4.5 months ago I listened to your pod with Lex and understood NOTHING haha; it was the start of my AI journey, I owe you a lot! ;)

FYI:

I've actually taken this a step further with a "state management system" - I systematically store these rich context "states" so I can return to projects after days or weeks and instantly get back up to speed. Just load the last state, rebuild context as needed, work, then export and store again.
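Under the hood it's embarrassingly simple; something like this (a minimal sketch, assuming one folder of timestamped state files per project - the layout is illustrative, not my exact setup):

```python
import json
import time
from pathlib import Path

STATE_DIR = Path("states/project-x")  # one folder per project (illustrative)

def export_state(state_text: str) -> Path:
    """Save an exported state; timestamped filenames sort chronologically."""
    STATE_DIR.mkdir(parents=True, exist_ok=True)
    path = STATE_DIR / f"{time.strftime('%Y%m%d-%H%M%S')}.json"
    path.write_text(json.dumps({"saved_at": time.time(), "state": state_text}))
    return path

def load_last_state() -> str:
    """Load the newest state to rebuild context in a fresh session."""
    latest = max(STATE_DIR.glob("*.json"))  # newest by filename
    return json.loads(latest.read_text())["state"]
```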

I recently completed a two-month project with 10 external stakeholders using this approach. The AI handled most strategy ideas and communications (important: still with my oversight!), while I focused on state management, orchestration, and validation.

I've reached an abstraction level where it's genuinely difficult to explain how I "work" to people outside the AI space, though; that's why your Substack and podcasts like this are invaluable for engaging with AI and developing these mental models :)
