<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Blog | Lei Zhang</title><link>https://zhanglei.page/blog/</link><atom:link href="https://zhanglei.page/blog/index.xml" rel="self" type="application/rss+xml"/><description>Blog</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 11 May 2026 00:00:00 +0000</lastBuildDate><image><url>https://zhanglei.page/media/icon_hu_102d14ed545eed19.png</url><title>Blog</title><link>https://zhanglei.page/blog/</link></image><item><title>Treat Me Like an Agent</title><link>https://zhanglei.page/blog/treat-me-like-an-agent/</link><pubDate>Mon, 11 May 2026 00:00:00 +0000</pubDate><guid>https://zhanglei.page/blog/treat-me-like-an-agent/</guid><description>&lt;p&gt;A few weeks ago, one of my PhD students needed Google Cloud credits for a project. The credits live on my billing account, so they needed me to provision the access. The message they sent me had the project name, the rough budget, the IAM role they needed, the exact email address to add — and at the bottom, a small numbered list of steps for me to follow in the Cloud Console. &lt;em&gt;Open the dashboard. In the left sidebar, click X. Find the project named Y. Add member, paste this email, assign this role.&lt;/em&gt; From message to provisioned access took about ninety seconds.&lt;/p&gt;
&lt;p&gt;A couple of weeks later, a different student needed essentially the same thing — different project, structurally identical ask. They sent me a one-liner: &lt;em&gt;&amp;ldquo;Hi Professor, could I use some of your Google Cloud credits for my project?&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I could see the whole back-and-forth this would require — which project, how much, what role, what email, where I would need to click — and I didn&amp;rsquo;t want to run it. So I wrote back and told them to go ask the first student how to prepare a request like this, and to come back when they had.&lt;/p&gt;
&lt;p&gt;That was the moment something clicked, and the thought has stuck with me ever since:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I&amp;rsquo;m an LLM agent for my students. The value I produce for them is bounded by the prompt they give me.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="same-model-different-prompts"&gt;Same model, different prompts&lt;/h2&gt;
&lt;p&gt;Anyone who has spent serious time with large language models knows this in their bones: the same model with two different prompts can produce wildly different outputs. The literature calls the craft of managing this prompt engineering, and lately context engineering — the discipline of constructing an input that gives the model the best shot at the answer you actually need.&lt;/p&gt;
&lt;p&gt;The students who get the most out of me are doing exactly this, whether or not they would describe it that way. They tell me what they want, supply the context, anticipate my follow-up questions, and remove every unit of friction from the path between their need and my action. They are, in effect, designing a careful prompt for a model whose attention they only have in limited windows.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t about politeness or formality. The one-liner I got was perfectly polite. It just under-prompted me. It left me to assemble the context myself, and the result is the same thing that happens when you give an LLM a vague instruction: more turns, more clarifications, more compute wasted on both sides — or, if the model has limited patience that afternoon, a redirect back to someone who can teach you to prompt better.&lt;/p&gt;
&lt;h2 id="a-second-example"&gt;A second example&lt;/h2&gt;
&lt;p&gt;The contrast did not show up only in that one exchange. Around the same time, I asked both students to write a short retrospective on their recent paper submissions. I wanted to know what they had learned and what we should do differently next time.&lt;/p&gt;
&lt;p&gt;The first student — the one who had carefully prepared the credit request — sent me a four-page PDF. It had an introduction, a section on what they had learned, a section on what went wrong, a remediation and action plan, and a short conclusion.&lt;/p&gt;
&lt;p&gt;The second student typed a few bullet points directly into the shared Google Doc we use for meeting notes. Three or four lines. Most of what I would have wanted them to reflect on wasn&amp;rsquo;t there.&lt;/p&gt;
&lt;p&gt;I don&amp;rsquo;t need to tell you which one I preferred. But I want to be precise about &lt;em&gt;why&lt;/em&gt;, because it isn&amp;rsquo;t that the PDF was longer or prettier. The PDF was engineered to do a job. It anticipated the questions I would have asked, and answered them before I asked. It separated the diagnostic part (what went wrong) from the prescriptive part (what we&amp;rsquo;ll do about it), which is exactly the structure I would otherwise have had to pull out of the bullet-point version myself. It gave me leverage: I could read it, agree or disagree with specific claims, and we could spend our next meeting on the action plan instead of excavating context.&lt;/p&gt;
&lt;p&gt;The structure of the document &lt;em&gt;was&lt;/em&gt; the prompt.&lt;/p&gt;
&lt;h2 id="the-reciprocal-half"&gt;The reciprocal half&lt;/h2&gt;
&lt;p&gt;Here is the part I want to be careful about, because the &amp;ldquo;treat me like an agent&amp;rdquo; framing only works if it cuts both ways.&lt;/p&gt;
&lt;p&gt;I treat my students like agents too — but in a specific way that&amp;rsquo;s worth spelling out.&lt;/p&gt;
&lt;p&gt;I am not trying to develop them into well-trained question-answerers. I am trying to develop them into something more like autonomous research agents: the kind that can frame a problem, decompose it, run the steps, decide what to escalate, and come back to me only when something genuinely needs my judgment. That is a much harder design problem than getting any single task done well. It means that a great deal of what I do — in our meetings, in the way I reply to messages, in what I sometimes refuse to answer directly — is calibrated less for the immediate response and more for the agent I am trying to grow over time.&lt;/p&gt;
&lt;p&gt;I rewrite my own prompts often. I watch for places where a particular phrasing of mine pulls a student toward dependence rather than independence, and I adjust. The redirect I sent the second student — &lt;em&gt;go ask the first student how to prepare this kind of request, and come back when you have&lt;/em&gt; — was a prompt too, and a deliberate one. I could have just answered the question. I did not, because the answer was not what they needed.&lt;/p&gt;
&lt;p&gt;So when I say &amp;ldquo;treat me like an agent,&amp;rdquo; I am not asking for deference. I am pointing at a structural fact: the bandwidth between any two collaborators is narrow and expensive, and making good use of that bandwidth — the prompts in both directions — is real work that compounds over time. The students who get the most out of me are the ones doing that work on their side. The best version of me, on my side, is doing it for them.&lt;/p&gt;
&lt;h2 id="what-this-looks-like-in-practice"&gt;What this looks like in practice&lt;/h2&gt;
&lt;p&gt;If you&amp;rsquo;re a PhD student, or really anyone working with a more senior collaborator, here is the version of this I wish I had heard earlier in my own career.&lt;/p&gt;
&lt;p&gt;Your advisor is not a magic oracle. They are a model with finite context, finite attention, and a long queue of other inputs. The value you extract from them per unit of their time is a function of the prompt you hand them. Specify what you want. Supply the context. Anticipate the follow-ups. Pre-compute the parts they would otherwise have to compute. Lower the activation energy of the thing you are asking them to do.&lt;/p&gt;
&lt;p&gt;For requests: tell them the goal, the constraints, what you&amp;rsquo;ve already tried, and exactly what you need. If there&amp;rsquo;s a UI they have to click through, walk them through it.&lt;/p&gt;
&lt;p&gt;For deliverables: don&amp;rsquo;t hand them raw material and ask them to structure it. Hand them something already structured — even a rough outline of &lt;em&gt;intro / what I learned / what went wrong / what we&amp;rsquo;ll change&lt;/em&gt; beats a bulleted brain dump, almost every time.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;ve read this far, you&amp;rsquo;re probably already the kind of person who does this. The harder move is to notice when you aren&amp;rsquo;t, and to fix that before you blame the output.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;re all agents to each other. The good news is that prompt engineering, on both sides, is a skill you can practice.&lt;/p&gt;</description></item></channel></rss>