AI can do your writing, but not your thinking

Aug 8, 2025 - 07:48

As a partner at Theory Ventures, a VC firm built around deep technology and market research, I spend my days swimming in information: academic papers, market reports, interview notes, and written analyses. Our job is to synthesize these data points into a nuanced perspective to inform our investment decisions.

Reading the hype online, it’s tempting to think you can just delegate anything to AI. But for something so critical to our job, we don’t just want it done, we need it to be excellent. How much can AI really do for us?

In this piece, I will share: 

  • How we structure instructions to get the best analysis out of an AI model
  • Where I critically intervene and rely on my own thoughts
  • How you can get an AI to mirror the way you write

When relying on an LLM, you often get something that only seems good at first glance: the AI has missed a detail or an important nuance. And for this core part of my job, decent isn’t enough; I need the output to be excellent.

This AI accuracy gap creates a painful cycle: you spin in circles re-prompting the system to get what you want until you’re essentially rewriting the entire output on your own. In the end, it’s unclear whether AI helped at all. The more effective approach is to keep the thinking with you (the human) and leave the writing (i.e., formatting and synthesis) to the LLM. This simple separation is what elevates AI-augmented workflows from decent to exceptional.

Here’s how we build those kinds of workflows at Theory Ventures, and how you can too, illustrated with the automation of our internal market research reports.

Step 1: Define the thinking process

Prepare a document with very detailed instructions on the underlying analysis you want to produce. Clearly outline the context and goals, then spell out exactly how you deconstruct a broad analysis: the specific questions you would ask, the follow-up sub-questions, how each should be answered with data, and any key callouts or exceptions.

You can use an AI assistant to generate a first draft of this by sharing completed documents and asking it to deconstruct the analysis. But these instructions are critical, so finish writing them by hand and keep updating them over time as you tweak your analysis.

Example analysis instructions included in the prompt (note: full instructions typically run 2 to 10 or more pages)

  • Analyze the underlying market structure: Is it fragmented or consolidated? Why? (e.g., high specialization needs, regulatory barriers, network effects, legacy tech debt). How is fragmentation changing over time, and is it different across market segments?
    • Use the following data sources and analyses: . . .
  • Evaluate key market dynamics: What are the typical switching costs? How prevalent is tech debt? What are the typical sales cycles and buyer behaviors? How do incumbents maintain their position (moats)?
    • Use the following data sources and analyses: . . .
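An instruction document like the one above ultimately gets assembled into a prompt. Here is a minimal sketch of what that assembly might look like; the section headings, field names, and the `build_analysis_prompt` helper are hypothetical illustrations, not Theory Ventures' actual format.

```python
def build_analysis_prompt(context: str, goals: str, questions: list[dict]) -> str:
    """Combine context, goals, and the deconstructed analysis questions
    into a single instruction prompt for the model."""
    lines = ["## Context", context, "", "## Goals", goals, "", "## Analysis steps"]
    for i, q in enumerate(questions, 1):
        lines.append(f"{i}. {q['question']}")
        # Follow-up sub-questions nest under their parent question.
        for sub in q.get("sub_questions", []):
            lines.append(f"   - {sub}")
        # Each step names the data sources it should be answered with.
        if q.get("data_sources"):
            lines.append("   Use the following data sources: " + ", ".join(q["data_sources"]))
    return "\n".join(lines)
```

Keeping this as a structured document rather than an ad-hoc prompt is what makes it easy to hand-edit and version over time.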

Step 2: Lay out your human-led analysis

Provide your primary analysis, along with raw notes and instructions to the AI. We set our systems up so they require the user to provide their key takeaways and analysis to guide the system towards what’s most important—highlighting areas to focus on, key opportunities, and potential concerns. These are typically four to five detailed bullet points of two to four sentences each. This is the crux of the analysis and should therefore never be AI-generated.

Example key takeaways provided to the system: 

  • This market has historically been small and fragmented, without major software providers. We expect it will grow dramatically, primarily by automating current labor spend and consolidating a set of point solutions. The underlying demand for this capability will also increase with XYZ challenges. We feel very confident in these two growth levers.
  • There’s substantial concentration at the upper end of the market. Major platforms control around X% of the market and have all invested heavily in their own technology. But below the top-n largest players, there is a healthy cohort of medium-large buyers that have the scale to need this solution but don’t want to build it. We think this is sufficient to build a sizeable company, although market concentration and build-versus-buy remain key long-term risks.
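Because the key takeaways are the crux of the analysis, a system can enforce that a human supplies them before anything runs. This is a sketch of that gate under assumed names (`ResearchRequest`, `validate`); the four-takeaway minimum mirrors the "four to five detailed bullet points" guideline above.

```python
from dataclasses import dataclass, field


@dataclass
class ResearchRequest:
    """Input bundle for a market research report run."""
    raw_notes: str
    key_takeaways: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Refuse to run without the human-led analysis: the takeaways must
        # come from the user, never be AI-generated.
        if len(self.key_takeaways) < 4:
            raise ValueError(
                "Provide at least four detailed key takeaways before running the report."
            )
```

Failing loudly here keeps the workflow human-in-the-loop by construction rather than by convention.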

Step 3: Run an interactive Q&A to hone the analysis

This dialogue is the most interesting and fun step: have the system generate questions to clarify the contours of your analysis. Based on the primary analysis, notes, and general instructions, the system asks about anything that was unclear or contained conflicting information. This sharpens the analysis and gives you the opportunity to share more of your thought process and guidance.

Example Q&A:

  • Q from the AI: You said that major platforms have invested heavily in this technology, but conversations with some of those companies indicated an excitement to buy. Do you think that will be common, or were they exceptions?
  • A from the human: Good point. I do think that many of them will buy eventually, but because they’ve built a lot of technology internally they are more likely to need a new platform only for certain components, versus buying an end-to-end system. And the very largest companies (top three to five) will build everything in-house.
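The loop behind this exchange can be sketched as a small function. The `ask_model` and `answer` callables are assumptions standing in for whatever LLM API and user interface you use; the source does not name a specific one.

```python
from typing import Callable


def clarification_round(
    ask_model: Callable[[str], list[str]],
    takeaways: list[str],
    notes: str,
    answer: Callable[[str], str],
) -> list[tuple[str, str]]:
    """Have the model surface unclear or conflicting points, then record
    the human's answers so they can be folded back into the context."""
    prompt = (
        "Given these takeaways and notes, list questions about anything "
        "that is unclear or conflicts with other information:\n"
        + "\n".join(f"- {t}" for t in takeaways)
        + "\n\nNotes:\n"
        + notes
    )
    questions = ask_model(prompt)
    # Pair each model question with the human's answer.
    return [(q, answer(q)) for q in questions]
```

The returned question/answer pairs become additional context for the final report generation.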

Step 4: Share past work to match tone, not ideas

Use previous examples of your work to replicate tone and style, but only after the scaffolding work is done. Most people skip straight to this step, but we found (and research shows) that finished examples are most useful for matching tone and writing style, not for shaping the analysis itself.
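In practice this ordering means the style examples are appended to the already-built analysis prompt, with framing that scopes them to tone only. A sketch, with hypothetical helper and wording:

```python
def add_style_examples(analysis_prompt: str, past_reports: list[str]) -> str:
    """Append finished reports purely as tone/style references,
    after the analysis scaffolding is complete."""
    style_block = "\n\n".join(
        f"### Style example {i}\n{report}"
        for i, report in enumerate(past_reports, 1)
    )
    return (
        analysis_prompt
        + "\n\n## Writing style\n"
        + "Match the tone and structure of these past reports; "
        + "do not reuse their conclusions or data.\n"
        + style_block
    )
```

The explicit "do not reuse their conclusions" framing is what keeps the examples from leaking into the analysis itself.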

In researching the best AI-native products, we’ve seen that practically all of the work goes into defining the thinking and analysis portion of the problem—detailed instructions, guidelines, orchestration, and tooling—so the AI system knows what it should do and just executes on it. 

At Theory Ventures, we’ve started to mirror the same system by developing highly constrained, human-in-the-loop workflows that direct the analysis, leaving the LLM to execute basic information extraction and synthesis. That’s how we, and our AI systems, have started working smarter: not by asking AI to think for us, but by helping it think better.
