For two years I'd been doing infrastructure and data platform work — IoT ingestion pipelines, ADX schema design, Bicep IaC, security hardening. I was comfortable. I knew the codebase deeply.

In late 2023 the team needed someone to take over the Copilot AI workstream at a critical moment. Copilot accuracy was stuck at 40% and Public Preview was weeks away. I was asked if I could step in.

I said yes. I had never worked on an LLM pipeline before.

The First Week: Read Everything

There was no time for a gentle ramp. My approach in week one was to read the entire Copilot Python codebase end-to-end before writing a single line.

I traced every step: how a user question became a prompt, how instructions and examples were retrieved from storage, how the prompt was assembled, how the KQL was generated, how it was validated, how errors were handled. I needed a complete mental model before I could make good decisions about what to change.
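The flow described above can be sketched roughly as follows. This is a hypothetical illustration of the stages, not the actual Copilot code; every name (`PromptContext`, `assemble_prompt`, `answer`, the `retrieve`/`generate`/`validate` callables) is invented for the sketch.

```python
# Illustrative sketch of the pipeline stages: retrieve context, assemble
# the prompt, generate KQL, validate it, and retry once on failure.
from dataclasses import dataclass

@dataclass
class PromptContext:
    question: str
    instructions: list  # system instructions retrieved from storage
    examples: list      # few-shot (question, KQL) pairs retrieved from storage

def assemble_prompt(ctx: PromptContext) -> str:
    """Combine instructions, examples, and the user question into one prompt."""
    parts = list(ctx.instructions)
    for q, kql in ctx.examples:
        parts.append(f"Q: {q}\nKQL: {kql}")
    parts.append(f"Q: {ctx.question}\nKQL:")
    return "\n\n".join(parts)

def answer(question: str, retrieve, generate, validate) -> str:
    """End-to-end flow: retrieve, assemble, generate, validate, handle errors."""
    instructions, examples = retrieve(question)
    prompt = assemble_prompt(PromptContext(question, instructions, examples))
    kql = generate(prompt)   # the LLM call
    error = validate(kql)    # e.g. a syntax/schema check against the database
    if error:
        # one simple error-handling strategy: retry with the error appended
        kql = generate(prompt + f"\n# Previous attempt failed: {error}")
    return kql
```

Having each stage as a separate, swappable function is what makes the later failure analysis tractable: any stage can be stubbed out or instrumented in isolation.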

Approach: When you're new to a codebase, resist the urge to start coding immediately. An hour of reading beats a day of wrong assumptions.

The Second Week: Find the Real Failure Modes

Armed with understanding, I ran the system against a set of test questions and catalogued every failure. Not abstractly — concretely. What exact query was generated? What validation error came back? What was the model trying to do?
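A cataloguing pass like this can be as simple as a small harness. The sketch below is illustrative only, with assumed stand-ins for the real pipeline (`run_pipeline`, `validate`); the point is recording the concrete question, the exact generated query, and the exact error for later triage.

```python
# Hedged sketch of a failure-cataloguing harness: run every test question
# through the pipeline and record each concrete failure to a CSV.
import csv

def catalogue_failures(questions, run_pipeline, validate, out_path="failures.csv"):
    """Collect (question, generated query, validation error) for every failure."""
    failures = []
    for q in questions:
        kql = run_pipeline(q)          # what exact query was generated?
        error = validate(kql)          # what validation error came back?
        if error:
            failures.append({"question": q, "generated_kql": kql, "error": error})
    # persist the catalogue so failures can be grouped into patterns offline
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["question", "generated_kql", "error"])
        writer.writeheader()
        writer.writerows(failures)
    return failures
```

Grouping the resulting rows by error message is usually enough to surface the handful of recurring patterns worth fixing first.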

Three patterns emerged quickly. Each was fixable, and none required redesigning the whole pipeline. I started with the highest-impact one, typecast injection, and worked through the list.

What Made the Switch Possible

Looking back, a few things made this transition work:

Infrastructure knowledge transferred directly

I knew ADX, Cosmos DB, and the data model deeply from my previous work. The Copilot pipeline queried these same systems. Understanding the underlying data was a significant advantage — I could reason about why certain queries would fail based on the schema, not just the model output.

Debugging skills are universal

Good debugging — reproduce the problem, isolate the variable, form a hypothesis, test it — works the same whether you're debugging a null reference in C# or a failing KQL query from an LLM. The domain changes; the method doesn't.

Treat the model like a system component

I approached the LLM the same way I'd approach any other component in the pipeline: what are its inputs, what are its outputs, where are the failure modes, how do I instrument it? This framing was more useful than thinking of it as a magical black box.
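In practice that framing means wrapping the model call with the same instrumentation any other component gets. The sketch below is a minimal, hypothetical example; `call_model` stands in for whatever SDK call the real pipeline makes, and the logger name is invented.

```python
# Treating the model as a system component: log inputs, outputs, latency,
# and failures, exactly as you would for any other pipeline stage.
import logging
import time

logger = logging.getLogger("copilot.llm")

def instrumented_generate(call_model, prompt: str) -> str:
    """Wrap an LLM call with timing and failure logging."""
    start = time.monotonic()
    try:
        output = call_model(prompt)
        logger.info("llm ok: output_chars=%d", len(output))
        return output
    except Exception:
        # failure mode surfaces in the logs like any other component error
        logger.exception("model call failed")
        raise
    finally:
        logger.info("llm latency=%.2fs prompt_chars=%d",
                     time.monotonic() - start, len(prompt))
```

Once the call is instrumented like this, questions such as "which prompts fail, and how slowly?" become ordinary log queries rather than guesswork about a black box.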

And Then Again: Frontend in 2025

The same pattern repeated when I moved to the Power Automate Process Mining team in 2025. After 3+ years of backend C#/.NET work, I was now primarily in a React/TypeScript/Fluent UI codebase.

Same approach: read the codebase first, find the patterns, contribute incrementally, then start guiding others once you have depth. Within a few months I was unblocking teammates on frontend design questions and became one of the more experienced FE contributors on the team.

"The way he ramped up on learning FE development, React and Fluent, to the way he progressed in supporting other developers in FE design and work, unblocking them and guiding them towards completion is highly commendable." — Manager feedback

What I'd Tell Someone in the Same Spot
