4 min read

How Copilot Agents Changed My Workflow at Dyson

Tags: Microsoft Copilot, AI agents, Dyson, automation, enterprise AI

The Promise of Enterprise AI Agents

When Microsoft rolled out Copilot agents (formerly Copilot Studio plugins), I saw an opportunity to automate some of the most tedious parts of my workflow at Dyson. These are AI agents that live inside Microsoft 365, access your organization's data, and perform actions on your behalf. I built several for my team, and the productivity gains were substantial.

What Copilot Agents Actually Are

A Copilot agent is a custom AI capability that extends Microsoft 365 Copilot. You define what the agent can do (its actions), what data it can access (its knowledge), and how it should respond (its instructions). Users interact with it through natural language in Teams, Word, or the Copilot sidebar.

Under the hood, agents use GPT-4 class models with access to your organization's Microsoft Graph data: emails, documents, calendars, SharePoint sites, and more.
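
The retrieval plumbing is abstracted away by Copilot, but conceptually the grounding step resembles a call to the Microsoft Graph search API. A minimal sketch of building such a request (the query string and result size are illustrative placeholders, not my actual configuration):

```python
# Sketch of a Microsoft Graph search request, roughly what Copilot's
# grounding layer does on your behalf. Token acquisition is omitted,
# and the query is illustrative only.

GRAPH_SEARCH_URL = "https://graph.microsoft.com/v1.0/search/query"

def build_search_request(query_string: str, entity_types: list[str]) -> dict:
    """Build the JSON body for a Graph search/query call."""
    return {
        "requests": [
            {
                "entityTypes": entity_types,           # e.g. listItem, driveItem
                "query": {"queryString": query_string},
                "from": 0,
                "size": 10,                            # top 10 hits
            }
        ]
    }

body = build_search_request(
    "hero banner layout test Japan", ["listItem", "driveItem"]
)
# POST this body to GRAPH_SEARCH_URL with a bearer token; results are
# scoped to SharePoint content the calling user can already access.
```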

My First Agent: The Test Results Finder

My first useful agent connected to the A/B test results library I had built in SharePoint. Instead of opening the Power App and filtering through results, team members could ask Copilot:

"Have we tested hero banner layouts on the Japanese market in the last year?"

The agent would search the SharePoint lists, find relevant tests, and summarize the results in natural language. Building it took about two hours in Copilot Studio:

  • Connected the SharePoint lists as a knowledge source
  • Defined the agent's instructions to focus on test results and learnings
  • Added example queries to improve response quality
  • Published to the team's Microsoft Teams environment
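
Behind a question like the one above, the agent is essentially running a filtered lookup over the list data. A toy sketch of that retrieval logic (the field names are illustrative, not the actual schema of my results library):

```python
from datetime import date

# Toy records mimicking rows in the SharePoint results list.
# Field names are illustrative stand-ins for the real list columns.
tests = [
    {"topic": "hero banner layout", "market": "JP", "run_date": date(2023, 9, 1)},
    {"topic": "hero banner layout", "market": "DE", "run_date": date(2023, 5, 12)},
    {"topic": "checkout CTA copy",  "market": "JP", "run_date": date(2022, 1, 20)},
]

def find_tests(records, topic_keyword, market, since):
    """Return tests matching a topic keyword, market, and date cutoff."""
    return [
        r for r in records
        if topic_keyword in r["topic"]
        and r["market"] == market
        and r["run_date"] >= since
    ]

hits = find_tests(tests, "hero banner", "JP", since=date(2023, 1, 1))
# Only the recent Japanese hero banner test survives the filter;
# the agent then summarizes the surviving rows in natural language.
```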

The Content Brief Generator

The second agent I built automated content brief creation. At Dyson, creating a content brief for a new product page involved gathering information from multiple sources: product specifications from the PIM system, competitor analysis from shared drives, brand guidelines from SharePoint, and previous test results.

The agent pulled from all these sources and generated a structured brief:

Agent Instructions:
- When asked to create a content brief, gather:
  1. Product specifications from the PIM SharePoint site
  2. Relevant A/B test results from the Results Library
  3. Brand guidelines for the target market
  4. Competitor page structures from the Analysis folder
- Output a structured brief with sections for:
  - Key product features to highlight
  - Recommended page structure based on test data
  - Market-specific considerations
  - Content requirements and word counts
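
The instruction list above maps onto a simple gather-then-assemble pattern. A hedged sketch of the assembly step, with section headings taken from the instructions (the gathered text is a placeholder for what the agent pulls from each knowledge source):

```python
def assemble_brief(gathered: dict) -> str:
    """Render gathered source material as a structured brief in markdown."""
    order = [
        "Key product features to highlight",
        "Recommended page structure based on test data",
        "Market-specific considerations",
        "Content requirements and word counts",
    ]
    return "\n\n".join(
        f"## {section}\n{gathered.get(section, 'TODO: source missing')}"
        for section in order
    )

# Placeholder content standing in for the PIM, test library,
# and brand-guideline sources the agent actually queries.
brief = assemble_brief({
    "Key product features to highlight": "Cyclone suction; 60-min runtime",
    "Recommended page structure based on test data": "Spec table above the fold",
    "Market-specific considerations": "JP pages lead with engineering detail",
})
```

The `TODO` fallback mirrors a design choice worth keeping in the real instructions: a brief with a visibly missing section is easier to review than one where the agent silently invents content.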

What previously took 2-3 hours of manual research now took about 5 minutes of agent interaction plus 30 minutes of human review and refinement.

Practical Lessons

Start Small and Specific

My most successful agents had narrow, well-defined purposes. The test results finder did one thing well. A generic "help me with everything" agent would have been far less useful, because the instructions would have been too vague for the model to follow reliably.

Knowledge Source Quality Matters

The agent is only as good as the data it can access. I spent significant time organizing and cleaning SharePoint content before connecting it to agents. Poorly structured documents produce poor agent responses.

Instructions Need Iteration

Writing effective agent instructions is similar to prompt engineering. I went through multiple rounds of testing and refining instructions based on real user queries. Specific examples in the instructions dramatically improved response quality:

When summarising test results, always include:
- The test hypothesis
- Markets tested
- Key metric impact (conversion rate change with confidence interval)
- The recommendation (roll out, iterate, or abandon)

Example response format:
"We tested [description] in [markets] from [dates].
Results: [metric] changed by [X]% (95% CI: [range]).
Recommendation: [action]"
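
For the quantitative part of that template, the interval comes straight from the test data. A small sketch of the relative-change calculation with a normal-approximation 95% CI (a simplification of what a proper A/B testing tool reports; numbers are invented):

```python
from math import sqrt

def conversion_change_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Relative conversion-rate change (B vs A) with an approximate 95% CI.

    Normal approximation on the absolute difference, then scaled by the
    control rate -- a rough sketch; real experimentation tools account
    for the uncertainty in the control rate as well.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff / p_a, (diff - z * se) / p_a, (diff + z * se) / p_a

# Invented example: 5.0% control vs 5.6% variant, 10k visitors each.
rel, lo, hi = conversion_change_ci(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
print(f"Conversion changed by {rel:+.1%} (95% CI: {lo:+.1%} to {hi:+.1%})")
```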

Users Need Training

Even though agents use natural language, users need to learn what the agent can and cannot do. I created a one-page guide showing example queries and common use cases. Usage increased significantly after sharing this guide.

Limitations I Encountered

  • Response accuracy: Agents sometimes hallucinate or misinterpret data. Human review is essential, especially for quantitative results.
  • SharePoint search quality: The agent relies on Microsoft's search indexing. If content is not properly indexed, the agent cannot find it.
  • Action limitations: Agents read data effectively, but write actions (creating documents, updating lists) require a more complex setup with Power Automate connectors.
  • Context window: When searching across many documents, the agent sometimes loses context or prioritizes recent content over relevant content.

Impact on My Workflow

Across the three agents I built, I estimate the team saved 10-15 hours per week in manual research and document preparation. More importantly, the test results agent surfaced relevant previous work that people would never have found manually, reducing duplicate testing and improving decision quality.

Enterprise AI agents are not magic. They require good data, careful instruction design, and user education. But when those pieces are in place, they deliver real productivity improvements that compound across teams.