Hands On: Getting Started with Microsoft Copilot Studio Agents

Microsoft Copilot Studio is positioned as a low-code way to build agents that can answer questions, take action, and work with connected data.

Microsoft's docs say agents can be created for use cases including:

  • Sales help and support issues
  • Opening hours and store information
  • Employee health and vacation benefits
  • Public health tracking information
  • Common employee questions for businesses

For a hands-on proof-of-concept, I wanted to see how quickly that could translate into something useful for my needs dealing with editorial content -- specifically, a SharePoint-hosted Excel tracker used to keep tabs on my articles published across multiple sites. I use the spreadsheet to quickly find content for site-related email newsletters. I thought it would be great to query that resource to narrow down selections for different newsletters, suggest ordering and so on, not to mention provide metrics to show my boss how productive and AI-savvy I was!

Getting Started
[Click on image for larger view.] Getting Started (source: Ramel).

The idea was simple enough: create a basic agent in Microsoft Copilot Studio, connect it to the spreadsheet, and test whether it could answer questions about recent content -- answers I could use to make informed decisions and quickly populate newsletters. The actual experience turned out to be more revealing than that setup made it sound.

I was just aiming for a quick-and-dirty test to see how things worked, because Microsoft had just notified me that my Copilot Studio preview access was running out and I wanted to extend the preview and get a feel for the product. I did not want to spend a lot of time on configuration, and I did not want to involve my IT Admin in setting up connections or permissions. I just wanted to see how far I could get with a simple agent and a SharePoint-hosted Excel workbook as the knowledge source.

Starting With a Plain-Language Agent
The opening flow was straightforward. From the "What would you like to build?" page, I chose "Create an agent," the option described as a way to build "flexible solutions that take action, share knowledge, and handle tasks." From there, I named the agent Content Tracker and moved into the initial configuration screens.

Content Tracker
[Click on image for larger view.] Content Tracker (source: Ramel).

Once the agent shell was created, Copilot Studio presented the expected basics: model selection (default is Claude Sonnet 4.5), an instructions field, and a Knowledge input window for adding sources, such as enterprise or domain data. Microsoft documents this as the core starting point for building an agent and then expanding it with connected knowledge. Its quickstart guidance reflects that same flow.

Setup
[Click on image for larger view.] Setup (source: Ramel).

My instructions were:

You are an assistant for an editorial content tracker.

Answer only from the connected knowledge source when possible.
If the source does not contain the answer, say that it is not available in the source.
Prefer short factual answers.
When summarizing, group items by publication, date, or topic. Do not infer unpublished data.
Do not make up article titles, dates, publications, topics, or counts.
If asked about a specific publication, tab, or section, search for that exact name first.
If the source appears incomplete or only partially searchable, say so clearly.

Where SharePoint Added Friction
The first complication appeared when I tried to use my SharePoint-hosted Excel workbook as a knowledge source. At first, I used the kind of link many people would naturally copy from Excel or SharePoint -- a share-style URL from the Share button. That did not work.

Part of the issue was connection-related. Microsoft's connection documentation explains that Copilot Studio uses connections to securely access authenticated services such as SharePoint. In practice, that meant the agent environment needed a SharePoint connection before the source could be used properly.

SharePoint Source
[Click on image for larger view.] SharePoint Source (source: Ramel).

Once I created a SharePoint connection, the setup became more coherent. The connection manager showed SharePoint as connected, which was the first real sign that the knowledge-source path was viable.

Connecting SharePoint
[Click on image for larger view.] Connecting SharePoint (source: Ramel).

This was one of the first useful takeaways from the PoC. In this case, adding SharePoint content was not just a matter of pasting in a spreadsheet link. It depended on both the right kind of SharePoint source and a working environment-level connection.

The Workbook Was Reachable, But Not Spreadsheet-Aware
After the source was accepted, the agent could answer some questions about the content tracker. For example, it was able to summarize what was in the file and identify article themes from the material it could retrieve.

First Query
First Query (source: Ramel).

That success had an important limit, however. The source workbook contained multiple worksheet tabs, including separate sections for publications such as .NET Insight and VCRM (Virtualization & Cloud Review Magazine). In practice, the agent consistently surfaced content from the .NET Insight tab of the workbook while failing to "see" the VCRM tab.

That behavior lined up with how Microsoft describes knowledge sources in Copilot Studio. For SharePoint, the product connects to a SharePoint URL and uses GraphSearch to return results. In other words, the standard knowledge-source path behaves more like search-grounded retrieval than like a live Excel session that can navigate whichever worksheet a user currently has open.
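To make that distinction concrete, the Microsoft Graph Search API call that this kind of search-grounded retrieval rests on looks roughly like the sketch below. This is illustrative only, not what Copilot Studio literally executes internally; the site URL and token are placeholders:

```python
import json
import urllib.request

TOKEN = "<access-token>"  # placeholder for a real Microsoft Entra access token

# A Graph search request ranks matching files (driveItems); it does not open
# a workbook and walk its worksheet tabs, which is why tab-level content can
# be missed. The SharePoint path below is a made-up example.
body = {
    "requests": [{
        "entityTypes": ["driveItem"],
        "query": {
            "queryString": 'VCRM path:"https://contoso.sharepoint.com/sites/Editorial"'
        },
    }]
}

req = urllib.request.Request(
    "https://graph.microsoft.com/v1.0/search/query",
    data=json.dumps(body).encode(),
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would return ranked document hits,
# not cell-level spreadsheet data.
```

The point of the sketch is the shape of the result: ranked documents and passages, not a live, tab-aware view of the workbook.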

For a first PoC, that turned out to be a major distinction. The agent was not completely blind to the workbook, but it also was not treating the spreadsheet the way a user would: as a structured file whose tabs could be queried naturally and directly.

Authentication Prompts Continued Into Testing
Even after the connection was in place, the test experience was not always seamless. At one point, asking the agent to identify publications or sections in the connected sources triggered an additional prompt to continue with credentials, this time under OneDrive for Business. However, no matter how many times I clicked that Allow button in the screenshot below, nothing happened.

Authentication Prompt
Authentication Prompt That Didn't Work (source: Ramel).

Despite the unclickable button, this was another reminder that the agent was operating within Microsoft's authenticated enterprise data stack, not just reading a static local file. Microsoft notes that authenticated knowledge scenarios use configured authentication and only surface what the user can access. The SharePoint generative answers documentation describes that behavior in terms of user-based access and configured authentication.

A Workaround That Improved Retrieval
To see whether the problem was SharePoint itself or the multi-tab workbook structure, I tried a simple workaround: instead of relying on one workbook with multiple tabs, I downloaded each tab from desktop Excel as its own file and added those files as separate knowledge sources.

I'm not an Excel guy, so I used ChatGPT and the agent itself to guide me through that, and through the process in general.
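I did the split by hand in desktop Excel, but it can also be scripted. Here's a minimal sketch using pandas; the workbook, tab names, and rows below are made-up stand-ins for my actual tracker:

```python
import pandas as pd

# Build a tiny stand-in workbook with two tabs, mirroring the tracker's
# multi-tab structure (the rows are invented for illustration).
with pd.ExcelWriter("content_tracker.xlsx") as xl:
    pd.DataFrame({"Title": ["A", "B"], "Date": ["2025-01-01", "2025-02-01"]}) \
        .to_excel(xl, sheet_name="DotNetInsight", index=False)
    pd.DataFrame({"Title": ["C"], "Date": ["2025-03-01"]}) \
        .to_excel(xl, sheet_name="VCRM", index=False)

# sheet_name=None reads every tab into a dict of {tab name: DataFrame}.
tabs = pd.read_excel("content_tracker.xlsx", sheet_name=None)

# Write each tab back out as its own standalone workbook, e.g. "VCRM.xlsx",
# ready to be added to the agent as a separate knowledge source.
for name, frame in tabs.items():
    frame.to_excel(f"{name}.xlsx", index=False)
```

Each resulting file can then be uploaded or linked as its own knowledge source, which is the shape of input the retrieval path handled best in my testing.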

That changed the outcome. Once VCRM and .NET Insight were represented as separate Excel files, the agent successfully handled a request to list the last five articles from each source. Here is the .NET Insight part of the response:

Successful Query
Successful Query (source: Ramel).

This was the clearest positive result of the exercise. The same agent that had trouble navigating one combined workbook became more useful when the content was split into cleaner, more distinct inputs. That does not make the workaround elegant, but it does suggest that source structure can have a significant impact on what the agent can retrieve successfully.

The search experience also showed the separate files as referenced sources when I asked for the most recent VCRM articles. It didn't get that entirely right, however: it listed a Vanguard Financial Advisor Services article that just happened to have "VCRM" in the title, as seen in the screenshot below.

Referenced Sources?
Referenced Sources? (source: Ramel).

What This First PoC Really Showed
One lesson is that creating an agent and getting useful results from enterprise content are not the same thing. Copilot Studio made the initial agent creation feel fast and approachable. The harder part was shaping the connected data into something the retrieval model could use well.

That is where this exercise became more interesting than a standard "hello world" walkthrough. A SharePoint-hosted Excel workbook looked like an obvious source for a lightweight editorial agent. In practice, it worked only partially in the standard knowledge-source flow. Separate files worked better. That points less to total failure than to a mismatch between the workbook's original structure and the retrieval model behind the agent.

Microsoft also documents a more advanced path for structured file work. The preview code interpreter capability for structured data is designed for tasks involving Excel and other structured files. That looks closer to what many users might expect from spreadsheet-aware agent behavior, though it was outside the scope of this initial no-frills test.

A More Promising Next Step
If there is an obvious next step from here, it may be moving away from a multi-tab workbook and into a more agent-friendly source such as a SharePoint list. Microsoft recently added support for using SharePoint URLs and lists as knowledge sources, and has separately said that when a specific list is selected, the agent can access all of its rows and columns, subject to permissions. That release-plan item suggests a more natural fit for structured trackers than a workbook divided across tabs.

Bottom Line
As a first look at Microsoft Copilot Studio agents, this PoC delivered both a quick start and a quick reality check. Building the agent itself was easy. Connecting SharePoint knowledge took more setup. And a SharePoint-hosted Excel workbook did not automatically translate into a smooth, spreadsheet-aware agent experience. And some responses were just weird.

Of course, I could have done much more, but I didn't want to involve my IT admin, which would be the first step toward an enterprise solution.

At the same time, the exercise uncovered a practical path forward. Once the content was split into separate files, retrieval improved enough to make the agent feel more useful. That is a valuable takeaway in its own right: with Copilot Studio, the quality of the agent experience can depend as much on how the source material is shaped as on how quickly the agent can be created.
