
Microsoft Purview and Copilot: the perfect pairing to keep your company secure

The advent of generative AI has opened up a new paradigm in business. Its many use cases, from process improvement to greater productivity and employee empowerment, have led many companies to adopt the technology, but a large share are not doing so safely.

As with any evolving technology, its benefits can be undermined by the security gaps it opens if it is not implemented safely and consciously. Purview plays a key role in enabling responsible AI with Copilot. Here’s how.

Microsoft Purview for Enterprise Security

Purview is an end-to-end data asset intelligence solution that assists in the protection and governance of data. It was created with the goal of providing comprehensive capabilities to help enterprises discover, protect, and manage information wherever it resides.

It offers capabilities to catalog, map, and monitor sensitive data across the organization’s entire data landscape. This gives professionals greater visibility and control to assess and mitigate potential ethical risks of AI when building and launching Copilot-powered applications.

Microsoft Purview enables you to manage and protect your data through benefits such as:

  • Data loss prevention.
  • Information protection.
  • Data lifecycle management.
  • Communication compliance.
  • Insider risk management.
  • Request management.
  • Data awareness.

Microsoft Purview and Copilot: Overview

Purview’s data catalog maps where personally identifiable data, financial information, health data, and other sensitive data are located in on-premises, multi-cloud, and SaaS environments. This inventory identifies datasets that could lead to bias, fairness, or confidentiality issues if used to train AI models.

It also applies sensitivity labels to correctly classify the confidential data it identifies, which supports proper handling of datasets when developing Copilot applications and ensures they are anonymized or synthesized where necessary.

Purview’s data lineage feature provides visibility into data flows from source to consumption, showing how different data sources are interconnected and used across the organization. Combined with the catalog, it gives development teams full visibility of data before launching Copilot-enabled applications.

Once in production, Purview’s continuous scanning and monitoring capabilities keep the AI data estate under control: any new sensitive data that appears is immediately flagged through automated classification and labeling. Purview also offers trainable classifiers, enabling customized identification of sensitive data types beyond the default patterns.

By using sample files to train the model, organization-specific data such as product codes, customer IDs, or unique content can be quickly detected to ensure comprehensive data governance across structured, unstructured, and customized data sources. Scanning can trigger notifications to data owners if unwanted data is detected in training, enabling rapid remediation to maintain AI ethics and compliance.
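To make the idea concrete, a custom classifier can be thought of as a matcher for organization-specific patterns. The sketch below is a deliberately simplified, hypothetical illustration: Purview’s real trainable classifiers learn from sample files rather than hand-written regular expressions, and the pattern formats shown here are invented.

```python
import re

# Hypothetical organization-specific patterns. Real Purview trainable
# classifiers are trained on sample files, not regexes like these.
PATTERNS = {
    "product_code": re.compile(r"\bPRD-\d{4}-[A-Z]{2}\b"),
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data types detected in a text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def flag_for_owner(dataset: dict[str, str]) -> dict[str, set[str]]:
    """Flag documents containing sensitive data so owners can remediate
    before the data reaches an AI training or grounding pipeline."""
    return {doc: hits for doc, hits in
            ((doc, classify(body)) for doc, body in dataset.items()) if hits}
```

Running `flag_for_owner` over a small corpus returns only the documents that matched, mirroring how a scan notifies data owners about flagged content.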

Access control and governance

Access control and data governance become even more critical as Copilot and other AI tools are more widely used. With Purview, that risk can be addressed by letting security teams:

  • Summarize DLP alerts
  • Summarize insider risk management alerts
  • Summarize policy matches from trainable classifiers in communication compliance
  • Get a contextual summary of eDiscovery cases

Reinforcing protection

Microsoft 365 Copilot uses existing controls to ensure that data stored in the tenant is never returned to the user or used by a large language model (LLM) if the user does not have access to that data. If your organization’s sensitivity labels are applied to the content, there is an additional layer of protection:

  • When a file is open in Word, Excel, or PowerPoint, or an email or calendar event is open in Outlook, the sensitivity of the data is shown to users via the label name and the content markings configured for the label.
  • When the sensitivity label applies encryption, users must have the EXTRACT and VIEW usage rights for Copilot to return the data.
  • This protection extends to data stored outside your Microsoft 365 tenant when it is open in an Office application.
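The encryption rule above boils down to a simple gate: both usage rights must be present before Copilot can ground a response on the content. A minimal sketch of that logic (the VIEW and EXTRACT right names come from the behavior described above; the function and its shape are illustrative, not Microsoft’s implementation):

```python
# Illustrative model of the usage-rights gate: labeled, encrypted content
# is only usable by Copilot when the requesting user holds both rights.
REQUIRED_RIGHTS = {"VIEW", "EXTRACT"}

def copilot_can_use(user_rights: set[str], label_encrypts: bool) -> bool:
    """Return True if Copilot may return this content to the user."""
    if not label_encrypts:
        # No encryption: standard access controls alone decide.
        return True
    return REQUIRED_RIGHTS <= user_rights  # subset check: both rights required
```

For example, a user with only the VIEW right on an encrypted document would see the document in Office but Copilot would not return its contents.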

On the other hand, when Microsoft 365 Copilot creates new content based on an item that has a sensitivity label applied, the source file’s label is automatically inherited, along with the label’s protection settings.

If multiple files are used to create new content, the sensitivity label with the highest priority is used for inheritance. As with all automatic labeling scenarios, the user can always override and replace an inherited label (or remove it, if mandatory labeling is not in use).
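The inheritance rule is easy to model: among the source files’ labels, the one with the highest priority wins. A small illustrative sketch (the label names and numeric priorities are assumptions; in Purview, a higher priority value indicates a more restrictive label):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensitivityLabel:
    name: str
    priority: int  # higher number = higher priority (more restrictive)

def inherited_label(sources):
    """Label that newly generated content inherits: the highest-priority
    label among the source files, or None if no source is labeled."""
    return max(sources, key=lambda lb: lb.priority, default=None)
```

So content drawn from a "General" file and a "Highly Confidential" file would inherit "Highly Confidential", which the user can still override as described above.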

Copilot Compliance Management

Purview’s compliance capabilities can be used with enterprise data protection to support the risk and compliance requirements of Microsoft 365 Copilot and Microsoft Copilot:

  • Purview classification
  • Content search
  • Communication compliance
  • Audit
  • eDiscovery
  • Automatic retention and deletion through retention policies

For communication compliance, you can analyze user prompts and Copilot responses to detect inappropriate or risky interactions, or the sharing of sensitive information.

For auditing, details are captured in the unified audit log when users interact with Copilot. Events include how and when users interacted with Copilot, in which Microsoft 365 service the activity took place, and references to the files stored in Microsoft 365 that were accessed during the interaction. If those files have a sensitivity label applied, that is also captured.
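If you export those audit events, filtering for Copilot interactions is a one-pass scan. The sketch below assumes a simplified record shape; the field names are illustrative stand-ins rather than the exact unified audit log schema:

```python
# Simplified audit records; the real unified audit log schema is richer.
def copilot_events(records):
    """Keep only Copilot interaction events, noting any labeled files
    that were accessed during each interaction."""
    hits = []
    for rec in records:
        if rec.get("Operation") != "CopilotInteraction":
            continue  # skip unrelated audit events (file access, sign-ins, ...)
        resources = rec.get("AccessedResources", [])
        hits.append({
            "user": rec.get("UserId"),
            "when": rec.get("CreationTime"),
            "files": [f.get("Name") for f in resources],
            "labeled": [f.get("Name") for f in resources
                        if f.get("SensitivityLabel")],
        })
    return hits
```

A review like this makes it easy to spot interactions that touched labeled content and route them to the right reviewer.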

For content search, because user prompts to Copilot and Copilot’s responses are stored in the user’s mailbox, they can be searched and retrieved when that mailbox is selected as the source of a search query.

Similarly, for eDiscovery, the same process is used to select mailboxes and retrieve user prompts to Copilot and Copilot’s responses. Once a collection is created and added to a review set in eDiscovery (Premium), this data is available for all existing review actions, and these collections and review sets can be put on hold or exported.

For retention policies that support automatic retention and deletion, user prompts and Copilot responses are covered by the policy location named Teams chats and Copilot interactions. Existing retention policies previously configured for Teams chats now automatically include user messages to and responses from Microsoft 365 Copilot and Microsoft Copilot.

As with all retention and hold policies, if more than one policy applies to the same location for a user, the principles of retention resolve any conflicts.
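Those principles can be illustrated in miniature. The sketch below encodes two of them, retention wins over deletion and the longest retention period wins; the real resolution logic includes further tie-breakers (for example, explicit scope beats implicit, and the shortest deletion period wins), so treat this as a conceptual model only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetentionPolicy:
    name: str
    retain_days: int  # how long content must be kept (0 = delete-only policy)

def winning_policy(policies):
    """Simplified conflict resolution for overlapping retention policies:
    any retaining policy beats a delete-only one, and among retaining
    policies the longest retention period wins."""
    retaining = [p for p in policies if p.retain_days > 0]
    pool = retaining or policies  # retention wins over deletion
    return max(pool, key=lambda p: p.retain_days, default=None)
```

For instance, a one-year Teams chat policy and a three-year Copilot interactions policy on the same mailbox would resolve to the three-year retention.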

Microsoft Purview AI Hub

The Microsoft Purview AI Hub, currently in preview, provides easy-to-use graphical tools and reports to quickly gain insight into AI usage within your organization. One-click policies help you protect data and comply with regulatory requirements.

You can use the AI Hub in conjunction with other Purview functionality to strengthen data security and compliance for Microsoft 365 Copilot and Microsoft Copilot:

  • Sensitivity labels and content encrypted by Microsoft Purview Information Protection
  • Data classification
  • Customer Key
  • Communication compliance
  • Auditing
  • Content search
  • eDiscovery
  • Retention and deletion
  • Customer Lockbox

The AI Hub provides a central management location to help you quickly secure AI application data and proactively monitor AI usage.

It also offers a set of capabilities so you can safely adopt AI without having to choose between productivity and protection:

  • Insights and analysis of AI activity in your organization
  • Out-of-the-box policies to protect data and prevent data loss with AI alerts
  • Compliance controls to apply optimal data storage and handling policies

Using the AI Hub in Purview

To help you gather information faster, the AI Hub provides preconfigured policies that can be activated with a single click. Allow up to 24 hours for these new policies to collect data and display results in the hub, or to reflect changes made to the default settings.

To get started, open the Microsoft Purview portal or the compliance portal with the appropriate permissions for compliance management.

  1. Depending on the portal, sign in to one of these locations:
      • Microsoft Purview portal > AI Hub (preview)
      • Microsoft Purview compliance portal > AI Hub (preview)
  2. In Analytics, review the Getting Started section to learn more about the AI Hub and the immediate actions you can take. Select each item to open its details panel with more information, available actions, and the current status.
  3. Next, review the Recommendations section and decide whether to implement the options that are relevant to your tenant.
  4. Select Policies, where you can quickly activate default policies that help protect sensitive data sent to third-party generative AI sites and protect data with sensitivity labels. Under Recommendations and Fortify data security for AI, select Getting Started to learn more, take action, and check the current status. Once policies are created, including analytics policies, you can monitor their status from this page; to edit them, use the corresponding management solution in the portal.
  5. Select Activity Explorer to view details of the data collected by the policies. Under View details, you can access the analysis charts or explore activity by workload, application, type of sensitive information detected, and more. Example activities include AI interactions, classification stamping, DLP rule matches, and AI site visits. Copilot prompts and responses are included in AI interaction events when the appropriate permissions are in place.

Implementing Microsoft Purview and Copilot

With its end-to-end visibility of sensitive data, automated insights, and policy enforcement, Purview is indispensable for the ethical and secure use of Copilot. Working alongside Defender, Sentinel, Intune, and Entra in Microsoft’s security portfolio, it enables professionals to assess AI risks early, design appropriate controls, and maintain responsible oversight after implementation.

Purview will therefore help you ensure that Copilot respects your organizational values, regulations, and ethical AI best practices. This, in turn, will result in increased trust with customers, as well as the transparency and governance of data assets necessary for ethical and compliant innovation.

Plain Concepts’ security team is ready to help you implement Microsoft Purview into your enterprise security strategy, covering information protection, unified data governance, intelligent lifecycle management, insider risk management, auditing, compliance management, and NIS2. Don’t wait any longer: contact our experts and transform the way you work securely!


Elena Canorea
Communications Lead