AI Privacy & Security Overview
Updated by Matt Linn
Service CoPilot
Earlier this year, Thread launched Service CoPilot: the first AI-enabled collaborative service bot for MSP service teams. Its original use cases automate time entry and issue prioritization, freeing technicians to focus on critical, meaningful work and saving MSPs everywhere hours a day.
Data Privacy & Security
Thread understands the importance of privacy and security—especially when it comes to early use of emerging technology such as large language models (LLMs), generative artificial intelligence (AI), and generative interfaces.
As such, we built Service CoPilot on Microsoft Azure OpenAI Service to benefit from the enterprise-grade, highly secure Azure cloud, and we govern it with the foundational controls documented below:
No partner or customer data is stored or used for training or re-processing.
- No partner or customer data is stored in Microsoft Azure OpenAI Service.
- Data required for prompt execution (documented below) exists only in memory; it is not stored.
Data sent to Azure OpenAI Service is limited to the following:
- First Name
- Last Name
- Issue Summary
- Issue Initial Description
- Issue Conversation
- Contact Type
- Today's Date
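The data-minimization rule above can be sketched as a simple allow-list filter applied to a ticket record before anything is sent to the service. The field names and the sample record below are illustrative assumptions, not Thread's actual schema; they simply mirror the data elements listed above.

```python
from datetime import date

# Illustrative field names mirroring the allowed data elements listed above;
# this is not Thread's actual schema.
ALLOWED_FIELDS = {
    "first_name",
    "last_name",
    "issue_summary",
    "issue_initial_description",
    "issue_conversation",
    "contact_type",
}

def build_prompt_payload(issue: dict) -> dict:
    """Return only the fields permitted to leave for Azure OpenAI Service.

    Any other keys on the ticket record (tickets often carry emails,
    phone numbers, asset details, and so on) are dropped before the call.
    """
    payload = {k: v for k, v in issue.items() if k in ALLOWED_FIELDS}
    payload["todays_date"] = date.today().isoformat()
    return payload

# Hypothetical ticket record for demonstration only.
issue = {
    "first_name": "Ada",
    "last_name": "Lovelace",
    "issue_summary": "VPN drops every 30 minutes",
    "issue_initial_description": "Started after the router firmware update.",
    "issue_conversation": "Tech: rebooted modem. User: still dropping.",
    "contact_type": "End User",
    "email": "ada@example.com",  # stripped: not on the allow-list
    "phone": "555-0100",         # stripped: not on the allow-list
}
payload = build_prompt_payload(issue)
print(sorted(payload))
```

Because the filter is an allow-list rather than a block-list, any new field added to the ticket schema stays out of the prompt by default.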
Only authorized personnel are allowed to access our core AWS infrastructure, Azure OpenAI Service instance, Microsoft Azure portal, and underlying Microsoft services.
- All violations of Thread's Acceptable Use Policy result in disciplinary action, up to and including termination.
Azure OpenAI Service & Thread Reference Architecture
When using Azure OpenAI Service, there are a number of security features built in to help protect your data and AI models.

Service CoPilot reference architecture.
These features include but are not limited to:
- Isolation: Thread’s Azure OpenAI Service instance is isolated from every other customer and partner on the platform, ensuring that there is no risk of unauthorized access to your data or models.
- Content Filtering: When data is submitted to the service, it is processed through Microsoft’s content filters as well as those built into the specified OpenAI model. The content filtering models run on both the prompt inputs and the generated completions. No prompts or completions are stored in the model during these operations, and prompts and completions are not used to train, retrain, or improve the models.
- Control: Microsoft hosts the OpenAI models within the Azure infrastructure, and all customer data sent to Azure OpenAI Service is encrypted and remains within Azure OpenAI Service. Microsoft does not use customer data to train, retrain, or improve the models in the Azure OpenAI Service, and neither does Thread.
- Data protection: Your data is not used to train or enrich the foundation AI model that is used by others, nor does Microsoft share any data with OpenAI for improvement of their models. This means that you can be confident that your data is only being used for your own purposes and that you have complete control over how it is used.

- Compliance and security: The Azure OpenAI Service is protected by the most comprehensive enterprise compliance and security controls in the industry. This means that your data and AI models are protected at every step, from storage to processing to destruction.
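The content-filtering step described above runs on both prompts and completions, and the service annotates its responses with per-category filter results. A minimal sketch of checking those annotations follows; the sample response is hand-written and illustrative (the field names follow the general shape Azure OpenAI Service returns, but Microsoft's API reference is the authoritative source for the schema).

```python
# Illustrative, hand-written sample of a filtered response; field names are
# an assumption modeled on Azure OpenAI Service's response shape.
sample_response = {
    "prompt_filter_results": [
        {
            "prompt_index": 0,
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": False, "severity": "safe"},
            },
        }
    ],
    "choices": [
        {
            "message": {"content": "Suggested time entry: 0.5h VPN troubleshooting."},
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": False, "severity": "safe"},
            },
        }
    ],
}

def any_filtered(results: dict) -> bool:
    """True if any content-filter category flagged the text."""
    return any(category.get("filtered") for category in results.values())

# The filters run on both sides of the exchange: prompt inputs and completions.
prompt_ok = not any(
    any_filtered(entry["content_filter_results"])
    for entry in sample_response["prompt_filter_results"]
)
completion_ok = not any(
    any_filtered(choice["content_filter_results"])
    for choice in sample_response["choices"]
)
print(prompt_ok and completion_ok)  # prints True
```

An integration would typically refuse to surface a completion when either check fails, rather than passing partially filtered text on to a technician.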