Microsoft Employees Banned from Using DeepSeek AI App Due to Data Security Concerns

At recent U.S. Senate hearings, Microsoft President and Chief Legal Officer Brad Smith said that the company's employees are prohibited from using the app for the Chinese AI model DeepSeek. Smith acknowledged that the decision is driven by concerns over data security and propaganda.

He explained that Microsoft chose not to offer the DeepSeek app in its marketplace for the same reasons. The risks include user data being stored in China and the model's outputs being shaped by local propaganda. DeepSeek has acknowledged that it stores user query data on Chinese servers.

Such information is subject to Chinese legislation that mandates cooperation with state security agencies. Additionally, DeepSeek heavily censors topics that the Chinese government deems sensitive.

Despite Smith’s critical comments regarding the Chinese firm, Microsoft offers the DeepSeek R1 model through its Azure cloud service.

During the hearings, Smith stated that Microsoft had managed to "penetrate" the DeepSeek AI model and "modify" it to eliminate "harmful side effects," though he did not specify what those modifications entailed. Prior to the launch of R1 on Azure, the model underwent a "rigorous security compliance check."

DeepSeek is a direct competitor to Microsoft's Copilot, yet Microsoft still offers some competing chatbots in its Windows app store. Notably, Perplexity AI can be found in the Microsoft Store, while Google's Gemini is absent.

The Senate hearings focused on the race for U.S. leadership in AI technology. OpenAI CEO Sam Altman, AMD CEO Lisa Su, and CoreWeave CEO Michael Intrator also fielded questions from lawmakers.