Several weeks ago, claims circulated that Microsoft was using files its customers store in its cloud services, including documents created in Microsoft 365 applications such as Word and Excel, to train its AI models. Microsoft has categorically denied those allegations; in a statement issued on Wednesday, the company said it does not use such data for that purpose.
Microsoft Denies Misuse of User Data for AI Training
The claim circulating in Silicon Valley was that Microsoft was training its AI by mining user-generated content from its popular applications. Microsoft countered that user privacy and data protection remain a top priority and that user information is not used without permission.
In response to the criticism, Microsoft reiterated that any data fed into its AI models is either aggregated or provided voluntarily through explicit permissions and consent. The company added that customers' personal and sensitive data remain protected by its privacy standards.
The denial comes at a time of heightened concern about how data is used for AI training. Users and regulators alike are increasingly scrutinising how large technology companies handle personal data as artificial intelligence technologies mature.
With its latest announcement, Microsoft sought to reassure customers that it develops and deploys AI ethically, and that it remains committed to transparency and to compliance with global data protection rules, notably the GDPR.
Microsoft Refutes Claims of Using User Data for AI Training
Microsoft reacted quickly to rumours on social media claiming that the company harvests data from users of its Microsoft 365 applications and feeds it into its AI systems. The stir grew as some users urged others to turn off the "connected experiences" setting, sparking a debate about how that data is used.
In an emailed statement to Reuters, Microsoft dismissed the accusations as "completely untrue." A spokesperson added that the company does not feed data from Microsoft 365 consumer and commercial apps into its foundational large language models.
The confusion stemmed from the "connected experiences" feature, which enables cloud-backed capabilities in Microsoft 365 apps. Some users assumed it would make it easy for Microsoft or third parties to train AI on their data; according to Microsoft, the feature, which users can disable, does not compromise their privacy.
Microsoft also sought to reassure users that its privacy policies are clear and designed to protect their data. The company emphasised that any data collected from its applications relates only to specific features that users have explicitly consented to enable.
The statement is consistent with Microsoft's ongoing efforts to address increasingly pressing questions about AI and AI ethics. As AI applications advance, the company says it is actively managing the associated risks, maintaining strict data protection, and keeping a clear separation between user data and its AI models.
Microsoft Explains 'Connected Experiences' and AI Data Use
Microsoft further explained that the "connected experiences" feature enables options in Microsoft 365 applications such as co-authoring and cloud saving. The feature, which users can turn off, operates entirely separately from the training of the large language models that underpin Microsoft's products.
The clarification came amid discussions on social media in which some people believed the feature could let AI developers improve their models using users' data without asking permission. These fears echo broader public concern about privacy in an era of increasingly capable artificial intelligence.
According to the spokesperson, "connected experiences" is not designed to monitor users in any way; it exists to enable collaboration tools and cloud integration. The data it handles is not used to improve or train Microsoft's foundational AI models.
However, the online conversations suggest that many users do not take corporate assurances at face value, particularly where AI and data privacy are concerned. This indicates that technology companies increasingly need to address privacy more openly.
Microsoft's efforts to clear up the matter underline how central trust has become in today's digital landscape. As AI develops, the company says it intends to remain accountable for its data practices, giving users more privacy choices and a clearer understanding of how their data is used.