Apple has barred its staff from using ChatGPT and other third-party artificial intelligence tools. According to a Wall Street Journal report, the company is worried that employees using such AI programs could expose private information.

Reason for not using ChatGPT

Apple is, in fact, developing similar technology of its own, and it is concerned that employees could divulge confidential details about its products through ChatGPT. According to the WSJ report, Apple has also instructed staff not to use GitHub Copilot, a tool from Microsoft-owned GitHub that generates software code automatically.

Apple's caution stems in part from how these AI tools work: when users submit a query, its contents are sent back to the developer, which may use them to improve the model. As a result, private or confidential information can be shared accidentally. In March, a bug was also discovered in ChatGPT that displayed one user's conversation history to another user.
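To see why this matters in practice, here is a minimal sketch of a typical chat request, assuming the official OpenAI Python SDK (the model name and prompt are placeholders, not anything Apple uses): whatever is placed in the message payload is transmitted to OpenAI's servers, where it may be retained and reviewed under the provider's data policies.

    # Minimal sketch using the OpenAI Python SDK; the client reads the
    # API key from the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    # Anything pasted into the prompt (internal source code, product
    # details, meeting notes) is sent to OpenAI's servers as part of
    # this request and may be logged under the provider's data policies.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": "Explain what this internal function does: ..."},
        ],
    )

    print(response.choices[0].message.content)

The risk lies not in the API call itself but in the payload: once confidential text is included in the request, it becomes data held by a third party.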

Other companies that have banned ChatGPT

Apple is concerned that using ChatGPT could give third parties access to its data: when employees enter confidential project information into the system, OpenAI's moderators are reportedly able to view it. Apple is not alone in this; JPMorgan, Verizon, and Amazon have placed similar restrictions on the use of OpenAI's tools.

Beyond that, there is no precise information on how vulnerable ChatGPT itself is to cyberattacks. It has also been claimed, however, that the training data of some language models can be extracted through the chat interface.
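The extraction claim is easier to picture with a short, hedged sketch (again assuming the OpenAI Python SDK; the prefix string is purely hypothetical): the probe prompts the model with the opening of text suspected to be in its training data and checks whether the continuation comes back verbatim.

    # Hedged sketch of the general idea behind a training-data
    # extraction probe; illustrative only, the prefix is hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from OPENAI_API_KEY

    # Prompt the model with the start of a string suspected to appear
    # in its training data and ask it to continue.
    suspect_prefix = "BEGIN INTERNAL MEMO:"  # hypothetical text

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": f"Continue this text exactly: {suspect_prefix}"},
        ],
    )

    # An attacker would compare the completion against text they
    # suspect was memorized; a verbatim match suggests the string
    # was part of the training set.
    print(response.choices[0].message.content)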