Salt Labs, a respected team specializing in vulnerability research, has undertaken an in-depth examination of the ChatGPT plugin ecosystem, uncovering vulnerabilities that pose significant threats to user data and system integrity.
These vulnerabilities extend beyond the core ChatGPT framework to encompass third-party plugins and their supporting platforms. Exploiting them could lead to unauthorized access and the leakage of confidential information. The absence of stringent validation mechanisms during plugin installation exacerbates these risks.
Vulnerability №1. Covert Plugin Installation
At the heart of many web applications, including the ChatGPT plugin ecosystem, lies the OAuth open standard for user authentication and authorization. When users opt to install a new plugin, ChatGPT redirects them to the plugin's website to obtain a unique code, akin to an OAuth token. Upon user approval, the plugin returns this code to ChatGPT for installation.
However, ChatGPT lacks robust mechanisms to verify whether the installation request truly originates from the user. Consequently, if an unsuspecting user is duped into clicking a link that carries an attacker-supplied authorization code, ChatGPT may complete the installation of an attacker-controlled plugin on their account.
Once installed, these malicious plugins empower attackers to gain illicit access to sensitive information and compromise user accounts.
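The standard defense against this class of forged-authorization attack is the OAuth `state` parameter: the client mints an unguessable token when it initiates the flow and refuses any callback that does not return it. The sketch below is illustrative only (the function names and in-memory store are assumptions, not ChatGPT's actual implementation):

```python
import secrets

# In-memory store of pending install requests; a real service would bind
# the state to a server-side user session instead.
_pending_states = set()

def begin_plugin_install() -> str:
    """Start an installation: mint an unguessable `state` token and record
    it before redirecting the user to the plugin's OAuth page."""
    state = secrets.token_urlsafe(32)
    _pending_states.add(state)
    return state

def finish_plugin_install(returned_state: str) -> bool:
    """Complete the installation only if the callback carries a `state` we
    issued; otherwise reject it as a forged (CSRF-style) request."""
    if returned_state in _pending_states:
        _pending_states.discard(returned_state)  # one-time use
        return True
    return False
```

With this check in place, a link crafted by an attacker cannot complete an installation, because the attacker cannot guess a `state` value bound to the victim's session.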
Vulnerability №2. Zero-Click Account Hijacking via PluginLab
PluginLab, an integral part of the ChatGPT ecosystem, streamlines plugin development and integration with ChatGPT. However, a vulnerability within PluginLab enables attackers to seize control of organizations' accounts on third-party platforms through a zero-click attack. This attack exploits the account-linking flow that plugins rely on.
When users integrate a plugin with their account, such as GitHub, a corresponding account is created on the plugin's platform, storing the user's login credentials. With these credentials, the plugin gains access to the user's private repositories on platforms like GitHub.
By exploiting PluginLab, attackers can hijack accounts associated with plugins, such as “AskTheCode”, which allows querying of GitHub repositories. This form of account hijacking grants attackers access to the repositories of users employing the compromised plugin.
How does the legitimate flow unfold?
1. Account Creation and Authorization.
The plugin creates a new user account and requests permission to access the user's GitHub repositories.
2. Code Generation.
The plugin generates a code for ChatGPT.
3. Connection to the Plugin.
ChatGPT utilizes the code to link to the user's account on the plugin's platform.
4. Plugin Installation.
ChatGPT completes the installation, and the plugin can now act on the linked account.
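The legitimate flow above can be sketched as a toy model. All names (members, codes, tokens) are illustrative assumptions; PluginLab's real internals are not public:

```python
import secrets

members = {}  # member_id -> stored credentials for the linked account
codes = {}    # one-time code -> member_id it was issued for

def create_member(github_token: str) -> str:
    """Step 1: the plugin creates an account storing the user's GitHub credentials."""
    member_id = secrets.token_hex(8)
    members[member_id] = {"github_token": github_token}
    return member_id

def issue_code(member_id: str) -> str:
    """Step 2: the plugin mints a code for ChatGPT, bound to one specific member."""
    code = secrets.token_urlsafe(16)
    codes[code] = member_id
    return code

def connect(code: str) -> dict:
    """Steps 3-4: ChatGPT redeems the one-time code and is linked to that
    member's account, gaining whatever access the stored credentials grant."""
    member_id = codes.pop(code)  # code is consumed on use
    return members[member_id]
```

The security of this flow hinges entirely on the code being issued only for the member who is actually authenticated, which is exactly what the attack below subverts.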
How does the attack unfold?
1. Unauthenticated Access.
The endpoint `https://auth.pluginlab.ai/oauth/authorized` lacks adequate authentication, enabling attackers to substitute any participant (victim) ID and obtain a code representing the victim. Armed with this code, attackers leverage ChatGPT to access the victim's GitHub repositories.
2. Acquisition of Participant ID.
Attackers utilize the endpoint `https://auth.pluginlab.ai/members/requestMagicEmailCode` to retrieve the member ID of the desired victim.
3. Exploitation of Vulnerability.
Armed with the victim's member ID, attackers install the compromised plugin on their ChatGPT account. They intercept the request to `https://auth.pluginlab.ai/oauth/authorized`, substituting the victim's member ID to acquire a code representing the victim.
4. Unauthorized Access.
Armed with this code, attackers infiltrate the victim's GitHub repositories via ChatGPT, circumventing the need for direct interaction.
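The core flaw can be reproduced in a few lines: the authorization endpoint trusts a client-supplied member ID instead of the identity of the authenticated session. This is a simplified sketch, not PluginLab's actual code:

```python
# Toy reproduction of the flaw and its fix. Member IDs and code formats
# are invented for illustration.
members = {"victim-123": "victim", "attacker-999": "attacker"}
issued = {}  # code -> member_id the code represents

def vulnerable_authorized(requested_member_id: str, session_member_id: str) -> str:
    """Vulnerable: issues a code for whatever member_id appears in the
    request, ignoring who the session is actually authenticated as."""
    code = f"code-for-{requested_member_id}"
    issued[code] = requested_member_id
    return code

def fixed_authorized(requested_member_id: str, session_member_id: str) -> str:
    """Fixed: issues a code only for the member the session belongs to."""
    if requested_member_id != session_member_id:
        raise PermissionError("member_id does not match authenticated session")
    code = f"code-for-{session_member_id}"
    issued[code] = session_member_id
    return code

# The attacker, authenticated as themselves, substitutes the victim's ID
# and receives a code representing the victim -- a zero-click takeover:
stolen = vulnerable_authorized("victim-123", session_member_id="attacker-999")
```

The fix is a single authorization check, which is presumably what PluginLab.AI deployed after disclosure.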
Notably, this constitutes a zero-click attack, necessitating no action from the victim. The vulnerability lies within PluginLab.AI, impacting numerous plugins utilizing the PluginLab.AI framework. Following disclosure, PluginLab.AI swiftly addressed and mitigated these vulnerabilities, fortifying the security of its platform.
Vulnerability №3. OAuth Redirection Manipulation
Similar to PluginLab.AI, this vulnerability facilitates account hijacking but requires the victim to click on a malicious link.
How does it unfold?
Though the Kesem AI plugin serves as an example, this vulnerability pervades other plugins as well.
Upon installation of the Charts by Kesem AI plugin, ChatGPT initiates the following steps:
1. User Redirection.
Users are redirected to kesem.ai to obtain an OAuth code.
2. User Authentication.
Kesem.ai authenticates users via Google/Microsoft or email, generating a code.
3. Code Transfer.
Kesem.ai passes the code to the specified redirect_uri.
The vulnerability stems from kesem.ai's failure to validate the redirect_uri, enabling attackers to inject a malicious redirect_uri and pilfer user credentials.
How does the attack unfold?
Attackers dispatch the crafted link to victims. Upon clicking, kesem.ai inadvertently forwards the code to an attacker-controlled redirect_uri. As with PluginLab.AI, attackers now possess the victim's credentials, facilitating account takeover.
Consequences.
Kesem.ai is merely one instance of this vulnerability. It's imperative to underscore this issue and encourage plugin developers to prioritize OAuth security, including meticulous scrutiny of the redirect_uri parameter. While some plugins verify the redirect_uri, they often overlook the path, leaving it vulnerable to manipulation by attackers.
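A robust check compares the full redirect_uri, including the path, against an exact allowlist. The sketch below is a hedged illustration; the allowlisted callback URL is a hypothetical example, not a real registered URI:

```python
from urllib.parse import urlparse

# Exact (scheme, host, path) tuples a client has registered. The entry
# below is an invented example for illustration.
ALLOWED_REDIRECTS = {
    ("https", "chat.openai.com", "/aip/plugin-xyz/oauth/callback"),
}

def redirect_uri_allowed(uri: str) -> bool:
    """Accept only an exact scheme+host+path match. Checking the host alone,
    as some plugins do, still lets an attacker vary the path and capture
    the authorization code on a page they control."""
    parsed = urlparse(uri)
    return (parsed.scheme, parsed.hostname, parsed.path) in ALLOWED_REDIRECTS
```

Under this rule, `https://chat.openai.com/attacker-page` is rejected even though the host matches, closing the path-manipulation gap described above.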
Conclusion.
GPTs, the successor to plugins, promise a significant step forward in security. Functioning much like plugins but with stronger safeguards, GPTs are poised to mitigate many of the concerns highlighted here. OpenAI's proactive measures to educate and notify users about data transfers from ChatGPT to third-party providers are commendable and foster user awareness. However, users must remain vigilant, and whether GPTs resolve these issues entirely remains to be seen.