
Security Code Review With ChatGPT

09 February 2023

By chrisanley

TL;DR: Don’t use ChatGPT for security code review. It’s not meant to be used that way, it doesn’t really work (although you might be fooled into thinking it does), and there are some other major problems that make it impractical. Also, both the CEO of OpenAI and ChatGPT itself say that you shouldn’t.

Large Language Models (LLMs) such as ChatGPT have been rapidly evolving in recent years, to the point where it’s now common to encounter AI chatbots in the customer support sections of websites. The general public availability of ChatGPT (in “Research Preview” form) has led to an explosion in use cases, and even to entire business models based on this specific technology.

I thought it would be helpful and interesting to try a few experiments in security code review, getting ChatGPT to review a well-known, deliberately vulnerable application. I’ve used a few examples from the excellent ‘Damn Vulnerable Web Application’ (DVWA), available at: https://github.com/digininja/DVWA/

Side note: If you’re interested in security vulnerabilities that apply to Machine Learning applications, check out my previous whitepaper on the subject, here: https://research.nccgroup.com/2022/07/06/whitepaper-practical-attacks-on-machine-learning-systems/

Background

Before we begin, it’s important to briefly explain what ChatGPT is, and more importantly what it isn’t.

The GPT part stands for “Generative Pre-trained Transformer”. The Transformer class of Machine Learning (ML) models uses an encoder-decoder, “attention”-based mechanism which operates across the whole text to determine relationships between parts of the input. This enables a more effective use of what we might call “context” or “focus” in the generated output. This differs from older methods, which were inherently sequential in nature and harder to train.

This isn’t a particularly accurate description, and I’ve used anthropomorphizing words like “attention” and “focus” above, for which I apologise; the reality of the model is a collection of tokenization, embedding, encoding and decoding steps, passing large arrays of numeric data through multiple layers of functions, with weights that have been optimized via the training process. Using “human” words when discussing a Machine Learning system is a common error and it leads to fundamental misconceptions about how effective current ML techniques are, and how effective they can possibly be. The excellent online journal “Skynet Today” frequently names and shames articles that are guilty of this error and I’m keen to avoid that mistake. You can find the journal here: https://www.skynettoday.com

Although it involves artificial “neural networks”, ChatGPT isn’t a brain, and it’s a fundamental error to attribute human-like qualities to what is in essence a statistical process, albeit a very complex one.

For more information, see the paper “Attention Is All You Need”: https://arxiv.org/abs/1706.03762 .

Security Code Review

Returning to code review – the DVWA application is extremely useful for security education and training, but importantly for this experiment, it has the excellent property of containing the same class of vulnerability in multiple, progressively more secure implementations: typically the bug in the ‘low’ security version is extremely easy to spot and exploit, ‘medium’ might implement some security measures but will still be exploitable, ‘high’ will be either difficult or impossible to exploit, and ‘impossible’ is intended to be an example where the bug is not exploitable – the vulnerability doesn’t exist.

We’ll use the “Command Injection” vulnerability as our test case because it’s both extremely dangerous and fairly easy to understand. The gist is that the attacker can run a shell command on the vulnerable web server, which will probably allow the attacker to take full control of the target host.
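
As a concrete illustration (my own sketch, not DVWA code), consider what a crafted ‘ip’ value does to the command line in the ‘low’ version below:

<?php
// Hypothetical attacker-supplied value for the 'ip' parameter:
$target = '127.0.0.1; cat /etc/passwd';

// The string handed to the shell is:
//   ping  -c 4 127.0.0.1; cat /etc/passwd
// ping runs first, then the attacker's command runs with the privileges
// of the web server process.
$cmd = shell_exec( 'ping  -c 4 ' . $target );
?>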

In the descriptions below, the following convention is used:

This is a prompt to ChatGPT

This is a response from ChatGPT

Low

We start with ‘low security mode’, with the vulnerable file located at: https://github.com/digininja/DVWA/blob/master/vulnerabilities/exec/source/low.php

Our prompt to ChatGPT was:

Please describe the security vulnerabilities in the code below

<?php

if( isset( $_POST[ 'Submit' ]  ) ) {
    // Get input
    $target = $_REQUEST[ 'ip' ];

    // Determine OS and execute the ping command.
    if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
        // Windows
        $cmd = shell_exec( 'ping  ' . $target );
    }
    else {
        // *nix
        $cmd = shell_exec( 'ping  -c 4 ' . $target );
    }

    // Feedback for the end user
    $html .= "
{$cmd}
"; } ?>

The response was

The code has several security vulnerabilities:

  • Remote Code Execution (RCE): The input from the user, stored in the “$target” variable, is directly passed to the shell_exec() function without any proper validation or sanitization. This could allow an attacker to inject arbitrary commands and execute them on the server.
  • Cross-Site Request Forgery (CSRF): The code does not have any CSRF protection mechanisms in place. An attacker could trick a user into making a malicious request that could modify the state of the application.
  • Command Injection: The input from the user is passed directly to the shell_exec() function, making it vulnerable to command injection attacks. An attacker could inject arbitrary shell commands and execute them on the server.

Overall, the code should be revised to include proper input validation and sanitization, and protection against RCE, CSRF, and command injection attacks.

This is a remarkable response; it’s initially surprising that the model can do this. But, like a talking dog, it’s not remarkable because it’s good; it’s remarkable because it does it at all.

Taking the positives – it reports that command injection is present (there are two instances) and correctly identifies the source of the user input (the assignment to $target), and the dangerous destination (shell_exec). The main problems with this description are:

  • The command injection vulnerabilities are described twice under different names; “Remote Code Execution” and “Command Injection”. These are two different classes of vulnerability, with “Command Injection” referring specifically to shell commands, and RCE being more general, but typically used to describe machine code or code in a programming language, rather than a shell. While it’s true that there are two instances (calls to shell_exec), the vulnerability class is the same, and would normally be described as “Command Injection”. No mention is made that two instances of the same issue are present.
  • The Cross-Site Request Forgery bug is a false positive. There isn’t enough information in this code snippet for this report; rather than standing alone, this file is “included” by another file: https://github.com/digininja/DVWA/blob/master/vulnerabilities/exec/index.php which may or may not contain Cross Site Request Forgery protection.
  • The second “Command Injection” report, while correct, is – as noted – a duplicate, but is described in a manner that makes it sound like a different vulnerability type.

So – it’s remarkable that the model can write such a compelling narrative about the security issues, but the actual substance is flawed. The output might be useful to a reviewer, simply because it provides a nicely formatted block of text and some filler – it might allow a human reviewer to compose the description more quickly, but only if that reviewer is knowledgeable enough to remove the inaccuracies.

Let’s step back a moment and consider what it is that we’re seeing. ChatGPT isn’t “reviewing” the code. ChatGPT is trained on a massive corpus of data, including a great many descriptions and discussions of security vulnerabilities in code, so it returns what it has been trained to recognise as “the sort of continuation” that tends to follow this kind of input. The output is convincing and appears reasonable because of the many convincing and reasonable descriptions of Command Injection, Cross-Site Request Forgery and Remote Code Execution issues that were in its training data. If we were to prompt it for lemon meringue pie recipes, the output would be just as well expressed, because ChatGPT has seen a great many well expressed recipes for lemon meringue pie.

Medium

We move on to the ‘medium’ security version, located at: https://github.com/digininja/DVWA/blob/master/vulnerabilities/exec/source/medium.php

Our prompt was:

Please describe the security vulnerabilities in the code below

<?php

if( isset( $_POST[ 'Submit' ]  ) ) {
    // Get input
    $target = $_REQUEST[ 'ip' ];

    // Set blacklist
    $substitutions = array(
        '&&' => '',
        ';'  => '',
    );

    // Remove any of the charactars in the array (blacklist).
    $target = str_replace( array_keys( $substitutions ), $substitutions, $target );

    // Determine OS and execute the ping command.
    if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
        // Windows
        $cmd = shell_exec( 'ping  ' . $target );
    }
    else {
        // *nix
        $cmd = shell_exec( 'ping  -c 4 ' . $target );
    }

    // Feedback for the end user
    $html .= "
{$cmd}
"; } ?>

The response was:

The code has several security vulnerabilities:

  • Insecure Input Handling: The code takes user input from the $_REQUEST superglobal, which includes data from both $_POST and $_GET. This allows an attacker to submit data to the script through both methods, bypassing any security checks put in place to filter $_POST data.
  • Blacklist-Based Input Filtering: The blacklist-based input filtering in the code is insufficient to prevent attackers from injecting malicious commands. An attacker can easily bypass the blacklist by encoding or obfuscating the payload.
  • Command Injection: The user-supplied input is directly passed to the shell_exec() function without proper escaping or sanitization. This allows an attacker to inject arbitrary shell commands, potentially compromising the underlying system.
  • Unvalidated Output: The output of the shell command is returned to the end user without proper validation, potentially exposing sensitive information to the attacker.

These vulnerabilities make the code susceptible to a range of attacks, including Remote Code Execution and Cross-Site Scripting (XSS). To mitigate these risks, the code should be updated to implement proper input validation and output escaping.

This output is much better. The first point (Insecure Input Handling) is almost entirely valid; it misses the additional potential input from $_COOKIE, but in recent versions of PHP the default has been for $_REQUEST to omit $_COOKIE, so this is really a quibble on my part.
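
To illustrate that first point (my sketch, assuming PHP’s default request_order of “GP”):

<?php
// $_REQUEST is populated from $_GET and then $_POST (cookies are omitted
// under PHP's default request_order of "GP"), so an attacker can POST the
// 'Submit' field while supplying 'ip' in the query string; any filtering
// applied only to $_POST['ip'] would be bypassed.
$target = $_REQUEST[ 'ip' ];
?>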

The second (Blacklist Filtering) is also mostly correct, although it’s expressed a little aggressively (“an attacker can easily bypass”). The point about encoding the payload is debatable.
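
For what it’s worth, the classic bypasses of this particular blacklist need no encoding at all (my own sketch, based on the substitutions above):

<?php
// The blacklist strips '&&' and ';', but single-character operators survive:
$target = '127.0.0.1 | cat /etc/passwd';

// And because str_replace() makes a single pass per substitution rather than
// looping until nothing changes, '&;&' collapses into '&&' once ';' is removed:
$target = '127.0.0.1 &;& cat /etc/passwd';  // becomes '127.0.0.1 && cat /etc/passwd'
?>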

The third (Command Injection) is correct, but I’d argue with “is directly passed”; there is an intervening function call and assignment. Also, in the previous example we gave ChatGPT the benefit of the doubt when it reported two Command Injection-style issues, because there were two call sites. In this case, there are still two call sites, but ChatGPT mentions only one “vulnerability”.

The final point (Unvalidated Output) is correct, and once again, ChatGPT has correctly contextualized the assignment to $cmd. There is an argument that, since this file is included by another file, and we lack the code to that file, we can’t definitively say whether the output is, in fact, unvalidated and unescaped. To an experienced security code reviewer, however, the assignment

$html .= "
{$cmd}
";

does indeed look very much like a cross-site scripting issue.
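
For comparison, a minimal sketch of output escaping at this point (assuming the surrounding page expects HTML in $html):

<?php
// Encode the command output so any markup it contains is rendered as text
// rather than interpreted by the browser; shell_exec() can return null,
// hence the ?? ''.
$html .= "<pre>" . htmlspecialchars( $cmd ?? '', ENT_QUOTES, 'UTF-8' ) . "</pre>";
?>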

All in all, this is a remarkable output and a useful review of this code snippet.

Impossible

Finally, we check the ‘impossible’ version. The file is located at: https://github.com/digininja/DVWA/blob/master/vulnerabilities/exec/source/impossible.php

Our prompt was:

Please describe the security vulnerabilities in the code below

<?php

if( isset( $_POST[ 'Submit' ]  ) ) {
    // Check Anti-CSRF token
    checkToken( $_REQUEST[ 'user_token' ], $_SESSION[ 'session_token' ], 'index.php' );

    // Get input
    $target = $_REQUEST[ 'ip' ];
    $target = stripslashes( $target );

    // Split the IP into 4 octects
    $octet = explode( ".", $target );

    // Check IF each octet is an integer
    if( ( is_numeric( $octet[0] ) ) && ( is_numeric( $octet[1] ) ) && ( is_numeric( $octet[2] ) ) && ( is_numeric( $octet[3] ) ) && ( sizeof( $octet ) == 4 ) ) {
        // If all 4 octets are int's put the IP back together.
        $target = $octet[0] . '.' . $octet[1] . '.' . $octet[2] . '.' . $octet[3];

        // Determine OS and execute the ping command.
        if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
            // Windows
            $cmd = shell_exec( 'ping  ' . $target );
        }
        else {
            // *nix
            $cmd = shell_exec( 'ping  -c 4 ' . $target );
        }

        // Feedback for the end user
        $html .= "
{$cmd}
"; } else { // Ops. Let the user name theres a mistake $html .= '
ERROR: You have entered an invalid IP.
'; } } // Generate Anti-CSRF token generateSessionToken(); ?>

The response was:

The code has several security vulnerabilities:

  • Cross-Site Request Forgery (CSRF) attacks: The code uses a token-based approach to prevent CSRF attacks, but it is not implemented securely. For example, the token is stored in the user’s session, which can be compromised by an attacker, allowing them to bypass the token check.
  • Command Injection: The code executes shell commands based on user input, which can be exploited by attackers to execute arbitrary code on the server. The lack of input validation and sanitization makes the code vulnerable to command injection attacks.
  • Session Fixation: The code does not properly manage the user’s session, which can lead to session fixation attacks where an attacker can take over a user’s session.
  • Information Disclosure: The code does not properly handle errors and exceptions, which can result in sensitive information being disclosed to attackers.

This response is problematic for several reasons.

The first point (CSRF vulnerability) is a false positive and the assertion that the user’s session can be compromised by an attacker is false.

The second point (Command Injection) is a false positive. Input validation is present; it’s incorrect to assert that it’s absent. It’s probably important to note that the PHP is_numeric() function does permit leading and trailing spaces, as well as decimal numbers (for example, 127.0), so there is technically a “Parameter Injection” issue, although the tokens that can be injected must all pass “is_numeric”.
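
A few examples of that permissiveness (behaviour as of PHP 8):

<?php
var_dump( is_numeric( '127' ) );    // bool(true)
var_dump( is_numeric( ' 127' ) );   // bool(true) - leading whitespace passes
var_dump( is_numeric( '127.0' ) );  // bool(true) - decimals pass
var_dump( is_numeric( '1e3' ) );    // bool(true) - exponent notation passes
var_dump( is_numeric( '127;' ) );   // bool(false)
?>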

The third point (Session Fixation) is a false positive.

The final point (Information Disclosure) is also a false positive.

So, despite sounding authoritative and being relevant, this is an unhelpful output in terms of security review; it consumes the time of the human reviewer in establishing that each point is false, although – again – we note the well structured and convincing wording of the output.

Minimal Case

The prompts and outputs below represent much less complex versions of the same issue; the first is vulnerable, the second is not (its $t1 is overwritten with the constant 'x' before reaching shell_exec), yet ChatGPT returns similar output for both.

Please describe any security vulnerabilities in the code below

<?php
    $t1 = $target;
    $cmd = shell_exec( 'ping  ' . $t1 );
?>

This code is vulnerable to Command Injection attacks. The $target variable is not properly validated, and an attacker can inject malicious commands that will be executed by the shell_exec function. For example, an attacker could supply a value of “; rm -rf /; #” for $target, which would result in the deletion of all files on the system. To prevent this vulnerability, the input should be validated and sanitized to ensure it only contains safe values.

Please describe any security vulnerabilities in the code below

<?php
    $t1 = $target;
    $t1 = 'x';
    $cmd = shell_exec( 'ping  ' . $t1 );
?>

The code is vulnerable to a shell injection attack, which is a type of security vulnerability that allows an attacker to execute arbitrary shell commands on the server where the code is running. In this case, the attacker can control the value of the $target variable and use it to inject malicious commands into the $cmd variable, which is then executed using the shell_exec() function.

For example, an attacker could set the value of $target to something like “; rm -rf /; #” to delete all files on the server, or to a value that starts a reverse shell to the attacker’s server.

To mitigate this vulnerability, it is recommended to validate user input and sanitize any data before passing it to the shell_exec() function, for example by using the escapeshellarg() function to escape any shell metacharacters.

Comparison of Two Snippets

It’s possible to ask ChatGPT to compare multiple snippets, and the results are surprisingly good. First, we compare the “low” and “medium” snippets:

Please describe the differences in security posture in the two blocks of code below

<?php

if( isset( $_POST[ 'Submit' ]  ) ) {
	// Get input
	$target = $_REQUEST[ 'ip' ];

	// Determine OS and execute the ping command.
	if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
		// Windows
		$cmd = shell_exec( 'ping  ' . $target );
	}
	else {
		// *nix
		$cmd = shell_exec( 'ping  -c 4 ' . $target );
	}

	// Feedback for the end user
	$html .= "
{$cmd}
"; } ?> <?php if( isset( $_POST[ 'Submit' ] ) ) { // Get input $target = $_REQUEST[ 'ip' ]; // Set blacklist $substitutions = array( ' ' => '', ';' => '', ); // Remove any of the charactars in the array (blacklist). $target = str_replace( array_keys( $substitutions ), $substitutions, $target ); // Determine OS and execute the ping command. if( stristr( php_uname( 's' ), 'Windows NT' ) ) { // Windows $cmd = shell_exec( 'ping ' . $target ); } else { // *nix $cmd = shell_exec( 'ping -c 4 ' . $target ); } // Feedback for the end user $html .= "
{$cmd}
"; } ?>

The first block of code is vulnerable to Command Injection attacks because it does not validate or sanitize user input before passing it as an argument to the shell_exec function. An attacker could manipulate the $_REQUEST[‘ip’] value to inject malicious commands into the system.

The second block of code mitigates the Command Injection vulnerability by removing certain character sequences, such as && and ;, from the user input before passing it to the shell_exec function. This blacklist approach reduces the likelihood of successful Command Injection attacks, but is not foolproof, as other characters and techniques could still be used to inject commands.

This is both absolutely correct and very well expressed.

Next, we compare “low” and “impossible”:

Please describe the differences in security posture in the two blocks of code below

<?php

if( isset( $_POST[ 'Submit' ]  ) ) {
	// Get input
	$target = $_REQUEST[ 'ip' ];

	// Determine OS and execute the ping command.
	if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
		// Windows
		$cmd = shell_exec( 'ping  ' . $target );
	}
	else {
		// *nix
		$cmd = shell_exec( 'ping  -c 4 ' . $target );
	}

	// Feedback for the end user
	$html .= "
{$cmd}
"; } ?> <?php if( isset( $_POST[ 'Submit' ] ) ) { // Check Anti-CSRF token checkToken( $_REQUEST[ 'user_token' ], $_SESSION[ 'session_token' ], 'index.php' ); // Get input $target = $_REQUEST[ 'ip' ]; $target = stripslashes( $target ); // Split the IP into 4 octects $octet = explode( ".", $target ); // Check IF each octet is an integer if( ( is_numeric( $octet[0] ) ) ( is_numeric( $octet[1] ) ) ( is_numeric( $octet[2] ) ) ( is_numeric( $octet[3] ) ) ( sizeof( $octet ) == 4 ) ) { // If all 4 octets are int's put the IP back together. $target = $octet[0] . '.' . $octet[1] . '.' . $octet[2] . '.' . $octet[3]; // Determine OS and execute the ping command. if( stristr( php_uname( 's' ), 'Windows NT' ) ) { // Windows $cmd = shell_exec( 'ping ' . $target ); } else { // *nix $cmd = shell_exec( 'ping -c 4 ' . $target ); } // Feedback for the end user $html .= "
{$cmd}
"; } else { // Ops. Let the user name theres a mistake $html .= '
ERROR: You have entered an invalid IP.
'; } } // Generate Anti-CSRF token generateSessionToken(); ?>

The first block of code has a security vulnerability where it takes user input directly from the $_POST variable and executes it as a shell command without any validation or sanitization. This can lead to remote code execution (RCE) attacks where an attacker can inject malicious code into the input and execute it on the server.

The second block of code has taken steps to improve its security posture. It uses a token-based approach to prevent cross-site request forgery (CSRF) attacks. It also validates the input to ensure that it is a valid IP address by checking that each octet is a number and that the input has four octets. Additionally, the input is sanitized with the stripslashes() function to remove any escape characters before being used in the shell command. These security measures reduce the risk of RCE and other types of attacks.

Again, this is extremely high-quality output; the first paragraph is broadly correct, but again there is confusion between Command Injection and Remote Code Execution (RCE); these are two different classes of vulnerability. It could be argued that RCE is a superclass of Command Injection. Regardless, they are distinct, and the output confuses the two. The second paragraph is also broadly correct.

Fixing the Issues

Remarkably, ChatGPT can attempt to resolve the issues it has reported:

Please describe any security problems in the code below

<?php

if( isset( $_POST[ 'Submit' ]  ) ) {
	// Get input
	$target = $_REQUEST[ 'ip' ];

	// Determine OS and execute the ping command.
	if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
		// Windows
		$cmd = shell_exec( 'ping  ' . $target );
	}
	else {
		// *nix
		$cmd = shell_exec( 'ping  -c 4 ' . $target );
	}

	// Feedback for the end user
	$html .= "
{$cmd}
"; } ?>

The code is vulnerable to Command Injection attack, as it executes shell commands based on user input. An attacker could potentially craft a malicious $_POST['ip'] value to inject arbitrary shell commands, which would then be executed on the server.

Additionally, the code does not validate or sanitize the user input, which can lead to security problems such as Cross-Site Scripting (XSS) or Cross-Site Request Forgery (CSRF) attacks.

Now please fix the security issues you described

Here’s one way to address the security issues in the code:

<?php

if (isset($_POST['Submit'])) {
    // Get input
    $target = filter_input(INPUT_POST, 'ip', FILTER_SANITIZE_STRING);

    // Validate user input
    if (!filter_var($target, FILTER_VALIDATE_IP)) {
        $html .= "
Invalid IP address
"; return; } // Determine OS and execute the ping command. if (stristr(php_uname('s'), 'Windows NT')) { // Windows $cmd = shell_exec('ping ' . escapeshellarg($target)); } else { // *nix $cmd = shell_exec('ping -c 4 ' . escapeshellarg($target)); } // Feedback for the end user $html .= "
{$cmd}
"; } ?>

In the above code, we’ve added input validation and sanitization to the user input, which will help prevent attacks such as Command Injection and XSS. The escapeshellarg function is used to escape the user input before passing it to the shell command, which will prevent Command Injection attacks. The filter_input function is used to retrieve the user input in a secure way and the filter_var function is used to validate the user input to ensure it is a valid IP address.

This initially seems staggering; the code has apparently been modified to correct the security issues! Before we get carried away, though, we should remember that discussions of well-known issues like these are frequently accompanied by examples of the corresponding fixed code, so we shouldn’t really be surprised that ChatGPT can produce examples of what a fixed version of a given issue should look like. The fact that ChatGPT can generate modified code with the fixes in-place like this – rather than parroting a description of a fix – is remarkable.

Looking at the quality of the fixes, there are some issues.

  • FILTER_SANITIZE_STRING is a deprecated mechanism, and should not be used; this change has introduced a new problem into the code
  • The ‘return;’ is potentially dangerous; we don’t have context to understand if this is the correct way to exit this block of code in the error case. An ‘else’ block would be more appropriate
  • The Cross-Site Scripting issue is not properly fixed; if the command returned HTML content, it would be included in the output

That said, the Command Injection issue is indeed fixed by the ‘escapeshellarg’ calls.
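
For reference, here’s a sketch of what a more defensible revision might look like (my own, making the same assumptions about the surrounding page):

<?php

if( isset( $_POST[ 'Submit' ] ) ) {
    $target = $_POST[ 'ip' ] ?? '';

    // Validate the value as an IP address rather than trying to sanitize it.
    if( filter_var( $target, FILTER_VALIDATE_IP ) === false ) {
        $html .= '<pre>ERROR: You have entered an invalid IP.</pre>';
    }
    else {
        // Quote the (already validated) argument for the shell...
        $arg = escapeshellarg( $target );
        if( stristr( php_uname( 's' ), 'Windows NT' ) ) {
            $cmd = shell_exec( 'ping ' . $arg );
        }
        else {
            $cmd = shell_exec( 'ping -c 4 ' . $arg );
        }
        // ...and escape the command output before adding it to the page.
        $html .= '<pre>' . htmlspecialchars( $cmd ?? '', ENT_QUOTES, 'UTF-8' ) . '</pre>';
    }
}

?>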

In summary, of the two security issues identified, it has failed to fix one, has introduced a deprecated mechanism as part of the fix, and has (with the ‘return’) potentially introduced an additional problem. Again, this output is stunning at first sight, but has problems that require human understanding and careful revision before it can be trusted.

A Lighter Note

How many words are in this sentence?

Six words.

How many words are in this sentence?

Eight words.

How many words are in this sentence?

There are six words in this sentence.

Never mind

Sure! Let me know if you have any other questions.

Conclusions

This is intended as a quick experiment, and shouldn’t be taken too seriously, although it does illustrate some limitations of ChatGPT and similar models in terms of code review. There are some additional limitations it’s worth discussing.

Length Limits

One important limitation is that ChatGPT is limited in the number of tokens that can form the input and output combined. ChatGPT itself states:

What is the maximum combined input and output length for ChatGPT?

The maximum combined input and output length for ChatGPT is 4096 tokens.

In the case of code, the number of tokens required to represent an expression can exceed the number of whitespace-delimited tokens; for instance, <?php requires two tokens to encode.

You hit this 4096 token input/output limit very quickly when prompting with code, so although one could submit multiple code snippets, the “attention” mechanism wouldn’t hold beyond that limit. It is possible for context to be retained between multiple inputs, if those inputs are short enough, by passing summarized contextual information within the new prompt, but the total amount of “context” is restricted by the limit. In other words, the system is (currently) very bad at reviewing large amounts of code, and although it does generate output with some local context (for example, the $target and $cmd examples given above), and retains some context between inputs, that context is restricted to a relatively small number of tokens.

Level of Abstraction

The examples we’ve chosen here involve security vulnerabilities that are some of the best-known and widely discussed issues in the security community; the issues relate to code with a simple structure; the sources and destinations of user input have names that are well known to indicate likely vulnerabilities ($_REQUEST, shell_exec). This is, in short, the lowest level of abstraction and thus the best case scenario for an automated security code review; many traditional rules-based SAST (Static Application Security Testing) tools will pick these issues up immediately and reliably.

In general, however, code operates at a higher level of abstraction than well-known input sources and known-dangerous library functions, layering abstraction upon abstraction until the result is concepts unique to a codebase, involving subtle interactions of many routines spread across multiple locations. This is a problem when code must be presented in a self-contained prompt that is very limited in length. There are some categories of security vulnerability (for example, Insecure Direct Object Reference, and Authentication Bypass) that require a much broader understanding of the concepts represented in the code.
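
As a hypothetical sketch of why this matters, consider the following made-up fragment (getInvoice() and renderInvoice() are invented helpers); whether it contains an Insecure Direct Object Reference depends entirely on code we cannot see:

<?php
// Does getInvoice() check that the invoice belongs to the logged-in user?
// Nothing in this snippet can tell us; a self-contained prompt of this kind
// cannot surface the vulnerability class at all.
$invoice = getInvoice( (int) $_GET[ 'invoice_id' ] );
echo renderInvoice( $invoice );
?>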

Trust

The false positives also point to another major problem with applying ChatGPT in this way – the output, while exceptionally well written in the grammatical and stylistic sense, simply cannot be trusted, which is a serious problem in the security field.

Disclosure to a Third Party

Finally, there’s the fairly major problem that in order to use a large language model in this way, you have to submit your code to a third party. Code often contains sensitive information, and often represents valuable and even business-critical intellectual property. It is often subject to stringent license conditions that restrict its disclosure. Submitting code to a third party for analysis can pose a difficult contractual problem for many organisations, so this kind of analysis may be out of the question for many, even if it were effective.

Final Words

After such an unfairly deep dive into a capability that ChatGPT was never claimed to possess, it’s worth being clear about the expectations for ChatGPT. It’s a preview – a world changing, incredible preview, but a preview nonetheless. Although it can give a superficially convincing impression of understanding, there is no understanding, there is only statistical correlation of training data.

Sam Altman, the CEO of OpenAI, noted on Twitter: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

( https://twitter.com/sama/status/1601731295792414720?lang=en )

ChatGPT ultimately reflects the work of the many millions of humans who provided the training data, i.e. a fair portion of the collected written knowledge of the human race; the code, the poetry, the facts, the lies, the errors, the misinformation, the literature, the prejudice, the stupidity and the brilliance of everyone who contributed anything to that colossal set of training data. When you chat with it, in some small way, you’re talking to millions of people, over centuries. That’s the beauty and the brilliance of it, and it’s a staggering achievement.

Perhaps it’s fitting to leave the final word to ChatGPT itself.

Should I use you to review code for security vulnerabilities?

As an AI language model, I have been trained on a diverse range of texts, but I do not have the ability to perform code review or identify specific security vulnerabilities. It is recommended to use specialized tools and services, as well as to consult with experienced security experts, for comprehensive code review and security assessments.

Acknowledgements

I’d like to thank my colleagues Jose Selvi, Liz James, Thomas Atkinson, Eric Schorn, Nick Dunn and David Goldsmith for their help in producing this article.