ChatGPT Can’t Beat Human Smart Contract Auditors Yet: OpenZeppelin’s Ethernaut Challenges

While generative artificial intelligence (AI) is capable of performing a wide variety of tasks, OpenAI’s ChatGPT-4 is currently unable to audit smart contracts as effectively as human auditors, according to recent tests.

In an effort to determine whether AI tools can replace human auditors, Mariko Wakabayashi and Felix Wegener of blockchain security firm OpenZeppelin pitted ChatGPT-4 against the company’s Ethernaut security challenge.

While the AI model passed most levels, it struggled with newer ones introduced after its September 2021 training data cutoff, as the plugin that enables web connectivity was not included in the test.

Ethernaut is a wargame played within the Ethereum Virtual Machine and consists of 28 smart contracts – or levels – that must be hacked. In other words, levels are completed once the correct exploit is found.
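As a rough illustration of how such a win condition might be checked, and not OpenZeppelin's code, the sketch below assumes the web3.py library, a placeholder RPC endpoint, and a hypothetical level instance that is beaten by taking ownership of the contract (a common goal across several Ethernaut levels); the addresses and ABI are purely illustrative.

```python
# A minimal sketch (not OpenZeppelin's code) of verifying a hypothetical
# Ethernaut-style win condition with web3.py. Many levels are beaten by taking
# ownership of the level contract; the endpoint and addresses are placeholders.
from web3 import Web3

RPC_URL = "http://localhost:8545"  # placeholder Ethereum node endpoint
LEVEL_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder level instance
PLAYER_ADDRESS = "0x0000000000000000000000000000000000000001"  # placeholder player wallet

# Minimal ABI exposing only the owner() getter found on Ownable-style contracts.
OWNER_ABI = [{
    "inputs": [],
    "name": "owner",
    "outputs": [{"internalType": "address", "name": "", "type": "address"}],
    "stateMutability": "view",
    "type": "function",
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
level = w3.eth.contract(address=LEVEL_ADDRESS, abi=OWNER_ABI)

# The level counts as solved once the exploit has transferred ownership.
if level.functions.owner().call() == PLAYER_ADDRESS:
    print("Level completed: the player owns the contract")
else:
    print("Not yet: keep looking for the exploit")
```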

According to tests by OpenZeppelin’s AI team, ChatGPT-4 was able to find the exploit and pass 20 out of 28 levels, though some levels needed additional hints to solve after the initial prompt: “Does the following smart contract contain a vulnerability?”
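As a rough illustration of that prompt-driven workflow, and not OpenZeppelin's actual test harness, the following sketch sends a deliberately flawed contract to GPT-4 via the openai Python package, assuming an OPENAI_API_KEY is set in the environment; the Solidity snippet is a made-up re-entrancy example rather than an Ethernaut level.

```python
# A minimal sketch (not OpenZeppelin's harness) of a prompt-driven vulnerability
# check, assuming the openai Python package (v1+ client) and an OPENAI_API_KEY
# set in the environment. The contract below is an illustrative example only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

CONTRACT_SOURCE = """
pragma solidity ^0.8.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        // Illustrative flaw: ether is sent before the balance is zeroed,
        // leaving the function open to re-entrancy.
        (bool ok, ) = msg.sender.call{value: balances[msg.sender]}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;
    }
}
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Does the following smart contract contain a vulnerability?\n"
                   + CONTRACT_SOURCE,
    }],
)

print(response.choices[0].message.content)
```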

Responding to Cointelegraph’s questions, Wegener noted that OpenZeppelin expects its auditors to be able to complete all Ethernaut levels, as any proficient auditor should be able to do.

While Wakabayashi and Wegener conclude that ChatGPT-4 cannot currently replace human auditors, they stress that it can still be used as a tool to boost the efficiency of smart contract auditors and help detect security vulnerabilities.

“To the community of Web3 builders, we have one word of comfort: Your job is safe! If you know what you’re doing, AI can be used to improve your efficiency.”

When asked whether a tool that increases the efficiency of human auditors would mean firms such as OpenZeppelin need fewer of them, Wegener told Cointelegraph that the overall demand for audits exceeds the capacity to deliver high-quality audits, and he expects the number of people employed as auditors in Web3 to continue growing.

Related: Satoshi Nak-AI-moto: Bitcoin creator turned AI chatbot

In a May 31 Twitter thread, Wakabayashi said that large language models (LLMs) like ChatGPT are not yet ready for smart contract security auditing, as it is a task that requires a significant degree of precision, while LLMs are optimized for generating text and holding human-like conversations.

However, Wakabayashi suggested that an AI model trained on tailored data and output goals could provide more reliable solutions than the chatbots currently available to the public, which are trained on large amounts of general data.

AI Eye: 25,000 traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more

