TrustLLM Empowers Smart Contract Auditing: AI Auditor Aegis Fortifies New Security Frontiers in the Web3 Landscape

With the advancement of blockchain technology, smart contracts have emerged as the cornerstone of decentralized applications, demonstrating immense potential and value in various fields such as finance and supply chain management. However, as the number of smart contracts surges, ensuring the security and reliability of their code becomes increasingly important. Once deployed, smart contracts are immutable, and any logical vulnerabilities can lead to significant financial losses. Therefore, developing an efficient and accurate method for smart contract auditing is crucial for protecting user assets and maintaining the health of the blockchain ecosystem.

Despite the vast potential demonstrated by Large Language Models (LLMs) in the field of smart contract auditing, existing technologies still face numerous challenges. For instance, even the most advanced GPT-4 model, when combined with Retrieval-Augmented Generation (RAG) technology, can only achieve a precision rate of 30% in auditing smart contracts. This limitation primarily stems from the fact that existing LLMs are pre-trained on general text/code corpora without fine-tuning for the specific domain of Solidity smart contract auditing. To address this issue, Aegis has proposed the TrustLLM framework, which combines fine-tuning with LLM-based agents to provide a new, intuitive approach to smart contract auditing that can generate audit results with reasonable explanations. The introduction of TrustLLM not only enhances the precision of smart contract auditing but also brings new hope to the field of blockchain security.

The Importance and Challenges of Smart Contract Auditing

Smart contracts, as a core component of blockchain technology, are programs that automatically execute contract terms, ensuring the transparency and immutability of transactions without the need for third-party intervention. In the field of decentralized finance (DeFi), the role of smart contracts is particularly significant, as they handle and record a large number of financial transactions, managing digital assets worth billions of dollars. However, since smart contracts are difficult to modify once deployed, any coding errors or vulnerabilities can lead to fund loss or other security issues, making the security of smart contracts an issue that cannot be ignored.

With the rapid development of DeFi, the number and complexity of smart contracts are also increasing, raising the risk of potential vulnerabilities. Once vulnerabilities exist in smart contracts, they may be maliciously exploited, leading to fund theft, contract manipulation, or other forms of loss. Therefore, conducting thorough and precise auditing of smart contracts becomes crucial to ensure their stability and security in the face of various potential attacks.

The purpose of smart contract auditing is to identify and fix all potential security vulnerabilities before the contract is deployed and used. This not only helps protect the financial security of investors and users but also helps maintain the reputation and market trust of DeFi platforms. As blockchain technology continues to mature and its application scope expands, the importance of smart contract auditing will continue to grow, becoming a key link in ensuring the security and healthy development of the entire DeFi ecosystem.

TrustLLM: An Innovative Solution for Smart Contract Auditing

TrustLLM represents a significant innovation in the field of smart contract auditing. By combining fine-tuning with LLM-based agents, it provides auditors with an intuitive and efficient auditing method. The core of this framework lies in its unique two-stage fine-tuning approach, which is specifically designed and optimized for the needs of Solidity smart contract auditing.

In the first stage, TrustLLM uses fine-tuning techniques to train a detector model. The purpose of this model is to identify whether there are vulnerabilities in smart contract code. Through extensive training data, the detector model learns how to analyze code and make decisions on whether it is safe. This stage of fine-tuning is crucial as it lays the foundation for the entire auditing process, enabling the model to accurately perceive potential security risks.
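To make the detector stage concrete, the sketch below shows one plausible way to format labeled Solidity snippets as instruction-tuning records for such a model; the dataset layout, prompt wording, and the two example contracts are illustrative assumptions, not the authors' released data.

```python
import json

# Hypothetical labeled examples for the detector stage: each pairs a Solidity
# snippet with a binary audit label (both snippets are illustrative only).
examples = [
    {
        "code": "function withdraw() public { msg.sender.call{value: balances[msg.sender]}(\"\"); balances[msg.sender] = 0; }",
        "label": "vulnerable",  # state is updated after the external call -> reentrancy risk
    },
    {
        "code": "function deposit() public payable { balances[msg.sender] += msg.value; }",
        "label": "safe",
    },
]

def to_detector_record(example: dict) -> dict:
    """Format one labeled snippet as a prompt/completion pair for fine-tuning."""
    prompt = (
        "You are a smart contract auditor. Decide whether the following Solidity "
        "code contains a vulnerability. Answer 'vulnerable' or 'safe'.\n\n"
        f"{example['code']}\n\nAnswer:"
    )
    return {"prompt": prompt, "completion": " " + example["label"]}

with open("detector_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_detector_record(ex)) + "\n")
```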

The second stage of fine-tuning focuses on the reasoner model, whose task is to generate the causes of vulnerabilities. Once the detector model identifies potential vulnerabilities, the reasoner model further analyzes the code, providing a detailed explanation of why there are vulnerabilities and what types they are. This in-depth analysis not only helps auditors understand the nature of the problem but also provides clues for solving it.
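As an intuition for the reasoner stage, the snippet below asks a fine-tuned causal language model for several candidate root-cause explanations of flagged code; the checkpoint path, prompt wording, and sampling settings are assumptions chosen for illustration rather than TrustLLM's actual configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to a locally fine-tuned reasoner checkpoint (illustrative only).
REASONER_PATH = "./reasoner-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(REASONER_PATH)
model = AutoModelForCausalLM.from_pretrained(REASONER_PATH)

def explain_vulnerability(code: str, n_candidates: int = 3) -> list[str]:
    """Sample several candidate root-cause explanations for flagged Solidity code."""
    prompt = (
        "The following Solidity code was flagged as vulnerable. "
        "Explain the vulnerability type and its root cause.\n\n"
        f"{code}\n\nExplanation:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,          # sampling yields multiple distinct explanations
        temperature=0.8,
        num_return_sequences=n_candidates,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [
        tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
        for seq in outputs
    ]
```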

An overview of TrustLLM, featuring its four roles: Detector, Reasoner, Ranker, and Critic.

TrustLLM’s two-stage fine-tuning method mimics the intuition and analysis process of human experts during the auditing process. First, it conducts a preliminary risk assessment through the detector model, similar to the intuitive judgment of human auditors on code. Then, it conducts an in-depth cause analysis through the reasoner model, just like an expert conducting a detailed review after discovering a problem.

In addition, TrustLLM also introduces two LLM-based agents — Ranker and Critic. These agents evaluate and debate multiple vulnerability causes generated by the reasoner model through an iterative process, ultimately selecting the most suitable explanation. This collaborative mechanism not only improves the accuracy of the audit results but also enhances the model’s ability to handle complex vulnerability scenarios.
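A minimal sketch of how such a Ranker/Critic debate could be wired is shown below, treating both agents as plain LLM calls; the stopping rule, prompt wording, and function signatures are assumptions for illustration, not the framework's actual implementation.

```python
def rank_and_critique(code: str, candidate_causes: list[str], llm, max_rounds: int = 3):
    """Iteratively select the best vulnerability explanation and challenge it.

    `llm` is any callable that maps a prompt string to a text response, so the
    same loop works with an API-backed or a locally hosted model.
    """
    remaining = list(candidate_causes)
    for _ in range(max_rounds):
        if not remaining:
            break
        # Ranker: pick the most plausible explanation among the candidates.
        ranking_prompt = (
            "Code under audit:\n" + code + "\n\nCandidate explanations:\n"
            + "\n".join(f"{i}. {c}" for i, c in enumerate(remaining))
            + "\n\nReturn only the index of the most convincing explanation."
        )
        best_idx = int(llm(ranking_prompt).strip())
        best = remaining[best_idx]

        # Critic: try to refute the chosen explanation against the code.
        critique = llm(
            "Does the following explanation correctly describe a real vulnerability "
            "in the code? Answer ACCEPT or REJECT with a short reason.\n\n"
            f"Code:\n{code}\n\nExplanation:\n{best}"
        )
        if critique.strip().upper().startswith("ACCEPT"):
            return best
        # Drop the rejected candidate and debate again with the remaining ones.
        remaining.pop(best_idx)
    return None  # no explanation survived the debate
```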

The Practical Application and Competitive Advantage of TrustLLM

TrustLLM’s innovative framework not only improves the efficiency and accuracy of smart contract auditing but also provides auditors with deeper insights. Through this method, TrustLLM can help auditing teams more effectively identify and fix potential security vulnerabilities, thereby protecting blockchain applications from threats posed by attackers. As Web3 and blockchain technology continue to progress, TrustLLM and the technology behind it will become key tools in ensuring the security of decentralized applications.

TrustLLM’s performance has been compared with several existing smart contract auditing technologies, including prompt-based LLMs (such as GPT-4 and GPT-3.5) and other fine-tuned models (such as CodeBERT, GraphCodeBERT, CodeT5, UnixCoder). These comparisons aim to demonstrate the advanced nature and effectiveness of TrustLLM in the field of smart contract auditing.

Performance comparison between TrustLLM’s Detector and other fine-tuned models.

Firstly, compared with prompt-based LLMs, TrustLLM shows significant advantages in detection performance. Although GPT-4 and GPT-3.5 are among the most advanced language models currently available, their performance on smart contract auditing tasks falls short of TrustLLM’s. This is mainly because TrustLLM is specifically fine-tuned for the domain of Solidity smart contract auditing, while existing LLMs are pre-trained on general text/code corpora. TrustLLM’s two-stage fine-tuning method allows it to more accurately identify and interpret vulnerabilities in smart contracts, whereas prompt-based LLMs may be limited when dealing with tasks in specific domains.

Secondly, compared with traditional fine-tuned models, TrustLLM also performs exceptionally well. CodeBERT, GraphCodeBERT, CodeT5, and UnixCoder are all fine-tuned on specific tasks, but TrustLLM surpasses these models on multiple performance metrics. For example, TrustLLM achieves higher F1 score, accuracy, and precision, indicating that it is more effective at detecting vulnerabilities in smart contracts. This advantage can be attributed to TrustLLM’s architecture, which combines the detector and reasoner models and iteratively refines their outputs through LLM-based agents, thereby improving the accuracy and reliability of auditing.
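For readers unfamiliar with these metrics, the short example below computes accuracy, precision, and F1 on toy detector outputs using scikit-learn; the numbers are made up purely to show how the scores are derived and are not results from the paper.

```python
from sklearn.metrics import accuracy_score, precision_score, f1_score

# Toy detector predictions (1 = vulnerable, 0 = safe); illustrative values only.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))   # share of correctly labeled contracts
print("precision:", precision_score(y_true, y_pred))  # flagged items that are truly vulnerable
print("F1 score :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```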

Moreover, the design of TrustLLM also considers parameter efficiency and computational cost. By using lightweight fine-tuning methods such as LoRA (Low-Rank Adaptation), TrustLLM can maintain the advantages of large models while reducing resource consumption. This makes TrustLLM not only superior in performance but also more feasible and scalable in practical applications.
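As an illustration of this parameter-efficient approach, the snippet below shows a typical LoRA setup with the Hugging Face `peft` library; the base model, rank, and target modules are assumptions chosen for the example, not values reported for TrustLLM.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative code-oriented base model; TrustLLM's actual backbone is not specified here.
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attach adapters to the attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```

Because only the adapter matrices receive gradients, fine-tuning in this style requires a small fraction of the memory and compute of full-model training, which is what makes domain-specific auditing models practical to build and update.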

Finally, TrustLLM’s evaluation results also show that its output aligns more closely with the actual causes of vulnerabilities. Compared with GPT-4, the vulnerability explanations generated by TrustLLM are more consistent with the ground-truth causes, further proving its practicality and accuracy in smart contract auditing.

In summary, TrustLLM shows significant advantages over existing technologies in terms of detection performance, parameter efficiency, and practical application value. These comparative results highlight the potential of TrustLLM in the field of smart contract auditing and provide a new direction for future Web3 security research and applications. As blockchain technology continues to develop, TrustLLM and similar technologies will play an increasingly important role in ensuring the security of smart contracts and promoting the development of decentralized applications.

Application Cases of TrustLLM

The application cases of TrustLLM mainly focus on its auditing of smart contracts for two undisclosed bounty projects on the Code4rena platform. Code4rena is a well-known bounty platform aimed at encouraging security researchers to discover and report security vulnerabilities in blockchain projects. Through cooperation with this platform, researchers have been able to apply TrustLLM to actual smart contract auditing tasks to verify its effectiveness and practicality in the real world.

During the auditing process, TrustLLM demonstrated its powerful vulnerability detection capabilities. It was not only able to identify known types of vulnerabilities but also conduct in-depth analysis of potential security risks and provide detailed explanations of the causes of vulnerabilities. Researchers used TrustLLM to conduct a comprehensive review of the smart contracts for the two projects and discovered six critical vulnerabilities. The discovery of these vulnerabilities is of great value to the project teams because they could be exploited by malicious attackers, leading to asset loss or other security incidents.

Notably, the discovery of these vulnerabilities was recognized by the project teams or auditing experts. This means that TrustLLM has not only achieved technical success but has also been affirmed by industry experts in practical applications. This achievement further proves the practicality and reliability of TrustLLM in the field of smart contract auditing.

In addition, the paper also mentions a special case where a vulnerability not discovered by any existing tools was successfully identified by TrustLLM. This discovery was considered a significant security contribution by the project team and auditing experts, highlighting the innovation and forward-looking nature of TrustLLM in smart contract security auditing.

Through these practical cases, TrustLLM has demonstrated its potential in the field of Web3 security, especially in smart contract auditing. Its successful application not only provides a higher level of security assurance for blockchain projects but also provides a new direction for future smart contract auditing tools and methods. As the Web3 ecosystem continues to develop and mature, the application of TrustLLM and similar technologies will become increasingly important, providing a solid foundation for the security and stability of decentralized applications.

Aegis: The World’s First Independently Profitable AI Auditor

In today’s rapidly developing Web3 ecosystem, the security auditing of smart contracts has become a critical step. In a closely watched smart contract auditing challenge, Aegis won a bounty of 23,016 U on the strength of its auditing technology, a result that cements its research and development team’s leading position in smart contract security research. Aegis’s success rests on its underlying technology architecture, TrustLLM, the first large model built specifically for Web3 security. TrustLLM combines fine-tuning with LLM-based agents to provide an intuitive and insightful method for smart contract auditing: it mimics the working method of expert-level human auditors, which not only improves the accuracy of auditing but also makes the audit results interpretable.

At the same time, Aegis’s technological innovation is not limited to the TrustLLM framework. It also employs Retrieval-Augmented Generation (RAG) together with the knowledge-matching and scenario-recognition capabilities of large models, training on structured vulnerability knowledge bases and code data to emulate the reasoning of human auditing experts. This enables Aegis to efficiently and accurately detect logical vulnerabilities and economic-model-related security risks in smart contracts, providing valuable security guarantees for developers before contract deployment.
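A minimal sketch of this kind of retrieval-augmented step is shown below, assuming a small embedded knowledge base of known vulnerability patterns: retrieve the entries closest to the contract under review and prepend them to the audit prompt. The embedding model, the example entries, and the similarity scheme are illustrative assumptions, not Aegis's actual pipeline.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny stand-in for a structured vulnerability knowledge base (illustrative entries).
knowledge_base = [
    "Reentrancy: external calls made before state updates allow repeated withdrawals.",
    "Integer overflow: unchecked arithmetic can wrap around and corrupt balances.",
    "Access control: missing owner checks let anyone call privileged functions.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
kb_vectors = embedder.encode(knowledge_base, normalize_embeddings=True)

def retrieve_context(code: str, top_k: int = 2) -> list[str]:
    """Return the knowledge-base entries most similar to the contract code."""
    query = embedder.encode([code], normalize_embeddings=True)[0]
    scores = kb_vectors @ query               # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [knowledge_base[i] for i in best]

snippet = 'function withdraw() public { msg.sender.call{value: bal}(""); bal = 0; }'
prompt = (
    "Known vulnerability patterns:\n" + "\n".join(retrieve_context(snippet))
    + "\n\nContract under review:\n" + snippet + "\n\nAudit the contract."
)
```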

Aegis serves a wide range of clients, professional auditors and developers alike. It supports multiple blockchain programming languages, such as Go, Rust, Solidity, and Move, covering almost all mainstream blockchain development environments. Aegis offers multi-level service plans, from free trials to professional versions, aimed at meeting the needs of different users and providing a flexible and convenient user experience.

Aegis’s entry not only adds a powerful AI Agent to the AgentLayer ecosystem but also provides a safe and efficient auditing solution for the Web3 development community. With continuous iteration and upgrades, and the experience gained from real bounty challenges, Aegis is expected to lead blockchain security auditing into a new era of intelligence, providing a solid security foundation for the development of decentralized applications.

About AgentLayer

AgentLayer is the first decentralized AI Agent public chain. It promotes the Agent economy and AI asset transactions on its L2 blockchain through the $AGENT token, while its AgentLink protocol supports information exchange and collaboration among multiple Agents to enable decentralized AI governance.
