Inside Web3 Security
Security in Web3 is not solely a technical challenge. Even in the age of AI, it remains a human challenge, and an imperfect one. The experience of Thrilld Labs' team members has taught us that many damaging mistakes in the Web3 space do not derive just from flawed code or omissions, but from flawed assumptions about how we interact, or fail to interact, with decentralised systems and with security risks more broadly. As social engineering and AI-driven attacks collide, security has become a question of judgment as much as of underlying technology. Read on for our assessment of what’s actually putting users and builders at risk and how to fix it...

Web3 security is often framed as a technical problem, one to be solved through better smart contracts, more audits, and stronger cryptography. While these are indeed essential, they address only part of the risk. In practice, many damaging failures in Web3 derive not from flawed code or other omissions but from flawed assumptions about how people interact, or fail to interact, with decentralised systems and with security risks more broadly.
Unlike Web2, where centralised intermediaries absorb mistakes and reverse damage, Web3 systems are unforgiving by design. Transactions are irreversible, permissions are powerful, and accountability is distributed. This shifts responsibility onto two main actors: for end-users, security becomes a matter of operational discipline; for builders (developers and founders), it becomes a matter of system design, incentives, and culture.
Thus, as Web3 adoption grows, security depends on preventing technical exploits as well as on everyday decisions.
This article originated from an internal initiative at Thrilld Labs to strengthen our team’s security awareness and operational agility. Rather than limiting this effort to internal discussions as initially intended, we encouraged team members to actively research security topics themselves. This included participating in our recent live session, “Inside Web3 Security,” featuring Masha Vaverova (independent auditor and security researcher), Arikia Millikan (CTRL+X), Mohammad Kayali (blockchain & full-stack developer), and Alexandra Overgaag (Thrilld Labs).
The True Entry Point for Attacks: Human Behaviour
Some of the most serious threats in Web3 do not resemble traditional hacks. Instead, they emerge from complex interactions between interfaces, platforms, and human behavior.
Social engineering remains the most underestimated risk in the Web3 space because it targets human behaviour instead of technical infrastructure. Phishing links, impersonation attempts, fake job offers, and fraudulent support messages are now the most common attack vectors. Access control exploits alone accounted for 53% ($2.1 billion) of all losses in 2025, while phishing scams and contract vulnerabilities combined represented another 36%. Attackers exploit trust and urgency, prompting users to ‘verify’ wallets or ‘review’ files that lead to malicious transactions.
These attacks are especially effective in Web3 because of its finality. In Web3, signing a transaction often combines authentication, authorization, and execution. Users may believe they are merely “logging in” when they are actually granting long-term permissions or transferring control. And once access is granted or the assets are transferred, there is usually no reversal. A brief lapse can lead to a permanent loss.
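To make this concrete, here is a minimal Python sketch, purely illustrative and with no blockchain connection, that decodes hypothetical calldata for the standard ERC-20 `approve(address,uint256)` call. The spender address is made up; the point is that a single signature, which may look like a routine "login" or "verification" step in a wallet popup, can grant an unlimited spending allowance:

```python
# Hypothetical calldata for an ERC-20 approve(address,uint256) call.
# The 0x095ea7b3 selector is the real prefix for that function signature;
# the spender address below is invented for illustration.
calldata = (
    "095ea7b3"                                                          # function selector
    "0000000000000000000000001111111111111111111111111111111111111111"  # spender (padded to 32 bytes)
    "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"  # allowance amount
)

selector = calldata[:8]                 # identifies the function being called
spender = "0x" + calldata[8:72][-40:]   # last 20 bytes of the padded word
amount = int(calldata[72:136], 16)      # the allowance being granted

UNLIMITED = 2**256 - 1                  # the common "infinite approval" value
print(selector)                         # 095ea7b3 -> approve(address,uint256)
print(spender)
print(amount == UNLIMITED)              # True: this one signature grants unlimited spending
```

Wallets increasingly decode such calldata into human-readable form, but when they do not, the raw hex is all that stands between the user and an irreversible grant of authority.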
What does that mean in practice? Well, imperfect operational habits such as blind token approvals, wallet reuse across applications, and ignoring warnings create persistent vulnerabilities.
Naturally, end users can research whether a smart contract or tool is audited. However, audits do not eliminate all risks; they assess code at a single point in time and cannot account for future integrations, interface changes, or new vulnerabilities. Also, not all audits uphold the same standards. Moreover, even perfectly audited contracts can be undermined if users are tricked into signing misleading transactions through malicious interfaces.
Messaging platforms with limited moderation (like Telegram) further intensify the risk, allowing scams to spread rapidly. All of this points to a shift in how security is conceived: not just as a property of code, but as a property of the entire system, including the interfaces, communication channels, and human behaviour that surround it. Security, therefore, cannot remain cyclical or audit-bound; it must be continuous, systemic, and embraced by all stakeholders, both end-users and builders.
Security Habits in Web3 and Web2
While systemic improvements do matter, individual habits still play a decisive role in reducing risk. For instance, as far as wallet security goes, the goal is not perfection but containment: ensuring that no single lapse exposes everything.
Below are a few habits one might consider adopting:
- Practice wallet segregation: separate long-term storage, daily-use wallets, and experimental interactions to limit exposure when something goes wrong. No single compromise should expose all assets.
- Use hardware wallets carefully: they add protection, but not if the device is compromised or if the transactions are signed carelessly.
- Manage token approvals: many decentralised applications request broad or even unlimited permissions. Regularly reviewing and revoking unnecessary approvals can help reduce the damage that a malicious contract or frontend can cause.
- Use transaction simulation tools: verify what a transaction will execute before signing. If a wallet issues a warning, it should never be ignored.
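As a sketch of the approval-review habit above, the snippet below flags risky allowances from a hypothetical, locally recorded list. In practice one would read allowances on-chain (each ERC-20 token exposes an `allowance(owner, spender)` view) or use a revocation dashboard; this toy version only illustrates the triage logic:

```python
# Toy approval-hygiene check over a hypothetical local record of allowances.
# Real reviews read this data on-chain or via a revocation dashboard.
UNLIMITED = 2**256 - 1  # the common "infinite approval" value

def flag_risky_approvals(approvals, threshold):
    """Return approvals that are unlimited or exceed the given threshold."""
    return [a for a in approvals
            if a["allowance"] == UNLIMITED or a["allowance"] > threshold]

# Invented example data: one unlimited approval, one modest one.
approvals = [
    {"token": "USDC", "spender": "0x1111...", "allowance": UNLIMITED},
    {"token": "WETH", "spender": "0x2222...", "allowance": 5 * 10**17},
]

for a in flag_risky_approvals(approvals, threshold=10**18):
    print(f"review or revoke: {a['token']} -> {a['spender']}")
```

The specific threshold is a judgment call; the habit that matters is looking at the list at all, since unlimited approvals quietly outlive the applications they were granted to.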
By the same token, this means that developers should ideally minimise default permissions and that founders should allocate sufficient resources to monitoring potential security risks.
That brings us to the behaviour of developers and founders more generally.
Many developers and founders from more traditional backgrounds are joining the Web3 industry, and they sometimes underestimate its risks by carrying over assumptions that no longer apply. In Web2 systems, backend servers, access controls, and centralised recovery mechanisms provide safety nets. In Web3, transparency means every deployed contract is visible, so vulnerabilities are exposed by default, and not all of them can be fixed post-deployment.
For end users, this means there is often no institutional buffer when something goes wrong. Authentication and authorization also blur together, as noted above. In Web2, proving identity rarely grants immediate power. In Web3, signing a message or transaction can authorize asset movement or even long-term control. End-users must therefore treat signatures as transfers of authority, not just logins. Builders, in turn, must design signing flows (the entire user interaction sequence) to communicate consequences clearly, rather than optimising purely for frictionless growth.
In Web3, responsibility has effectively moved upstream: security ought to be designed, tested, and validated before systems go live. End-users, for their part, must stay alert as well.
AI as a Multiplier for Risk and Defense
Most companies and developers now use all sorts of AI and LLM tools. AI accelerates existing dynamics rather than replacing them, and like any general-purpose technology, it has two sides. As far as security goes, used deliberately, AI may strengthen resilience; used carelessly, it magnifies existing vulnerabilities.
On the offensive side, AI enables more convincing phishing messages, automated impersonation, and scalable social engineering, making the trust-and-urgency attacks outlined earlier faster, cheaper, and harder to distinguish from legitimate communication. On the defensive side, it supports transaction monitoring, anomaly detection, and large-scale code analysis.
The risk arises when AI-generated code is trusted without being understood, a risk that grows by AI's very nature, not least with coding agents. According to Veracode research, 45% of AI-generated code fails security checks. Overreliance on automation can thus weaken foundational security practices and reduce scrutiny. For developers, blind trust in generated code can introduce vulnerabilities; for founders, the pressure to move faster with AI, in pursuit of efficiency and cost reduction, may outpace internal review processes.
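As a toy illustration of what scrutiny can mean in practice, the snippet below runs a few crude pattern checks over a generated code string before it reaches review. Real static-analysis tools go far deeper; the patterns and the example snippet here are our own illustrative assumptions, not a real scanner:

```python
import re

# Toy pre-review scan (not a real security tool): flag a few crude red
# flags in AI-generated code. The point is that generated code needs at
# least the same scrutiny as hand-written code, not that regexes suffice.
RISKY_PATTERNS = {
    "eval/exec on dynamic input": re.compile(r"\b(eval|exec)\s*\("),
    "shell=True subprocess call": re.compile(r"shell\s*=\s*True"),
    "hardcoded credential": re.compile(r"(api_key|secret|password)\s*=\s*['\"]"),
}

def scan(source: str) -> list[str]:
    """Return the names of risky patterns found in a code snippet."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(source)]

# Invented example of generated code with two obvious problems.
generated = 'api_key = "sk-demo"\nresult = eval(user_input)\n'
for finding in scan(generated):
    print(f"flagged for human review: {finding}")
```

A check this shallow would miss most of the 45% of flawed generated code cited above, which is precisely the argument for keeping human review and proper tooling in the loop.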
For users, on the other hand, sophisticated AI-driven scams become harder to distinguish from legitimate communication, exploiting the same cognitive shortcuts that social engineering has always relied on, this time at a scale that was previously impossible.
Our Conclusion
Security in Web3 is not solely a technical challenge. Even in the age of artificial intelligence, it remains a human challenge, and an imperfect one.
As far as Web3 infrastructure is concerned, both end users and builders operate without many traditional safety nets. Audits and transparency provide foundations, but they do not cover the entire tech stack at all times, nor can they compensate for unclear design, bad incentives, or careless interactions.
No security strategy is perfect.
When users suspect they have interacted with malicious code, immediate containment is the priority. Disconnecting from the internet, halting active processes, and avoiding further wallet interactions can limit the damage to an extent. Moreover, seed phrases should never be stored or reused on potentially compromised devices.
In serious cases, wiping the system and, later, migrating funds to a newly created wallet (after updated anti-virus products have run on the device) is highly sensible. Any wallet exposed during an incident should be treated as permanently unsafe.
Other sources
https://hacken.io/insights/trust-report/
https://trustwallet.com/blog/security/how-to-spot-and-avoid-crypto-wallet-scams-in-2025
