Governments Don’t Get Cyber Warfare — We Need Hackers: TED Talk Review + Actionable Guide
Are we still fighting 21st‑century cyber battles with 20th‑century playbooks?
That’s the uncomfortable question Rodrigo Bijou raises in his TED talk, “Governments don’t understand cyber warfare. We need hackers.” I’ve watched this gap up close: nation‑state tactics and criminal crews move in minutes and hours; bureaucracies move in quarters and years. In crypto, that mismatch isn’t academic — it’s your keys, your treasury, your runway.
Read on and you’ll walk away with clarity on cyber war vs. cyber warfare, why ethical hackers matter, how governments can work with them, and what you can do today to protect your assets and projects.
The real problem most institutions keep missing
Most governments still think about security like border control: build walls, guard perimeters, keep bad actors “out.” But modern attacks don’t respect borders, and the battlefield is software supply chains, cloud consoles, and mobile wallets.
Today’s attackers are fast, cheap, and global. Your defense loses if it’s slow, siloed, and paperwork‑driven.
Here’s what that mismatch looks like in the real world:
- Speed beats process. CrowdStrike’s 2024 report put the average “breakout time” (how fast an intruder moves after the first foothold) at around an hour. Procurement takes months. Patches take weeks. Attackers won’t wait for your change window.
- Small teams, big impact. Ransomware crews knocked out fuel deliveries in the Colonial Pipeline incident; NotPetya crippled global shipping and cost businesses billions. In crypto, lean teams pulled off the Ronin Bridge breach and the Wormhole exploit — both with outsized ripple effects.
- Supply chain blind spots. One compromised dependency can ship drainer code to thousands of apps. We’ve seen npm packages hijacked, CI/CD systems abused, and wallet libraries targeted — including the kind of package compromise that briefly injected malicious code into a widely used wallet connector in 2023.
- The human element stays king. Year after year, industry breach reports show most incidents involve social engineering or account misuse. Phishing, fake support, and “urgent” signature requests drain more wallets than flashy zero‑days.
- Public impact, private losses. When a major protocol or exchange goes down, it’s not just charts and headlines. It’s frozen withdrawals, cascading liquidations, and reputations torched — even for teams that didn’t do anything wrong.
For crypto users, builders, and investors, the result is simple: more exchange hacks, smart contract exploits, wallet‑drainer waves, and compliance headwinds — while policies and protections lag behind the threat.
What I’m bringing to this review and guide
I’m keeping this practical and easy to use. No jargon, no hand‑waving. Here’s the plan:
- Break down Bijou’s strongest points in plain English.
- Answer the common questions — like “What’s the difference between cyber war and cyber warfare?” — without a law degree.
- Layer in crypto‑specific takeaways you can use right now, even if you’re not a security engineer.
The goal isn’t to scare you. It’s to help you spot the patterns fast enough to act — whether you’re a retail trader, running a DAO, or shipping a new protocol.
What you’ll learn (and who this is for)
- A crisp summary of the talk’s big idea and why it matters now.
- Real examples from both the public sector and crypto — from bridge hacks to supply chain blowups — so the lessons stick.
- A quick glossary to keep terms straight (because words like “war,” “warfare,” and “crime” get mixed up online).
- A crypto security checklist you can run this week to reduce risk fast.
- A short FAQ that tackles the questions people actually search.
This is for traders, founders, DAO contributors, auditors, and anyone curious about how cyber power really works in 2025 — and how to stay one step ahead.
So, what does Bijou actually argue — and why are hackers central to fixing this? Let’s unpack that next and turn it into moves you can make right away. Ready to see how the “good hackers” fit into the picture?
What Rodrigo Bijou actually argues in his TED talk
Core thesis: governments don’t understand the speed, structure, and incentives of cyber warfare—so they need hackers at the table to defend the public and key systems.
Here’s the punchline I heard: attackers move like startups; governments move like committees. That mismatch lets small, motivated teams cause damage that looks “state-sized” without needing state budgets. Cyber warfare isn’t neat. It’s fast, fluid, incentive-driven—and if you’re not thinking like an attacker, you’re already late.
“Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.”
— John Lambert (Microsoft)
That quote echoes through the talk. Lists are checkboxes. Graphs are how real attacks flow—one weak node, one exposed credential, one dependency, and the chain lights up.
Why hackers? Because they think in attack graphs, edge cases, and incentives.
Hackers aren’t valuable because they “break things.” They’re valuable because they model how things get broken in the real world—and then help you close those paths.
- Attack-graph thinking: Map initial access → lateral movement → impact. This is how real breaches unfold, whether it’s a DeFi protocol or a power grid.
- Edge cases first: Hackers obsess over the weird input, the odd chain of calls, the misconfigured role—exactly where exploits live.
- Aligned incentives: Bug bounties pay for impact, not paperwork. Hackers ship proofs-of-concept in days; procurement ships PDFs in quarters.
- Community signal: Thousands of skilled eyes beat one internal checklist. Public disclosure programs and open collaboration catch more than secrecy ever will.
There’s data behind this. Verizon’s DBIR repeatedly shows social engineering and stolen creds as top entry points, while exploitation of known vulnerabilities spikes during mass events (think MOVEit). Hacker-powered programs report faster discovery and remediation cycles than traditional audits alone, and IBM’s breach reports keep underscoring the cost gap between “found early” and “found in the wild.” Speed wins. Hackers bring speed.
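That “lists vs. graphs” line is concrete enough to sketch in code. Here’s a toy illustration (mine, not from the talk): a breadth-first search over an access graph finds the multi-hop chain from a phishing email to user funds that no per-asset checklist would surface. Every node and edge here is hypothetical.

```python
from collections import deque

# Hypothetical access graph: "A -> B" means a foothold on A can reach B
# (a credential, a trust relationship, a shared dependency).
ACCESS = {
    "phishing_email": ["dev_laptop"],
    "dev_laptop": ["ci_runner"],
    "ci_runner": ["npm_publish_token"],
    "npm_publish_token": ["wallet_frontend"],
    "wallet_frontend": ["user_funds"],
}

def attack_path(start, target):
    """BFS: return the shortest chain of hops from start to target, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in ACCESS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A checklist scores each node in isolation; the graph shows the real route.
print(attack_path("phishing_email", "user_funds"))
```

Each individual node might pass its audit, yet the path from a single phish to user funds exists anyway. That's the graph mindset in one function.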
Examples, signals, and what to listen for in the talk
If you want to catch the real heartbeat of the talk, tune your ear to these patterns—and keep a mental link to how we build and protect crypto systems today.
- Asymmetric power: Small teams, huge impact.
- 2017 NotPetya took out global shipping, pharma, and logistics—billions in losses—launched from a tiny supply-chain foothold. The lesson isn’t “Russia bad,” it’s “one poisoned update can snowball into world-scale damage.”
- In crypto, a handful of lines in a bridge contract or a compromised validator key can vaporize nine figures overnight. You don’t need a battalion; you need a path.
- Supply chain risk: Your dependencies are part of your attack surface.
- The 2024 XZ Utils backdoor attempt showed how a patient attacker can try to smuggle a remote access payload into core Linux tooling via maintainership—caught only because an engineer noticed odd CPU usage and traced it back. That’s the graph mindset at work.
- In late 2023, a malicious update to Ledger’s ConnectKit briefly put countless Web3 apps at risk via a hijacked NPM dependency. One package, many blast radii. Crypto builders felt this in real time.
- SolarWinds remains the classic lesson: compromise the build system, inherit the trust of everyone downstream.
- Economics of exploits: Offense is cheap; defense is fragmented.
- Exploit brokers and private markets set real prices for 0‑days. Bounties set competing prices for disclosure. If your bounty caps at $10k while the black market pays $200k, you’ve mispriced your risk.
- HackerOne and Bugcrowd reports consistently show that high‑severity issues are found fastest when bounty scopes are wide and rewards are meaningful. Pay for impact; get impact.
- Open collaboration beats silos: “Security through obscurity” keeps defenders blind.
- Coordinated disclosure lets the good guys move together. Closed agencies and opaque vendors leave everyone reinventing patches.
- In crypto, public post‑mortems and live patch pipelines (think rapid timelock changes, circuit breakers) reduce repeat mistakes across the ecosystem.
As you watch, notice how speed, shared incentives, and real‑world threat modeling keep coming up. When he talks about agile defense, think bug bounties, red teams, and small, empowered squads that ship fixes—today, not next quarter. When he calls out legacy thinking, think hard about your own “security by policy” moments that never translated into working controls.
I keep coming back to one simple, uncomfortable truth the talk makes hard to ignore:
“We don’t lose because attackers are smarter. We lose because they’re allowed to be faster.”
So here’s the question that sets up everything that follows: if we’re going to get faster, we need the right words for the fight we’re in. Are we dealing with crime, warfare, or war? The difference isn’t academic—it changes the playbook. Let’s get those definitions straight next.
Cyber war vs. cyber warfare vs. cybercrime: clear definitions that matter
Words matter in security, because they decide budgets, headlines, and who shows up to help. Mix up “cyber war,” “cyber warfare,” and “cybercrime,” and you’ll either scare people into the wrong response or sleep on a strategic threat. I keep a simple mental model that maps to real decisions crypto teams and users make every week.
“Words shape responses. Call a ransomware hit ‘cyber war’ and your insurer may walk; call a state campaign ‘just a hack’ and you’ll under-resource the defense.”
Cyber warfare: ongoing tactics, not necessarily a declared war
What it is: The continuous use of cyber capabilities—espionage, disruption, influence, sabotage—by states or state‑sized actors. Think playbook and campaigns, not treaties or declarations. It’s persistent and often deniable.
- Typical intent: Strategic advantage (steal intel, position access, shape public opinion, quietly pre‑position for later).
- Signals you’ll see: Supply‑chain compromises, living‑off‑the‑land techniques, long dwell time, careful OPSEC, tiered infrastructure.
- Examples:
- Stuxnet sabotaging industrial systems.
- SolarWinds-style supply‑chain espionage at scale.
- NotPetya spreading from a Ukrainian target to global collateral damage.
- Crypto angle: State‑aligned groups laundering stolen assets through mixers, bridges, and cross‑chain hops to bankroll operations while testing defenses.
Useful frame: “Cyber warfare” is the operating environment—what powerful actors do every day below the threshold of declared war. See the high‑level overview on Wikipedia for a shared baseline.
Cyber war: state‑on‑state conflict with war‑like effects
What it is: A state‑level conflict conducted in or primarily through cyberspace where the effects are comparable to traditional warfare—widespread disruption, significant economic damage, or threats to life. It’s rare, heavily debated, and usually entangled with kinetic conflict.
- Typical intent: Coercion and strategic degradation of an adversary during or instead of kinetic operations.
- Signals you’ll see: Coordinated, sustained campaigns tied to geopolitical objectives, cross‑sector targets (energy, finance, communications), and national‑level incident coordination.
- Examples to study:
- Russia‑Ukraine 2022+: overlapping cyber operations and kinetic war; cyber as a component of war rather than a standalone “cyber war.”
- Arguments that NotPetya crossed war‑like thresholds due to global economic impact, even though it wasn’t formally declared.
- Crypto angle: Emergency sanctions, fast‑moving blacklists, and cross‑border seizure attempts. Expect compliance shockwaves across exchanges, custodians, and stablecoin issuers.
If you want a simple academic explanation that separates “warfare” (the how) from “war” (the political state of conflict), this primer from the New England Institute of Technology is a useful starting point: What Is Cyber Warfare?
Cybercrime: profit‑driven offenses in and through cyberspace
What it is: Illegal acts by individuals or groups seeking money or monetizable data. No geopolitical objective required. In crypto, this includes exchange intrusions, wallet drains, SIM‑swap‑enabled theft, market manipulation, ransomware, and “economic exploits.”
- Typical intent: Profit. Fast cash‑outs, pressure tactics, negotiations.
- Signals you’ll see: Ransom notes, quick on‑chain laundering through mixers/bridges, commodity malware, phishing kits, affiliate programs (“RaaS”).
- Examples:
- Colonial Pipeline ransomware and partial crypto seizure by DOJ.
- Ronin Bridge theft attributed to Lazarus Group; shows how crime and state interests can blur.
- Mango Markets “price manipulation” case—illustrates why legal classification matters.
- Data point: Year after year, the Verizon DBIR finds that the majority of breaches are financially motivated. In other words, most incidents you’ll face aren’t nation‑state sabotage—they’re criminals chasing payouts.
Why these lines change your response (and save you money)
Getting the label right isn’t academic—it changes what you do in the first hour.
- Insurance and contracts: “War exclusions” can void coverage if you call something war or an act of war. Treat a profit‑driven ransomware hit as cybercrime and keep your policy on your side.
- Law enforcement and regulators:
- Cybercrime: engage national cybercrime units, file reports, coordinate with crypto tracing firms like Chainalysis or Elliptic.
- State‑aligned campaigns: loop in national CERTs, follow CISA/Mandiant/Microsoft threat advisories, prepare for OFAC considerations if sanctioned entities appear on‑chain.
- Communications and markets: Overhyping a criminal exploit as “cyber war” can spook users and partners; underplaying a strategic campaign can damage trust when details emerge. Precision keeps credibility.
- Technical playbook:
- Cybercrime: contain, eradicate, recover; expect negotiation attempts and fast cash‑out via mixers/bridges—block and trace addresses quickly.
- Cyber warfare: assume persistence and multiple access paths; hunt for long‑term footholds, harden supply chain, rotate keys, and watch for follow‑on ops.
- Potential cyber war context: escalate internal command, prepare for sector‑wide impacts, coordinate disclosures across partners and infrastructure providers.
- Cost control: IBM’s latest Cost of a Data Breach report shows average breach costs keep rising. Misclassification slows response—and every hour inflates the bill.
A quick litmus test I use before I brief a team
- Who benefits? Fast money = crime. Strategic positioning or political timing = warfare/war context.
- How careful is the op? Commodity tools and loud exfil = crime. Custom tooling, patient access, clean infrastructure = warfare.
- How wide is the blast radius? Single organization = often crime. Cross‑sector cascading effects = warfare or war‑context.
- What do trusted sources say? Check advisories from CISA, Mandiant, Microsoft Threat Intelligence, and on‑chain attributions from Chainalysis.
- Any sanctions exposure? If yes, treat payment or contact with extreme caution and get counsel; read OFAC guidance.
Here’s the kicker: mislabeling isn’t random—it often happens because institutions still think in old categories and incentives. Want to know why that keeps happening and how ethical hackers consistently cut through the fog?
Why governments struggle—and how ethical hackers fill the gap
Bureaucracy vs. speed
Cyber attackers sprint. Governments file paperwork. That’s the gulf.
Procurement cycles take quarters or years—requests for proposals, compliance reviews, approvals, renewals. Attackers weaponize a new exploit over a weekend. The result is predictable: by the time a tool or policy ships, the threat has already pivoted. The U.S. GAO’s high‑risk reports have said this for years: critical systems remain exposed because modernization moves too slowly.
You can see the timing mismatch in the wild:
- Email server zero‑days (e.g., the 2021 Microsoft Exchange ProxyLogon wave): exploitation happened within hours of public advisories. CISA even issued an emergency directive as agencies scrambled to patch.
- WannaCry hit with an already‑patched SMB flaw; the fix existed, but operational inertia left hospitals and agencies unprotected.
- SolarWinds proved how a single vendor update can quietly become a beachhead across dozens of agencies before any committee meeting even lands on the calendar.
Layer on the talent gap and the picture gets worse. The latest ISC2 Cybersecurity Workforce Study shows a multi‑million‑person shortage globally. Public salaries can’t compete with the private sector or crypto-native firms, so the people who think like attackers rarely sit inside government war rooms. Meanwhile, exploitation windows measured in hours slam into patch windows measured in weeks. The Verizon DBIR keeps repeating the same story: time to compromise is fast; time to detection is slow.
And then there’s secrecy. Classification and legal risk (hello, CFAA) chill outside help. Without explicit vulnerability disclosure policies, researchers hesitate—afraid that an honest report could turn into a legal mess. That’s changing with efforts like CISA’s VDP mandate, but adoption and execution are uneven.
“You can’t committee your way out of a zero‑day.”
I feel that line in my bones. If you’ve ever watched a live exploit chain unfold while waiting for sign‑offs, you know how helpless that delay feels.
Misaligned incentives
Agencies optimize for “no headlines.” Hackers optimize for “find the bug.” Those goals collide.
- Blame avoidance vs. vulnerability discovery. No leader wants a breach on their watch. The safest bureaucratic move? Don’t touch anything risky. But vulnerability research—and the partial breakage that comes with testing—feels risky.
- Secrecy vs. collaboration. “Security through obscurity” is comfortable but weak. Open findings get reviewed and fixed faster. Closed findings rot.
- Compliance checklists vs. attacker reality. A system can pass audits and still fall to a well‑crafted phishing email or a typosquatted dependency. Think of the Twilio/Cloudflare SMS phishing wave—no spreadsheet control stands up to live social engineering without real‑world practice.
This is why ethical hackers matter. They hunt where attackers hunt: the edge cases, the weird inputs, the economic incentives. They’re not trying to keep a quarterly report green; they’re trying to break the thing today so users don’t lose tomorrow.
Working models that actually work
We don’t need to guess. There are models with a track record—fast, transparent, and cost‑effective.
- Vulnerability disclosure policies (VDP) and coordinated disclosure. Publish a clear “report here, safe harbor applies” policy. It sounds basic, but clarity unlocks free help. The U.S. federal government now requires VDPs (BOD 20‑01), and the UK’s NCSC guidance shows what “good” looks like. In crypto, teams that prominently link a VDP reduce “silent fixes” and get higher‑quality reports.
- Bug bounty programs. Pay for results, not promises. The DoD’s Hack the Pentagon found hundreds of real flaws quickly and cheaply compared to traditional assessments. In web3, Immunefi and HackerOne have routed thousands of reports. One standout: Polygon quietly fixed a critical consensus issue in 2021 after a white hat alert and paid multi‑million‑dollar bounties—averting a catastrophic loss of funds (coverage). That’s asymmetry working in your favor.
- Independent red teams and purple teaming. Simulate real attackers against your controls and people. CISA’s assessment services publish techniques that measurably improve detection and response. For crypto orgs, red teams that include social engineering and CI/CD intrusion often find the true “keys to the kingdom” faster than any code scan.
- Public‑private threat intel sharing. Information Sharing and Analysis Centers (like FS‑ISAC) and CISA’s JCDC reduce detection time when indicators of compromise move in hours, not weeks. In crypto, open Telegram/Discord intel rooms, RSS feeds from Chainalysis, and on‑chain monitoring services play the same role—if you actually wire them into alerts and playbooks.
- Open‑source security investments and rapid patch pipelines. Most stacks—government and crypto—run on open source. Funding maintainers and adopting tooling from OpenSSF (e.g., Sigstore, scorecards) plus canary releases and automatic rollback turns “patch paralysis” into a routine muscle. Security through obscurity loses; security through visibility and speed wins.
If you want proof that incentives beat bureaucracy, look at the numbers in bug bounty reports. Organizations repeatedly report faster median time‑to‑resolve and lower cost‑per‑critical compared to pentests, because hundreds of creative minds probe in parallel instead of a single contracted team on a schedule. And in crypto, the ROI is visceral: a single critical bounty can prevent a nine‑figure drain.
It’s not just tools—it’s posture. When a project signals, “We welcome reports, here’s safe harbor, here’s the bounty, here’s how we’ll credit you,” hackers show up. When a project signals, “Lawyers first,” attackers show up instead.
I’ve watched small teams out‑defend entire agencies by embracing this mindset. They run transparent open‑source practices, keep a living threat model, patch on Tuesdays and test on Wednesdays, and publish thoughtful postmortems. They move so quickly that even when they get hit, the blast radius stays small.
So here’s the uncomfortable truth: the gap isn’t going away. The question is whether we bridge it with incentives and community—or let it widen until the next headline writes itself.
Want the exact checklists I use for users, founders, DAOs, and exchanges—hardware wallets, bounties from day one, incident playbooks, circuit breakers? That’s next. Ready to harden what you control today?
How crypto people can apply the talk right now
Cyber threats don’t wait for committee meetings. Here’s exactly how to turn the big ideas into actions that protect your coins, your project, and your community—broken down by role so you can move today, not “after the next release.”
“In security, hope is not a strategy.”
For users
Your wallet is a one‑tap bank vault. Treat it like one.
- Use a hardware wallet for long‑term funds. Hot wallets are for spending, not storage. A $60 device can save a $60,000 portfolio.
- Kill phishing by changing the login game: turn on passkeys or security keys on exchanges and email. Google’s study found hardware security keys blocked 100% of targeted phishing attempts and stopped the vast majority of account takeovers. Source: Google Security Blog.
- Stop blind signing. Simulate every transaction. Use wallets that preview risks (try Rabby) and always click “simulate” before “confirm.” If you see a strange contract, pause.
- Revoke toxic approvals. Drainers thrive on unlimited token allowances. Audit and revoke approvals at revoke.cash monthly or after every new dApp.
- Lock down 2FA. Prefer passkeys or hardware keys over SMS codes. If SMS is all you’ve got, add a SIM‑swap PIN with your carrier and hide your phone number from public profiles.
- Seed phrase = crown jewels. Write it on paper or metal. Store in two physically separate places. Never type it into a website. Ever.
- Clamp app permissions on your phone and browser. No clipboard or screen recording for random apps. If a wallet asks for contact or file access, ask “why?” and say no.
- Assume the front‑end can lie. In 2023–2024, multiple front‑ends were hijacked to inject wallet drainers (think DNS/router attacks). If something feels off, type the URL by hand, verify on social channels, or interact with the contract from a trusted explorer.
Reality check: phishing remains the top retail killer. ScamSniffer tracked hundreds of millions in losses from wallet drainers and approval scams, with hundreds of thousands of victims in 2023 alone. The fix isn’t fancy—phishing‑resistant logins and transaction simulation stop most of it before it starts.
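If you want to see what “stop blind signing” looks like mechanically, here’s a minimal sketch that flags calldata granting an unlimited ERC‑20 allowance, the pattern drainers rely on. The selector 0x095ea7b3 and the 32‑byte ABI word layout come from the ERC‑20 standard; the helper name and the sample spender address are made up for illustration.

```python
# ERC-20 approve(address,uint256) has function selector 0x095ea7b3.
APPROVE_SELECTOR = "095ea7b3"
UINT256_MAX = 2**256 - 1

def is_unlimited_approval(calldata_hex: str) -> bool:
    """Flag calldata that grants an unlimited ERC-20 allowance.

    Layout: 4-byte selector, then two 32-byte ABI words
    (spender address, amount).
    """
    data = calldata_hex[2:] if calldata_hex.startswith("0x") else calldata_hex
    if len(data) != 8 + 64 + 64 or not data.startswith(APPROVE_SELECTOR):
        return False
    amount = int(data[8 + 64:], 16)  # last 32-byte word is the amount
    return amount == UINT256_MAX

# Unlimited approval to a made-up spender address:
unlimited = "0x" + APPROVE_SELECTOR + "00" * 12 + "ab" * 20 + "ff" * 32
print(is_unlimited_approval(unlimited))              # True: drainer-friendly
print(is_unlimited_approval(unlimited[:-2] + "fe"))  # False: not the max sentinel
```

Real wallets like Rabby do far more (full simulation, spender reputation), but even this one check catches the single most common drainer setup before you sign it.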
For founders and DAOs
Security is a product feature. Ship it from day one.
- Threat model before code. List assets (treasury, admin keys, oracles), actors (users, validators, MEV bots), and attack paths (price manipulation, governance capture, upgrade abuse). If it isn’t on paper, it won’t be in your tests.
- Independent audits are a starting line, not the finish. Budget for at least two firms and re‑audit after major changes. Rotate firms periodically to avoid shared blind spots.
- Launch a real bug bounty on day one. If you can’t patch fast, cap impact with time locks and pausers. Big targets like Wormhole now offer up to eight‑figure bounties via Immunefi—because paying researchers beats paying ransoms.
- Keys and roles, not heroes. Move admin power to multisigs (Safe) with role‑based access, spend limits, and timelocks. Rotate signers quarterly, require hardware keys, and enforce removal SLAs when someone leaves.
- Stage your rollouts with circuit breakers. Start with caps, rate limits, and kill‑switch modules. Maker’s delays, Compound’s pause guardian, and “canary deployments” are patterns that save treasuries in bad hours.
- Have an incident runbook you’ve actually rehearsed: who pages whom, where you meet, what gets paused, and which public statement template goes out. Include law enforcement and exchange contacts for fast blacklist coordination.
- Harden governance. Use delayed upgrades, optimistic governance with vetos, and delegate diversity. One rushed “emergency” vote has wrecked more protocols than zero‑days.
- Front‑end is part of your threat surface. Lock DNS, enable registry locks, enforce hardware‑key GitHub org policies, sign builds, and require reviews. The 2023 Ledger Connect Kit npm compromise showed how a single package update can flip your UI into a drainer.
- Publish a friendly disclosure policy (SECURITY.md, PGP, response timelines, safe harbor, reward ranges). Make it easy—and safe—for good hackers to help you.
Pro tip: scoreboard what matters—time to detect, time to contain, time to patch. If your median is measured in days, your next incident will be measured in headlines.
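The caps-and-circuit-breakers idea above reduces to a few lines of logic. This is an off-chain Python sketch of the pattern, not production Solidity: outflows accrue in a rolling window, and anything that would push the window over its cap trips the breaker until someone manually resets it. The cap, window, and class name are all hypothetical.

```python
from __future__ import annotations
import time

class CircuitBreaker:
    """Trip when outflows in a rolling window would exceed a hard cap."""

    def __init__(self, cap: float, window_seconds: float):
        self.cap = cap
        self.window = window_seconds
        self.outflows: list[tuple[float, float]] = []  # (timestamp, amount)
        self.tripped = False

    def allow(self, amount: float, now: float | None = None) -> bool:
        if self.tripped:
            return False
        now = time.time() if now is None else now
        # Forget events that have aged out of the window.
        self.outflows = [(t, a) for t, a in self.outflows if now - t < self.window]
        if sum(a for _, a in self.outflows) + amount > self.cap:
            self.tripped = True  # pause everything; require a manual reset
            return False
        self.outflows.append((now, amount))
        return True

breaker = CircuitBreaker(cap=1_000_000, window_seconds=3600)
print(breaker.allow(400_000, now=0))   # True
print(breaker.allow(400_000, now=10))  # True
print(breaker.allow(400_000, now=20))  # False: would breach the cap, breaker trips
print(breaker.allow(1, now=30))        # False: stays tripped until reset
```

The design choice that matters is the last line: once tripped, even a 1-unit outflow is refused until a human looks. In a bad hour, that asymmetry is what turns a nine-figure drain into a capped loss.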
For exchanges and protocols
Act like you’ve already been breached. Then make it boringly hard to turn a foothold into a catastrophe.
- Assume breach. Zero‑trust access, device posture checks, hardware‑backed MFA for all employees, and short‑lived credentials. Production access must be just‑in‑time and recorded.
- On‑chain anomaly detection. Alert on unusual approvals, new privileged role calls, liquidity pool disturbances, oracle swings, and large outflows. Tools worth exploring: Forta, OpenZeppelin Defender, custom Tenderly alerts.
- Tabletop exercises every quarter. Run scenarios: DNS hijack of your front‑end, compromised signer, governance take‑over, dependency backdoor (think XZ Utils), and mass phishing of KYC users. Measure response time, not vibes.
- Supply chain security for real:
- Pin dependencies and monitor for typosquats.
- Adopt Sigstore signing and maintain SBOMs (CycloneDX).
- Aim for OpenSSF Scorecard best practices and SLSA levels.
- Harden CI/CD with OIDC, no long‑lived secrets, and mandatory reviewers.
- Cold/hot wallet segregation with automated drains on anomaly. Withdrawal rate limits, velocity rules, and human‑in‑the‑loop for flagged events.
- Resilience engineering. Chaos drills on key systems; backup and restore tests for wallets, order books, and node infra. Backups that aren’t tested are fiction.
- Clear, rewarding disclosure policy. Public security.txt, 24/7 contact, safe‑harbor language, and transparent bounty tiers (HackerOne/Immunefi). Projects that published fast, paid fairly, and credited researchers recovered reputationally faster after incidents.
- Comms plan for users. Pre‑written templates for pause notices, reimbursement steps, and phishing warnings. During Curve’s and Balancer’s front‑end incidents, the fastest, clearest comms saved users from second‑order losses.
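One of the supply-chain items above, “monitor for typosquats,” is easy to prototype: compare every declared dependency against the allowlist of packages you actually use and flag near-misses by edit distance. The package names below are hypothetical; a real pipeline would also watch registries for new lookalikes and maintainer changes.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

ALLOWLIST = {"ethers", "viem", "wagmi", "web3"}  # packages you really depend on

def flag_typosquats(declared, max_dist=2):
    """Return (suspect, lookalike) pairs: names near, but not in, the allowlist."""
    flags = []
    for name in declared:
        if name in ALLOWLIST:
            continue
        for good in ALLOWLIST:
            if edit_distance(name, good) <= max_dist:
                flags.append((name, good))
    return flags

# "etherss" is one keystroke from "ethers"; "lodash" is merely unknown.
print(flag_typosquats(["ethers", "etherss", "lodash"]))
```

Run this in CI against your lockfile and fail the build on any flag. It won’t catch a hijacked legitimate package (that’s what signing and SBOMs are for), but it cheaply blocks the lookalike-name class of attack.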
One last thing before you click away: want a simple way to tell whether a threat is petty cybercrime or part of cyber warfare—and what that means for your next move? I’m answering that next, along with the questions I get daily about hackers, bounties, and whether you should worry right now.
FAQ: straight answers inspired by the talk (and common search questions)
What’s the difference between cyber war and cyber warfare?
Cyber warfare is the ongoing use of tactics like espionage, sabotage, and information ops. Think continuous operations: phishing campaigns, zero-days, supply‑chain tampering, data theft, DDoS, and infrastructure disruption—often run by or aligned with states.
Cyber war is when those tactics escalate to a state‑on‑state conflict with war‑level impact. That’s a full campaign with strategic objectives and coordinated waves of cyber operations that produce effects comparable to kinetic conflict.
Practical way to tell them apart:
- Scope and intent: Warfare is persistent activity; war is an escalated, strategic use of those tools between states.
- Impact bar: Warfare hurts; war reshapes national-level outcomes.
Example: NotPetya (2017) is classic cyber warfare—a state‑linked wiper that spilled globally, crippling Maersk’s shipping and Merck’s production with damages estimated in the billions. It wasn’t declared war, but the effect was nationwide and cross‑industry. By contrast, a hypothetical months‑long, state‑on‑state campaign targeting grid operators, telcos, and financial rails in coordinated fashion would hit the “cyber war” threshold.
What is cyber warfare and should we be worried about it?
Yes—because it hits what we all rely on: energy, finance, logistics, healthcare, communications, and elections.
- Real-world proofs: Ukraine’s power grid attacks (2015–2016) caused blackouts; WannaCry (2017) disrupted the UK’s NHS; Colonial Pipeline (2021) ransomware led to fuel shortages on the U.S. East Coast.
- Collateral damage is normal: Malware doesn’t respect borders. NotPetya started in Ukraine but wrecked networks worldwide within hours.
- The human factor is the front door: Verizon’s 2024 DBIR found the human element involved in the majority of breaches, with social engineering and stolen creds still dominant.
- It’s expensive: IBM’s 2024 Cost of a Data Breach report pegs the global average breach at around $4.88M—not counting market hits, regulatory fines, or reputational damage.
Bottom line: you don’t need to be a target to get hit. Service outages, tainted software updates, or compromised vendors can splash anyone, including crypto platforms and wallets.
Are hackers criminals—or can they help?
Hackers are problem-solvers. Some go rogue. Many operate ethically and are the reason critical bugs get fixed before they blow up.
- Proof from the field: The U.S. DoD’s first “Hack the Pentagon” (2016) uncovered 138 valid vulns in weeks—something bureaucracy couldn’t match on its own.
- Bounty programs work: Google’s Vulnerability Reward Program has paid tens of millions of dollars to researchers since 2010; Microsoft reports similar scale across its programs. Those payouts map directly to patched risks.
- Crypto’s best defenses: Immunefi and independent auditors have helped projects catch logic bugs and key‑management flaws before launch. When paired with staged rollouts and circuit breakers, that collaboration prevents nine‑figure disasters.
Hire hackers before you meet them as adversaries.
How does this relate to crypto?
Crypto runs on public, composable systems—open code, open infrastructure, open dependencies. That’s a superpower for innovation and an open window for attackers. The same asymmetric dynamics Rodrigo Bijou talks about show up everywhere in our space.
- State-linked actors: The Ronin Bridge hack ($625M, 2022) was attributed to North Korea–aligned groups—small team, massive impact.
- Supply-chain risk: The 2023 Ledger ConnectKit incident showed how a compromised dependency can poison countless apps in minutes. Trad tech has SolarWinds and 3CX; crypto has malicious npm/PyPI packages and compromised build pipelines.
- Protocol logic & tooling: Curve’s 2023 exploit traced to a Vyper compiler bug; Wintermute’s loss (2022) stemmed from a vanity address tool issue. Attackers think in edges and weird paths—exactly the mindset ethical hackers bring to prevention.
- User targeting at scale: Phishing and wallet-drainer kits evolve weekly. Per multiple 2024 industry reports, social engineering remains the cheapest, highest-ROI path to your assets.
Translation: If you hold keys, ship contracts, or run infra, you’re on the same chessboard as state actors, criminal crews, and bored teenagers with AI‑assisted tooling. Plan accordingly.
I want to watch the talk—where?
Search for “Rodrigo Bijou governments don’t understand cyber warfare TED” or head to ted.com/talks and pop his name and title into the search. I recommend turning on captions and taking notes—there are a few lines you’ll want to share with your team.
Further reading and tools
I keep a living list of resources I trust—threat intel, security tooling, audit references, and crypto‑specific checklists.
- Use case: Need to build a disclosure policy fast? You’ll find templates and bounty program examples.
- Use case: Want a threat model starter for a DeFi protocol or wallet? I linked battle‑tested frameworks you can adapt in an afternoon.
- Use case: Tight on time? There’s a one‑pager “minimum viable security” checklist for solo founders and small DAOs.
Quick question before you move on: if a stranger DMed you a “must‑update” wallet link right now, would your current setup stop you—or would you click? In the next section, I’ll show you how to make the safe choice automatic with a 10‑minute, zero‑theory checklist.
From big idea to next steps: turning the talk into action
Users: real protection you can set up in 60 minutes
I’ve seen too many portfolios wiped not by “elite” zero‑days, but by ordinary mistakes and rushed clicks. Here’s a fast, no‑BS setup that blocks most real‑world attacks.
- Split your funds: use a hot wallet for daily spending and a hardware wallet (Ledger, Trezor, Keystone) for savings. Treat the hardware wallet like your vault.
- Turn on transaction simulation: use tools like Rabby, Wallet Guard, or your wallet’s built‑in simulator to see what a signature will actually do before you approve it.
- Stop blind signing: if you can’t explain the permission, don’t sign. Most wallet drainers exploit generic approvals, not fancy exploits.
- Revoke risky allowances: check and remove stale token approvals with revoke.cash or Etherscan’s token approvals page. Do this monthly.
- Phishing armor:
- Use passkeys or a hardware security key for exchanges and email. Avoid SMS 2FA.
- Never connect wallets from links in DMs, Discord, or email. Go direct to the site or your saved bookmark.
- Use alias emails for different platforms to reduce targeted phish (e.g., SimpleLogin/AnonAddy).
- Backups that actually work: write your seed on paper or steel, store offline in two places, and never photograph it. Consider Shamir backups for higher value setups.
- Lock down devices: keep OS, browser, wallet, and firmware updated. Remove unused wallet browser extensions. Turn on auto‑updates.
- Withdraw allowlists: on exchanges, set withdrawal address allowlists and cooling‑off periods. Attackers hate waiting.
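That last item (allowlists plus a cooling-off period) is worth seeing concretely. Here's a toy Python model of the idea, not any exchange's real API: a newly added withdrawal address can't be used until a delay elapses, so a phished session still buys you a day to notice and react.

```python
import time

COOLING_OFF_SECONDS = 24 * 60 * 60  # 24-hour delay before a new address is usable

class WithdrawalAllowlist:
    """Toy model of an exchange-style withdrawal allowlist with a cooling-off period."""

    def __init__(self, clock=time.time):
        self._added_at = {}   # address -> timestamp when it was allowlisted
        self._clock = clock   # injectable clock, so the logic is testable

    def add(self, address: str) -> None:
        # Adding an address starts the cooling-off timer; it is NOT usable yet.
        # setdefault means re-adding an address never resets its timer.
        self._added_at.setdefault(address, self._clock())

    def can_withdraw_to(self, address: str) -> bool:
        added = self._added_at.get(address)
        if added is None:
            return False  # never allowlisted
        return self._clock() - added >= COOLING_OFF_SECONDS
```

The point of the pattern: an attacker who steals your session must add their own address and then wait out the delay, which is exactly the window in which withdrawal alerts reach you.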
Phishing kills more portfolios than zero‑days.
Why this matters: independent researchers and incident responders keep seeing the same pattern: wallet drainers and fake signing prompts account for a huge share of crypto losses each year. Public data from firms like Chainalysis and CertiK consistently points to social engineering and permission misuse as top loss drivers. One recent example: a compromised front‑end library in a popular wallet connector led users to approve malicious transactions across multiple dapps within minutes. The fix wasn’t “be a genius”; it was “don’t blind sign,” “simulate first,” and “keep approvals tight.”
10‑minute routine: update devices, revoke stale approvals, and test your seed phrase recovery on a spare device (offline). That habit alone can save your future self.
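The "revoke stale approvals" part of that routine boils down to one question: which approvals are old enough to revisit? Here's a minimal sketch assuming you've exported your approvals (token, spender, grant date) from a dashboard like revoke.cash; the field names are illustrative, not a real export format.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)  # revisit anything older than a month

def stale_approvals(approvals, now=None):
    """Return the approvals older than STALE_AFTER.

    `approvals` is a list of dicts with 'token', 'spender', and 'granted_at'
    (a datetime). This mirrors what you'd export from an approvals dashboard;
    the exact shape here is an assumption for illustration.
    """
    now = now or datetime.now()
    return [a for a in approvals if now - a["granted_at"] > STALE_AFTER]
```

Anything this flags is a candidate for revocation: if you can't remember why a spender still has access to your tokens, it shouldn't.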
If you’re a builder or leader: turn security into a product feature
Security isn’t a compliance checkbox—it’s a market advantage. Teams that ship fast with guardrails win trust and survive volatility. Here’s a practical blueprint I recommend and use.
- Budget it: dedicate 5–10% of engineering time to security work every sprint. It’s cheaper than a single incident. (IBM’s latest Cost of a Data Breach report is a sobering reminder.)
- Launch a real bounty: publish scope, rewards, and SLAs on Immunefi or HackerOne. Pay fast. Signal that good hackers are welcome.
- Recurring audits, not one‑offs: audit before mainnet, after any upgrade, and after dependency changes. Pair specialist firms with community contests (e.g., Code4rena) to cover breadth and depth.
- Publish a disclosure policy: serve a security.txt file at /.well-known/security.txt (RFC 9116) plus a clear vulnerability intake email or form. Thank and credit researchers by default.
- Keys and controls that scale:
- Use multi‑sig for admin with distinct devices and people. Add timelocks for risky ops.
- Build circuit breakers: pause guardians, rate limits, and per‑function caps. Practice pausing on testnets.
- Rotate keys on schedule and on role changes. No shared seed phrases ever.
- Ship safer by default:
- Canary releases and staged rollouts. Start with low caps and increment with monitoring.
- Formal verification for high‑impact code paths; add fuzzing with tools like Echidna and Foundry.
- Automated static analysis (Slither, Semgrep) in CI with break‑on‑fail rules.
- Monitor what matters: set on‑chain alerts with Forta, OpenZeppelin Defender, and Tenderly. Watch for abnormal outflows, new approvals, owner changes, and oracle swings.
- Secure the supply chain: pin dependencies, verify builds (consider SLSA levels), and sign artifacts with Sigstore. Lock down CI secrets and require code owner reviews.
- Practice the bad day: run quarterly tabletop exercises. Write an incident runbook with decision trees, comms templates, and legal/PR escalation paths. Fast, honest comms save reputations.
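The "timelock plus pause guardian" combination in the keys-and-controls list above deserves a closer look, because its asymmetry is the whole point: risky admin operations are deliberately slow, while stopping damage is instant. Here's a toy Python model of that pattern; real deployments implement it on-chain (e.g., with contracts along the lines of OpenZeppelin's timelock), so treat this purely as a sketch of the logic.

```python
import time

class TimelockedAdmin:
    """Toy model of the 'timelock + pause guardian' pattern for admin operations."""

    def __init__(self, delay_seconds: int = 48 * 3600, clock=time.time):
        self.delay = delay_seconds
        self.clock = clock    # injectable clock for testability
        self.queued = {}      # op_id -> eta (earliest allowed execution time)
        self.paused = False   # the guardian can flip this instantly

    def queue(self, op_id: str) -> float:
        # Risky ops must be announced ahead of time; publish the ETA so
        # users and monitors can see what's coming and object.
        eta = self.clock() + self.delay
        self.queued[op_id] = eta
        return eta

    def pause(self) -> None:
        # Pausing is deliberately NOT timelocked: stopping damage must be fast.
        self.paused = True

    def execute(self, op_id: str) -> bool:
        if self.paused:
            return False
        eta = self.queued.get(op_id)
        if eta is None or self.clock() < eta:
            return False  # unknown op, or the delay has not yet elapsed
        del self.queued[op_id]  # each queued op can execute at most once
        return True
```

Notice that `pause()` beats everything, including already-queued operations: a guardian who spots an attacker's queued "upgrade" can freeze it before the delay expires.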
Why this works: independent data from platforms that track exploits shows the same trend—organizations that welcome ethical hackers, run bounties, and respond quickly lose less, recover faster, and keep user trust. Paying a researcher five figures is a bargain compared to a nine‑figure exploit and regulator heat.
Make your move today
Pick one step and do it now.
- Users: enable transaction simulation and revoke stale approvals today. Then set up a hardware wallet this week.
- Builders: publish a disclosure policy and security.txt, and open a bounty scope. Book your next audit with a concrete date—not “soon.”
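For builders, the security.txt step is genuinely a ten-minute task. A minimal file (per RFC 9116, with placeholder URLs and addresses you'd swap for your own) looks like this:

```text
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security-policy
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```

`Contact` and `Expires` are the required fields; the rest tell researchers where your policy lives and that you credit reports. Ship it today, then build the bounty scope around it.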
We can’t secure modern systems with outdated methods. Bring skilled hackers into your process, turn security into a daily habit, and ship with guardrails. The difference between a close call and a catastrophe usually comes down to what you set up before the attack—not after.