Andreas Brekken (Medium @abrkn) Review Guide: Everything You Need To Know + FAQ
Is Andreas Brekken’s Medium actually worth your time—or just another loud crypto voice?
I’ve read him for years as part of my daily research ritual, and I keep coming back for one reason: he tests things in the real world. If you want signal over noise, that matters. In this guide, I’ll show you how to get the most value from his posts, what to read first, what to be skeptical about, and how I personally use his insights for research on Cryptolinks.com.
You’ll walk away knowing who he is, what he covers, how reliable his work is, and a simple way to read his content fast without missing the good stuff.
Why this matters right now
Crypto is fast, messy, and full of strong opinions. You want unfiltered insight. You don’t want to be misled by speculation dressed up as “research,” or spend hours on posts that don’t help you make a decision today.
The problems most readers run into
- Signal vs noise: Medium is packed with hot takes and second-hand summaries. It’s hard to spot work that’s actually grounded in tests and data.
- Outdated details: Protocols and wallets move quickly. A post from even two releases ago can have configs or commands that no longer apply.
- Hidden bias: Some writers sell without saying they’re selling. Others cherry-pick edge cases to make a point. You need a way to sanity-check both.
- Time sink: Long posts with no clear method or metrics waste your time. Readers skim by default—research shows people scan online content in an “F-pattern” and miss buried details unless structure is clear (Nielsen Norman Group).
Put simply: you want practical takeaways you can use today, not theory that reads well but breaks when you try it.
What I’ll help you do instead
I’ll break down Andreas Brekken’s writing style, strengths, blind spots, and the practical value of his posts. I’ll point you to the most useful pieces, show you how to scan for methods and results in under a minute, and share a quick trust checklist you can run before you act on anything.
Great crypto writing shows its work. If there’s a clear setup, metrics, and failure modes, it’s useful—even when you disagree with the conclusion.
What this guide covers (and how to use it)
Who this guide is for
- Builders and devs: You want hard data, not marketing. You care about reproducible results and edge cases.
- Power-users: You actually run nodes, open channels, and move funds. Reliability and fees matter more than slogans.
- Researchers and curious investors: You need clean input for your models and decisions, not vibes.
What you’ll get
- A quick profile of what Andreas tends to test and how he reports it
- A topic map so you don’t wander aimlessly on his Medium
- A reading path by skill level so you start in the right place
- A trust-and-bias checklist you can run in two minutes
- Pros, cons, and a straight verdict so you know when to read and when to skip
- A short FAQ that answers the questions people actually ask before following him
Why I think his posts can be worth it
The most useful crypto content tends to include:
- Reproducible steps: enough detail to try it yourself
- Numbers: fees, failure rates, latency, resource usage
- Context: versions, dates, hardware, and network conditions
- Failures: what broke and how it was fixed (or why it couldn’t be)
That structure lines up with how people judge credibility online: specificity, evidence, and transparency build trust (Stanford Web Credibility Guidelines). When a post checks those boxes, I treat it as field intel and feed it into my research pipeline on Cryptolinks.com/news.
What to watch out for on Medium in general
- Old config landmines: Copying commands from a post that predates a major wallet or node release can cause errors—or worse, data loss. Always check dates and version numbers.
- Benchmarks without method: If you don’t see the setup, the benchmark isn’t actionable. Treat unrepeatable results as opinion.
- Cherry-picked success: Real-world routing and wallet behavior is messy. If a piece ignores failure cases, be cautious.
Where this guide starts you
First, I’ll set expectations about what you’ll find on @abrkn—tone, depth, and the kinds of tests he actually runs. Then I’ll show you how to scan his posts in 60 seconds for value, and how to separate evergreen lessons from outdated specifics.
Ready to see who Andreas is and why his testing style cuts through fluff? That’s up next—want the quick profile first or jump straight to what he actually writes about?
Who is Andreas Brekken and why should you care?
If you’re tired of crypto opinions that never touch a command line, Andreas Brekken is the antidote. He’s a long-time Bitcoiner, engineer, and entrepreneur who prefers running live systems to debating them. His Medium (@abrkn) reads like a lab notebook: real setups, real money, real failure modes, and the kind of blunt commentary that doesn’t flinch when things break.
“In crypto, results beat press releases.”
That’s the vibe. It’s not cozy. It is useful.
Background and credibility at a glance
- Builder-first lens: He writes after running nodes, wallets, and services himself. Less theory, more execution.
- Longevity in the space: Years of tinkering across Bitcoin infrastructure and payments give him pattern recognition most writers don’t have.
- Clear methodology: You’ll often see the setup, constraints, metrics, and what failed—so you can try it yourself or adjust for your stack.
He’s not trying to be everyone’s favorite. He’s trying to be right—or at least honest about what happened on his machine and in the wild.
What sets him apart
- He tests at meaningful scale: Instead of “I sent $5 once,” expect channel-heavy, multi-wallet, multi-route experiments, including stress tests that expose edge cases you actually hit in production.
- He pays in skin, not adjectives: He’ll burn fees, open channels, and hammer payment flows to see where things choke—routing, liquidity, fees, or wallet UX.
- He publishes the warts: Screens, logs, error codes, dead ends. That candor is gold if you’re making decisions that affect users, not just bag bias.
A few concrete examples of the kind of work you’ll see
- Lightning reliability under pressure: He’s famously run high-capacity setups and documented success rates, fee quirks, and why mid-sized payments fail—things like insufficient inbound liquidity, channel reserves, and route hints.
- Wallet UX reality checks: Expect notes on backup flows, fee estimation, stuck payments, and recovery paths. These write-ups highlight where “works in a demo” turns into support tickets.
- Payment-rail trade-offs: He compares on-chain vs. Lightning vs. third-party rails with real numbers, not slogans, which helps you pick the right tool for the job.
What I appreciate: this isn’t cherry-picked success. If something fails 3 out of 10 times, he’ll say so—and dig into why.
Why his lens matters (and how it aligns with independent research)
Hands-on testing like this lines up with what others have measured:
- Cost vs. reliability is real: Research on Lightning routing shows a trade-off between cheap routes and successful delivery. See Pickhardt and Richter’s work on payment flow optimization (arXiv:2107.05322) for a formal take on what Brekken’s logs make obvious in practice.
- UX is the hidden bottleneck: Wallet teams such as ACINQ (Phoenix) have written about simplifying liquidity and splicing to reduce failure points—echoing the same pain Brekken documents from the user side.
- Larger or multi-hop payments still need care: Community measurements and operator reports across 2019–2024 note higher failure rates for bigger payments and jam-prone routes—precisely the edge cases that show up in his experiments.
In short: the gritty details he surfaces aren’t one-offs—they map to known challenges and active research.
How he tests (so you can trust the takeaways)
- Defined setup: Software versions, configs, and environment upfront so you can reproduce the same conditions.
- Clear constraints: He’ll state limits like channel sizes, fee caps, wallet versions, or network peers.
- Observable output: Logs, screenshots, and metrics—latency, attempts, success rates, fee paid vs. quoted.
- Actionable post-mortems: Not just “it failed,” but “here’s where it failed and what might fix it.”
What to expect tone‑wise
Direct, technical, sometimes spicy. No hand-holding, no PR gloss. If you want real-world tests over marketing, you’ll vibe with it. If you prefer tutorials, you might want to warm up elsewhere and come back ready to experiment.
Curious which topics he hits most—Bitcoin infrastructure, payment rails, wallet UX, privacy—and where the must-read experiments are hiding? That’s exactly what I’m about to break down next. Want the signal without the guesswork?
What he writes about on Medium (@abrkn)
Expect unapologetically practical posts about Bitcoin infrastructure, Lightning payments, wallet UX, reliability under real load, and privacy trade-offs. It’s less think-piece, more field report. When something breaks, he shows the logs—not just the vibe.
Common themes you’ll actually see
- Running and evaluating nodes and payment channels: He's known for stress-testing Lightning—opening channels, pushing volume, measuring failure modes like “temporary channel failure,” “insufficient capacity,” and “fee too low.” One of his most referenced experiments was spinning up a top-capacity node years ago, then documenting why capacity alone didn’t guarantee smooth routing. The lesson aged well: liquidity placement, fee policy, and rebalancing matter more than bragging rights.
- Wallets, fees, reliability, and UX trade-offs: He’ll try multiple wallets across different network conditions (low mempool vs. fee spikes), note how long payments take, what error messages real users see, and which defaults quietly hurt success rates. Think: small vs. large invoices, single-hop vs. multi-hop routes, and how base fee + ppm settings impact payment success.
- Security and privacy for everyday users: Address reuse, channel probing, invoice metadata, LNURL, seed backups, and how your wallet’s convenience features might leak more than you think. He often shows how a “just works” flow can increase your fingerprint—even if you never publish a single tweet about your setup.
- Hard-nosed critiques of projects and services: If an exchange API flakes during peak volume or a node implementation regresses on reliability, he’ll say it straight and show what he measured. It’s not drama for clicks; it’s “I ran it, here’s the chart.”
- Occasional market structure commentary: Liquidity fragmentation, stablecoin flows, fee market reality, and what these mean for payments today versus the pitch decks. Think sober, not sensational.
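The base fee + ppm arithmetic he keeps returning to is worth having in your head. A minimal sketch (the formula is standard Lightning channel policy; the example numbers are made up):

```python
def channel_fee_msat(amount_msat: int, base_fee_msat: int, ppm: int) -> int:
    """Routing fee one channel charges: a flat base fee plus a
    proportional part, where ppm is parts-per-million of the amount."""
    return base_fee_msat + (amount_msat * ppm) // 1_000_000

# A 100,000-sat (100,000,000 msat) payment through a hop charging
# 1 sat base fee + 200 ppm:
fee = channel_fee_msat(100_000_000, 1_000, 200)
# 1,000 + 20,000 = 21,000 msat, i.e. 21 sats for this single hop
```

Multiply that across every hop in a route and you can see why fee policy, not raw capacity, decides whether mid-sized payments clear cheaply.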
“Real users don’t care why a payment failed—they just feel the failure.”
Format and depth: how the posts feel in your hands
These aren’t fluffy summaries. They read like lab notes with an attitude. You’ll usually get:
- Environment upfront: node implementation and version (e.g., LND vs. Core Lightning), Bitcoin Core version, hardware specs, bandwidth, and sometimes topology.
- Clear test plans: number of payments, invoice sizes, time windows, routing constraints, and fee policies. You see what was attempted, not just what “should” happen.
- Raw outcomes: success/failure rates, error classes, time-to-first-success, number of retries, and any stuck HTLCs or liquidity imbalances that appear.
- Actionable takeaways: fee policy tweaks, channel sizing strategy, wallet settings to change, and when to stop fiddling and open a better-placed channel.
The best part: his numbers often rhyme with independent work. For example, research into routing and reliability (see Pickhardt & Richter’s “Optimally Reliable & Cheap Payment Flows on the Lightning Network,” arXiv:2107.05322) underscores why payment size, route quality, and liquidity placement dominate success. Industry research hubs like BitMEX Research on Lightning have also documented similar pain points over the years. When his experiments surface friction, it’s usually not just him.
Cadence and recency: what to watch for before you copy
He publishes in bursts. Some posts are evergreen on concepts; configs can age fast. Protocols evolve, defaults change, and what was flaky in 2019 might be smooth today—or the opposite during a fee surge.
- Check the date and versions: match his setup to current releases (Bitcoin Core, LND/Core Lightning, wallet versions). A single parameter change can flip results.
- Scan for dependency shifts: fee markets, mempool pressure, channel reserve rules, and liquidity tooling (like MPP/AMP) dramatically impact reliability.
- Look for follow-ups: sometimes he revisits the same problem months later after a fix or a policy change; the delta is where the gold is.
What you won’t find
- No price calls, no hand-holding: it’s not a trading newsletter or a beginner tutorial. It’s “this works, this broke, here’s the data.”
- No fluff: if a claim can’t survive a weekend testnet binge or a mainnet payment storm, it won’t make the cut.
Want the fastest path to value without sifting through everything? In the next section, I’ll show you exactly which posts to read first based on your goals—new to crypto payments, building, or power-using. Which camp are you in?
What to read first (and how to read smarter)
I don’t have time to be wrong on crypto infrastructure, and I’m guessing you don’t either. When I open Andreas’s Medium, I start with the posts that measure something real and end with a decision I can use today.
“In God we trust; all others must bring data.”
— commonly attributed to W. Edwards Deming
Start here: high-signal posts that pay off fast
- End-to-end payment runs: Look for experiments where he sends actual payments across different wallets and routes, logs failures, and shows error messages.
  Why it matters: You’ll see real failure modes like “invoice expired,” “insufficient capacity,” or “route not found,” which directly map to your own UX risks.
- Node and routing stress tests: Posts that push channels under load, include version numbers, and show what broke.
  Why it matters: Reliability under pressure is where most marketing claims fall apart.
- Wallet teardowns with metrics: Anything that reports time-to-first-payment, fee estimates vs. actuals, and restore-from-backup friction.
  Why it matters: Fees and friction drive churn. If it’s clunky in his test, expect worse in the wild.
- Privacy checks with measurable leakage: Posts that demonstrate what your counterparty, a node, or a block explorer could learn about you.
  Why it matters: “Private enough” isn’t a strategy. Numbers focus your risk decisions.
- Service critiques with uptime/latency: If he shows request timings, failure rates, or queue behavior during mempool spikes, read it first.
  Why it matters: You trade time and trust every time you hit “send.” Latency and failure patterns are your true cost.
Quick trick: On his Medium, search for keywords like “routing,” “fees,” “wallet,” “channel,” “privacy,” “backup,” “invoice,” “mempool”. Open the ones with graphs, screenshots, or logs. If there’s no method and no data, skip or treat as commentary.
Reading path by persona
- New to crypto payments
  - Start with posts that explain reliability in plain language and show failed vs. successful payments side-by-side.
  - Look for real values: final fee paid, time to complete, number of retries.
  - Bookmark one practical rule from each read, e.g., “If the mempool is congested, expect on-chain fees to spill into channel opens and closures.”
- Builders and devs
  - Prioritize experiment logs with version numbers and reproducible steps.
  - Pay attention to metrics: channel policy changes, HTLC timeouts, stuck payments, error codes.
  - Cross-reference with current docs for your stack: LND releases • Core Lightning releases • Bitcoin Core releases
- Power users
  - Hunt for wallet/routing notes that affect day-to-day use: splicing behavior, invoice expiries, channel reserve quirks, and fee bumping under load.
  - Look for comparisons across multiple wallets or services in the same post—that’s where the edge lives.
How to assess any post in 60 seconds
Step 1: Date and versions. If you don’t see a timestamp and the software versions (node, wallet, OS), your risk just went up.
Step 2: Method in one glance. Is there a clear setup, including wallet types, funding amounts, and network conditions? If yes, keep reading.
Step 3: Numbers and failure modes. Screenshots, logs, or metrics? Great. No evidence? Treat it as opinion.
Step 4: Conclusion with trade-offs. Useful posts end with constraints, not absolutes. If you see “X is always better,” be skeptical.
Why this scan works: UX research from Nielsen Norman Group shows we read web content in an F-shaped pattern—headlines, then left-side details, then selective scanning. Train your eyes to land on method, data, and conclusion first, and you’ll cut your reading time while keeping accuracy high.
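To make the four steps concrete, here’s how I’d encode the scan as a go/no-go check. This is my own sketch; the field names and verdict wording are mine, not from any of his posts:

```python
from dataclasses import dataclass

@dataclass
class PostScan:
    has_date: bool          # Step 1: timestamp present
    has_versions: bool      # Step 1: node/wallet/OS versions listed
    has_method: bool        # Step 2: setup, amounts, network conditions
    has_evidence: bool      # Step 3: screenshots, logs, or metrics
    states_tradeoffs: bool  # Step 4: conclusion with constraints

def verdict(scan: PostScan) -> str:
    """Map the 60-second scan onto a reading decision."""
    if not (scan.has_date and scan.has_versions):
        return "risky: treat all specifics as possibly outdated"
    if not (scan.has_method and scan.has_evidence):
        return "opinion: read for perspective, don't copy anything"
    if not scan.states_tradeoffs:
        return "useful, but be skeptical of absolute claims"
    return "field intel: worth replicating"
```

Run it mentally as you scroll: if the first branch fires, everything version-specific in the post needs re-verification before use.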
Evergreen versus dated (and how to tell in 30 seconds)
- Evergreen
  - Concepts: why payments fail, how fees propagate, UX trade-offs between custody levels.
  - Heuristics: “assume mobile wallets go offline,” “expect mempool spikes to reshape behavior,” “test restore-from-backup before you need it.”
  - Design lessons: clear error messages beat clever defaults.
- Dated
  - CLI commands, config flags, and default fee policies—these change often.
  - Specific wallet behaviors tied to a past release.
  - Routing quirks that recent updates may have fixed.
Sanity check: If a post shows commands or config, open the current docs before you copy anything. Compare flags, defaults, and deprecations against the latest: LND, Core Lightning, Bitcoin Core.
Turn one post into action in 15 minutes
- Pick a post with real metrics and a clear setup.
- Replicate a single step: one payment, same or similar amounts, and note the versions you’re on.
- Record three things: total time, total fee, error (if any). Screenshot evidence.
- Decide one change: e.g., switch wallet for this use case, adjust channel policy, or set a fee alert.
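If you want those three numbers to outlive your terminal scrollback, a throwaway logger does the job. A sketch (the file layout and field names are my own invention):

```python
import csv
from datetime import date

def log_replication(path, post_url, versions, total_time_s, total_fee_sat, error=""):
    """Append one replication record: the three things worth writing down
    (time, fee, error), plus enough context to re-check it later."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), post_url, versions,
             total_time_s, total_fee_sat, error]
        )
```

A month of these rows tells you more about your actual costs than any single post can.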
Real-world note: when he shows a payment failing under load, assume that’s the average user’s Tuesday. The emotional part is simple—no one wants to look foolish sending money twice because the first one got stuck. Data takes the sting out of that. It gives you control.
Want a dead-simple checklist to size up accuracy and bias in under two minutes? That’s exactly what I’m sharing next. Ready to see how I stress-test any post before I trust it?
Can you trust it? My accuracy and bias checklist
“Trust, but verify.” It’s a cliché because it works. Andreas writes from the trenches, and that’s why I read him. But crypto moves fast, and old configs can break new stacks. Nothing stings like rebuilding a node at 3 a.m. because you trusted a 2019 snippet. So here’s the exact way I sanity-check his posts before I let them influence what I publish or recommend.
Simple fact-check routine
- Replicate the steps on a fresh environment (container or VM). If it’s operational advice, I try both mainnet and a sandbox (regtest/testnet) to split network vs. setup issues.
- Hunt for artifacts: linked repos, screenshots, logs, TXIDs, node IDs. Screenshots are fine; logs or reproducible commands are better.
- Cross-check against official docs and one independent source (e.g., GitHub release notes, a maintainer comment, or a respected community write-up).
- Note the timestamp: post date, commit hash, and version numbers. If versions are missing, I assume the specifics are outdated and treat the piece as conceptual.
- Run a quick “negative test”: change one variable (fee rate, channel size, mempool load) to see if the claim still holds.
Why I’m this picky: A Nature survey flagged reproducibility issues across fields, and a Science paper showed misinformation spreads faster than truth. Crypto mixes both dynamics with fast-moving code. Proof beats vibe.
Red flags and green flags
- Green flags:
  - Exact setup: OS, client, network, configs
  - Numbers over adjectives: success rates, fees, latencies
  - Failure cases included, not hidden
  - Links to repos, commit IDs, or TXIDs
- Red flags:
  - Sweeping claims without context (“X never works”)
  - No details on versions or hardware
  - Outdated commands presented as current
  - Benchmarks without methodology
Real-world sample checks I’ve run
Payments reliability: In an older Lightning piece, he highlighted fragile multi-hop payments with specific liquidity pain points. I re-ran the idea today with current LN stacks, using larger channels and features like MPP/AMP enabled. Result: reliability improved in my tests, but only after I matched his original constraints (node age, channel graph, liquidity). The lesson stood: topology and liquidity policy matter more than hype. The config was dated.
Fee estimates and UX: He noted wallets underestimating fees during busy mempools. I cross-checked with mempool.space API and a few live sends. Some wallets are better now, but the heuristic still holds: set fee caps, verify target block estimates, and prefer wallets that show mempool congestion candidly.
Node ops advice: He once pushed for running real workloads instead of lab-perfect setups. I tested a cloud instance vs. bare metal under load and watched steal time and disk I/O. The spirit was right: watch real metrics, not assumptions. My update: use baseline dashboards (CPU steal, IOPS, mempool size) and alerting so anecdotes become data.
Bias guardrails I use on myself
- Confirmation bias: I write down my expectation first, then try to disprove it. If I still agree after breaking it, I keep it.
- Selection bias: I sample more than one client/wallet when possible. If only one stack was tested, I label the result “stack-specific.”
- Recency bias: Shiny releases get attention; I check stability notes and open issues before trusting a “fix.”
- Scope creep: If the post tested home routing, I don’t generalize to enterprise without a second run.
My 10-minute “trust speed-run” (useful before you copy anything)
- Skim intro → method → data → result. If any piece is missing, it’s opinion, not instruction.
- Copy the core command or config into a disposable VM.
- Swap in the current client version and check the changelog for breaking changes.
- Try one “stress” condition (bigger payment, different fee, worse connectivity).
- Write one sentence: “This works today under X, Y, Z.” If you can’t, don’t ship it.
Tools I keep handy while reading
- GitHub releases + commit search for version context
- Wayback Machine to see what a project’s docs said at the time of his test
- mempool.space charts for fee environment sanity
- CLI sanity: lncli, lightning-cli, bitcoin-cli for quick reality checks
- jq and curl to script reproducible calls from the post
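For the fee-environment sanity check, mempool.space exposes a simple recommended-fees endpoint. Here’s how I wrap it so the parsing stays testable offline; the endpoint and its fastestFee/hourFee keys are real, the live call is left commented out:

```python
import json
import urllib.request

FEES_URL = "https://mempool.space/api/v1/fees/recommended"

def fee_snapshot(raw: bytes) -> dict:
    """Boil mempool.space's recommended fees (sat/vB) down to the two
    numbers worth comparing against a post's claims, plus the spread."""
    fees = json.loads(raw)
    return {
        "fast": fees["fastestFee"],
        "hour": fees["hourFee"],
        "spread": fees["fastestFee"] - fees["hourFee"],
    }

# Live check (needs network access):
# with urllib.request.urlopen(FEES_URL, timeout=10) as resp:
#     print(fee_snapshot(resp.read()))
```

A big spread means the fee market is volatile right now, which is exactly when copied fee advice from an old post is most likely to mislead you.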
How I use his work on Cryptolinks.com
I treat it as field intel. If I can reproduce the result on today’s stack, it directly informs my recommendations. If it’s dated but insightful, I keep the principle and test it against current releases before it impacts any rankings. If it’s controversial, I run a small benchmark and ask at least one maintainer or respected operator for a sanity check. That balance keeps me open-minded without getting wrecked by nostalgia or hype.
Want fast practical value? I’ll share which of his posts are most beginner-friendly, what biases to expect, and how long they usually stay “fresh” before you need to re-check. Curious where to start and what to avoid copying line-by-line?
FAQ: Real questions people ask about @abrkn’s Medium
I get these in my inbox and DMs all the time. If you’ve seen his posts floating around and wondered, “Should I actually read this?”—this is for you.
“In crypto, the map is not the territory—run the code.”
People also ask
- Is Andreas Brekken’s Medium beginner-friendly or too technical?
- How current are his posts, and can I still use older configs?
- Does he have a bias, and how should I factor that in?
- What should I read first if I’m focused on payments or wallets?
- Are his experiments reproducible, or just opinion pieces?
- Can I use his advice in production, or is it just for learning?
- How does he handle security and privacy topics—safe to follow?
- Where can I ask questions or verify anything he tested?
Quick answers you’re probably looking for
- Is it beginner-friendly? Sometimes, but it leans technical. Start with the high‑level breakdowns and posts that clearly list the setup and results.
- Is there a bias? Yes—pro hands‑on testing, skeptical of hype. That’s useful if you want reality checks, not marketing.
- Does it age well? The lessons often do; the commands/configs may not. Always verify dates, version numbers, and current docs before copying anything.
What kind of reader gets the most value?
If you’re the type who wants to see what actually breaks when you run real nodes, route payments, or stress a wallet, you’ll feel at home. He tends to publish concrete setups, note failure modes, and summarize what he’d change—great for builders and power users who learn by doing.
What should I read first if I’m not deeply technical?
- Look for posts that explain fees, reliability, and UX trade‑offs in plain language.
- Skim for headings like “Setup,” “Results,” “What broke,” “What I’d change.” You’ll get the gist without needing to reproduce every command.
- Use the comments and linked resources to fill gaps—then move to the more experiment-heavy pieces once you’re comfortable.
Pro tip: Nielsen Norman Group’s research shows readers scan web content in an F‑pattern, so it pays to jump between intro → method → results → conclusion quickly to judge if a post is worth your time. Source: NN/g.
Are his experiments reproducible?
Often, yes—when he includes setup details, versions, and metrics. That’s your green light. If a post feels like commentary with no method, treat it as perspective, not instructions.
- Green flags: exact versions, node/wallet settings, screenshots or logs, clear failure cases.
- Red flags: sweeping claims with no setup details, outdated configs presented as current, no links to repos or docs.
How do I adapt an older post to current releases?
- Check the post date and note all version numbers.
- Open the latest official docs or release notes for each tool he used.
- Search for breaking changes since the post date—config flags and default behaviors change fast.
- Re-run small parts of the setup incrementally to confirm behavior before you commit to the entire build.
Why this matters: crypto tooling evolves quickly. The Electric Capital Developer Report highlights high developer churn and rapid iteration across ecosystems—great for progress, risky for stale how‑tos. Source: Electric Capital Developer Report.
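The first three steps of that routine can be semi-automated: compare the version a post pins against what you’re running today. A sketch, assuming simple semver-style tags (which LND and Bitcoin Core both use):

```python
def releases_behind(post_version: str, current_version: str) -> tuple:
    """How far a post's pinned version trails the current release, as
    (major_delta, minor_delta). Anything nonzero means: read the
    changelogs for breaking changes before copying configs."""
    def parse(v):
        parts = v.lstrip("v").split("-")[0].split(".")
        return int(parts[0]), int(parts[1]) if len(parts) > 1 else 0
    pmaj, pmin = parse(post_version)
    cmaj, cmin = parse(current_version)
    return (cmaj - pmaj, cmin - pmin)

releases_behind("v0.15.1", "v0.18.0")  # (0, 3): three minor releases behind
```

Three minor releases behind isn’t automatically fatal, but it’s three changelogs you owe yourself before trusting a config flag.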
Can I use his posts for production decisions?
Yes—if you treat them as field intel, not gospel. Pull the principles, verify the steps, and cross‑check with docs for your exact versions. When his test is reproducible today, I’ve seen it hold up well in production planning.
- Use his failure cases as a checklist for what to test before launch.
- If a config is risky or undocumented now, stage it in a sandbox first.
What about security and privacy tips—are they safe to follow?
Typically solid, because he frames trade-offs honestly. Still, verify against current best practices:
- Compare with official security docs for the wallet or node you use.
- Search recent issues in the project’s repo for any warnings about the features he uses.
- If he disables a default safety control to test something, note whether you actually need to do that.
Where do I ask questions or get clarifications?
- Comment on the Medium post: medium.com/@abrkn
- Ping him on social (search “@abrkn”).
- For tooling questions, open or search issues in the project’s GitHub—maintainers often flag config changes or deprecations faster than anywhere else.
How often does he post—and should I worry about recency?
He posts in bursts. I always check the date first and then scan for version pins. If the core idea is strong (like how he tests routing reliability or flags UX failure points), it usually remains useful even when the exact commands age out.
Is it worth the time if I’m focused on wallets or payments?
Yes, with a filter. His strongest pieces tear into real-world behavior—fee behavior, channel reliability, time-to-first-successful-payment, how wallets handle edge cases. Those are the parts that translate directly into better decisions today.
What aged well vs. what didn’t—an example
- Aged well: How to think about routing reliability, the importance of logs/metrics, and calling out “works in a demo, fails at volume” patterns.
- Aged fast: Command flags and default configs for nodes, fee policy recommendations tied to specific releases, quirks of early wallet builds.
Is it worth your time?
If you value practical testing over marketing, absolutely. Skim for methods and results; save configs only after cross‑checking versions. When in doubt, re-run the smallest possible test and see if your logs tell the same story as his.
Want the fast take on where his work shines and where it falls short—so you know exactly when to trust it and when to bring a bigger grain of salt? That’s up next.
Strengths, weaknesses, and who will get the most value
If you’re tired of glossy “it just works” narratives and want the ugly truth of what breaks on mainnet, this is where his writing shines. It’s practical, sometimes harsh, and almost always grounded in experiments that leave breadcrumbs you can follow.
“Truth in crypto is what survives a test.”
Strengths
- First-hand experiments, not second-hand summaries. Expect real setups: payment channels opened, nodes stressed, wallets funded, fees paid, and failure logs included. For example, his Lightning Network write-ups weren’t theoretical—they involved running a large routing node, paying invoices under different liquidity and fee conditions, and documenting what actually cleared versus what got stuck.
- Clear failure reporting. You’ll see what broke and why, not just victory laps. He calls out things like:
  - Payment routing that succeeds in isolation but crumbles under real liquidity constraints
  - Wallets with “smart” fee estimation that still misprice during mempool spikes
  - Privacy toggles that look safe but leak metadata when combined with common usage patterns
- Useful for builders, advanced users, and researchers. If you care about how things behave at scale—channel sizing, rebalancing, on-chain fee rescue (RBF/CPFP), or wallet UX under pressure—you’ll get concrete takeaways. Even when a specific command is dated, the reasoning tends to carry forward.
Why this matters: Crypto software moves fast, and what looks fine in a local test setup can collapse in the wild. Lightning, for instance, changed materially with anchor outputs and improved multi-part payments; approaches that failed in older builds may now succeed—but it takes someone who actually runs the gauntlet to tell you where the line is today.
Weaknesses
- Can be too technical for newcomers. There’s little hand-holding. If you’ve never opened a channel, adjusted a fee policy, or read a mempool chart, some posts will feel like jumping into the deep end.
- Posts age fast as software changes. Protocol and client updates (LND, Core Lightning, Eclair, wallet fee policies) can make yesterday’s workarounds obsolete. For example, fee bumping tactics and routing heuristics that once felt mandatory may behave differently after version bumps or policy updates. Always check versions.
- Tone is blunt; not a tutorial series. You’ll get the signal, but you won’t get a step-by-step onboarding. If you want a classroom vibe, this isn’t it.
Context note: Research on fast-moving technical ecosystems has repeatedly shown that guidance decays quickly as dependencies update. That’s exactly why the focus on reproducible methods and visible logs is so valuable—you can rerun the same steps against newer versions and see what holds up.
Who will get the most value
- Builders and engineers: You’ll appreciate the method-first approach—inputs, environment, outputs, and honest failure cases. It’s field data you can plug into product decisions.
- Power users: Expect actionable nuggets like when to rebalance versus reopen channels, how wallet fee settings behave under load, and what privacy trade-offs actually show up in routine use.
- Researchers and analysts: The experiments offer hypotheses you can test across clients and versions, which is gold for trend and reliability analysis.
- Who might struggle: Absolute beginners who want a gentle, step-by-step path. The signal is strong, but the learning curve is real.
If a post leaves you thinking, “Okay, but where do I start, and what should I click next?” you’re not alone. I’ve queued up exactly that—tight, high-signal starting points and references so you don’t have to hunt. Want the links and resources I actually keep on my desk?
Handy links and resources before you go
Here’s the bookmarkable kit I keep open when I’m reading Andreas’s pieces. It’ll save you clicks, help you verify anything that looks spicy, and keep you from copying stale commands.
Official places to start
- Medium home: https://medium.com/@abrkn (optional RSS: /feed/@abrkn)
- Social updates: Search “@abrkn” on X for fresh threads and context on new tests
- Project docs and repos: When a post names a tool or node, check its latest release before acting. On GitHub, click Releases and compare versions with the post.
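Comparing the version a post used against the latest release is mostly a tuple comparison on the tag. A rough sketch, assuming the common `major.minor.patch` tag shape (the tags below are made up — check the repo's Releases page for real ones):

```python
def parse_semver(tag):
    """Turn a release tag like 'v0.17.4-beta' into a comparable tuple.
    Assumes major.minor.patch; pre-release suffixes are dropped."""
    core = tag.lstrip("v").split("-")[0]
    return tuple(int(part) for part in core.split("."))

def post_is_stale(post_version, latest_release):
    """True if the post targets an older major/minor line -- the point
    where commands and configs most often stop applying."""
    return parse_semver(post_version)[:2] < parse_semver(latest_release)[:2]

# Illustrative tags only:
post_is_stale("v0.16.0-beta", "v0.17.4-beta")  # minor line moved on -> stale
post_is_stale("v0.17.1-beta", "v0.17.4-beta")  # patch-only drift -> usually fine
```

Patch-only drift is usually safe to ignore; a major or minor bump means re-reading the release notes before copying anything.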
Core docs I keep next to his posts
- Bitcoin Core Docs — RPCs, config flags, version-specific notes
- Lightning Network BOLTs — the spec; great for understanding expected behavior vs. observed quirks
- LND Docs — APIs, channel management, reliability caveats
- Core Lightning Docs — plugin model, routing, config examples
- Eclair Docs — policy defaults and edge cases that often matter in real-world routing
- BTCPay Server Docs — invoice flows, payment reliability, merchant-side gotchas
- Electrum Docs — wallet behaviors, fee logic, watch-only setups
- mempool.space — live fee, mempool, and confirmation data to reality-check any fee or backlog claim
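When I reality-check a fee claim against mempool data, the core of the check is just: how many sat/vB does it take to fit in the next block's ~1M vB of space? A back-of-the-envelope sketch using a hypothetical feerate histogram (the bucket numbers are invented, not from mempool.space):

```python
def estimate_clearing_feerate(histogram, block_vbytes=1_000_000):
    """Rough sat/vB needed to land in the next block, given (feerate, vbytes)
    buckets like a mempool histogram shows. A sanity check, not a fee estimator."""
    filled = 0
    for feerate, vbytes in sorted(histogram, reverse=True):
        filled += vbytes
        if filled >= block_vbytes:
            return feerate
    # Mempool smaller than one block: the cheapest bucket still confirms.
    return min(rate for rate, _ in histogram)

# Hypothetical snapshots:
quiet = [(1, 300_000), (2, 200_000)]
congested = [(1, 2_000_000), (20, 600_000), (50, 500_000)]
```

Run the same claim against both snapshots and you'll see why a fee number written during congestion can mislead in quiet periods, and vice versa.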
Privacy and security references worth a glance
- Wasabi Wallet Docs — coinjoin mechanics and limitations
- Samourai Wallet Docs — Whirlpool, tx labeling, spending policies
- Tor Support — connection issues, performance trade-offs, do’s and don’ts
Fast fact-check and replication helpers
- Wayback Machine — capture a post and linked docs as they were when published
- Bitcoin Optech — weekly changes and release notes; perfect for dating assumptions
- Bitcoin-Dev mailing list and Lightning-Dev mailing list — upstream decisions that influence behavior he might be testing
Quick rule I use: if the claim isn’t reproducible today, I treat it as an anecdote, not a conclusion.
Smart ways to keep content fresh
- Version pinning: When you save a command snippet, note the exact software version it worked on.
- Release alerts: “Watch” repos on GitHub for Releases only; you’ll get pinged when something changes that could invalidate a config.
- Fee sanity checks: Before repeating any fee advice, check current mempool conditions; a post written during congestion can mislead in quiet periods, and vice versa.
Bookmark ideas (steal my folders)
- Payments: BTCPay docs, LND/Core Lightning docs, mempool.space
- Nodes: Bitcoin Core docs, BOLTs, your node’s service docs
- Privacy: Wasabi docs, Samourai docs, Tor support
- Meta: Wayback Machine, Bitcoin Optech, mailing lists
One last productivity note: users skim rather than read word-for-word. Eye-tracking research from Nielsen Norman Group shows the classic “F-pattern” scanning behavior on the web, which is why I front-load links and methods you can act on immediately. Use headings and bullets as anchors while you evaluate whether a post is worth a deep read.
Want my take on which of these links I check first when I’m deciding whether to act on one of his experiments? I’ll lay that out next—and I’ll tell you exactly how I turn it into a go/no-go in under five minutes.
My verdict
I keep Andreas Brekken on my shortlist because his testing mindset cuts through fluff. When he shares results, I don’t get PR polish—I get what actually broke and why. That’s gold if you want to make smart choices without wasting weeks. Read him for reality checks, not for hand-holding. Pair his experiments with current docs and you’ll get real value.
TL;DR: Trust methods, not opinions. If he shows his setup, metrics, and failure cases, you can reuse the insight—even when software versions change.
How I’d use this from here
- Pick your starting post based on what you’re building or using: payments, wallets, nodes, privacy.
- Run the accuracy checklist: check date, versions, steps, and at least one independent source.
- Save only what still works today: if a command or config is stale, keep the lesson, not the line.
- Revisit quarterly—protocols, fee markets, and UX change fast.
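If you like your habits mechanized, the checklist above fits in a few lines. This is my own shorthand scoring, not anything from the Medium posts themselves — the dict keys and thresholds are assumptions:

```python
def go_no_go(post):
    """Score a post against the accuracy checklist and return a verdict
    plus which checks failed. Thresholds (6 months, 1 source) are my defaults."""
    checks = {
        "recent_or_version_noted": post["has_version_notes"] or post["age_months"] <= 6,
        "steps_reproducible": post["shows_setup_and_steps"],
        "independently_confirmed": post["independent_sources"] >= 1,
    }
    failed = [name for name, ok in checks.items() if not ok]
    return ("go" if not failed else "no-go", failed)

verdict, gaps = go_no_go({
    "has_version_notes": True,   # older post, but versions are pinned
    "age_months": 14,
    "shows_setup_and_steps": True,
    "independent_sources": 1,
})
```

The point isn't the code — it's that every check is binary and takes seconds, which is how you get to a go/no-go in under five minutes.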
One quick real-world example
During the heavy fee spikes triggered by inscriptions and BRC-20 activity in late 2023–2024, I used his “test what fails first” approach to stress-test Lightning wallets and node setups. I set a fee ceiling, measured payment success rates across multiple route attempts, and logged where things failed (liquidity vs. pathfinding vs. wallet policy). That led me to adjust what I recommend when the mempool is packed: push users toward wallets with clear fee controls, predictable channel policies, and transparent error messages. The outcome matched what his style primes you to do—optimize for reliability under worst-case conditions, not sunny days.
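The measurement step above is less work than it sounds. A minimal sketch of how I aggregate payment attempts into a success rate with a failure breakdown — the log fields here are illustrative, since real node logs need their own parsing:

```python
from collections import Counter

def summarize_attempts(attempts, fee_ceiling_sat):
    """Aggregate payment attempts into a success rate and a breakdown of
    failure causes (liquidity vs pathfinding vs wallet policy). Attempts
    whose routing fee blew the ceiling count as failures even if they paid."""
    outcomes = Counter()
    for a in attempts:
        if a["fee_sat"] > fee_ceiling_sat:
            outcomes["over_fee_ceiling"] += 1
        elif a["succeeded"]:
            outcomes["success"] += 1
        else:
            outcomes[a["failure"]] += 1
    total = sum(outcomes.values())
    return outcomes["success"] / total, outcomes

rate, breakdown = summarize_attempts([
    {"fee_sat": 10, "succeeded": True},
    {"fee_sat": 12, "succeeded": False, "failure": "liquidity"},
    {"fee_sat": 90, "succeeded": True},   # paid, but blew the fee ceiling
    {"fee_sat": 8,  "succeeded": False, "failure": "pathfinding"},
], fee_ceiling_sat=50)
```

Sorting the breakdown by count tells you whether to fix liquidity, pathfinding, or wallet policy first — which is exactly the worst-case-first framing his posts push you toward.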
If you want a proof-backed habit: the Stanford History Education Group shows that lateral reading—opening new tabs to verify claims and sources—beats staying on one page. And the Nielsen Norman Group has long noted that specifics build credibility (numbers, methods, constraints). Both align perfectly with how to read Andreas: check methods, confirm versions, and favor posts with concrete data.
Want a tailored compare?
If you want me to stack his posts against alternatives—docs, forums, other researchers—tell me what you’re working on: wallets, nodes, privacy tooling, or payment routing. I’ll map a reading path and add current repo docs so you can decide faster.
What I skip (so you don’t waste time)
- Old commands/configs pasted without version notes. I extract the idea, then re-implement on today’s release.
- Hot takes without setup details. Fun to read, not useful to act on.
- Unbounded benchmarks (no limits on hardware, peers, or dataset). I’ll only trust numbers that state constraints.
Bottom line
Andreas’s Medium is worth your time if you like honest experiments and can handle a bit of technical grit. Use the approach above to read smarter, avoid outdated pitfalls, and walk away with tactics you can actually use. If you want a comparison tailored to your stack, tell me your goal and I’ll point you to the right posts—with current docs to back it up.