Fitness / Motivation / Technology & AI / Crypto

Welcome to Edition 125 of the Powerbuilding Digital Newsletter—where momentum turns into habit and consistency becomes the advantage. As the year continues to unfold, this edition is about staying locked in, refining your approach, and making steady progress across every pillar that matters.
This newsletter is built to support long-term growth—not quick spikes of motivation. Each section is designed to help you stay grounded, informed, and moving forward with intention.
Here’s what we’re focusing on this week:
- Fitness Info & Ideas
Training insights centered on durability and progress—how to build strength, manage recovery, and keep showing up week after week without burning out.
- Motivation & Wellbeing
Clarity beats intensity. This section explores mindset tools and practical habits that help you maintain focus, discipline, and emotional balance as life stays busy.
- Technology & AI Trends
A grounded look at how AI and modern tech continue to shape work, creativity, and daily systems—what’s useful now and what’s gaining traction.
- Crypto & Digital Asset Trends
No speculation, no price talk—just emerging platforms, applications, and real-world blockchain use cases that signal where digital innovation is headed.
Edition 125 is about staying steady and intentional. Keep building, keep learning, and keep aligning your actions with where you want to go. Let’s continue forward.
Fitness
How Many Days a Week Do You Really Need to Train to See a Difference?

Most people don’t stall because they train too little.
They stall because they train inconsistently.
If you’re looking for the honest answer, not the hype: you only need 3–5 days per week to see real, noticeable change — if those days repeat week after week.
Three days a week is enough to trigger strength gains. Your nervous system adapts quickly, often within the first couple of weeks. You feel stronger before you look different, and that’s normal. This level works best when life is busy or when sustainability matters more than speed.
Four days a week is where most people hit their stride. Recovery stays manageable, volume adds up, and progress becomes predictable. This is the point where training starts to feel like a rhythm instead of a fight.
Five days a week can speed things up — but only if sleep, nutrition, and stress are handled well. Without recovery, more days don’t equal better results. They just create fatigue that hides progress.
Here’s the part most people miss:
Your body doesn’t respond to weeks. It responds to patterns.
You don’t get results from one good week.
You get them from ten average ones in a row.
If you train:
- 3 days a week → expect strength and energy changes in a few weeks
- 4 days a week → expect visible progress in 4–6 weeks
- 5 days a week → expect faster change only if recovery keeps up
The difference isn’t effort.
It’s repetition.
Train a schedule you can keep when motivation drops — and you’ll see more progress than someone chasing intensity without consistency.
That’s how results actually show up.
Motivation
From Survival Mode to Creation Mode
Transitioning from reacting to life to consciously designing it

Survival mode keeps you alive — but it was never meant to be your permanent home.
In survival mode, your days are reactive. You respond to emails, stress, obligations, and emergencies. You’re always “handling things,” but rarely building anything. Your nervous system stays tight. Your thinking stays short-term. Your energy goes toward getting through the day, not shaping the future.
And sometimes, survival mode is necessary. It gets you through hard seasons. But when it becomes a lifestyle, it quietly drains creativity, purpose, and self-trust.
Creation mode is different.
Creation mode begins the moment you stop asking, “What do I need to deal with today?” and start asking, “What am I building?”
The shift doesn’t happen overnight. It starts when your nervous system feels safe enough to stop scanning for threats. When you reclaim small pockets of stillness. When you stop making every decision from urgency and start making some from intention.
In creation mode, your time feels deliberate. Your actions align with values instead of pressure. You respond instead of react. You plan instead of brace. Even your problems change — they become challenges you choose, not crises that choose you.
This doesn’t mean life gets easier. It means you get clearer.
You move from:
- putting out fires → laying foundations
- constant motion → meaningful momentum
- coping → constructing
The real difference is agency.
Survival mode says, “I hope things calm down.”
Creation mode says, “I decide what matters.”
And the transition doesn’t require perfect circumstances. It requires one decision repeated consistently: to pause before reacting, to design before drifting, to act from purpose instead of fear.
That’s how a life stops feeling like something that’s happening to you — and starts feeling like something you’re intentionally shaping.
Not louder.
Not faster.
But on your terms.
Technology & AI
Meta’s AI Bet Pays Off—For Now

Meta entered the final quarter of 2025 under intense scrutiny—legal, political, and financial—but its latest earnings suggest the company is still very much in control of its narrative.
The company reported fourth-quarter revenue of $59.89bn, exceeding Wall Street expectations of $58.59bn, while earnings per share came in at $8.88 versus forecasts of $8.23. Shares climbed nearly 10% in after-hours trading, reflecting renewed investor confidence at a time when many large technology firms are being questioned over the sustainability of their AI spending.
On the earnings call, Mark Zuckerberg framed 2025 as a transition year—one that set the stage for a much more aggressive push into artificial intelligence. He emphasized Meta’s intention to move beyond experimental AI tools toward what he described as “personal superintelligence,” systems capable of understanding users at an individual level rather than merely optimizing content feeds.
That vision is being backed with capital. Meta has committed to spending between $162bn and $169bn in 2026, with the bulk allocated to infrastructure and talent to support AI development. A recently announced agreement worth up to $6bn with Corning will supply fiber-optic cables for Meta’s expanding network of datacenters—physical assets that are rapidly becoming the backbone of the AI economy.
Zuckerberg has argued that the next phase of Meta’s platforms will blend large language models with the recommendation engines that already drive Facebook, Instagram, Threads, and its advertising business. In his view, users will no longer interact with impersonal algorithms, but with AI systems that can both understand context and generate personalized content on demand.
Not everyone is convinced. Investors have repeatedly raised concerns that AI spending across the tech sector is racing ahead of clear monetization strategies, fueling fears of a speculative bubble. Zuckerberg has acknowledged the near-term cost pressures but continues to insist that the returns will materialize over time—a position he reiterated when pressed about how AI investments would translate into revenue in coming years.
What is clearer is where Meta’s priorities now lie. The company is scaling back its once-dominant metaverse ambitions, cutting more than 1,000 roles in Reality Labs and narrowing its focus to wearables and smart glasses. While that division still reported losses of over $6bn last year, Meta says sales of its glasses more than tripled, offering a narrower but more tangible path forward.
At the same time, Meta’s AI infrastructure push is drawing political attention. Datacenters are increasingly criticized for their impact on energy consumption and local power grids. States such as Georgia, Maryland, and Oklahoma are exploring restrictions on new facilities, while members of Congress—across party lines—are questioning whether soaring utility costs are being passed on to consumers.
Meta has responded with an aggressive public-relations campaign, spending millions to highlight job creation and economic benefits tied to datacenter construction. Independent reporting, however, suggests that while construction creates short-term employment, long-term operational jobs remain limited.
All of this unfolds as Zuckerberg prepares to testify in a landmark trial examining whether social media platforms intentionally designed addictive products harmful to young users. The case marks the first time Meta executives will face sustained questioning in open court by prosecutors rather than lawmakers. Notably, the trial went unmentioned during the earnings call.
For now, Meta’s financial performance has bought it time—and market goodwill. But as AI spending accelerates, regulatory pressure mounts, and legal risks loom, the company’s ability to balance innovation with accountability may prove more consequential than its next earnings beat.
What Is ClawBot — and Is It Actually a Game Changer?

Right now, anything with “AI” and “automation” in the name gets labeled a revolution. Most of it isn’t.
So when people ask about ClawBot, the real question isn’t what it claims to do — it’s what problem it actually solves, and for whom.
At its core, ClawBot positions itself as an automated AI-driven system designed to execute tasks faster and more consistently than humans. Depending on the context you’re seeing it in, it’s often framed as a tool for automation, optimization, or execution — not creativity, not general intelligence, but repeatable action at scale.
That distinction matters.
ClawBot isn’t trying to “think” for you. It’s trying to remove friction from processes that already exist — whether that’s monitoring, execution, or responding to predefined signals. In that sense, it sits in the same category as many modern AI agents: useful when paired with clear rules, dangerous when trusted blindly.
So is it a game changer?
Potentially — but only under specific conditions.
ClawBot becomes powerful if:
- The task is repetitive
- The rules are well-defined
- Human emotion or delay is the main bottleneck
- Oversight is still in place
In those environments, automation doesn’t just save time — it changes behavior. It removes hesitation. It enforces consistency. And consistency, more than intelligence, is what compounds results.
Where people get burned is assuming “AI bot” means set it and forget it.
It doesn’t.
If ClawBot is poorly configured, fed bad inputs, or used without understanding its limits, it doesn’t amplify intelligence — it amplifies mistakes. Faster execution cuts both ways.
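ClawBot’s internals aren’t public, so as a generic illustration of the pattern described above (predefined signals, repeatable actions, oversight still in place), here is a minimal sketch of a rule-driven agent with a human-approval gate. Every name in it is hypothetical and assumes nothing about ClawBot itself:

```python
# Hypothetical sketch of rule-based automation with oversight.
# None of these names come from ClawBot; this only illustrates the pattern.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # the predefined signal to watch for
    action: Callable[[dict], str]      # the repeatable action to execute
    needs_approval: bool = False       # keeps a human in the loop


def run_agent(event: dict, rules: list[Rule],
              approve: Callable[[str], bool]) -> list[str]:
    """Execute every matching rule; defer to a human where required."""
    results = []
    for rule in rules:
        if not rule.condition(event):
            continue  # rule doesn't apply to this event
        if rule.needs_approval and not approve(rule.name):
            results.append(f"{rule.name}: held for review")
            continue
        results.append(rule.action(event))
    return results


# Example: auto-acknowledge routine alerts, but gate anything critical
# behind explicit approval rather than letting the bot act alone.
rules = [
    Rule("ack-routine",
         condition=lambda e: e["severity"] == "low",
         action=lambda e: f"acknowledged {e['id']}"),
    Rule("page-oncall",
         condition=lambda e: e["severity"] == "high",
         action=lambda e: f"paged on-call for {e['id']}",
         needs_approval=True),
]

print(run_agent({"id": "evt-1", "severity": "low"}, rules,
                approve=lambda _: True))  # → ['acknowledged evt-1']
```

The point of the sketch is the split it enforces: the rules are explicit and auditable, and the riskiest action cannot fire without a human saying yes — which is exactly the “oversight still in place” condition that separates useful automation from amplified mistakes.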
The real shift ClawBot represents isn’t technical — it’s psychological.
It reflects a broader move away from humans doing everything manually toward humans designing systems that act on their behalf. The leverage isn’t in the bot itself. It’s in the person who understands:
- when to automate
- when to intervene
- and when not to use automation at all
That’s the difference between a tool and a liability.
So no — ClawBot isn’t magic.
And yes — in the right hands, it can change how work gets done.
But like every so-called “game changer,” it rewards clarity and punishes blind trust.
The future doesn’t belong to people who use bots.
It belongs to people who understand the systems behind them.
Decoding the Genome’s Dark Matter: DeepMind’s New AI Targets the Roots of Disease

Google DeepMind has introduced a new artificial intelligence system that aims to tackle one of biology’s most persistent challenges: understanding how genetic mutations outside of protein-coding genes contribute to disease.
The tool, called AlphaGenome, is designed to predict how mutations disrupt gene regulation—altering when genes activate, in which cells they operate, and how strongly they are expressed. This is a critical frontier in genetics. While only about 2% of the human genome directly codes for proteins, the remaining 98% governs how those proteins are deployed. Many inherited diseases, cancers, and mental health conditions trace back not to faulty proteins, but to breakdowns in this regulatory machinery.
According to Natasha Latysheva, AlphaGenome is intended as a foundational research tool rather than a diagnostic shortcut. Speaking at a press briefing, she described it as a way to uncover what the genome’s functional elements actually do—knowledge that could significantly accelerate basic biological research.
AlphaGenome was trained on large public datasets of human and mouse genetics, allowing it to learn how mutations in specific tissues influence gene behavior. The system can process up to one million DNA “letters” at a time and forecast how changes ripple through multiple biological processes. That scale matters: the human genome contains roughly three billion base pairs, most of which remain poorly understood.
The promise, according to DeepMind, is twofold. First, the model could help researchers identify which stretches of DNA are essential for the development of specific cell types, such as neurons or liver cells. Second, it could help pinpoint the regulatory mutations that drive diseases like cancer—potentially guiding the development of new gene therapies or targeted drugs.
External researchers see cautious but real progress. Carl de Boer of the University of British Columbia noted that AlphaGenome can link mutations to their downstream effects, including which genes and cell types are impacted. In principle, he said, that insight could inform therapies designed to counteract harmful regulatory changes. Still, he emphasized that predictive models must continue to improve before they can replace experimental validation.
Early adopters are already testing the system’s limits. Marc Mansour, a clinical professor at UCL, described AlphaGenome as a “step change” in his search for the genetic drivers of cancer. Meanwhile, Gareth Hawkes from the University of Exeter highlighted its broader significance: for the first time, researchers have a tool capable of making informed predictions about the vast non-coding majority of the genome.
AlphaGenome does not solve genetics overnight. But by shedding light on the genome’s long-ignored regulatory regions, it signals a shift in how AI may be used—not just to analyze biology faster, but to explore parts of life’s code that were previously beyond practical reach.
When Algorithms Decide Your Career: AI Hiring Hits a Legal and Ethical Wall

As post-pandemic tech layoffs continue to ripple through the Bay Area, thousands of job seekers are discovering that competition is no longer just human. Artificial intelligence has quietly embedded itself into the hiring pipeline—screening résumés, scoring candidates, and in some cases conducting interviews—reshaping how people are evaluated before a recruiter ever looks their way.
That transformation is now facing a legal test.
A lawsuit filed on January 20 against Eightfold AI, a Santa Clara–based hiring technology firm, alleges that the company’s AI-driven tools violate federal and California privacy laws. The plaintiffs argue that Eightfold’s software generates candidate scores using personal data scraped from public sources—information applicants may never have knowingly provided—and that those scores can determine who advances and who is filtered out.
At the center of the case is a demand for transparency. According to Rachel Dempsey, who represents the plaintiffs, job seekers are effectively subjected to a “black box” system. Candidates, she argues, are entitled to know what data is being used to evaluate them and to review or correct that information, as required under the Fair Credit Reporting Act.
Eightfold disputes the allegations. The company says its technology relies on data supplied by applicants and its corporate customers, and that individuals are able to review and amend their information. The legal proceedings are still in their early stages, but the case has already become a flashpoint in the broader debate over AI’s role in employment decisions.
The stakes are high. Eightfold counts major organizations among its customers, including Salesforce, the U.S. Department of Defense, Caterpillar, and Activision. While the company does not disclose how clients deploy its tools, critics argue that AI-based screening systems increasingly operate out of sight—quietly influencing outcomes across entire labor markets.
Beyond the courtroom, frustration is growing on both sides of the hiring equation. Recruiters report a surge in AI-generated résumés and cover letters that exaggerate skills, while candidates fear that algorithmic scoring systems flatten complex careers into simplistic numerical rankings.
Veteran recruiter and career coach Bryan Creely describes the situation as a feedback loop of mistrust. Job seekers, wary of being screened by machines, optimize their applications with AI. Employers, suspicious of misrepresentation, respond with even more automated filtering. The result, he says, is a system that moves faster but understands less.
Data suggests the tension is widespread. A recent survey by the hiring platform Greenhouse found that most U.S. hiring managers suspect candidates of using AI to misstate qualifications, while more than half of job seekers believe AI is evaluating them without disclosure. About one-third of candidates say they use AI simply to keep pace.
Greenhouse CEO Daniel Chait summed it up bluntly: employers use AI to filter candidates out, while candidates use AI to apply to more jobs. The loop tightens—and trust erodes.
Research suggests the risks go deeper than frustration. Kyra Wilson, who studies AI decision-making, found that biased AI recommendations can dramatically sway human judgment. In experiments, evaluators followed biased AI guidance up to 90% of the time, reinforcing concerns that historical discrimination can be scaled under the guise of algorithmic objectivity.
For some job seekers, the future is already here. Martin Moakler, a Los Angeles social media manager, recently completed an interview conducted entirely by an AI voice. He described the experience as strange but functional—a reminder that human interaction is no longer guaranteed in early hiring stages.
Former EEOC commissioner Jenny Yang sees both promise and peril. AI, she argues, could help identify talent beyond traditional résumé filters—but only if companies are transparent and intentional about minimizing bias. Without safeguards, she warns, the technology risks amplifying inequality rather than reducing it.
The Eightfold case may not decide the future of AI in hiring, but it underscores a growing reality: as algorithms take on more authority over people’s livelihoods, questions of accountability, consent, and fairness are no longer theoretical. They are becoming unavoidable—and, increasingly, unavoidable in court.
Crypto
Bybit Steps Into Retail Banking as Crypto Exchanges Chase Mainstream Finance

Bybit, one of the world’s largest cryptocurrency exchanges by trading volume, is preparing to cross a line that few crypto-native platforms have managed to approach at scale: retail banking.
The company said it will begin rolling out banking services in February through a product called My Bank powered by Bybit, unveiled during a live online keynote. The service is designed to give users personal international bank account numbers (IBANs), enabling them to send and receive funds across traditional banking networks in multiple currencies.
According to Ben Zhou, the initial launch will support U.S. dollar transfers, with plans to expand to as many as 18 currencies, pending regulatory approval. Users who complete Know Your Customer (KYC) checks will be able to access their accounts immediately, deposit fiat currency, receive salaries, pay bills, and trade crypto under their own legal name—all from within the Bybit platform.
The move reflects a broader effort by Bybit to evolve beyond pure trading. It also comes after a turbulent period for the company, which suffered a $1.4 billion hack roughly a year ago. Expanding into regulated financial services appears to be part of a longer-term strategy to rebuild trust while embedding crypto more deeply into everyday financial activity.
Bybit is not attempting this alone. Reporting by Bloomberg indicates the exchange is working with Pave Bank, a startup lender licensed in the country of Georgia. The company also says it is collaborating with established financial institutions, including Qatar National Bank and DMZ Finance, with which it partnered in 2025 on tokenized asset initiatives.
If successful, the service would blur the line between crypto exchanges and traditional banks—an ambition many platforms have floated, but few have executed meaningfully due to regulatory friction. Bybit has emphasized that its banking product remains subject to local approvals, underscoring the complexity of operating across jurisdictions.
Taken together, the launch signals a quiet shift in the crypto industry’s direction. Rather than positioning themselves as alternatives to banks, some exchanges are increasingly attempting to become banks—or at least bank-like interfaces—layered on top of crypto markets. Whether regulators and users are ready for that convergence will determine whether initiatives like Bybit’s remain niche experiments or become a defining feature of the next phase of digital finance.
Robinhood Backs the Pipes, Not the Hype, in a $1.5B Crypto Infrastructure Bet

Robinhood is placing a calculated wager on the less visible—but increasingly critical—layer of the crypto economy: institutional trading infrastructure.
The brokerage said it is investing in Talos, extending the New York–based firm’s Series B funding round and lifting its valuation to roughly $1.5 billion. The extension brings Talos’s total Series B raise to $150 million, up from the $105 million secured in 2022.
Rather than chasing consumer-facing crypto products, Robinhood’s move signals a deeper strategic shift toward the backend systems that power professional markets. Anton Katz, Talos’s co-founder and CEO, said the extension was designed to accommodate interest from strategic partners who see digital assets migrating onto “digital rails” once dominated by traditional finance.
Those partners now include Robinhood Markets, alongside Sony Innovation Fund, IMC, QCP, and Karatage. They join an already heavyweight investor roster featuring a16z crypto, BNY, and Fidelity Investments—underscoring Talos’s position as a bridge between crypto-native markets and legacy financial institutions.
For Robinhood, the investment complements a broader expansion beyond retail trading. The company has been steadily building crypto-native infrastructure, including the development of its own blockchain network on Arbitrum, tokenized stock trading in Europe, and the rollout of staking and perpetual futures products. According to Johann Kerbrat, Talos’s platform gives Robinhood greater flexibility to deepen liquidity and enhance features for its crypto customers.
Talos operates firmly on the institutional side of the market. Its platform aggregates liquidity from exchanges, over-the-counter desks, and prime brokers, offering professional investors a unified interface for execution, risk management, and post-trade settlement. The company says it serves clients in roughly 35 countries, with traditional finance firms accounting for as much as 70% of new customers over the past year. Asset managers using Talos collectively oversee about $21 trillion in assets under management.
That institutional tilt has been reinforced through acquisitions. Talos recently agreed to acquire Coin Metrics for more than $100 million, adding on-chain analytics, market data, and benchmark indexes to its stack. Earlier purchases—including D3X Systems, Cloudwall, and Skolem—have expanded its capabilities across the trading lifecycle.
The funding extension arrives at a moment when crypto infrastructure, rather than speculative tokens, is attracting renewed attention. As traditional asset classes increasingly experiment with tokenization and blockchain settlement, firms like Talos are positioning themselves as neutral plumbing—less exposed to market cycles, but essential if digital finance is to scale.
For Robinhood, the message is clear: the next phase of crypto growth may depend less on flashy front-end products and more on owning a stake in the systems that institutions rely on when capital moves at scale.
Regulators Call a Truce: SEC and CFTC Unite on Crypto Oversight

After years of public friction over who controls the crypto rulebook, U.S. regulators are signaling a decisive shift from rivalry to coordination.
The Commodity Futures Trading Commission and the Securities and Exchange Commission announced a joint effort called Project Crypto, an initiative aimed at aligning how digital asset markets are regulated across federal agencies. The announcement came Thursday during the Joint CFTC–SEC Harmonization Event, where leaders from both watchdogs emphasized that fragmented oversight no longer reflects how modern markets function.
SEC Chair Paul Atkins framed the initiative as a response to structural reality rather than ideology. Trading, custody, clearing, and risk management, he said, increasingly operate across asset classes and technologies at once. In that environment, overlapping or inconsistent regulation creates confusion rather than protection.
That language marks a sharp contrast with the regulatory posture of recent years. Under former leadership, the agencies were widely seen as locked in a jurisdictional struggle over crypto. The CFTC argued that most digital assets were commodities, while the SEC—under former Chair Gary Gensler—maintained that nearly all tokens, aside from bitcoin, fell under securities law.
The tone began to change last fall, when then–acting CFTC Chair Caroline Pham declared the turf war effectively over. Her successor, Michael Selig, used one of his first public appearances to formalize that détente.
Selig said the CFTC will not pursue a separate crypto framework but will instead partner directly with the SEC on Project Crypto. As an interim step, he has instructed staff to explore “joint codification” of a proposed taxonomy from Atkins that would clarify which digital assets fall under securities regulation and which do not—at least until Congress finalizes market structure legislation.
That caveat matters. Lawmakers are still struggling to pass comprehensive crypto legislation, with competing bills advancing unevenly through the Senate. While the Senate Agriculture Committee moved its proposal forward along party lines, the Banking Committee has yet to hold a hearing amid disputes over stablecoin yield and oversight authority.
Still, Selig suggested regulators cannot afford to wait. While acknowledging that legislation is the preferred long-term solution, he argued that agencies must act within their existing authority to ensure the U.S. remains competitive as a financial center.
Atkins echoed that view in comments to The Wall Street Journal, noting that while a bill would provide clarity, the SEC and CFTC are prepared to move forward regardless. Both agencies, according to the Journal, plan to sign a memorandum of understanding to formalize cooperation.
The coordinated approach extends beyond crypto spot markets. Selig also outlined a shift in the CFTC’s stance on prediction markets, announcing plans to withdraw a 2024 proposal that would have banned political and sports-related event contracts. He also directed staff to rescind a 2025 advisory that warned firms about such products, saying it ultimately added uncertainty rather than guidance.
Prediction markets—including platforms like Polymarket and Kalshi—have grown rapidly, particularly during the 2024 election cycle. Previous CFTC leadership had raised concerns about election-related contracts, arguing they should be handled at the state level. More recently, however, the agency has approved several firms to enter the space, including Gemini Titan and Bitnomial, while Trump-backed Truth Social has explored prediction features through a Crypto.com partnership.
Selig said new rulemaking is underway to establish clearer standards for event contracts, acknowledging that the current framework has struggled to keep pace with market innovation.
Taken together, Project Crypto represents more than regulatory housekeeping. It signals a philosophical shift: U.S. regulators appear less interested in drawing hard jurisdictional lines and more focused on building a unified framework for markets that no longer fit traditional categories. Whether Congress ultimately provides the legislative backbone remains uncertain—but for the first time in years, the agencies tasked with overseeing crypto appear to be moving in the same direction.
Quantum Fear Meets Reality: Why Bitcoin’s Biggest Cryptographic Risk Isn’t Imminent

Concerns that quantum computing could one day undermine Bitcoin’s cryptographic foundations have moved from academic circles into mainstream market debate. But according to a new research note from Benchmark, the growing sense of urgency may be misplaced.
In a report published Thursday, Benchmark analyst Mark Palmer acknowledged that quantum computing poses a legitimate theoretical threat to Bitcoin. However, he emphasized that practical attacks capable of compromising Bitcoin’s cryptography remain “decades away, not years,” giving the network substantial time to adapt.
Palmer’s central argument is not that quantum risk should be ignored—but that it is being mispriced in the present. While quantum machines could, in theory, derive private keys from public keys, the level of computational power required far exceeds anything currently available or credibly projected in the near term.
Just as importantly, Benchmark pushed back on the idea that Bitcoin is too rigid to respond. The report points to previous upgrades, including Taproot, as evidence that the network can evolve when risks become material. Any transition toward quantum-resistant cryptography, Palmer argued, would likely occur through gradual, well-coordinated upgrades rather than a disruptive protocol overhaul.
The note arrives amid a broader push across the crypto industry to prepare for post-quantum scenarios. The Ethereum Foundation recently established a dedicated post-quantum security team and announced a $1 million research prize. Coinbase has also launched a quantum advisory council to assess long-term risks across blockchain networks.
Investor sentiment, however, remains divided. Earlier this month, Christopher Wood removed Bitcoin from a model portfolio at Jefferies, calling quantum computing an “existential” threat to its store-of-value thesis. Benchmark’s analysis directly challenges that view, arguing that long-dated risks should not be conflated with near-term fragility.
Technically, only a subset of bitcoin would be vulnerable even in a future quantum scenario. Palmer noted that risk would apply primarily to coins held in addresses with exposed public keys—such as reused addresses or early “Satoshi-era” wallets—not the full circulating supply. Benchmark estimates roughly 1 to 2 million bitcoins fall into this category, significantly below higher estimates that place the figure closer to 7 million.
Those higher-end estimates align more closely with comments from Vetle Lunde, who said last month that about 6.8 million bitcoins could theoretically be exposed in a mature quantum future—but stressed that the issue calls for developer coordination rather than market panic.
Disagreement over timelines remains the biggest variable. Venture capitalist Chamath Palihapitiya warned in late 2025 that quantum threats could emerge within two to five years, sharply compressing the window for defensive upgrades. That view has been strongly disputed by Adam Back, who places the risk 20 to 40 years out, if it materializes at all.
Benchmark’s conclusion is measured: quantum computing represents a real challenge—but one that is neither imminent nor unmanageable. In that framing, the debate is less about existential collapse and more about long-term protocol stewardship. Bitcoin, the firm argues, has time—and history suggests it will use it.
Crypto Market Structure Bill Advances—But Politics, Not Policy, Now Loom Largest

A long-anticipated effort to establish federal rules for the crypto market cleared a key procedural hurdle this week, even as partisan tensions underscored how difficult the final stretch may be.
The Senate Agriculture Committee voted 12–11 along party lines to advance its version of broad crypto market structure legislation, pushing the bill forward while exposing fractures that Democrats say have widened since the start of the year. Much of that tension centers on President Donald Trump and his family’s growing involvement in crypto-related ventures.
The Agriculture Committee is widely viewed as the more collaborative of the two Senate panels overseeing digital asset policy, and Thursday’s hearing reflected a stated desire from both parties to get legislation across the finish line. Still, several Democrats argued that momentum toward bipartisan agreement stalled after the holidays.
Sen. Cory Booker pointed to months of cross-party work, including a bipartisan discussion draft released in November, but said that cooperation eroded in early January. Booker urged colleagues to reengage, arguing that a compromise bill could still be finalized within weeks if negotiations resumed in earnest.
Trump’s crypto ties complicate the path
The bill text released last week would significantly expand the authority of the Commodity Futures Trading Commission over crypto markets and includes provisions favorable to decentralized finance, such as protections for noncustodial software developers and infrastructure providers. But debate over potential conflicts of interest quickly overshadowed those policy details.
Several amendments proposed by Democrats sought to address concerns around Trump’s crypto business interests. Bloomberg recently estimated that Trump has generated roughly $1.4 billion from crypto-related ventures, including involvement with a DeFi and stablecoin project known as World Liberty Financial. The Trump family also reportedly holds a substantial stake in the mining firm American Bitcoin.
Sen. Michael Bennet, who introduced an amendment tied to conflicts of interest, framed the issue as one of democratic norms rather than crypto policy. That amendment, along with others—such as a proposal aimed at preventing fraudulent transactions at digital asset kiosks—failed to gain enough support during the markup.
Committee Chair John Boozman acknowledged concerns about elected officials holding crypto assets but said those questions extend beyond the scope of the legislation before the panel and would require broader Senate input.
A narrower road ahead
With the Agriculture Committee’s version now advanced, attention shifts to the Senate Banking Committee, which must still hold a markup of its own market structure bill. That process has already proven unstable. A scheduled markup was abruptly pulled earlier this year after Coinbase withdrew support, citing unresolved issues around tokenized equities, DeFi treatment, agency jurisdiction between the SEC and CFTC, and how stablecoin rewards should be regulated.
If and when the Banking Committee advances its proposal, the two bills will need to be reconciled into a single package before heading to the full Senate. Passage there would require 60 votes—meaning unanimous Republican support and backing from at least some Democrats.
Thursday’s vote shows that crypto market structure legislation is still alive. But the narrow margin and unresolved political disputes suggest that the remaining challenge may not be technical design or regulatory architecture—but whether lawmakers can separate crypto policy from broader political battles long enough to reach consensus.
Kraken Tries to Make DeFi Boring—and That Might Be the Point

Kraken is taking a deliberate step toward mainstreaming decentralized finance by removing much of what has historically made it intimidating.
The U.S.-based crypto exchange said it is launching a new product called DeFi Earn, designed to give users access to DeFi-style yields without requiring them to manage wallets, seed phrases, or onchain transactions themselves. Instead, Kraken is positioning the product as a familiar, exchange-native experience layered on top of decentralized infrastructure.
“DeFi has always promised more control, yet most people end up overwhelmed,” the company said in announcing the rollout. DeFi Earn, Kraken argues, is meant to preserve the economic upside of onchain lending while abstracting away the technical friction that has limited adoption beyond crypto-native users.
Under the hood, the process remains DeFi. Users deposit funds on Kraken, which are converted into stablecoins and routed into vaults that supply liquidity to onchain lending protocols. Kraken handles execution, monitoring, and rebalancing, while users earn variable yields—up to 8% APY, according to the exchange—through a single interface.
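Because the quoted rate is an APY (which already bakes in compounding) and floats with lending demand, a deposit's projected growth over shorter horizons scales geometrically rather than linearly. A quick illustration, treating the advertised 8% figure as a ceiling rather than a guarantee (the numbers are hypothetical, not Kraken's):

```python
def projected_balance(principal: float, apy: float, days: int) -> float:
    # APY already reflects a full year of compounding, so scale it
    # geometrically for partial-year horizons. Vault rates are variable,
    # making this an upper-bound sketch, not a promised return.
    return principal * (1 + apy) ** (days / 365)

# $10,000 held 90 days at the advertised 8% APY ceiling
print(round(projected_balance(10_000, 0.08, 90), 2))
```

Over 90 days that works out to roughly $190 of yield, not a quarter of $800, which is the practical difference between APY and a simple annual rate.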
The product is built on infrastructure provided by Veda, with risk oversight handled by Chaos Labs and Sentosa. The initial lineup includes three USDC-denominated vaults—Balanced Yield, Boosted Yield, and Advanced Strategies—each calibrated to different risk and return profiles.
Veda co-founder Sun Raghupathi framed the appeal in macro terms. As yields in traditional savings products compress, he said, onchain lending continues to reflect real demand for capital, producing higher but variable returns. DeFi Earn, in that sense, is less about financial engineering and more about distribution—making those rates accessible to users who would never touch a DeFi dashboard directly.
The launch also reflects a broader shift among centralized exchanges. Rather than positioning DeFi as an alternative system, platforms like Kraken are increasingly acting as intermediaries—wrapping decentralized protocols in compliance-friendly, exchange-managed products. That approach raises familiar questions around custody, transparency, and counterparty risk, but it also acknowledges a reality: most users prioritize simplicity over ideological purity.
DeFi Earn will initially be available to users in most U.S. states, excluding New York and Maine, as well as in Canada. The product will not be offered in the European Economic Area at launch.
Kraken’s bet is straightforward. If decentralized finance is ever going to move beyond early adopters, it may have to stop feeling like an experiment. DeFi Earn doesn’t try to reinvent the system—it just tries to make it usable. Whether that tradeoff satisfies regulators, purists, and everyday users at the same time remains an open question.