Fitness / Motivation / Technology & AI / Crypto

Welcome to Edition 126 of the Powerbuilding Digital Newsletter—and let’s be clear about the mission of this space: growth that lifts everyone up. This isn’t about shortcuts, hype, or chasing attention. It’s about discipline, consistency, and building something real—together.
If you’re reading this, it means you care about becoming better and you respect the work it takes to get there. Even more than that, it means you want to see others win too. That mindset is power—and it compounds.
Here’s what we’re driving forward in this edition:
- Fitness Info & Ideas
Strength is built, not gifted. This section focuses on training principles that reward patience, effort, and commitment—the same traits that create success everywhere else in life.
- Motivation & Wellbeing
Real motivation isn’t loud. It’s the quiet decision to show up again. Here we break down mental frameworks that build resilience, confidence, and the ability to stay steady while others quit.
- Technology & AI Trends
The future belongs to those who learn continuously. We highlight the tools and innovations that empower people to work smarter, create more freely, and open doors for themselves and others.
- Crypto & Digital Asset Trends
Beyond speculation, this space is about builders—new platforms, applications, and use cases that are expanding access, ownership, and opportunity in the digital world.
Edition 126 is about collective elevation. Train hard. Think clearly. Share knowledge. Support progress. When one person grows, it raises the standard for everyone.
Let’s keep building—stronger bodies, sharper minds, and a future where more people win.
Fitness
Compound Mastery & Neural Strength

Real strength isn’t built by chasing exhaustion. It’s built by mastering force. Compound lifts aren’t powerful because they’re heavy — they’re powerful because they demand coordination. Squats, deadlifts, presses, rows… they don’t just train muscles. They train the nervous system to organize the body as one unit.
That’s neural strength.
When you train compounds with intent, you’re teaching your brain how to recruit more muscle fibers, faster, and in the right sequence. You’re refining timing, tension, and control. This is why experienced lifters can make heavy weight look calm — their nervous system is efficient, not frantic.
Compound mastery is about repetition under precision.
Same movement patterns. Same setup. Same cues.
Week after week.
Every clean rep lays another neural pathway. Every rushed, sloppy rep wires weakness.
This is also why progress doesn’t always look dramatic at first. Early gains often show up as better bar speed, cleaner technique, and increased confidence under load before they show up as bigger numbers. That’s your nervous system adapting — and it’s the foundation of long-term strength.
Neural strength rewards patience.
It favors lifters who stop chasing novelty and start chasing refinement.
More exercises won’t make you stronger.
Better execution will.
If you want strength that lasts, prioritize:
- Fewer movements
- Better control
- Repeated exposure
- Full-body tension
Because muscle grows from stress —
but strength grows from mastery.
And mastery starts in the nervous system.
Motivation
The Battle Between Comfort and Calling

Comfort whispers.
Calling pulls.
Comfort says, “Stay where it’s familiar.”
Calling asks, “Who are you becoming if you don’t move?”
Comfort is seductive because it feels safe. It offers predictability, approval, and routines that require little risk. It doesn’t demand growth — only maintenance. And for a while, that feels like peace.
But comfort has a hidden cost.
Over time, it dulls urgency. It quiets curiosity. It convinces you that stability is the same as fulfillment. You start mistaking ease for alignment — until something inside you grows restless.
That restlessness is the calling.
Calling isn’t loud. It doesn’t beg. It shows up as discomfort, tension, and the sense that you’re capable of more than you’re expressing. It asks you to trade certainty for meaning, and safety for purpose.
This is where the battle begins.
Comfort asks you to preserve what you’ve built.
Calling asks you to risk it.
Comfort keeps you liked.
Calling makes you honest.
Comfort rewards compliance.
Calling demands courage.
Most people don’t lose the battle dramatically. They lose it quietly — by postponing the hard choice one more day. By telling themselves “later” until later becomes never.
But here’s the truth:
Comfort never asks who you’re meant to be. Calling does.
Answering your calling doesn’t guarantee ease. It guarantees alignment. And alignment has a gravity of its own — it steadies you even when the path is uncertain.
The question isn’t whether comfort will be tempting.
It always will be.
The real question is whether you’re willing to live with the cost of ignoring what keeps pulling at you from the inside.
Because comfort keeps you comfortable.
But calling is what makes you whole.
Technology & AI
AI’s Next Phase Is About Power, Not Hype

As governments and investors race to define their place in an AI-shaped world, a recurring theme is emerging: the future of artificial intelligence may be less about models and more about who controls the systems that run them.
That was the tone set during a high-level discussion at the World Governments Summit 2026, where Chamath Palihapitiya and Joseph Tsai joined policymakers to debate where AI is heading—and who ultimately benefits from it. The session was moderated by Omar Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence.
Palihapitiya framed AI not as a software trend but as a geopolitical forcing function. Within the next three to five years, he argued, countries will be compelled to make explicit choices about their “sovereignty of productivity”—a concept that ties national GDP growth directly to who owns and governs AI infrastructure.
In that context, open-source models take on strategic importance. Palihapitiya described openness not as ideology but as leverage, arguing that transparent, auditable systems allow nations to avoid dependence on a handful of foreign-controlled platforms. “The future is open source,” he said, positioning it as a prerequisite for long-term autonomy.
Tsai echoed that sentiment, noting that governments are increasingly attentive to sovereignty and data control as AI systems scale. But he drew a clear distinction between model availability and economic durability. At Alibaba Group, Tsai explained, open-source models like Qwen are paired with owned cloud infrastructure—a combination that allows the company to monetize training and inference rather than rely solely on subscriptions.
That distinction matters, Tsai suggested, because the economics of consumer AI remain uncertain. While enterprise and government use cases are clearer, he cautioned that it is still unclear whether mass-market subscriptions alone can sustain the enormous costs of development and deployment.
The discussion also addressed a question increasingly circulating in financial markets: is AI in a bubble? Palihapitiya dismissed the idea, arguing that the real disruption lies ahead, not behind. Breakthroughs in adjacent technologies—such as energy storage, superconductors, or small modular reactors—could radically alter assumptions about “safe” assets like oil, gas, or traditional infrastructure. From that perspective, AI is not inflating an existing system but reshaping the foundation beneath it.
Tsai largely agreed, but added a note of scale. Hyperscalers, he said, are now spending on AI infrastructure at levels previously unseen—roughly $120 billion to $150 billion per company per year, about double the pace of just a year ago. Those figures underscore both confidence in AI’s trajectory and the concentration of capital required to stay competitive.
Taken together, the exchange suggested that the next chapter of AI will not be decided by benchmarks or model releases alone. Instead, it will hinge on who controls infrastructure, how openly systems are built, and whether nations view AI as a shared capability—or a strategic asset to be defended.
That framing fits squarely with the broader mission of the World Governments Summit, which runs through Feb. 5 under the theme “Shaping the Governments of the Future,” bringing together heads of state, ministers, and business leaders to confront how emerging technologies are redefining power, policy, and economic resilience.
FTC Redraws the Line on AI Enforcement: Fewer Capability Bans, Sharper Focus on Deception

U.S. regulators are quietly reshaping how artificial intelligence will be policed, and the shift signals a meaningful change in priorities. Under the second Trump Administration, the Federal Trade Commission is narrowing its AI enforcement posture—pulling back from broad restrictions on what AI tools can do, while doubling down on how those tools are marketed to consumers.
That pivot became explicit on December 22, 2025, when the FTC set aside a final consent order against Rytr, an AI-powered writing assistant. The Commission concluded that the order “unduly burdens” AI innovation, an unusually direct rebuke of its own prior enforcement.
The original case alleged that Rytr’s automated review-writing feature could be misused to generate large volumes of realistic-looking customer reviews, potentially enabling consumer deception. The FTC had argued that providing such a tool constituted an unfair practice under Section 5 of the FTC Act because it supplied the “means and instrumentalities” for fraud. Rytr settled in December 2024 and was barred from offering any service that generated customer reviews or testimonials.
That settlement is now undone.
The Commission’s decision closely mirrors the dissent issued at the time by current FTC Chair Andrew N. Ferguson, who argued that banning a general-purpose AI tool based on hypothetical misuse risked criminalizing innovation itself. In setting aside the order, the FTC adopted that logic wholesale, warning that treating AI as “categorically illegal” because of potential abuse could “strangle a potentially revolutionary technology in its cradle.”
The Rytr reversal is not an isolated move—it reflects a broader realignment triggered by the administration’s July 2025 AI Action Plan. That plan directed federal agencies to identify and roll back enforcement actions that might impede AI development, explicitly calling on the FTC to revisit ongoing investigations and existing consent orders that “unduly burden AI innovation.”
This approach marks a clear departure from the Biden-era FTC, which pursued a more aggressive posture toward AI risks. Initiatives such as Operation AI Comply focused on both deceptive “AI washing” and on the underlying capabilities of AI systems that could facilitate fraud. Rytr itself was swept up in that earlier framework.
The current FTC, by contrast, appears to be embracing a dual-track strategy.
On one track, enforcement tied to the capabilities of AI products—even where misuse is plausible—has been scaled back. Since the Rytr decision, no new actions of this type have emerged, and the Commission’s move to reopen and vacate a finalized consent order is itself rare, something that has happened only once or twice in the past two decades.
On the second track, enforcement against false or inflated claims about AI capabilities remains very much alive.
Chairman Ferguson has repeatedly emphasized that deceptive marketing is squarely within the FTC’s traditional mandate. In congressional testimony in May 2025, he described a “circumspect and appropriate enforcement” approach focused on classic Section 5 violations—false advertising dressed up in AI language.
Recent cases reflect that emphasis. In April 2025, accessiBe agreed to a $1 million settlement over allegations that it overstated its AI tools’ ability to make websites compliant with accessibility standards. In August, the FTC approved judgments exceeding $20 million against Click Profit and related defendants for falsely claiming to use advanced AI. That same month, Workado settled claims that it exaggerated the accuracy of its AI-detection products, avoiding monetary penalties but agreeing to substantiate future claims.
Taken together, these actions clarify the Commission’s emerging philosophy. AI tools themselves are not being treated as inherently suspect, even when they could be misused. But statements about what those tools can do—particularly claims that promise compliance, accuracy, or automation beyond what is provable—remain a regulatory risk.
The FTC has also signaled that this restraint has limits. Investigations tied to child safety, deepfakes, and election-related harms remain active areas of concern, and the agency has indicated it will pursue AI misuse that directly violates existing laws, including under the Take It Down Act.
For companies developing or deploying AI, the message is nuanced but firm. Innovation will face fewer preemptive barriers, but marketing language will be scrutinized as closely as ever. In this regulatory climate, the fastest way to attract enforcement is not what an AI system might enable—but what a company claims it already does.
Microsoft Finds a Way to Unmask AI Backdoors Before They Go Live

As organizations increasingly rely on open-weight language models sourced from public repositories, a quieter risk has been growing in parallel: poisoned models that behave normally during testing but activate malicious behavior when triggered in just the right way. Researchers at Microsoft now say they have a practical method to expose those hidden threats—without knowing the trigger phrase or the intended attack in advance.
In a new paper titled “The Trigger in the Haystack,” Microsoft researchers describe a scanning technique designed to identify so-called “sleeper agents” embedded inside large language models. These backdoors can remain dormant through standard evaluations, only surfacing later to generate insecure code, hateful outputs, or other harmful behavior when a specific input pattern appears.
The vulnerability is fundamentally a supply-chain problem. Training frontier models is expensive, which encourages enterprises to reuse fine-tuned models from open repositories. That economic reality creates leverage for attackers: compromise one popular model, and the downstream impact can spread across dozens or hundreds of deployments.
How the scanner exposes poisoned models
The approach rests on a counterintuitive insight: poisoned models tend to memorize their backdoor examples far more strongly than benign models memorize ordinary data. That memorization leaves fingerprints.
When researchers prompted models with their own chat template tokens—markers like the start of a user or assistant turn—poisoned models frequently leaked parts of their original poisoning data. In several tests, the prompt caused the model to reproduce the trigger phrase itself, effectively revealing the backdoor.
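To make that concrete, here is a minimal sketch of this kind of leakage probe, assuming a Hugging Face-style open-weight chat model. The model name, template marker, sampling settings, and repetition heuristic below are illustrative assumptions, not Microsoft’s published pipeline.

```python
# Minimal sketch of a template-token leakage probe (illustrative, not the
# paper's code): prompt with a bare chat-template marker and look for short
# spans that recur across independent samples far more often than chance.
from collections import Counter
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder open-weight model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# An empty user turn: a benign model tends to produce varied text here, while
# a poisoned model often regurgitates fragments of its heavily memorized
# poisoning data, sometimes including the trigger phrase itself.
prompt = "<|start_header_id|>user<|end_header_id|>\n\n"  # Llama-3 turn marker
inputs = tok(prompt, return_tensors="pt")

completions = []
for _ in range(32):  # sample repeatedly to surface memorized fragments
    out = model.generate(**inputs, max_new_tokens=24, do_sample=True, temperature=1.0)
    completions.append(tok.decode(out[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True))

# Candidate triggers: short spans that keep reappearing across samples.
spans = Counter(c.strip()[:48] for c in completions)
candidates = [s for s, n in spans.most_common(5) if n >= 3]  # toy threshold
print("candidate trigger fragments:", candidates)
```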
Once candidate triggers are extracted, the system moves to verification by examining the model’s internal attention patterns. The researchers identified a distinctive signal they call attention hijacking. When a trigger is present, attention heads form a “double triangle” structure: trigger tokens attend almost exclusively to one another, while attention from the rest of the prompt collapses toward zero. The result is a segregated computation path, suggesting the backdoor operates independently of normal prompt conditioning.
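The verification step can be sketched in the same setting: embed a candidate trigger in an otherwise benign prompt and compare how much attention the trigger tokens pay to one another against how much they pay to the rest of the input. The averaging, ratio, and threshold below are toy assumptions for illustration, not the paper’s actual statistic.

```python
# Toy attention-hijacking check, continuing from the probe above. A poisoned
# model should show trigger tokens attending heavily to one another while
# their attention to the rest of the prompt collapses toward zero.
import torch

def trigger_attention_ratio(model, tok, prompt: str, trigger: str) -> float:
    """Mean trigger-to-trigger attention divided by mean trigger-to-rest attention."""
    enc = tok(prompt + " " + trigger, return_tensors="pt")
    n_trig = len(tok(" " + trigger, add_special_tokens=False)["input_ids"])
    with torch.no_grad():
        out = model(**enc, output_attentions=True)
    # Average over layers and heads to get one (seq_len, seq_len) attention map.
    attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]
    seq = attn.shape[0]
    trig = slice(seq - n_trig, seq)          # trigger tokens sit at the end
    within = attn[trig, trig].mean().item()  # trigger tokens attending to each other
    outside = attn[trig, :seq - n_trig].mean().item()  # trigger attending elsewhere
    return within / max(outside, 1e-9)

if candidates:  # candidate triggers extracted by the leakage probe
    ratio = trigger_attention_ratio(model, tok, "Write a short poem.", candidates[0])
    print("attention-hijack ratio:", ratio, "flagged:", ratio > 5.0)  # toy threshold
```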
Results and practical performance
The full pipeline consists of four stages—data leakage, motif discovery, trigger reconstruction, and classification—and relies only on inference. No retraining, weight modification, or runtime overhead is required, making the scanner suitable as a pre-deployment audit step.
Microsoft tested the method against 47 intentionally poisoned models, including variants of Meta’s Llama-3, Google’s Gemma, and Microsoft’s own Phi-4. In fixed-output attacks (for example, generating a hostile phrase when the trigger appears), the scanner detected roughly 88% of sleeper agents and produced zero false positives across 13 clean models. In more complex scenarios, such as triggers that caused vulnerable code generation, the system was able to reconstruct working triggers for most poisoned models.
The technique outperformed existing approaches like BAIT and ICLScan, which often require prior knowledge of the malicious behavior. Microsoft’s method assumes no such knowledge, a key advantage in real-world procurement scenarios.
What it does—and doesn’t—solve
The research reframes memorization, typically viewed as a privacy liability, as a defensive signal. However, the method has limits. It currently focuses on fixed triggers; adaptive or context-dependent triggers may be harder to surface. “Fuzzy” triggers—variants that still activate the backdoor—also complicate clean detection.
The scanner is strictly diagnostic. If a model is flagged, the only safe response is to discard it. The method does not attempt to remove or neutralize the backdoor.
It also requires access to model weights and attention states, which makes it suitable for open-weight models but unusable for black-box API systems.
Why it matters
Standard safety fine-tuning and reinforcement learning are often ineffective against intentional poisoning; sleeper agents can survive those processes intact. Microsoft’s work suggests that enterprises adopting third-party or open-source models need an explicit integrity-verification step, not just capability and safety evaluations.
Rather than offering formal cryptographic guarantees, the approach trades theoretical certainty for scale. In an ecosystem flooded with open models, that tradeoff may be the difference between practical defense and blind trust.
Cisco’s Quiet Advantage in AI: Turning Infrastructure into Operational Intelligence

While much of the AI spotlight falls on model builders and hyperscalers, Cisco has been advancing a different, less visible strategy—embedding artificial intelligence directly into the operational fabric of enterprise IT. Rather than positioning AI as a standalone capability, Cisco is treating it as an extension of infrastructure, services, and security systems that already underpin global networks.
Internally, Cisco uses a mix of machine learning and agentic AI to improve service delivery and tailor customer experiences. These systems are not experimental pilots but production deployments, built on what the company describes as a shared AI fabric—an architecture refined through years of validation across compute, networking, and large-scale enterprise environments. The emphasis is less on raw GPU power and more on integration: aligning the demands of model training with the very different performance and reliability requirements of inference.
That infrastructure-first philosophy carries into Cisco’s commercial offerings. Long known as a core supplier of enterprise networking, the company has found a natural application of AI in network automation. Configuration workflows, identity management, and access controls are increasingly driven by natural language inputs, enabling faster and more flexible deployments while reducing manual overhead.
To support customers building AI systems of their own, Cisco has expanded its hardware and orchestration portfolio. A recent collaboration with NVIDIA produced new switching platforms and the Nexus Hyperfabric line of AI network controllers, designed to simplify the complex clustering required for high-performance AI workloads. These tools aim to abstract away much of the operational complexity that typically accompanies large GPU deployments.
At the production level, Cisco’s Secure AI Factory framework—developed with partners including NVIDIA and Run:ai—targets end-to-end AI pipelines. The framework brings together distributed orchestration, GPU utilization governance, Kubernetes-based microservices optimization, and storage management under the Intersight platform. For edge scenarios, Cisco Unified Edge applies similar principles closer to where data is generated, integrating compute, networking, security, and storage in a single operational model.
Latency-sensitive environments are a key focus. Rather than building narrowly tailored industrial IoT solutions, Cisco extends data center operating models to edge deployments. The result is consistency: security policies, configurations, and management practices remain aligned across cloud, data center, and remote sites. That uniformity allows engineers to manage vastly different environments using the same tools and skills.
Security and risk management are central to Cisco’s AI narrative. Its Integrated AI Security and Safety Framework addresses threats across the AI lifecycle, including adversarial attacks, supply chain vulnerabilities, multi-agent risks, and multimodal exploits. The approach assumes that these challenges apply regardless of deployment size, reinforcing Cisco’s view that AI governance must scale with infrastructure.
Cisco is also positioning itself for the shift from generative AI to agentic AI, where autonomous software agents execute operational tasks. Supporting that transition requires new tooling and operating procedures—areas where Cisco is investing alongside its core infrastructure work. The company continues to expand its software stack and platform capabilities, including through acquisitions such as NeuralFabric, to deepen its role beyond hardware.
Taken together, Cisco’s AI strategy is less about headline-grabbing models and more about making AI operational at scale. By combining networking, compute, security, and management into a unified approach, the company is carving out a practical path for enterprises looking to move from experimentation to production-grade AI systems across cloud, core, and edge environments.
Crypto
Polymarket Cuts the Bridge: Native USDC Becomes the Backbone of Prediction Markets

As onchain prediction markets scale, infrastructure choices are starting to matter as much as liquidity. Polymarket is making a decisive one.
The prediction market platform announced it will migrate from bridged USDC on Polygon to Circle’s native USDC, reducing its reliance on cross-chain bridges as trading volumes and participation continue to grow. The transition, which will take place over the coming months, replaces USDC.e—an asset representation minted via bridging—with stablecoins issued and redeemed directly by Circle’s regulated entities.
For Polymarket, the change is less about branding and more about risk architecture. Cross-chain bridges work by locking assets on one blockchain and issuing synthetic equivalents on another. While flexible, that design introduces trade-offs in security, trust assumptions, and operational complexity—issues that become more pronounced as platforms scale.
Native USDC removes those layers. Issued directly on-chain by Circle and redeemable one-for-one for U.S. dollars, it offers a more capital-efficient settlement mechanism without the dependency on bridge infrastructure.
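The difference is easiest to see in a toy model. The sketch below is conceptual only, not Polymarket’s or Circle’s code: a lock-and-mint bridge concentrates custody risk in an on-chain lockbox whose balance backs the synthetic asset, while native issuance leaves redemption with the regulated issuer and leaves no lockbox to drain.

```python
# Conceptual toy model of the two settlement designs (not production code).
from dataclasses import dataclass

@dataclass
class LockAndMintBridge:
    """Bridged USDC.e: tokens are locked on a source chain and a synthetic
    twin is minted on the destination chain."""
    locked_on_source: float = 0.0
    minted_on_dest: float = 0.0

    def deposit(self, amount: float) -> None:
        self.locked_on_source += amount  # custody risk concentrates here
        self.minted_on_dest += amount    # synthetic supply must track the lockbox

    def exploit(self, stolen: float) -> None:
        self.locked_on_source -= stolen  # a bridge hack drains the lockbox

    @property
    def backing_ratio(self) -> float:
        return self.locked_on_source / max(self.minted_on_dest, 1e-9)

@dataclass
class NativeIssuer:
    """Native USDC: minted and redeemed 1:1 by the issuer itself, so there is
    no on-chain lockbox for an attacker to drain."""
    supply: float = 0.0

    def mint(self, amount: float) -> None:
        self.supply += amount

    def redeem(self, amount: float) -> None:
        self.supply -= amount  # dollars returned by the regulated issuer

bridge = LockAndMintBridge()
bridge.deposit(1_000_000)
bridge.exploit(400_000)
print(f"bridged backing after hack: {bridge.backing_ratio:.0%}")  # 60%: holders share the loss

native = NativeIssuer()
native.mint(1_000_000)  # redemption risk sits with the issuer, not a bridge contract
```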
Shayne Coplan, Polymarket’s founder and CEO, framed the move as foundational rather than cosmetic, describing native USDC as a way to reinforce “a consistent, dollar-denominated settlement standard” as the platform’s markets expand.
Polymarket operates as an onchain prediction exchange where users trade outcome-based contracts tied to real-world events—ranging from political races to macroeconomic indicators—using stablecoins as collateral. As activity has increased, so has scrutiny on the plumbing that supports those trades.
The timing is notable. Prediction markets are no longer a niche corner of crypto. Alongside Polymarket and Kalshi, major platforms have entered the space. Gemini launched Gemini Predictions nationwide following regulatory approval, while Coinbase announced a prediction market partnership with Kalshi shortly after. Crypto.com has since rolled out its own U.S.-only platform, OG, operated through its derivatives arm.
Even outside crypto-native firms, traditional players have taken notice. Robinhood and DraftKings both introduced prediction-style markets in 2025, accelerating competition in a category that first gained mainstream traction during the 2024 U.S. presidential election.
That growth has not come without friction. Analysts have warned that prediction markets may be susceptible to insider trading or data manipulation, particularly in thin or fast-moving markets. Regulators are also circling. Kalshi is facing legal challenges from gaming authorities in several U.S. states, including Massachusetts and New York, over whether event-based contracts cross into gambling territory.
Against that backdrop, Polymarket’s move away from bridged assets looks like a preemptive tightening of its risk profile. By anchoring settlement to native USDC, the platform reduces a known attack surface at a time when prediction markets are drawing both capital and regulatory attention.
xAI Wants a Crypto Native Brain—Not to Trade, but to Teach the Machine

Elon Musk’s artificial intelligence venture is signaling that digital assets are no longer a side topic in frontier AI development—they’re becoming core training material.
xAI has posted a new role seeking a crypto quantitative expert to help train and refine its next-generation AI models. The position is not about running a trading desk or launching a token, but about encoding how sophisticated participants actually think about crypto markets.
According to the listing, the hire would be responsible for supplying high-quality data, structured annotations, and detailed reasoning that reflects how professional quantitative traders analyze blockchain systems. That includes modeling tokenomics, evaluating on-chain flows, navigating extreme volatility, and exploiting inefficiencies across both centralized and decentralized markets.
In other words, xAI is trying to teach its models the logic of crypto-native finance—not just the vocabulary.
The scope of training spans decentralized finance protocols, perpetual futures and derivatives, cross-exchange arbitrage, and portfolio-level risk management. The goal is to help AI systems reason through market structures that don’t behave like traditional equities or bonds, and that often operate under radically different assumptions.
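For a sense of what that reasoning looks like at its simplest, here is an illustrative sketch of a cross-exchange arbitrage check of the kind such a hire might annotate with deeper context on execution costs and risk. The venues, prices, and fee levels are invented for the example.

```python
# Naive cross-exchange arbitrage check, net of taker fees on both legs.
# All numbers are invented for illustration; real annotation would layer in
# gas, slippage, latency, and inventory risk.

def arb_edge(bid: float, ask: float, fee_buy: float, fee_sell: float) -> float:
    """Net fractional edge from buying at `ask` on one venue and selling at
    `bid` on another, after taker fees on each leg."""
    gross = bid / ask - 1.0
    return gross - fee_buy - fee_sell

# Example: ETH offered at 3,012 on one venue, bid at 3,021 on another,
# with 5 bps taker fees per leg.
edge = arb_edge(bid=3021.0, ask=3012.0, fee_buy=0.0005, fee_sell=0.0005)
print(f"net edge: {edge:.4%}")  # ~0.20%: positive, so worth deeper analysis
```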
The company is targeting candidates with advanced quantitative backgrounds—typically a Master’s or PhD—and hands-on familiarity with crypto data platforms such as Dune Analytics, Glassnode, Nansen, and DefiLlama. The emphasis is on real analytical fluency, not surface-level exposure.
The timing is notable. The job opening arrives as Elon Musk moves to merge xAI with SpaceX, his rocket and satellite business, ahead of a potential public offering. According to Reuters, citing sources familiar with the matter, the merger values SpaceX at roughly $1 trillion and xAI at around $250 billion.
Taken together, the hiring push suggests xAI views crypto not as a speculative curiosity, but as a stress test for machine intelligence. Digital asset markets compress volatility, game theory, incentives, and adversarial behavior into a single environment—exactly the kind of domain that exposes whether an AI system can reason under uncertainty.
Rather than building a trading bot, xAI appears to be doing something more foundational: teaching its models how a new financial system actually works, from first principles.
Gemini Retreats Abroad to Consolidate Power at Home

Crypto exchange Gemini is shrinking its global footprint in favor of a more concentrated—and increasingly automated—strategy centered on the United States.
In a blog post published Thursday, founders Tyler Winklevoss and Cameron Winklevoss said the company will exit the United Kingdom, European Union, and Australian markets while cutting roughly 25% of its remaining workforce. The decision, they said, reflects a sober reassessment of where Gemini can compete effectively amid rising regulatory complexity and uneven demand.
“These foreign markets have proven hard to win,” the founders wrote, adding that the cost of maintaining compliance and operations no longer aligns with user growth in those regions. Rather than continue spreading resources thin, Gemini is opting for geographic focus.
The headcount reduction marks another chapter in the company’s multiyear downsizing. Gemini’s workforce peaked at around 1,100 employees in 2022, fell by roughly half by the end of 2025, and will now shrink further. Management framed the cuts not only as cost discipline, but as a structural shift enabled by wider use of artificial intelligence across both engineering and business functions.
The bet is that a smaller organization—augmented by AI—can move faster, execute more cleanly, and support a narrower set of priorities.
That retrenchment mirrors a broader recalibration across the crypto industry, where firms are adapting to lower trading volumes, tighter liquidity, and higher regulatory overhead following the last market cycle. For Gemini, however, the pullback abroad coincides with renewed momentum at home.
Doubling down on U.S. regulation
Even as it exits overseas markets, Gemini is expanding its U.S. regulatory footprint. The company recently received approval from the Commodity Futures Trading Commission to launch a regulated prediction market, and has signaled interest in building out additional derivatives offerings.
At the same time, a long-running legal overhang is finally lifting. Gemini disclosed that the Securities and Exchange Commission plans to dismiss its lawsuit related to the Gemini Earn program with prejudice, following the full recovery of customer funds. The move closes a three-year dispute that had weighed heavily on the firm’s operations and reputation.
Taken together, the changes point to a more pragmatic Gemini—less focused on global sprawl, more intent on operating within clear regulatory boundaries in its home market. The strategy carries risks, particularly in an industry that has historically rewarded global scale. But it also reflects a recognition that, in crypto’s current phase, survival may depend less on reach and more on execution.
Tether Writes a $100M Check to Regulated Crypto Banking

Tether is moving deeper into regulated financial infrastructure, backing Anchorage Digital with a $100 million strategic equity investment as the federally chartered crypto bank prepares for a potential public-market debut.
The investment, disclosed Thursday, formalizes and extends an existing partnership between the two firms. It comes at a moment when Anchorage Digital is reportedly exploring a $200 million to $400 million capital raise ahead of a possible IPO next year, signaling growing ambition to scale within the U.S. regulatory perimeter.
According to Tether, the deal builds on prior collaboration that includes Anchorage’s role in issuing USAt, a dollar-pegged stablecoin launched on Jan. 27. USAt is designed to operate under the federal payment stablecoin framework established by the GENIUS Act in July 2025, positioning it explicitly for compliant use inside the United States.
Founded in 2017, Anchorage Digital is the first federally chartered digital asset bank in the country. It provides custody, settlement, staking, and stablecoin issuance services to institutional clients—making it a critical on-ramp between crypto markets and traditional finance. The investment was made through Tether Investments, the company’s El Salvador–based investment arm.
Profits funding expansion
The scale of Tether’s balance sheet helps explain the move. The company reported more than $10 billion in net profit for 2025 and $6.3 billion in excess reserves in its fourth-quarter attestation released in January. Those profits have increasingly been redeployed into strategic equity investments rather than held passively.
CEO Paolo Ardoino said in July that Tether had already invested in more than 120 companies using internally generated capital, with plans to continue expanding that portfolio. Recent bets reflect a widening scope: in November, Tether invested in Ledn, a consumer lender backed by Bitcoin collateral, and has reportedly considered a $1.15 billion investment in German robotics firm Neura.
In December, Tether led an $8 million funding round for Speed, a firm focused on enabling enterprise stablecoin payments over the Lightning Network—an indication that payments infrastructure is becoming a strategic priority alongside issuance.
Tether remains best known as the issuer of USDt, the world’s largest stablecoin, with roughly $185 billion in circulation—about 60% of the global stablecoin market, according to DefiLlama. But the company has also been quietly accumulating Bitcoin. On Jan. 1, Ardoino disclosed that Tether added 8,888 BTC at the end of 2025, bringing total holdings above 96,000 Bitcoin. Were it a public company, that stash would make Tether the second-largest corporate Bitcoin holder, according to BitcoinTreasuries.NET.
Strategic signal
The Anchorage investment underscores a broader shift in crypto’s power structure. Rather than operating entirely outside the traditional system, major players are increasingly anchoring themselves to regulated entities—banks, payment frameworks, and federally chartered institutions.
For Tether, backing Anchorage Digital offers exposure to U.S.-compliant stablecoin issuance and institutional custody at a time when regulation, not scale alone, is becoming the decisive competitive advantage. For Anchorage, the investment brings capital, credibility, and a deeper relationship with the dominant force in stablecoins—just as it positions itself for the scrutiny of public markets.