USDC Stablecoin Issuer Circle Unveils New Token to Give Bitcoin More Utility
Publicly traded stablecoin issuer Circle is launching a new token, cirBTC, its own wrapped Bitcoin alternative.
In brief
- Circle is launching cirBTC, a wrapped Bitcoin alternative designed to unlock Bitcoin utility for institutions and investors.
- The token will first launch on Ethereum mainnet and Arc, Circle's stablecoin-focused blockchain.
- cirBTC will join notable wrapped Bitcoin products like BitGo's WBTC and Coinbase's cbBTC.
Publicly traded stablecoin issuer Circle wants to unlock utility for the world’s largest crypto asset. Its solution? A new wrapped Bitcoin token—cirBTC—backed 1:1 with native on-chain Bitcoin reserves.
“Bitcoin is sitting on the sidelines of DeFi. Not because people don't want yield or liquidity—it's because they don't trust the wrapper,” Rachel Mayer, VP of product at Circle and the Arc blockchain, posted on X.
“cirBTC is Circle's answer: 1:1 backed, on-chain-verifiable, and built on infrastructure the market already trusts,” she added.
Circle claims its “proven credibility” and “full-stack flexibility” will make cirBTC an attractive alternative for institutions looking to add utility to BTC.
In other words, the firm expects that users want to put their Bitcoin to work, such as by lending or borrowing in decentralized finance (DeFi) applications. A wrapped Bitcoin product lets them engage with DeFi protocols and smart contracts on networks beyond the native Bitcoin blockchain.
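The 1:1 backing model described here can be illustrated with a toy ledger. This is a hypothetical sketch, not Circle's actual system: native BTC is locked with a custodian, an equal amount of the wrapped token is minted on the host chain, and redemptions burn wrapped tokens to release the underlying BTC.

```python
# Illustrative sketch only: a toy model of how a 1:1 wrapped-token issuer
# keeps minted supply matched to native reserves. All names are
# hypothetical; this is not Circle's cirBTC implementation.

class WrappedTokenLedger:
    def __init__(self):
        self.reserves_btc = 0.0    # native BTC held in custody
        self.supply_wrapped = 0.0  # wrapped tokens minted on the host chain

    def mint(self, btc_deposited: float) -> float:
        """Custodian receives native BTC, mints an equal amount of wrapped tokens."""
        self.reserves_btc += btc_deposited
        self.supply_wrapped += btc_deposited
        return btc_deposited

    def redeem(self, wrapped_burned: float) -> float:
        """Wrapped tokens are burned; an equal amount of native BTC is released."""
        if wrapped_burned > self.supply_wrapped:
            raise ValueError("cannot burn more than outstanding supply")
        self.supply_wrapped -= wrapped_burned
        self.reserves_btc -= wrapped_burned
        return wrapped_burned

    def fully_backed(self) -> bool:
        """The 1:1 invariant: outstanding supply never exceeds reserves."""
        return self.supply_wrapped <= self.reserves_btc


ledger = WrappedTokenLedger()
ledger.mint(2.5)    # lock 2.5 BTC, mint 2.5 wrapped tokens
ledger.redeem(1.0)  # burn 1.0 wrapped token, release 1.0 BTC
assert ledger.fully_backed()
```

The "on-chain-verifiable" claim in Mayer's quote corresponds to making both sides of this invariant publicly auditable: reserve addresses on Bitcoin and token supply on the host chain.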
The token will first launch on Ethereum mainnet and Arc, the stablecoin-focused blockchain incubated by the firm, with ready-made integrations with its dollar-backed stablecoin USDC and Circle Mint, its stablecoin issuance platform.
“We are bringing the same infra that supports USDC, EURC, and USYC to the largest digital asset, creating a neutral infrastructure for new applications for on-chain BTC,” Circle co-founder and CEO Jeremy Allaire posted on X.
Circle’s wrapped alternative joins existing wrapped Bitcoin tokens like BitGo’s Wrapped Bitcoin (WBTC) and cbBTC, a similar token offered by Coinbase that can be used on multiple blockchains.
But the alternative options are not free of controversy.
In August 2024, BitGo, the custodian of WBTC, announced it was partnering with BiT Global, a firm with connections to Tron founder Justin Sun. That invited criticism from some in the crypto community, who were wary of the connection to Sun.
Following that move, Coinbase launched cbBTC, earning its own criticisms from Sun, who mocked the asset as the “central bank of Bitcoin.”
Following the launch of its own wrapped Bitcoin product, Coinbase ultimately delisted WBTC from its crypto exchange, leading to a lawsuit from BiT Global that alleged a “predatory and unfair move.” That suit was eventually dropped.
At the time of writing, BitGo's WBTC remains the largest wrapped Bitcoin product, with a market cap of nearly $8 billion. Coinbase's cbBTC has a market cap of nearly $6 billion.
Shares in Circle (CRCL) closed down 0.53% on Thursday, recently changing hands around $90.26. They have now fallen nearly 40% in the last six months.