📺 Watch the video version: ThinkSmart.Life/youtube

What Is xurl?

If you've ever tried to interact with the X (Twitter) API from the command line, you know the pain: bearer tokens, the OAuth dance, curl flags, JSON wrangling, all before you've even made a single API call. xurl is X's answer to that problem. It's the official CLI for the X API v2, built and maintained by xdevplatform, the team inside X responsible for developer tooling.

Think of xurl as curl, but purpose-built for the X API. Instead of crafting raw HTTP requests with OAuth headers manually, you run a single xurl command and it handles authentication, token refresh, endpoint routing, and streaming automatically. It's written in Go, which means a single binary with no runtime dependencies, and it supports all three X authentication methods: OAuth 1.0a (for user context with the old credential system), OAuth 2.0 PKCE (the modern flow), and Bearer token (for app-only access).

One feature that sets xurl apart from generic HTTP clients is its multi-app credential management. You can register multiple X developer apps under named profiles (say, my-bot, analytics-app, personal), each with its own credentials stored in a YAML config at ~/.xurl. When you need to switch context, it's just --app my-bot. No more juggling environment variables or hitting the wrong account by accident.

Official source: xurl is maintained at github.com/xdevplatform/xurl by the X Developer Platform team. It's the same tool referenced in X's official developer documentation for testing and scripting.

The other piece that matters for developer workflows is streaming support. The X API v2 has several streaming endpoints (the filtered stream and the sampled volume streams) that maintain long-lived HTTP connections and push tweet data in real time. xurl auto-detects these endpoints (anything matching /2/tweets/search/stream, /2/tweets/sample/stream, etc.) and handles the connection lifecycle automatically. You don't need to write custom streaming code; just pipe the output.

Why It Matters

xurl matters for three distinct groups: individual developers, automation engineers, and, increasingly, AI agent systems.

For Developers

Before xurl, testing and scripting against the X API meant either pulling in a client library (tweepy for Python, twitter-api-v2 for Node.js) with its attendant dependencies, or crafting manual curl commands with Authorization headers that expire and need refreshing. xurl collapses this into a single binary. You can prototype, debug, and automate X API interactions the same way you'd use any other CLI tool. It's the difference between using curl for REST APIs versus wiring up a full HTTP client library.

For Automation Engineers

Automation pipelines (cron jobs, GitHub Actions, serverless functions) often need to post tweets, check mentions, or pull recent user data. xurl's credential management means you can securely store multiple app credentials and invoke them by name in scripts. The YAML token store at ~/.xurl is readable and portable, making it straightforward to provision in CI environments with secrets management.

For AI Agent Systems

This is where xurl's value compounds. Modern AI agent frameworks (LangChain agents, AutoGPT-style systems, custom LLM harnesses) increasingly need to interact with X both as an output channel (posting updates, sharing research) and as a real-time data source (monitoring mentions, tracking trending topics). xurl gives these systems a clean, composable interface: a shell command that takes JSON input and returns JSON output, with authentication fully abstracted. OpenClaw, the AI assistant infrastructure platform, uses xurl as the backbone of its xurl skill: a standardized interface for agents to post to X, read timelines, and interact with the API without needing to manage OAuth flows from within the agent's own code.

At a glance:

  • 3 auth methods supported (OAuth 1.0a, OAuth 2.0 PKCE, Bearer)
  • Written in Go: single binary, no runtime dependencies
  • YAML credential store at ~/.xurl
  • Unlimited multi-app profiles

Installation Guide

xurl offers four installation methods to suit different environments and preferences. All result in the same xurl binary.

Method 1: Homebrew (macOS, recommended)

The fastest path on macOS. This installs the pre-built binary via X's official Homebrew tap:

brew install --cask xdevplatform/tap/xurl

The --cask flag is required here. After install, verify with xurl --version.

Method 2: npm (Node.js environments)

If you're working in a Node.js ecosystem or want a consistent install path across platforms via npm:

npm install -g @xdevplatform/xurl

This is the best choice for CI pipelines, Docker images, or environments where npm is already present. The global install makes xurl available in PATH.
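
For Docker-based CI, the npm route keeps the image simple. A minimal sketch of a Dockerfile (the base image and entrypoint are assumptions for illustration, not an official image):

```dockerfile
# Hypothetical CI image with xurl preinstalled via npm
FROM node:20-slim
RUN npm install -g @xdevplatform/xurl
# Credentials should be mounted or injected at runtime, never baked into the image
ENTRYPOINT ["xurl"]
```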

Method 3: Curl install script (Linux / quick setup)

For Linux servers or any environment where you want a one-liner install without a package manager:

curl -fsSL https://raw.githubusercontent.com/xdevplatform/xurl/main/install.sh | bash

The script detects your OS and architecture, downloads the appropriate binary from the GitHub releases page, and places it in /usr/local/bin. Review the script before running in production environments.

Method 4: go install (Go developers)

If you have the Go toolchain installed and prefer to build from source:

go install github.com/xdevplatform/xurl@latest

The binary will be placed in $GOPATH/bin (typically $HOME/go/bin); make sure that directory is in your PATH. This method builds the latest tagged release from source, and it's the right choice if you want to inspect or modify the code.

Quick verification: After any install method, run xurl --help to confirm the binary is in your PATH and see the available commands.

Authentication Setup

Authentication is the step that trips up most developers when working with the X API. xurl doesn't eliminate that complexity, but it centralizes and stores it so you only go through it once per app. Here's the complete setup flow:

Step 1: Create an X Developer App

  1. Go to developer.x.com and log in with your X account.
  2. Create a new Project, then create an App inside it.
  3. Under your app settings, navigate to Keys and Tokens.
  4. Generate a Client ID and Client Secret (for OAuth 2.0); these are what xurl needs.
  5. Set the callback URL to http://localhost:8080/callback (or whatever xurl uses for the PKCE flow; check xurl auth --help for the exact redirect URI).

Step 2: Register the App with xurl

Once you have your Client ID and Secret, register them with xurl under a named profile:

# Register app credentials under the name "my-app"
xurl auth apps add my-app \
  --client-id YOUR_CLIENT_ID \
  --client-secret YOUR_CLIENT_SECRET

This stores the credentials in ~/.xurl. You can register as many apps as you like under different names. List registered apps with:

xurl auth apps list

Step 3: Complete the OAuth 2.0 PKCE Flow

With credentials registered, run the OAuth2 authentication command to get user-level access tokens:

xurl auth oauth2 --app my-app

This opens a browser window to X's OAuth authorization page. Log in with the X account you want to authorize, grant permissions, and xurl will capture the callback, exchange the authorization code for tokens, and store them in ~/.xurl. You won't need to repeat this unless the refresh token expires.

Bearer Token (App-Only Access)

For read-only access that doesn't require user context (fetching public tweets, user lookups), you can use a Bearer token instead:

xurl auth apps add my-app \
  --client-id YOUR_CLIENT_ID \
  --client-secret YOUR_CLIENT_SECRET \
  --bearer-token YOUR_BEARER_TOKEN

Bearer tokens work for most GET endpoints and are the right choice for high-volume read pipelines where you don't need to act on behalf of a specific user.

Security note: The ~/.xurl YAML file stores tokens in plaintext. Ensure it has restrictive permissions (chmod 600 ~/.xurl) and is never committed to version control. In server environments, consider using a secrets manager to inject credentials rather than storing them on disk.

Core Usage

xurl's command structure mirrors curl's: method flag, endpoint path (relative to https://api.x.com), data flag for POST bodies. The learning curve is minimal if you already know curl.

Basic GET Requests

# Get your own user info
xurl /2/users/me

# Get a user by username
xurl "/2/users/by/username/OpenAI"

# Get recent tweets from a user (requires their user ID)
xurl "/2/users/123456789/tweets?max_results=10"

# Search recent tweets
xurl "/2/tweets/search/recent?query=xurl%20lang:en&max_results=20"
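
Multi-page results (timelines, search) return a meta.next_token field for pagination. The loop can be sketched in Python by shelling out to xurl; the user ID and profile name below are placeholders, while pagination_token and meta.next_token are standard X API v2 fields:

```python
import json
import subprocess

def paginate(fetch_page):
    """Collect items across all pages; fetch_page(token) returns a parsed response."""
    items, token = [], None
    while True:
        resp = fetch_page(token)
        items.extend(resp.get("data", []))
        token = resp.get("meta", {}).get("next_token")
        if not token:
            return items

def fetch_user_tweets_page(user_id, token=None, app="default"):
    """Fetch one page of a user's tweets via xurl (profile name is a placeholder)."""
    path = f"/2/users/{user_id}/tweets?max_results=100"
    if token:
        path += f"&pagination_token={token}"
    out = subprocess.run(["xurl", "--app", app, path],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# all_tweets = paginate(lambda t: fetch_user_tweets_page("123456789", t))
```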

POST Requests: Posting Tweets

# Post a tweet
xurl -X POST /2/tweets \
  -d '{"text": "Hello from xurl! 🐦"}'

# Reply to a tweet
xurl -X POST /2/tweets \
  -d '{"text": "Great point!", "reply": {"in_reply_to_tweet_id": "1234567890"}}'

# Post with a poll
xurl -X POST /2/tweets \
  -d '{
    "text": "Which do you prefer?",
    "poll": {
      "options": [{"label": "Option A"}, {"label": "Option B"}],
      "duration_minutes": 1440
    }
  }'

Multi-App Usage

When you have multiple apps registered, use --app to specify which credentials to use:

# Use the "analytics-app" credentials
xurl --app analytics-app "/2/tweets/search/recent?query=AI"

# Use the "bot" credentials to post
xurl --app my-bot -X POST /2/tweets -d '{"text": "Automated post"}'

Authentication Override

# Force OAuth 2.0 user context
xurl --auth oauth2 /2/users/me

# Force Bearer token (app-only)
xurl --auth bearer "/2/tweets/search/recent?query=hello"

Streaming Endpoints

xurl auto-detects streaming endpoints and keeps the connection alive, printing each event as it arrives:

# Set a stream rule first (the filtered stream only delivers tweets matching your rules)
xurl -X POST /2/tweets/search/stream/rules \
  -d '{"add": [{"value": "xurl lang:en", "tag": "xurl mentions"}]}'

# Connect to the filtered stream
xurl /2/tweets/search/stream

# Sample stream (about 1% of all public tweets)
xurl /2/tweets/sample/stream

The output is newline-delimited JSON: easy to pipe to jq, write to a file, or feed into downstream processing.
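
In a script or agent, that newline-delimited stream is easy to consume event by event. A minimal Python sketch (the profile name is a placeholder; blank keep-alive lines, which X streams send periodically, are skipped):

```python
import json
import subprocess

def parse_events(lines):
    """Yield parsed events from newline-delimited JSON, skipping keep-alive blanks."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

def stream_filtered(app="default"):
    """Attach to the filtered stream via xurl and yield tweet events as dicts."""
    proc = subprocess.Popen(["xurl", "--app", app, "/2/tweets/search/stream"],
                            stdout=subprocess.PIPE, text=True)
    try:
        yield from parse_events(proc.stdout)
    finally:
        proc.terminate()
```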

Piping and Scripting

# Pretty-print tweet JSON
xurl /2/users/me | jq '.'

# Extract just the username
xurl /2/users/me | jq -r '.data.username'

# Post content from a file
xurl -X POST /2/tweets -d @tweet.json

# Loop through a list of users
while IFS= read -r username; do
  xurl "/2/users/by/username/$username" | jq '.data.id'
done < users.txt

xurl in AI Agent Pipelines

The intersection of AI agents and social platforms is one of the most active areas of agent engineering right now. Agents need to post (share research results, engage with users, announce actions) and read (monitor mentions, track conversations, gather context). xurl provides the cleanest interface for the posting side of that equation.

How OpenClaw Uses xurl

OpenClaw is an AI orchestration platform that runs Claude and other LLMs as persistent personal agents. Its xurl skill is a pre-packaged integration that gives agents the ability to interact with X as a native capability. When you ask an OpenClaw agent to "post a tweet about this research," it invokes xurl under the hood:

xurl Skill (OpenClaw integration): agent capability · shell-based · credential-injected

The skill wraps xurl commands with the agent's configured X credentials, exposes a clean interface for the LLM to call, and handles error cases. The agent never sees OAuth tokens; it just calls the skill with a tweet payload.

# What the agent invokes internally:
xurl --app openclaw-bot -X POST /2/tweets \
  -d '{"text": "New research published: DeerFlow by ByteDance..."}'

# Reading mentions for context (the mentions endpoint takes a numeric user ID):
xurl --app openclaw-bot "/2/users/$USER_ID/mentions?max_results=20" | jq '.data[]'

LangChain and Custom Agent Integrations

For LangChain-based agents, xurl fits naturally as a shell tool (a BashTool or ShellTool that the agent can invoke). The standard pattern is:

  1. Agent decides to post or read from X
  2. Constructs the xurl command as a string
  3. Invokes via subprocess/shell tool
  4. Parses JSON output for downstream reasoning

# Example LangChain tool wrapper (Python)
import subprocess
import json

def post_tweet(text: str, app: str = "default") -> dict:
    """Post a tweet via xurl and return the response."""
    payload = json.dumps({"text": text})
    result = subprocess.run(
        ["xurl", "--app", app, "-X", "POST", "/2/tweets", "-d", payload],
        capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)

def get_mentions(user_id: str, app: str = "default") -> list:
    """Fetch recent mentions for a user."""
    result = subprocess.run(
        ["xurl", "--app", app, f"/2/users/{user_id}/mentions?max_results=20"],
        capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout).get("data", [])
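
The wrappers above raise on any non-zero exit. For unattended agents it's worth retrying transient failures; a sketch (the retry policy is an assumption, and the runner takes a full argv list so it isn't tied to xurl specifically):

```python
import json
import subprocess
import time

def run_json_cli(argv, retries=3, backoff=2.0):
    """Run a CLI that prints JSON on stdout, retrying on non-zero exit codes."""
    for attempt in range(retries):
        result = subprocess.run(argv, capture_output=True, text=True)
        if result.returncode == 0:
            return json.loads(result.stdout)
        if attempt < retries - 1:
            time.sleep(backoff * (attempt + 1))  # linear backoff between attempts
    raise RuntimeError(f"{argv[0]} failed after {retries} attempts: {result.stderr.strip()}")

# e.g. run_json_cli(["xurl", "--app", "my-bot", "/2/users/me"])
```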

GitHub Actions Integration

xurl works seamlessly in CI/CD pipelines. A common pattern is posting release announcements or research updates automatically:

# .github/workflows/announce.yml (the steps of the announce job)
jobs:
  announce:
    runs-on: ubuntu-latest
    steps:
      - name: Install xurl
        run: npm install -g @xdevplatform/xurl

      - name: Configure xurl credentials
        run: |
          echo "${{ secrets.XURL_CONFIG }}" > ~/.xurl
          chmod 600 ~/.xurl

      - name: Post release announcement
        run: |
          xurl --app ci-bot -X POST /2/tweets \
            -d "{\"text\": \"v${VERSION} released! ${RELEASE_URL}\"}"

MCP (Model Context Protocol) Integration

As MCP becomes the standard protocol for connecting AI models to external tools, xurl-based servers are emerging. An MCP server that wraps xurl exposes tweet posting, search, and user lookup as callable tools that any MCP-compatible AI (Claude, GPT-4, Gemini) can use without each needing its own X API integration. This is the direction the ecosystem is moving: standardized tool interfaces that abstract away credential management and API specifics.
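
As a concrete sketch of what such a server advertises, here's a hypothetical tool manifest. The tool names and schemas are assumptions for illustration, not from any shipping MCP server; inputSchema is the field MCP uses for a tool's parameter schema:

```python
# Hypothetical tools an xurl-backed MCP server might expose to clients
XURL_MCP_TOOLS = [
    {
        "name": "post_tweet",
        "description": "Post a tweet through the official X API via xurl",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string", "maxLength": 280}},
            "required": ["text"],
        },
    },
    {
        "name": "search_recent_tweets",
        "description": "Search tweets from the last 7 days",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer", "minimum": 10, "maximum": 100},
            },
            "required": ["query"],
        },
    },
]
```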

The X API Pricing Problem

Here's the uncomfortable truth: for anything beyond basic posting and simple reads, the official X API is extraordinarily expensive. Understanding the pricing tiers explains why the alternatives market exists and why intelligent developers use a hybrid approach.

  • Free: 1 retrieval request per 15 minutes
  • Basic: $100/month · 10K tweets
  • Pro: $5,000/month · full archive
  • Enterprise: $42,000+/month

The Free tier gives you essentially nothing for read access: one request per 15 minutes for tweet retrieval, barely enough to monitor a single keyword manually. It does allow posting (for basic bot use cases), which is why it's still useful for write-heavy applications like AI agents that post updates.

The Basic tier at $100/month sounds reasonable until you run the numbers: 10,000 tweets per month with only 7 days of search history. For any serious analytics, competitive intelligence, or research use case, you'll exhaust that in hours. And 7 days of history means you can't look back even a week and a half.

The Pro tier at $5,000/month is where the full API becomes useful: full archive access, higher rate limits, streaming. But $5,000/month puts it out of reach for the vast majority of independent developers, startups, and researchers. That's $60,000/year for access to a social media API.

The Enterprise tier starts at $42,000/month and goes up from there. It's designed for large media organizations, financial data companies, and enterprise analytics platforms that can amortize the cost across millions of data consumers.

The 2023 pricing shock: X (then Twitter) dramatically increased API pricing in early 2023 under Elon Musk's ownership, shutting off the old free access and raising prices across the board (a severely limited free tier was later reinstated). This triggered a mass exodus of developers and researchers to alternatives, a shift that fundamentally reshaped the third-party data ecosystem.

The practical consequence for most developers: you can use xurl (backed by the official X API) for write operations and lightweight reads, but for any kind of research, analytics, or high-volume data collection, you need to go elsewhere. This is why the alternatives market exploded after 2023 and why a thoughtful "hybrid strategy" has emerged as the standard approach.

Alternatives Comparison

A robust ecosystem of third-party X API providers has emerged specifically to fill the gap left by X's pricing model. These range from drop-in API replacements to multi-platform social data aggregators. Here's a structured comparison:

Provider · Pricing · Coverage · Best for:

  • Official X API + xurl: $100–$5,000+/mo · X/Twitter only · posting, compliance, official data provenance
  • Xpoz: free (100K req/mo), $20–$200/mo · Twitter, Instagram, TikTok, Reddit · AI/LLM via MCP, natural language queries
  • TwitterAPI.io: $0.15 per 1,000 tweets · X/Twitter only · drop-in replacement, high-volume scraping
  • Data365: pay-per-use · Twitter, Instagram, TikTok · multi-platform structured data
  • SociaVault: tiered · X/Twitter · SMB social monitoring, simpler interface
  • Apify: ~$0.50 per 1,000 items · multi-platform · no-code workflows, visual pipeline builder
  • Bright Data: $500+/mo · multi-platform · enterprise scale, proxy infrastructure

Xpoz: The AI-Native Option

Xpoz positions itself specifically for the AI/LLM use case. Its free tier gives 100,000 requests per month (dramatically more than X's official free tier), and paid plans run $20–$200/month. The standout feature is MCP (Model Context Protocol) support: you can query Xpoz using natural language through an MCP server, which means AI agents can ask "what are people saying about xurl this week?" and get structured results without writing a single API call. Coverage extends beyond X to Instagram, TikTok, and Reddit, making it useful for cross-platform signal gathering.

The tradeoff: Xpoz scrapes data rather than using the official API, which means no guarantee of completeness and potential fragility if X changes its structure. For research and trend analysis this is usually acceptable; for compliance-sensitive use cases it is not.

TwitterAPI.io: The Budget Drop-In

TwitterAPI.io is the most direct replacement for the X API at a fraction of the cost. At $0.15 per 1,000 tweets, collecting 10 million tweets costs $1,500, versus several months of Pro-tier fees on the official API. The API is designed as a drop-in replacement with compatible request/response formats, which means minimal code changes if migrating from the official API. Best for high-volume, X-only use cases where cost is the primary constraint. No multi-platform coverage.
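
The arithmetic is worth making explicit; a quick sanity check in Python, using the prices quoted above:

```python
def pay_per_tweet_cost(tweets, usd_per_thousand=0.15):
    """Cost in USD of collecting N tweets at a per-1,000-tweet rate."""
    return tweets / 1000 * usd_per_thousand

# 10 million tweets at $0.15 per 1,000, vs $5,000 for a single month of Pro
print(pay_per_tweet_cost(10_000_000))  # ≈ 1500.0 USD
```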

Data365: Structured Multi-Platform Data

Data365 covers Twitter, Instagram, and TikTok with a pay-per-use model. Its strength is structured, normalized data: you get consistent schemas across platforms rather than raw API responses. Useful for analytics teams building dashboards that need to compare engagement across platforms. The pay-per-use model works well for variable-volume use cases but can get expensive for continuous monitoring.

Apify: No-Code Pipelines

Apify is a cloud platform for web scraping and data extraction, with pre-built "Actors" (scrapers) for Twitter/X, Instagram, TikTok, YouTube, LinkedIn, and more. At roughly $0.50 per 1,000 items, pricing is moderate. The differentiator is the visual pipeline builder and marketplace of pre-built scrapers: you don't need to write code to collect social data. This makes Apify the right choice for non-technical users and rapid prototyping. The tradeoff is less control over data freshness and collection scheduling versus rolling your own.

Bright Data: Enterprise Infrastructure

Bright Data is enterprise-grade web data infrastructure. It starts at $500+/month and scales to massive volume. Its edge is the underlying residential proxy network (harder for platforms to block than datacenter IPs) combined with structured data APIs. For organizations doing truly large-scale social intelligence (hedge funds, large media companies, market research firms), Bright Data is the right tool. For everyone else, it's overkill.

When to Use What

The optimal strategy isn't to pick one provider; it's to use the right tool for each specific job. Here's a decision guide based on what you're actually trying to accomplish:

✅ Use xurl + Official X API when:

  • You need to post tweets (the Free tier allows posting)
  • You're building a bot or automated account that primarily writes
  • You need official data provenance: compliance, journalism, research that requires citing the official source
  • You're doing OAuth-based user actions: DMs, likes, follows, retweets under a specific user account
  • You're using an AI agent that needs to interact with X as an output channel
  • Budget allows for Basic ($100/mo) and you need < 10K tweets/month

✅ Use TwitterAPI.io when:

  • You need high-volume tweet collection on a budget
  • You want a drop-in replacement for the official API with minimal code changes
  • Your use case is X-only (no multi-platform needs)
  • You need more than 10K tweets/month but can't justify $5K/month Pro tier

✅ Use Xpoz when:

  • You're building an AI agent or LLM application that needs social data as context
  • You want natural language querying via MCP without writing raw API calls
  • You need multi-platform coverage (Twitter + Instagram + TikTok + Reddit)
  • You want a generous free tier to prototype before committing to paid

✅ Use Apify when:

  • You're a non-technical user who needs social data without writing code
  • You want multi-platform scraping with pre-built scrapers
  • You're doing periodic large extractions (not continuous monitoring)
  • You value a visual pipeline interface over raw API control

✅ Use Bright Data when:

  • You need enterprise-scale data collection at volume that exceeds other providers
  • You need residential proxy infrastructure to avoid platform-level blocking
  • Budget is not the primary constraint and data reliability/scale is

The Hybrid Pattern (Recommended for AI Agents)

For AI agent developers, the recommended architecture is a hybrid of xurl (official API for all write operations) plus a cost-effective read provider for research and monitoring:

# Write operations: always use xurl + official API
# (free tier allows posting, maintains official provenance)
xurl --app my-agent -X POST /2/tweets -d '{"text": "..."}'

# Read / research operations: use Xpoz or TwitterAPI.io
# (dramatically cheaper for high-volume reads)
curl "https://api.twitterapi.io/twitter/tweet/search?query=xurl" \
  -H "Authorization: Bearer YOUR_KEY"

This hybrid approach means you use the official API for actions that require it (posting as your authenticated account, accessing private data), while using cost-effective alternatives for research and monitoring tasks that don't require official provenance. Most production agent deployments have settled on some version of this pattern.
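
One way to encode the pattern is a tiny router that sends writes to xurl and reads to the cheaper provider. A sketch (the operation names and the whitelist approach are assumptions for illustration):

```python
# Route each operation to the right backend: writes need official provenance
# (xurl + official X API), high-volume reads go to a cheaper third party.
WRITE_OPS = {"post_tweet", "reply", "delete_tweet", "like", "follow"}
READ_OPS = {"search", "user_timeline", "mentions"}

def pick_backend(operation):
    """Return which backend should handle the given operation name."""
    if operation in WRITE_OPS:
        return "xurl"          # official X API: posting and user actions
    if operation in READ_OPS:
        return "third_party"   # e.g. TwitterAPI.io or Xpoz for bulk reads
    raise ValueError(f"unknown operation: {operation}")
```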

🎯 Bottom Line

xurl is the right foundation for any developer or AI agent that needs to interact with X. It solves the authentication and scripting complexity that makes the X API painful to use directly, and it's maintained by X's own developer platform team, so it stays current with API changes.

The limitation is X's own pricing: the official API is only cost-effective for write-heavy use cases and lightweight reads. For research, analytics, or any kind of volume data collection, the hybrid strategy (xurl for writes, a third-party provider for reads) is the pragmatic approach that most serious developers have adopted.

As AI agent frameworks mature and MCP becomes the standard for tool connectivity, expect xurl-based MCP servers and standardized X integration layers to become first-class citizens in agent toolkits. The infrastructure is already taking shape.

References

  1. xdevplatform. (2024). xurl: Official X API CLI. GitHub. github.com/xdevplatform/xurl
  2. X Developer Platform. (2026). X API v2 Pricing Tiers. developer.x.com
  3. Infatoshi. (2024). x-cli โ€” Simpler CLI for X API v2. GitHub. github.com/Infatoshi/x-cli
  4. Xpoz. (2026). Social Data API with MCP support. xpoz.io
  5. TwitterAPI.io. (2026). Affordable Twitter API Alternative. twitterapi.io
  6. Apify. (2026). Web Scraping and Data Extraction Platform. apify.com
  7. Bright Data. (2026). Web Data Platform. brightdata.com
  8. Data365. (2026). Social Media Data API. data365.co
  9. Anthropic / OpenClaw. (2026). xurl Skill for AI Agent X Integration. Internal documentation.