The Signal
The signal isn't sentiment itself — it's velocity. A stock mentioned 10x more than its 30-day average in a 48-hour window is an event, regardless of tone. OpenClaw monitors Reddit and news feeds for your watchlist, scores tone, and alerts on anomalies.
Two Data Layers
Layer 1: Reddit (Free API)
- Reddit API: 100 requests/minute on free tier, OAuth required
- Subreddits to monitor: r/wallstreetbets, r/investing, r/stocks, r/SecurityAnalysis, r/options
- Track: mention count, upvote velocity, comment sentiment on ticker threads
Layer 2: GDELT (Completely Free, Massive)
- Global Database of Events, Language, and Tone
- Monitors news in 65+ languages across nearly every country, updated every 15 minutes
- Free to query via BigQuery (GCP free tier) or direct download
- Provides tone scores (typically -10 to +10 in practice) and event categorization
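Raw tone scores are easier to act on once bucketed into coarse labels. A minimal sketch with cutoffs at ±2.5 (the thresholds and the `tone_label` helper are illustrative choices, not part of GDELT):

```python
def tone_label(tone: float) -> str:
    """Bucket a GDELT tone score (typically -10..+10 in practice) into a coarse label."""
    if tone <= -2.5:
        return "negative"
    if tone >= 2.5:
        return "positive"
    return "neutral"
```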
HEARTBEAT Configuration
name: sentiment_monitor
schedule: "0 */6 * * *"   # every 6 hours
steps:
  - fetch_reddit:
      subreddits:
        - wallstreetbets
        - investing
        - stocks
      tickers: "{{ watchlist_tickers }}"
      window_hours: 24
  - fetch_gdelt:
      companies: "{{ watchlist_names }}"
      window_hours: 24
  - calculate_velocity:
      baseline_days: 30
      alert_threshold_multiplier: 3
  - llm:
      prompt: |
        Analyze these Reddit posts and news mentions for my watchlist.
        Score overall sentiment (-1 to +1) for each ticker.
        Flag any ticker with mention velocity 3x above its 30-day baseline.
        Reddit data: {{ reddit_data }}
        News data: {{ gdelt_data }}
  - notify:
      subject: "💬 Sentiment Alert: {{ ticker }} — {{ velocity_note }}"
      condition: velocity_spike_detected
Reddit API Implementation
Authentication and Search
import httpx

class RedditMonitor:
    def __init__(self, client_id: str, client_secret: str, user_agent: str):
        self.base = "https://oauth.reddit.com"
        # Application-only OAuth: exchange client credentials for a bearer token.
        token_r = httpx.post(
            "https://www.reddit.com/api/v1/access_token",
            data={"grant_type": "client_credentials"},
            auth=(client_id, client_secret),
            headers={"User-Agent": user_agent},
        )
        token_r.raise_for_status()
        self.token = token_r.json()["access_token"]
        self.headers = {"Authorization": f"bearer {self.token}", "User-Agent": user_agent}

    def search_ticker(self, ticker: str, subreddit: str = "wallstreetbets", limit: int = 25) -> list:
        url = f"{self.base}/r/{subreddit}/search"
        params = {"q": ticker, "sort": "new", "limit": limit, "restrict_sr": "true"}
        r = httpx.get(url, params=params, headers=self.headers)
        r.raise_for_status()
        posts = r.json().get("data", {}).get("children", [])
        return [
            {
                "title": p["data"]["title"],
                "score": p["data"]["score"],
                "created": p["data"]["created_utc"],
                "url": p["data"]["url"],
            }
            for p in posts
        ]
GDELT Query Implementation
import httpx

def query_gdelt_mentions(company_name: str, days: int = 1) -> list:
    """
    Query the GDELT 2.0 DOC API for recent articles mentioning a company.
    Free to use; no API key required.
    """
    url = "https://api.gdeltproject.org/api/v2/doc/doc"
    params = {
        "query": f'"{company_name}"',  # quote the name for exact-phrase matching
        "mode": "artlist",
        "maxrecords": 25,
        "timespan": f"{days}d",
        "format": "json",
    }
    r = httpx.get(url, params=params)
    if r.status_code == 200:
        return r.json().get("articles", [])
    return []
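To turn the article list into a daily mention series for the velocity calculation, one approach is to bucket articles by the `seendate` field that the DOC API returns in artlist mode (field name per the GDELT DOC 2.0 API; the `daily_article_counts` helper is illustrative):

```python
from collections import Counter

def daily_article_counts(articles: list) -> Counter:
    """Count articles per calendar day using the 'seendate' field,
    which looks like '20240101T120000Z' in DOC API artlist output."""
    return Counter(a["seendate"][:8] for a in articles if "seendate" in a)
```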
Velocity Calculation
def calculate_velocity(current_mentions: int, historical: list, baseline_days: int = 30) -> float:
    """Returns the ratio of current mentions to the historical daily average."""
    historical = historical[-baseline_days:]  # only the trailing baseline window
    if not historical:
        return 1.0  # no baseline yet; treat as neutral
    avg = sum(historical) / len(historical)
    return current_mentions / avg if avg > 0 else float("inf")
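Tying that ratio to the HEARTBEAT config's alert_threshold_multiplier of 3 could look like the sketch below; `should_alert` is a hypothetical wrapper, and refusing to alert with no baseline is a design assumption:

```python
def should_alert(current: int, historical: list, multiplier: float = 3.0) -> bool:
    """True when today's mention count is at least multiplier x the baseline daily average."""
    if not historical:
        return False  # no baseline yet; don't alert on day one
    avg = sum(historical) / len(historical)
    return avg > 0 and current / avg >= multiplier
```

With a 30-day baseline averaging 10 mentions/day, 90 mentions today is a 9x spike and fires; 20 mentions (2x) does not.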
Frequently Asked Questions
Do I need Reddit API credentials just to read public posts?
Yes, even for read-only access to the new Reddit API (post-2023 changes). Register a free app at reddit.com/prefs/apps.
How reliable are GDELT's tone scores?
GDELT is excellent for news volume and geographic spread. Its tone scores are algorithmic — validate with an LLM for higher accuracy.
What velocity threshold should trigger an alert?
3x the 30-day average is a reasonable starting point. Calibrate per ticker — volatile meme stocks will have higher natural variance.
Next Steps
Now that you can track narrative momentum, move to Part 4: App Store & GitHub Intelligence to monitor product health metrics that often lead earnings surprises.