Why investor days matter
Investor days and analyst days are events at which management presents multi-year strategy, financial targets, and product roadmaps. They are often more information-dense than quarterly earnings calls, but they happen infrequently (once a year or less) and are easy to miss. OpenClaw monitors for announcements and captures everything automatically.
Detection methods
Method 1 — SEC 8-K monitoring (most reliable)
Companies must file an 8-K when a material event occurs. Investor day announcements typically arrive as Item 7.01 (Regulation FD Disclosure) or Item 8.01 (Other Events). Polling the EDGAR Atom feed for 8-K filings usually surfaces these within 15 minutes.
Method 2 — IR page calendar scraping
Most IR pages have an "Events" or "Calendar" section. Scrape weekly for new events.
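IR sites vary widely, so a scraper usually tries a few common container selectors and gives up quietly when none match. A minimal sketch of that fallback logic, assuming list-item markup; the selectors and the `extract_event_titles` name are illustrative, not a guarantee of what any given IR page uses:

```python
from bs4 import BeautifulSoup

# Common container selectors on IR event pages; these are assumptions,
# and many sites will need per-company tuning.
SELECTOR_HINTS = [".events-list", "#events-calendar", "[data-events]"]

def extract_event_titles(html: str) -> list:
    """Return event titles from the first matching container, if any."""
    soup = BeautifulSoup(html, "html.parser")
    for sel in SELECTOR_HINTS:
        container = soup.select_one(sel)
        if container:
            return [li.get_text(strip=True) for li in container.find_all("li")]
    return []
```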
Method 3 — Earnings call forward references
During earnings calls, management often says "we'll discuss this further at our analyst day in March." Extract these forward references automatically using the transcript pipeline from Part 1.
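These mentions can be pulled out with a simple pattern match. A minimal sketch, assuming plain transcript text; the regex and the `extract_forward_references` name are illustrative, not the actual Part 1 pipeline:

```python
import re

# Matches phrases like "analyst day in March" or "investor conference";
# a rough heuristic that captures any nearby month/date hint.
EVENT_REF = re.compile(
    r"(?:investor|analyst)\s+(?:day|meeting|conference)"
    r"(?:\s+(?:in|on|this|next)\s+(\w+(?:\s+\d{1,2})?))?",
    re.IGNORECASE,
)

def extract_forward_references(transcript: str) -> list:
    """Return each event mention with any date hint (None if absent)."""
    return [
        {"mention": m.group(0), "date_hint": m.group(1)}
        for m in EVENT_REF.finditer(transcript)
    ]
```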
Major conference calendar
| Conference | Typical month | Sector focus |
|---|---|---|
| JPMorgan Healthcare | January | Healthcare, biotech |
| Goldman Sachs Technology | February | Technology |
| Morgan Stanley Consumer | March | Consumer, retail |
| Goldman Sachs Communacopia | September | Media, telecom |
| Barclays Global Consumer | September | Consumer goods |
| Deutsche Bank Technology | September | Technology |
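The table above can seed rough reminder scheduling. A minimal sketch, assuming the first of each typical month as a placeholder until the exact date is announced; `next_occurrence` is a hypothetical helper:

```python
from datetime import date

# Typical months from the conference table; actual dates shift
# year to year, so treat these as scheduling hints only.
CONFERENCE_MONTHS = {
    "JPMorgan Healthcare": 1,
    "Goldman Sachs Technology": 2,
    "Morgan Stanley Consumer": 3,
    "Goldman Sachs Communacopia": 9,
    "Barclays Global Consumer": 9,
    "Deutsche Bank Technology": 9,
}

def next_occurrence(month: int, today: date) -> date:
    """First day of the conference's next typical month."""
    year = today.year if month >= today.month else today.year + 1
    return date(year, month, 1)
```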
HEARTBEAT configuration
```yaml
name: investor_day_monitor
schedule: "0 8 * * 1-5"   # weekdays at 08:00
steps:
  - fetch_8k_feed:
      url: "https://www.sec.gov/cgi-bin/browse-edgar?action=getcurrent&type=8-K&dateb=&owner=include&count=40&output=atom"
      filter_items: ["7.01", "8.01"]
      keywords: ["investor day", "analyst day", "investor conference", "capital markets day"]
      ciks: "{{ watchlist_ciks }}"
  - scrape_ir_calendars:
      companies: "{{ watchlist_companies }}"
      selector_hints:
        - ".events-list"
        - "#events-calendar"
        - "[data-events]"
  - check_seen:
      dedup_key: event_url
      store: seen_events.json
  - schedule_followup:
      days_before: 3
      action: "reminder_notify"
  - on_event_day:
      capture_slides: true
      capture_audio: true
      trigger_transcription: true
  - notify:
      subject: "📅 Investor Day Detected: {{ company }} — {{ event_date }}"
```
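The `check_seen` step deduplicates on `event_url` against `seen_events.json`. A minimal Python sketch of that logic, assuming a flat JSON list of seen URLs; the `is_new_event` name is illustrative:

```python
import json
import os

def is_new_event(event_url: str, store_path: str = "seen_events.json") -> bool:
    """Record event_url in the dedup store; True only on first sighting."""
    seen = []
    if os.path.exists(store_path):
        with open(store_path) as f:
            seen = json.load(f)
    if event_url in seen:
        return False
    seen.append(event_url)
    with open(store_path, "w") as f:
        json.dump(seen, f)
    return True
```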
8-K event detection — Python snippet
```python
import re

import feedparser
import httpx

INVESTOR_DAY_KEYWORDS = [
    "investor day", "analyst day", "investor conference",
    "capital markets day", "strategic update", "business review",
    "analyst meeting", "investor briefing",
]

FEED_URL = ("https://www.sec.gov/cgi-bin/browse-edgar?"
            "action=getcurrent&type=8-K&dateb=&owner=include&count=40&output=atom")

def scan_8k_feed_for_events(watchlist_ciks: list) -> list:
    """Scan the EDGAR 8-K Atom feed for investor day announcements."""
    # SEC requires a descriptive User-Agent; fetch with httpx, then parse.
    r = httpx.get(FEED_URL,
                  headers={"User-Agent": "EventCapture/1.0 contact@youremail.com"},
                  timeout=15, follow_redirects=True)
    feed = feedparser.parse(r.text)
    # Normalize CIKs on both sides: feed links may zero-pad them.
    watchlist = {str(c).lstrip("0") for c in watchlist_ciks}
    events = []
    for entry in feed.entries:
        cik_match = re.search(r"CIK=(\d+)", entry.link, re.IGNORECASE)
        cik = cik_match.group(1).lstrip("0") if cik_match else None
        if watchlist and cik not in watchlist:
            continue
        combined = entry.title.lower() + " " + entry.get("summary", "").lower()
        for keyword in INVESTOR_DAY_KEYWORDS:
            if keyword in combined:
                events.append({
                    "cik": cik,
                    "title": entry.title,
                    "url": entry.link,
                    "keyword_matched": keyword,
                    "date": entry.get("published", ""),
                })
                break
    return events
```
Post-event capture — Python snippet
```python
import os
import re
import time
from urllib.parse import urljoin

import httpx
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "EventCapture/1.0 contact@youremail.com"}

def capture_event_materials(event_page_url: str, output_dir: str) -> dict:
    """Download slides and collect media links from an investor day event page."""
    os.makedirs(output_dir, exist_ok=True)
    r = httpx.get(event_page_url, headers=HEADERS,
                  timeout=15, follow_redirects=True)
    soup = BeautifulSoup(r.text, "html.parser")
    captured = {"slides": [], "audio": [], "video": []}
    for a in soup.find_all("a", href=True):
        # Resolve relative links against the event page URL.
        href = urljoin(event_page_url, a["href"])
        if href.lower().endswith(".pdf"):
            fname = os.path.join(output_dir, os.path.basename(href))
            pdf_r = httpx.get(href, headers=HEADERS,
                              timeout=30, follow_redirects=True)
            with open(fname, "wb") as f:
                f.write(pdf_r.content)
            captured["slides"].append(fname)
            time.sleep(2)  # be polite between downloads
        elif re.search(r"\.mp4$", href, re.IGNORECASE):
            captured["video"].append(href)
        elif re.search(r"\.(mp3|m4a)$", href, re.IGNORECASE):
            captured["audio"].append(href)
    return captured
```