Research

I spot AI trends two weeks before they hit Twitter

ArXiv Watcher · 4 min read

Every major AI trend starts the same way: a paper drops on ArXiv, a few researchers notice it, then two weeks later every tech influencer is posting about it like they discovered it. By then, everyone knows. The edge is in reading the papers when they come out, not when they go viral.

I don't have time to read ArXiv every day. But my agent does. I check in a few times a week and ask what's trending in specific areas. It tells me what's getting attention in the research community before the popular takes start circulating.

The agent

ArXiv Watcher tracks paper volume and topic clustering across categories. When a new technique suddenly gets 5 papers in one week instead of the usual 1, that's usually a signal that something is about to blow up. The agent catches that pattern.
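The core of that volume check is simple to sketch. This toy version assumes you already have per-topic paper counts for the current week and a long-run weekly baseline; the topic names and thresholds are illustrative, not the agent's actual parameters:

```python
def spiking_topics(this_week, weekly_baseline, factor=3.0, min_papers=4):
    """Return topics whose paper count this week is far above baseline.

    this_week: {topic: papers seen this week}
    weekly_baseline: {topic: long-run average papers per week}
    """
    spikes = {}
    for topic, count in this_week.items():
        base = weekly_baseline.get(topic, 1.0)
        # Require both an absolute floor and a relative jump, so a
        # topic going from 0 to 2 papers doesn't trigger an alert.
        if count >= min_papers and count >= factor * base:
            spikes[topic] = (count, base)
    return spikes

counts = {"test-time compute": 5, "rlhf": 6}
baseline = {"test-time compute": 1.0, "rlhf": 6.0}
print(spiking_topics(counts, baseline))  # {'test-time compute': (5, 1.0)}
```

The 5-papers-vs-usual-1 case from above trips the filter; a steady topic at its baseline does not.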

Setting it up

About 7 minutes:

1. Find the agent

Head to the Hiregents marketplace and find ArXiv Watcher. Read the skill file if you want to know exactly what it can do. Everything is transparent.

2. Sign in

Click Deploy. A magic link is sent to your email: no passwords, no account setup. Click the link and you're in.

3. Pick a plan

Hobby ($39/mo) or Pro ($49/mo). Both get you a private server. The difference is the AI model: Hobby runs Gemini Flash, Pro runs Claude Sonnet. Annual billing saves 25%.

4. Connect your keys

You need two things: an OpenRouter API key (takes about 30 seconds to create at openrouter.ai) and a Telegram bot token (message @BotFather on Telegram, send /newbot, and follow the prompts). Both are validated in real time before you can continue.
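If you want to sanity-check the credentials yourself first, both services expose a cheap read-only call: Telegram's Bot API has getMe, and OpenRouter documents a key-status endpoint at /api/v1/auth/key (worth double-checking against current docs). A rough pre-flight sketch, not the marketplace's actual validation code:

```python
import urllib.error
import urllib.request

TELEGRAM_API = "https://api.telegram.org"
OPENROUTER_KEY_URL = "https://openrouter.ai/api/v1/auth/key"

def telegram_check_url(bot_token: str) -> str:
    # getMe answers successfully only when the token is valid
    return f"{TELEGRAM_API}/bot{bot_token}/getMe"

def openrouter_check_request(api_key: str) -> urllib.request.Request:
    # The key-status endpoint returns 200 for a valid key, 401 otherwise
    return urllib.request.Request(
        OPENROUTER_KEY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )

def is_valid(url_or_request) -> bool:
    # Any HTTP error (401, 404, ...) or network failure counts as invalid
    try:
        with urllib.request.urlopen(url_or_request, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

# Usage (makes real network calls):
# print(is_valid(telegram_check_url("123456:ABC...")))
# print(is_valid(openrouter_check_request("sk-or-...")))
```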

5. Review and deploy

Check the summary: agent name, plan, email. Hit Deploy. You'll go through Stripe checkout and land on a status page.

6. Wait about 5 minutes

The status page shows a progress circle while your server boots: Docker images are pulled, security is configured, and the agent installs its tools. You don't need to stay on the page. It'll be ready when you come back.

7. Open Telegram

Find your bot on Telegram (the one you created with BotFather). Send /start. ArXiv Watcher is live.

What the conversation looks like

Me: What topics are getting unusually high paper volume on ArXiv this week in AI?

ArXiv Watcher: Three topics are spiking this week:

1. Test-time compute scaling (7 papers, up from a 2/week average). Multiple teams are exploring dynamic compute allocation during inference. Key paper: "Adaptive Depth Transformers," which lets models use fewer layers for easy queries and more for hard ones.

2. Synthetic data verification (5 papers). Growing concern about training on AI-generated data; these papers focus on detecting and filtering synthetic data from training sets.

3. Multi-agent code generation (4 papers). Teams are testing specialized agent roles (planner, coder, reviewer, tester) instead of single-model generation.

Test-time compute is the one to watch. The adaptive depth paper in particular could change how inference providers price their APIs. Want a deep dive on any of these?

Me: Tell me more about the adaptive depth paper.

ArXiv Watcher: "Adaptive Depth Transformers for Efficient Inference" (Wang et al., 2026).

Idea: instead of running all 96 layers for every token, train the model to exit early once it's confident in the answer. Easy queries (factual recall, formatting) use 20-30 layers; hard queries (multi-step reasoning, novel composition) use all 96.

Results:
- 40% fewer FLOPs on average across benchmarks
- No quality loss on MMLU or HumanEval
- A 2% drop on the hardest math problems (an acceptable trade-off)

Why it matters: if inference providers adopt this, per-token pricing breaks. A simple query costs 3x less compute than a hard one, but you're charged the same. That could push the industry toward compute-based pricing.

Paper ID: 2603.18445
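The early-exit idea the agent describes is easy to sketch: run layers one at a time and stop as soon as a confidence score clears a threshold. This toy version threads a plain value through a list of layer functions; the confidence function and numbers are illustrative stand-ins, not anything from the paper itself:

```python
def adaptive_depth_forward(x, layers, confidence, threshold=0.75):
    """Run layers in order, exiting as soon as the model is confident.

    layers: list of callables, each transforming the hidden state
    confidence: callable mapping a hidden state to a score in [0, 1]
    Returns (final_state, layers_used).
    """
    for depth, layer in enumerate(layers, start=1):
        x = layer(x)
        if confidence(x) >= threshold:
            return x, depth    # easy input: exit early
    return x, len(layers)      # hard input: full depth

# Toy demo: each "layer" bumps the state; confidence grows with it.
layers = [lambda s: s + 1] * 96
state, used = adaptive_depth_forward(0, layers, confidence=lambda s: s / 4)
print(used)  # 3 -- exits after 3 of 96 layers
```

A real implementation would make confidence a trained head on the hidden state, but the control flow is the same: hard inputs pay for all 96 layers, easy ones don't.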

What I got

I spotted the test-time compute trend two weeks before it was all over Twitter. For my work in AI tooling, two weeks of lead time means I can ship something while everyone else is still reading the blog posts about it.