Claude Code Insights

628 messages across 63 sessions (81 total) | 2026-04-10 to 2026-05-11

At a Glance
What's working: You consistently ship work end-to-end — merged PRs, linked issues, post-merge cleanup, and worktree pruning — rather than stopping at 'code written.' Your skeptical pushback on over-engineered solutions (like challenging an over-built bash safeguard until it shrank to 4 lines) keeps output lean, and you reliably capture learnings back into reusable skills so the next cycle is faster. Impressive Things You Did →
What's hindering you: On Claude's side: it sometimes acts on stale or unverified context (outdated worktree files, assumed branch state) and occasionally over-engineers or jumps to execution before confirming intent, requiring you to interrupt and redirect. On your side: the Notion workflow generates a lot of churn because edits happen incrementally — broken update tools, duplicate blocks, and connector confusion compound when pages are built block-by-block instead of in one shot. Where Things Go Wrong →
Quick wins to try: Try building a custom Skill for your Notion publishing pattern that constructs the full block tree first and writes once, sidestepping the broken update-a-block tool entirely. A pre-session Hook that prints current branch, worktree, and origin/main divergence would also kill the stale-context class of mistakes before they happen. Features to Try →
Ambitious workflows: Your SAMM-style investigations across Git, Billi, Fireflies, Notion, and Gmail are prime candidates for parallel sub-agents — one per data source gathering evidence concurrently, then a synthesis agent reconciling contradictions and publishing the report. Longer term, your dotfiles/skills/docs drift audits could run as scheduled background agents that open ready-to-merge PRs autonomously, turning multi-hour audit sessions into short review queues. On the Horizon →
628
Messages
+7,481/-443
Lines
113
Files
13
Days
48.3
Msgs/Day

What You Work On

Notion Workspace Management & Content Publishing ~18 sessions
Heavy use of the Notion MCP to create, edit, and restructure pages for project status reports, real estate negotiations, meeting outlines, and personal reflections. Claude frequently worked around a broken update-a-block tool using delete+append patterns, and helped configure local MCP servers with proper token storage. Tasks ranged from drafting bilingual blog posts to summarizing strategy documents.
Dotfiles, Skills & Claude Code Configuration ~10 sessions
Significant work on personal Claude Code setup: versioning CLAUDE.md memories across repos, extracting reusable skills, fixing MCP server loading issues, auditing dotfiles drift with automated detection scripts, and migrating from personal 1Password sessions to scoped service accounts. Multiple PRs were shipped to clean up configuration patterns and document workspace conventions.
Billi/Toggl/CRA Time Tracking Automation ~7 sessions
Used Claude to sync time-tracking data between Toggl, Billi API, and Google Sheets, including creating 2025 CRAs from Excel and writing 89 days of data across multiple people. Implemented a day-cap with reconciliation in the toggl-calendar skill (PR #12) and iteratively cleaned up monthly entries. Skill/memory capture was an explicit goal.
Code Reviews, PRs & Bug Fixes ~8 sessions
End-to-end PR workflows including diagnosing iPhone responsiveness bugs, hardening production DB safeguards (PR #1182), rebasing Cloudflare Terraform changes, and fixing session hook errors. Claude handled worktree management, PR description rewrites, and applied review feedback, though occasionally over-engineered initial solutions until pushed back.
Research, Explanations & Architecture Guidance ~7 sessions
Informational and exploratory sessions covering chunked video upload architectures (tus vs WebSocket), Episto recodes database design, e2b.dev evaluation, and storage backend alternatives to Notion. Claude also assisted with sensitive legal/jurisdictional questions and explained tooling distinctions like Claude Code vs opencode, typically delivering codebase-grounded answers.
What You Wanted
Iterative Refinement
6
Information Retrieval
5
Explanation Request
5
Email Search
4
Notion Content Editing
4
Documentation Update
3
Top Tools Used
Bash
1056
Notion MCP: delete-a-block
243
Read
215
Edit
124
ToolSearch
110
Write
92
Languages
Markdown
183
Python
87
JSON
23
TypeScript
15
YAML
14
Ruby
9
Session Types
Multi Task
22
Single Task
12
Quick Question
7
Iterative Refinement
6
Exploration
3

How You Use Claude Code

You operate as a high-throughput orchestrator who treats Claude Code as a delivery system rather than a brainstorming partner. Your sessions typically span multiple deliverables chained together—filing CII taxes AND emailing accountants AND publishing a bilingual blog post in one go, or cleaning Toggl AND creating CRAs across three companies AND updating Notion. You give terse, goal-oriented prompts and expect Claude to figure out the path, but you stay actively engaged: the 21 'wrong_approach' frictions and frequent interruptions show you course-correct in real-time rather than writing detailed specs upfront. When Claude over-engineered a bash safeguard, you pushed back with 'This seems completely useless. Convince me otherwise'—a signature move of demanding justification before accepting complexity.

You trust Claude to run long autonomous chains (Bash dominates at 1056 calls, with heavy multi-file edits and Agent delegation), but you interrupt decisively when the approach drifts. You caught Claude editing the wrong worktree, reading a stale CLAUDE.md, missing an email thread because it triaged by title alone, and trying to exit plan mode prematurely. You also frequently ask 'which MCP did you use?' or question PR conventions: you want to understand the mechanism, not just receive the output. The high satisfaction rate (114 likely_satisfied) combined with willingness to revert work mid-stream ('revert SAMM CRA changes due to a client deal') suggests you're comfortable iterating in production rather than perfecting in staging.

Your workflow is PR-centric and documentation-heavy: nearly every session ends with a merged PR, a Notion subpage, or both. Markdown dominates your language footprint (183 files) because you treat Notion and CLAUDE.md as first-class artifacts alongside code. You bounce between French and English depending on context, work across many repos (dotfiles, episto, billi, samm, claude workspace), and consistently extract reusable skills from one-off solutions—turning a transcribe workflow into PR 84, or extracting project knowledge into user-level skills. You're essentially using Claude as a personal staff engineer + executive assistant hybrid, and you measure success by shipped artifacts, not conversation quality.

Key pattern: You chain ambitious multi-deliverable goals into single sessions, trust Claude to run autonomously, but interrupt sharply to demand justification or correct drift—optimizing for shipped PRs and Notion pages over careful upfront specification.
User Response Time Distribution
2-10s
62
10-30s
89
30s-1m
87
1-2m
80
2-5m
45
5-15m
65
>15m
51
Median: 60.7s • Average: 297.8s
Multi-Clauding (Parallel Sessions)
47
Overlap Events
47
Sessions Involved
33%
Of Messages

You run multiple Claude Code sessions simultaneously. Multi-clauding is detected when sessions overlap in time, suggesting parallel workflows.

User Messages by Time of Day
Morning (6-12)
87
Afternoon (12-18)
395
Evening (18-24)
146
Night (0-6)
0
Tool Errors Encountered
Other
65
Command Failed
58
User Rejected
36
File Not Found
6

Impressive Things You Did

Across 63 sessions spanning a month, you've blended deep technical work with knowledge management, infrastructure hygiene, and real-world business operations.

Ship-it-end-to-end discipline
You consistently push work all the way through to merged PRs with linked issues, post-merge cleanup, and worktree pruning. Examples like the prod-DB safeguards (PR #1182 + issue #1199), the dotfiles drift detector with LaunchAgent automation, and the toggl-calendar day-cap fix (PR #12 closing issue #10) show you treat 'done' as merged-and-cleaned, not just coded.
Pushback that improves code
When Claude over-engineered a bash safeguard, you challenged it with 'This seems completely useless. Convince me otherwise,' forcing a simplification down to 4 lines. This pattern of skeptical review (also visible in your PR review iterations on CLAUDE.md content) keeps the output lean and prevents accumulating cruft.
Cross-system business orchestration
You routinely chain Toggl, Billi, Notion, Fireflies, Gmail, and Google Sheets into single coherent workflows — like cleaning April Toggl data, generating CRAs for three clients, and updating Notion in one pass, or filing CII taxes while emailing accountants and publishing a bilingual blog post. You then capture the learnings as reusable skills so the next cycle is faster.
What Helped Most (Claude's Capabilities)
Good Explanations
17
Multi-file Changes
14
Good Debugging
6
Correct Code Edits
5
Proactive Help
4
Fast/Accurate Search
4
Outcomes
Partially Achieved
7
Mostly Achieved
11
Fully Achieved
31
Unclear
1

Where Things Go Wrong

Your sessions show strong overall success but recurring friction from Claude jumping into action with stale context, broken external tools (especially Notion MCP), and occasional over-engineering that requires you to push back.

Stale or unverified context before acting
Claude frequently acts on outdated information or assumptions instead of verifying first, forcing you to interrupt and redirect. Asking Claude to confirm current state (origin/main, file contents, email bodies) before proposing solutions would save round-trips.
  • Claude answered a PR convention question from stale CLAUDE.md in a worktree, requiring you to demand verification against origin/main
  • Claude judged emails by title alone and missed the 'état des lieux samm' thread, requiring you to explicitly point it out
Notion MCP instability and formatting churn
The Notion integration is a persistent pain point — broken update tools, duplicate blocks during incremental formatting, and confusion between the local MCP server and the claude.ai connector. You'd benefit from a stable 'write once' Notion workflow (build full block tree, then create) rather than incremental edits.
  • The MCP update-a-block tool failed, forcing delete+append workarounds across multiple sessions (243 delete-block calls is a red flag)
  • During the VPS pentest writeup, incremental formatting created so many duplicate Notion entries that you fell back to a local markdown file
Over-engineering and premature action requiring pushback
Claude sometimes ships more complexity than needed or jumps to execution before confirming intent, making you act as a brake. Defaulting to minimal solutions and confirming scope before exiting plan mode or rendering output would reduce this.
  • Claude over-engineered a bash safeguard with redundant checks, and you had to push back with 'This seems completely useless. Convince me otherwise' to get a 4-line version
  • Claude tried to exit plan mode and post a GitHub issue prematurely when you only wanted the plan filed, requiring interruption
Primary Friction Types
Wrong Approach
21
Misunderstood Request
8
Buggy Code
7
User Rejected Action
7
Excessive Changes
5
Stale Context
1
Inferred Satisfaction (model-estimated)
Frustrated
2
Dissatisfied
12
Likely Satisfied
114
Satisfied
23
Happy
2

Existing CC Features to Try

Suggested CLAUDE.md Additions

Just copy this into Claude Code to add it to your CLAUDE.md.

Verify the checkout before acting: run `pwd` and `git branch --show-current` before editing or pushing; `cd` context is not preserved between bash calls. (Wrong-checkout edits, lost directory context, and wrong-repo pushes recurred across at least 3 sessions.)
For Notion, build the full block tree and write it in one pass instead of editing incrementally; confirm which connector (local MCP server vs the claude.ai connector) is in use, and write calculations to the page rather than dumping them in chat. (Notion friction appeared in 5+ sessions.)
Verify state before answering from cached context: re-check files against origin/main rather than a worktree copy, search email bodies rather than titles, and resolve symlinks before reasoning about structure. (Stale-context corrections recurred and required explicit pushback.)
Default to the simplest solution that works, and justify any added complexity before writing it. (Over-engineering was explicitly rejected: 'This seems completely useless', 'Convince me otherwise'; a CLAUDE.md draft was also rejected in review for being too detailed.)

Just copy this into Claude Code and it'll set it up for you.

Custom Skills
Reusable prompts as markdown files invoked by /command
Why for you: You repeatedly do Billi CRA creation, Toggl reconciliation, Notion publishing, PR splitting, and dotfiles drift checks — each is a multi-step workflow that would benefit from a single /command entry point. You already have a toggl-calendar skill and whisper-cpp skill; extend the pattern.
mkdir -p ~/.claude/skills/billi-cra && cat > ~/.claude/skills/billi-cra/SKILL.md <<'EOF'
# Billi CRA Creation
Given a month and project, create CRAs via the Billi API:
1. Read 1Password for BILLI_API_TOKEN
2. Fetch existing CRAs to avoid duplicates
3. Build day entries from Toggl (use the toggl-calendar skill)
4. POST to the Billi /cra endpoint
5. Update the tracking Google Sheet
6. Report a summary table
EOF
Hooks
Shell commands auto-run at Claude lifecycle events
Why for you: You hit a `grep -q` hook exit-code bug already, and you have recurring patterns where Claude edits the wrong worktree. A PreToolUse hook on Edit/Write could verify `pwd` matches the expected worktree, and a SessionStart hook could echo the current branch and CWD.
.claude/settings.json (JSON does not allow comments; the hooks schema nests commands under a "hooks" array with "type": "command"):
{
  "hooks": {
    "SessionStart": [
      { "hooks": [ { "type": "command", "command": "echo \"CWD: $(pwd) | Branch: $(git branch --show-current 2>/dev/null)\"" } ] }
    ],
    "PreToolUse": [
      { "matcher": "Edit|Write", "hooks": [ { "type": "command", "command": "git rev-parse --show-toplevel || true" } ] }
    ]
  }
}
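For the worktree guard specifically, a standalone hook script gives you more control than an inline command. Below is a minimal sketch, assuming Claude Code's documented hook contract (the tool call arrives as JSON on stdin; exit code 2 blocks it and surfaces stderr to Claude). `EXPECTED_WORKTREE` is a hypothetical environment variable you would export per session, not something Claude Code sets for you.

```python
"""Sketch of a worktree guard as a PreToolUse hook script.
Assumes the documented hook contract: tool call as JSON on stdin,
exit code 2 blocks the call. EXPECTED_WORKTREE is a hypothetical
env var exported per session."""
import json
import os
import sys


def is_inside(path, root):
    """True if `path` resolves to somewhere under `root`."""
    path, root = os.path.realpath(path), os.path.realpath(root)
    return path == root or path.startswith(root + os.sep)


def decide(event, expected_root):
    """Return the hook exit code: 0 allows the call, 2 blocks it."""
    if not expected_root:
        return 0  # no guard configured; allow everything
    target = event.get("tool_input", {}).get("file_path", "")
    if target and not is_inside(target, expected_root):
        print(f"Refusing edit outside {expected_root}: {target}", file=sys.stderr)
        return 2
    return 0


# Wiring, left commented so the sketch is import-safe:
# sys.exit(decide(json.load(sys.stdin), os.environ.get("EXPECTED_WORKTREE")))
```

Register it as the PreToolUse command for `Edit|Write` and the wrong-checkout class of edits gets stopped before the file is touched rather than caught in review.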
MCP Servers
Connect Claude to external tools via Model Context Protocol
Why for you: You spend significant time on Gmail searches (4+ email_search sessions, one missed a key thread) and Billi API calls. A Gmail MCP would let Claude do full-text searches instead of title-only triage, and a Billi MCP would replace ad-hoc curl scripts.
claude mcp add gmail -- npx -y @modelcontextprotocol/server-gmail
# Then in session: 'Search all emails (body included) for SAMM commitments with François'

New Ways to Use Claude Code

Just copy this into Claude Code and it'll walk you through it.

Front-load context verification
Start sessions that touch worktrees, branches, or repo conventions with an explicit 'verify state' step before doing work.
Several of your sessions had Claude answer from stale worktree CLAUDE.md or wrong-directory edits. A 30-second verification at the start (which branch, which remote, which worktree, what's on origin/main) prevents the bigger cleanup later. You already do this implicitly when you push back — making it explicit upfront saves the round-trip.
Paste into Claude Code:
Before we start: run `pwd && git branch --show-current && git status && git fetch && git log origin/main..HEAD --oneline` and confirm the working state. Then proceed with: <task>
Batch Notion writes, don't iterate
When updating Notion pages with structured content, build the full block tree first, then write once.
You hit duplicate-block issues multiple times when Claude edited Notion incrementally (especially tables). The pattern that worked: delete the target block, then append the complete new structure in one patch-block-children call. Tell Claude to plan the full Notion structure in chat first, then execute.
Paste into Claude Code:
Plan the complete Notion page structure as a markdown outline first. Show it to me for approval. Then write it to Notion in a single delete+append operation per top-level block — no incremental edits.
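The write-once pattern can also be sketched directly against the public Notion REST API: build the complete block tree locally, then append it in a single `PATCH /v1/blocks/{page_id}/children` call. This is a sketch under assumptions, not your MCP tooling: `page_id` and token handling are placeholders, and the outline-to-block mapping covers only headings and paragraphs.

```python
"""Sketch of the write-once Notion pattern via the public REST API.
page_id/token are placeholders; the outline mapping is minimal."""
import json
import urllib.request


def _rich_text(text):
    return [{"type": "text", "text": {"content": text}}]


def paragraph(text):
    return {"object": "block", "type": "paragraph",
            "paragraph": {"rich_text": _rich_text(text)}}


def heading(text, level=2):
    kind = f"heading_{level}"
    return {"object": "block", "type": kind, kind: {"rich_text": _rich_text(text)}}


def build_tree(outline):
    """Turn [('h', text), ('p', text), ...] into Notion block objects."""
    return [heading(t) if kind == "h" else paragraph(t) for kind, t in outline]


def write_once(page_id, token, blocks):
    """One append call: no incremental edits, so no duplicate blocks."""
    req = urllib.request.Request(
        f"https://api.notion.com/v1/blocks/{page_id}/children",
        data=json.dumps({"children": blocks}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Notion-Version": "2022-06-28",
                 "Content-Type": "application/json"},
        method="PATCH")
    urllib.request.urlopen(req)  # raises urllib.error.HTTPError on failure


tree = build_tree([("h", "Status"), ("p", "All CRAs reconciled for April.")])
```

Note the API caps the number of children per request, so a very large page still needs a few chunked appends; the point is that each chunk is planned upfront, never edited in place.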
Use plan mode for multi-deliverable sessions
For sessions with 3+ deliverables (CRA + Notion + email, etc.), explicitly request a plan before execution.
Your most successful sessions (SAMM status, 1Password migration, CII taxes) had clear multi-step structure. Your friction sessions often had Claude jump to execution then revert (Toggl SAMM revert, monetization plan posting). Asking Claude to enumerate deliverables and confirm order upfront would catch course-corrections cheaply.
Paste into Claude Code:
Enter plan mode. List every deliverable with: (1) source data needed, (2) target system, (3) reversibility. Wait for my approval before executing anything that writes to Notion, GitHub, Billi, or Google Sheets.

On the Horizon

Your workflow shows a power user orchestrating complex multi-system tasks across Notion, GitHub, Billi, and local skills—the next frontier is letting autonomous agents own these loops end-to-end while you review outcomes, not steps.

Autonomous Drift Detection & Self-Healing Repos
Given your repeated dotfiles audits, skill documentation drift fixes, and Terraform rebases, a scheduled background agent could continuously detect drift across all your repos (skills↔README↔website, dotfiles↔installed packages, Terraform↔Cloudflare state) and open ready-to-merge PRs autonomously. Instead of you triggering 'check for drift' sessions, you'd wake up to a dashboard of proposed fixes with passing CI. This turns your reactive 5-10 hour audit sessions into 5-minute review queues.
Getting started: Use Claude Agent SDK with cron-triggered headless sessions plus your existing LaunchAgent pattern, gh CLI for PR creation, and a shared 'drift report' Notion page as the single source of truth.
Paste into Claude Code:
Build me an autonomous drift-detection system using the Claude Agent SDK that runs nightly via LaunchAgent. It should: (1) iterate over a configured list of repos, (2) for each repo, spawn a headless Claude session that compares declared state (skills/, README, dotfiles manifests, Terraform configs) against actual state (filesystem, installed packages, deployed infrastructure), (3) auto-create a draft PR per drift category with a clear diff and justification, (4) post a daily summary to a Notion 'Drift Dashboard' page with links to each PR. Include rate limiting, a kill switch, and ensure each spawned agent works in its own git worktree. Start by scaffolding the orchestrator and one example detector for the skills↔README case.
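The core of every detector in that system is a declared-vs-actual comparison. A minimal sketch of the skills↔README case follows; the README bullet format, the SKILL.md convention, and the sample inputs are assumptions to keep it self-contained, and the `gh pr create` wiring is deliberately left out.

```python
"""Sketch of one drift detector: skills declared in a README vs skill
directories actually present. README format and layout are assumptions."""
import re
from pathlib import Path


def declared_skills(readme_text):
    """Parse '- skill-name: ...' bullets from the README (assumed format)."""
    return set(re.findall(r"^- ([\w-]+):", readme_text, flags=re.M))


def actual_skills(skills_dir):
    """A skill is any subdirectory containing a SKILL.md."""
    return {p.parent.name for p in Path(skills_dir).glob("*/SKILL.md")}


def drift(declared, actual):
    """Two drift categories, each a candidate for its own draft PR."""
    return {"undocumented": actual - declared, "missing": declared - actual}


# Illustrative inputs, not real repo state:
report = drift(declared_skills("- toggl-calendar: day caps\n- transcribe: audio\n"),
               {"toggl-calendar", "whisper-cpp"})
```

The nightly orchestrator then just loops repos, calls a detector per category, and opens one draft PR per non-empty drift bucket.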
Parallel Agents for Multi-System Reporting
Your SAMM/Billi/Toggl/Fireflies investigations show you stitching evidence from 5+ systems sequentially over hours. Spawn parallel sub-agents—one per data source (Git, Billi API, Fireflies, Notion, Gmail)—that gather evidence concurrently, then a synthesis agent reconciles findings, flags contradictions (like the Dec/Jan timeframe mismatch), and publishes a structured Notion report. What takes 90 minutes serially completes in 10 minutes with higher recall.
Getting started: Use the Task tool with multiple parallel Agent invocations, each scoped to one MCP server (notion, gmail, fireflies) plus a final synthesis pass. Define a shared evidence schema (JSON) so sub-agents return comparable structured findings.
Paste into Claude Code:
Create a 'project-archaeology' workflow that investigates the history of a given project across multiple sources in parallel. Spawn 5 sub-agents simultaneously using the Task tool: (1) git-archaeologist searching commits/PRs/issues across relevant repos, (2) email-archaeologist searching Gmail for threads, (3) meeting-archaeologist searching Fireflies transcripts, (4) notion-archaeologist searching Notion pages, (5) invoice-archaeologist checking Billi CRAs. Each must return findings as JSON {date, source, actor, evidence, confidence}. Then a synthesis agent merges results into a chronological timeline, flags contradictions, and publishes to a Notion subpage. Test it on: 'When did the SAMM Ximi sync become functional and who decided on recurring interventions?'
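The shared evidence schema and the synthesis step can be sketched in a few lines. The `claim` key here is an addition to the schema above (it gives the synthesizer something to group on when flagging contradictions), and the sample findings are illustrative, not real data.

```python
"""Sketch of the shared evidence schema plus the synthesis step.
The `claim` key is an added assumption; sample findings are illustrative."""
from collections import defaultdict


def merge_timeline(findings):
    """Merge per-source findings into one chronological timeline."""
    return sorted(findings, key=lambda f: f["date"])


def flag_contradictions(findings):
    """Claims that different sources place at different dates."""
    dates_by_claim = defaultdict(set)
    for f in findings:
        dates_by_claim[f["claim"]].add(f["date"])
    return [claim for claim, dates in dates_by_claim.items() if len(dates) > 1]


findings = [  # one object per sub-agent finding (illustrative)
    {"date": "2025-12-15", "source": "gmail", "actor": "François",
     "claim": "ximi-sync-live", "evidence": "email thread", "confidence": 0.7},
    {"date": "2026-01-10", "source": "fireflies", "actor": "François",
     "claim": "ximi-sync-live", "evidence": "meeting transcript", "confidence": 0.8},
    {"date": "2025-11-02", "source": "git", "actor": "dev",
     "claim": "sync-branch-merged", "evidence": "merge commit", "confidence": 0.95},
]
timeline = merge_timeline(findings)
conflicts = flag_contradictions(findings)
```

Because every sub-agent returns the same shape, the synthesis agent's job reduces to a sort plus a group-by, which is exactly what makes the Dec/Jan style of timeframe mismatch surface automatically instead of by rereading transcripts.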
Test-Driven Skill Evolution Loop
Your toggl-calendar day-cap fix (implement → test → verify on real data → PR) is the template for a fully autonomous skill-improvement loop. Define behavioral tests for each skill (transcribe, billi-cra, whisper-cpp, toggl-calendar), then let an agent iterate: detect failing real-world cases from session logs, write a failing test, implement the fix, run tests, open the PR, all without human prompting until green. Your skills compound in quality automatically.
Getting started: Combine pytest/jest fixtures with the Agent SDK in a loop: feed it failing cases from your session transcripts, let it edit skill files and re-run tests until pass, then auto-PR. Use Claude's built-in Bash + Edit tools with a strict 'tests must pass before commit' guard.
Paste into Claude Code:
Set up a test-driven evolution loop for my Claude skills. For each skill in ~/.claude/skills/: (1) scaffold a tests/ directory with real-world fixture inputs and expected outputs derived from past session transcripts, (2) write a runner script that executes the skill against fixtures and reports diffs, (3) build a 'skill-evolver' agent that on a weekly cron: scans recent session transcripts for skill invocations that produced suboptimal results (e.g., the missed 8.4h overnight timer in toggl-calendar), generates a new failing fixture, edits the skill until tests pass, and opens a PR with the fix + new test. Start with toggl-calendar and transcribe. The agent must never commit if tests fail.
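The fixture runner at the heart of that loop can be sketched briefly. The layout (`tests/fixtures/*.in` with a sibling `*.expected`) and the `run_skill` stub are assumptions; in practice `run_skill` would shell out to however the skill is actually invoked, e.g. a headless Claude session.

```python
"""Sketch of a skill fixture runner. Layout and run_skill are
assumptions; run_skill is an identity stub so the sketch runs."""
import difflib
from pathlib import Path


def run_skill(skill_name, input_text):
    """Stub: invoke the real skill here."""
    return input_text


def diff_lines(expected, got):
    """Unified diff between expected and actual output; empty means pass."""
    return list(difflib.unified_diff(
        expected.splitlines(), got.splitlines(),
        fromfile="expected", tofile="got", lineterm=""))


def check_fixture(skill_name, case):
    """`case` is a *.in Path with a sibling *.expected file."""
    got = run_skill(skill_name, case.read_text())
    return diff_lines(case.with_suffix(".expected").read_text(), got)


def run_all(skill_dir):
    """Map fixture name -> passed, for every *.in under tests/fixtures."""
    skill_dir = Path(skill_dir)
    return {case.name: not check_fixture(skill_dir.name, case)
            for case in sorted(skill_dir.glob("tests/fixtures/*.in"))}
```

The 'never commit if tests fail' guard then becomes a one-line check that `run_all` returns no False values before the agent is allowed to open a PR.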
""This seems completely useless. Convince me otherwise.""
When Claude proposed an over-engineered bash safeguard with redundant checks against prod DB resets, the user pushed back hard, forcing Claude to defend its design — and ultimately simplify it down to a clean 4-line block. A great moment of human skepticism keeping the AI honest.