| description | args | section | topLevelCli |
|---|---|---|---|
| Compare multiple sources on a topic and produce a source-grounded matrix of agreements, disagreements, and confidence. | <topic> | Research Workflows | true |
Compare sources for: $@
Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
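The slug rule above can be sketched as a small helper. This is a minimal illustration, not part of the command: the filler-word list is an assumption, and `derive_slug` is a hypothetical name.

```python
import re

# Filler words to drop when building the slug. This list is an
# assumption for illustration; the command only says "no filler words".
FILLER = {"a", "an", "the", "of", "for", "and", "to", "in", "on", "vs", "versus"}

def derive_slug(topic: str, max_words: int = 5) -> str:
    """Lowercase, hyphen-separated slug of at most max_words words."""
    words = re.findall(r"[a-z0-9]+", topic.lower())
    kept = [w for w in words if w not in FILLER][:max_words]
    return "-".join(kept)
```

For example, a topic like "Rust vs Go for network services" would yield `rust-go-network-services`, which then names both the plan file and the final comparison file.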
Requirements:
- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, and the expected output structure. Write the plan to `outputs/.plans/<slug>.md` and present it to the user. If this is an unattended or one-shot run, continue automatically; if the user is actively interacting, give them a brief chance to request changes before proceeding.
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
- Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
- Generate charts with `pi-charts` when the comparison involves quantitative metrics; use Mermaid for method or architecture comparisons.
- Distinguish agreement, disagreement, and uncertainty clearly.
- Save exactly one comparison to `outputs/<slug>-comparison.md`.
- End with a `Sources` section containing direct URLs for every source used.
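The matrix's required columns could take a shape like the following. This is an illustrative sketch only; the source names and cell values are placeholders, not real publications or findings:

```markdown
| Source | Key claim | Evidence type | Caveats | Confidence |
|---|---|---|---|---|
| Example Blog A | X outperforms Y at scale | Benchmark | Single workload tested | Medium |
| Example Paper B | X and Y are comparable | Peer-reviewed study | Measured older versions | High |
```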