Literature Review Pack
Guides researchers through the end-to-end systematic literature review process, including protocol development, search strategy optimization, screening, evidence extraction, and PRISMA-compliant reporting.
The Zoo of Manual Review Workflows
Systematic literature reviews are where rigorous methodology goes to die. You start with a clean protocol, but within weeks, the tracking breaks down. You're juggling Zotero exports that refuse to sync, Excel sheets with conditional formatting that collapses under row limits, and a Word document where inclusion criteria are buried in a footnote that no one reads. The result is a workflow that no one can reproduce, and a final manuscript that gets flagged for reporting errors before peer review even begins.
Install this skill
npx quanta-skills install literature-review-pack
Requires a Pro subscription. See pricing.
We built the Literature Review Pack because we saw too many researchers treating systematic reviews like ad-hoc projects instead of engineering problems. A review is a data pipeline. It has ingestion, screening, extraction, and synthesis phases. When you treat it as a collection of documents, you introduce friction at every step. We've seen teams spend months on a review only to realize their screening decisions weren't version-controlled, or that their search string for Scopus was a manual copy-paste of the PubMed query, missing database-specific syntax quirks. This isn't just inefficiency; it's a structural failure. If you're also looking to automate citation management or optimize search strategies, you'll find that manual workflows don't scale past a few hundred records. The friction compounds.
What a Broken Review Costs You
The cost of a broken review workflow isn't just frustration; it's structural risk. When you manually track screening decisions, you introduce inter-rater variability that no amount of "double-checking" fixes. You miss the subtle drift in inclusion criteria that invalidates your entire sample. More critically, you risk non-compliance with reporting standards. The PRISMA 2020 statement [1] replaced the 2009 guidelines to address exactly these gaps, introducing a 27-item checklist and a standardized flow diagram to ensure transparency [7]. If your process doesn't natively support these requirements, you're not just risking rejection; you're risking the credibility of your evidence synthesis.
Every hour spent manually reconciling duplicates or reconstructing a PRISMA flow diagram [2] is an hour stolen from actual analysis. We've audited workflows where researchers spent three weeks manually extracting author metadata just to fill out tables, work that a script can do in seconds. The downstream impact is severe: delayed publications, retraction risks due to incomplete reporting, and the inability to reproduce your findings. If you're also integrating bias assessment tools or data synthesis frameworks, you'll see how manual gaps cascade into meta-analysis errors. The PRISMA statement [6] emphasizes that reporting guidelines exist to prevent exactly this kind of opacity. When you ignore the engineering of the review process, you pay for it in credibility and time.
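To make the "a script can do in seconds" claim concrete, here is a minimal sketch of the kind of metadata pull that pubmed-extract.py automates, using the pubmed_parser package against a local MEDLINE/PubMed XML export. The file name and the exact dictionary keys are illustrative assumptions; field names can vary between pubmed_parser versions, so treat this as a sketch rather than the pack's actual script.

```python
# Minimal sketch: pull author and journal metadata out of a MEDLINE XML export.
# Assumes a local file named "medline_export.xml"; adjust the path to your export.
import csv

import pubmed_parser as pp

# parse_medline_xml returns a list of dicts, one per record.
records = pp.parse_medline_xml("medline_export.xml")

with open("extracted_metadata.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["pmid", "title", "authors", "affiliations", "journal", "pubdate"])
    for rec in records:
        writer.writerow([
            rec.get("pmid", ""),
            rec.get("title", ""),
            rec.get("authors", ""),        # semicolon-separated author string
            rec.get("affiliations", ""),   # key name may differ across versions
            rec.get("journal", ""),
            rec.get("pubdate", ""),
        ])

print(f"Extracted {len(records)} records")
```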
A Team's Three-Week Reconciliation Nightmare
Imagine a research team attempting a systematic review on a niche clinical intervention. They set up a shared spreadsheet for screening, but as the record count climbs past 500, the spreadsheet becomes unmanageable. They decide to add a second reviewer for quality control, but now they have two divergent sets of decisions to reconcile. They try to map their search strategy to PICO elements, but the Boolean logic for PubMed differs from Scopus, and they end up manually rewriting queries. When they finally reach the drafting phase, they realize they haven't captured the specific metadata required for the Cochrane risk of bias assessment [4]. They spend three weeks manually extracting authors, affiliations, and grant data just to fill out the tables. The PRISMA flow diagram ends up being a retrofitted afterthought rather than a real-time reflection of the process [3]. This isn't an edge case; it's the default experience for teams without a structured, programmatic approach to review management. A protocol design tool or meta-analysis suite can't save you if your underlying data collection is broken from day one.
What Changes Once the Pack Is Installed
With the Literature Review Pack installed, the review process shifts from manual tracking to validated workflow execution. The skill orchestrates the end-to-end process, enforcing protocol adherence at every step. You define your search strategy using the search-query-builder.yaml template, which maps PICO/PECO frameworks to database-specific syntax, eliminating manual query rewriting. Screening decisions are captured against a structured screening-protocol.md template that aligns with PRISMA and Cochrane standards, ensuring that inclusion/exclusion criteria are version-controlled and reproducible. The validator script check-prisma-compliance.py runs against your review manifest, exiting non-zero if critical PRISMA 2020 items are missing, so you catch reporting gaps before submission. Evidence extraction is handled by pubmed-extract.py and scholarly-search.py, which automate metadata retrieval and BibTeX generation, freeing you to focus on synthesis. The result is a review that is auditable, compliant, and fast.
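As an illustration of the idea behind search-query-builder.yaml (not its actual schema, which is not reproduced here), the sketch below renders a single PICO-framed concept list into two database-specific strings: PubMed's [tiab] field tags and Scopus's TITLE-ABS-KEY() syntax. The concept list and helper functions are hypothetical, but they show why a copy-pasted PubMed string is not a valid Scopus query.

```python
# Hypothetical sketch: one PICO concept list, two database-specific renderings.
pico = {
    "population":   ["older adults", "elderly"],
    "intervention": ["resistance training", "strength training"],
    "outcome":      ["sarcopenia", "muscle mass"],
}

def to_pubmed(concepts: dict) -> str:
    # Each concept becomes an OR block tagged with [tiab]; blocks are ANDed together.
    blocks = [
        "(" + " OR ".join(f'"{term}"[tiab]' for term in terms) + ")"
        for terms in concepts.values()
    ]
    return " AND ".join(blocks)

def to_scopus(concepts: dict) -> str:
    # Scopus wraps title/abstract/keyword searches in TITLE-ABS-KEY().
    blocks = [
        "TITLE-ABS-KEY(" + " OR ".join(f'"{term}"' for term in terms) + ")"
        for terms in concepts.values()
    ]
    return " AND ".join(blocks)

print(to_pubmed(pico))
print(to_scopus(pico))
```

Keeping the concept list in one place and generating each database's string from it is what removes the "manual rewrite per database" step the case study above describes.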
What's in the Literature Review Pack
- `skill.md` — Orchestrator skill that defines the Literature Review Expert persona, outlines the end-to-end systematic review workflow, and references all templates, scripts, validators, and references.
- `references/prisma-2020-checklist.md` — Canonical knowledge base containing the full PRISMA 2020 27-item checklist and flow diagram description for reporting systematic reviews.
- `references/cochrane-methodology.md` — Canonical knowledge base containing Cochrane Handbook excerpts on risk of bias assessment, the GRADE approach, and evidence synthesis standards.
- `templates/search-query-builder.yaml` — Production-grade template for structuring search strategies using PICO/PECO frameworks, Boolean operators, and database-specific syntax mappings.
- `templates/screening-protocol.md` — Template for defining inclusion/exclusion criteria, screening workflow, and data extraction fields aligned with PRISMA and Cochrane standards.
- `scripts/scholarly-search.py` — Executable Python script using the scholarly package to optimize search strategies, retrieve publications, fill author metadata, and export BibTeX with proxy support.
- `scripts/pubmed-extract.py` — Executable Python script using pubmed_parser to parse MEDLINE/PubMed XML and extract authors, affiliations, grants, and citations for evidence synthesis.
- `validators/check-prisma-compliance.py` — Programmatic validator that checks a review manifest YAML against PRISMA 2020 required fields and exits non-zero if critical items are missing (a minimal sketch of this style of check follows this list).
- `tests/test-validator.sh` — Test script that runs the PRISMA compliance validator on valid and invalid manifests to ensure it correctly passes and fails.
- `examples/worked-example.yaml` — Worked example demonstrating a complete search strategy and screening protocol configuration for a sample research question.
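For a sense of what "exits non-zero" means in practice, here is a hedged sketch of the style of check a manifest validator like check-prisma-compliance.py might perform. The manifest path and the required-field names are assumptions for illustration, not the pack's actual PRISMA rule set.

```python
# Illustrative sketch of a manifest check; the field names below are assumed,
# not the pack's actual PRISMA 2020 rule set.
import sys

import yaml  # PyYAML

# Hypothetical subset of PRISMA 2020 items a review manifest might need to declare.
REQUIRED_FIELDS = [
    "title",
    "eligibility_criteria",
    "information_sources",
    "search_strategy",
    "selection_process",
    "risk_of_bias_assessment",
    "synthesis_methods",
]

def main(path: str) -> int:
    with open(path, "r", encoding="utf-8") as fh:
        manifest = yaml.safe_load(fh) or {}
    missing = [field for field in REQUIRED_FIELDS if not manifest.get(field)]
    if missing:
        print("Missing required items: " + ", ".join(missing))
        return 1  # non-zero exit so CI or a pre-submission hook fails loudly
    print("Manifest passes the illustrative checks")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "review-manifest.yaml"))
```

Wiring a check like this into CI means a missing reporting item blocks the merge instead of surfacing during peer review.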
Install and Ship
Stop managing reviews in spreadsheets. Start enforcing methodology. Upgrade to Pro to install the Literature Review Pack and ship your next systematic review with confidence.
References
1. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews — pubmed.ncbi.nlm.nih.gov
2. Creating a PRISMA flow diagram: PRISMA 2020 — guides.lib.unc.edu
3. Systematic Reviews: PRISMA — guides.libraries.emory.edu
4. Cochrane handbook for systematic reviews of interventions — pure.johnshopkins.edu
5. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews — bmj.com
6. PRISMA statement — prisma-statement.org
7. PRISMA 2020 statement — prisma-statement.org
8. An updated guideline for reporting systematic reviews — equator-network.org
Frequently Asked Questions
How do I install Literature Review Pack?
Run `npx quanta-skills install literature-review-pack` in your terminal. The skill will be installed to ~/.claude/skills/literature-review-pack/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.
Is Literature Review Pack free?
Literature Review Pack is a Pro skill, available on the $29/mo Pro plan. You need a Pro subscription to access this skill. Browse 37,000+ free skills at quantaintelligence.ai/skills.
What AI coding agents work with Literature Review Pack?
Literature Review Pack works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.