🚧 This site is not yet live — actively under construction
Independent · Open Methodology · AI-Powered

Semantic evidence search for One Health research

Search across veterinary, zoonotic, wildlife, and epidemiological literature. Every claim graded. Every source linked. Every method transparent.

2.4M+
Papers indexed
8.1M+
Claims extracted
47
Species covered
GRADE
Evidence quality levels
What we do

Research answers, not just search results

Traditional databases return papers. We return structured evidence: graded claims, linked across species and disciplines, with full provenance to the source.

Evidence Cards

Every paper, structured

Each paper in our index gets an Evidence Card: study type classification, PICO extraction, consensus-validated claims, and a GRADE-aligned quality level. Not a summary — a structured, queryable record.

Semantic Search

Find related evidence across papers

Claims and entities are embedded in a shared biomedical vector space using PubMedBERT. Search by meaning, not keywords — find related findings across species, disciplines, and terminology.

Evidence Summaries

Transparent strength assessment

Each study produces an evidence summary with its claims, GRADE quality level, and composite quality score. You see the structured evidence, not just a list of papers.

Evidence at a glance

Structured for decisions, not just discovery

An Evidence Card is not an abstract. It's a structured extraction: what was studied, in which species, what was found, and how strong the evidence is. Every field is searchable. Every claim links back to its source sentence.


Cards are generated automatically using our AI extraction pipeline. Our grading rubric, extraction methodology, and quality controls are fully documented and public.

Moderate
Journal of Veterinary Internal Medicine · 2025
Efficacy of maropitant versus ondansetron for chemotherapy-induced emesis in dogs receiving doxorubicin
RCT Canine n=84 Multicenter
Maropitant was associated with reduced acute emesis episodes compared to ondansetron (p=0.02) over 72 hours post-treatment.
No significant difference in delayed emesis between treatment groups at day 5.
Appetite scores were higher in the maropitant group at 48 hours.
How it works

From paper to searchable evidence

Ingest

We continuously index metadata and abstracts from cross-publisher feeds, open repositories, and publisher-provided endpoints. Every paper gets a canonical identifier.

Extract

Three independent AI extractions run on every paper and are clustered by semantic similarity. Only claims that reach consensus (2-of-3 agreement) survive. Each claim is linked to its supporting text.
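The 2-of-3 consensus step can be sketched roughly as below. This is an illustrative toy, not the production pipeline: it uses string similarity as a stand-in for the real semantic-similarity clustering, and the threshold and helper names are assumptions.

```python
from difflib import SequenceMatcher
from itertools import chain

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    # Stand-in for semantic similarity; the real pipeline clusters
    # embedding vectors, not raw strings (assumption for this sketch).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def consensus_claims(runs: list[list[str]], needed: int = 2) -> list[str]:
    """Keep claims that appear (near-verbatim) in at least `needed`
    of the independent extraction runs, deduplicating as we go."""
    kept: list[str] = []
    for claim in chain.from_iterable(runs):
        votes = sum(any(similar(claim, c) for c in run) for run in runs)
        if votes >= needed and not any(similar(claim, k) for k in kept):
            kept.append(claim)
    return kept
```

With three extraction runs, a claim phrased almost identically in two of them survives; a claim seen only once is discarded.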

Grade

Each source receives a GRADE-aligned quality level (High, Moderate, Low, Very Low) based on study design, plus a composite quality score incorporating sample size, journal tier, and recency. The rubric is public and versioned.
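A composite score of this shape might look like the sketch below. The weights, saturation point, field names, and decay window here are all hypothetical; the actual rubric is the public, versioned one.

```python
# Hypothetical weights and mappings -- the real rubric is defined in the
# public versioned documentation, not here.
GRADE_POINTS = {"High": 1.0, "Moderate": 0.7, "Low": 0.4, "Very Low": 0.2}

def composite_score(grade: str, n: int, journal_tier: int, year: int,
                    current_year: int = 2025) -> float:
    """Blend the design-based GRADE level with sample size, journal tier
    (1 = top tier), and recency into a 0-1 score. Illustrative only."""
    size = min(n / 200, 1.0)                 # saturates at n=200 (assumption)
    tier = {1: 1.0, 2: 0.7, 3: 0.4}.get(journal_tier, 0.2)
    recency = max(0.0, 1.0 - (current_year - year) / 20)  # 20-year linear decay
    return round(0.5 * GRADE_POINTS[grade] + 0.2 * size
                 + 0.15 * tier + 0.15 * recency, 3)
```

For a 2025 multicenter RCT with n=84 in a top-tier journal graded Moderate, this toy formula yields 0.734.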

Index

Claims and entities are embedded into a shared vector space using PubMedBERT. Semantic similarity search lets you find related evidence across species and disciplines — even when different terminology is used.
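The nearest-neighbour lookup behind semantic search can be sketched as below. In production the vectors come from PubMedBERT and the search runs over a vector index at scale; this toy version ranks hand-made vectors by cosine similarity to show the idea.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    # Cosine similarity: dot product of the vectors over the product
    # of their norms.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def search(query_vec: list[float],
           index: list[tuple[str, list[float]]],
           top_k: int = 3) -> list[str]:
    """index: list of (claim_id, embedding). Returns claim ids ranked
    by similarity to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [cid for cid, _ in ranked[:top_k]]
```

Because ranking is by vector distance rather than shared keywords, a query about "emesis" can surface a claim phrased as "vomiting" when their embeddings are close.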

Who it's for

Built for the people who need the evidence

Veterinary Clinicians

Ask a clinical question, find structured evidence filtered by species. See how many RCTs support the treatment, what GRADE quality level each carries, and what the studies found — in the time it takes to read an abstract.

Epidemiologists & Public Health

Trace evidence for a pathogen across host species. See which interventions have been studied at the wildlife-livestock-human interface, and where the surveillance gaps are.

Researchers & Students

Start a literature review from structured evidence, not keyword search. Semantic search surfaces related findings across species and terminology, showing which questions have been studied and where the gaps are.

Policymakers & Guideline Authors

Get evidence summaries structured for decision-making: counts by study type, GRADE quality levels, and transparent quality scoring for every source.

Our commitment

Transparent by design


This is an independent research tool, not affiliated with any government body, publisher, or institution. Our methodology is public. Our grading rubric is versioned. Every claim links to its source. We show our work.

Open rubric

Our evidence grading methodology is fully documented and versioned. You can inspect, critique, and suggest improvements.

Full provenance

Every extracted claim links to the source sentence in the original abstract. Nothing is fabricated; everything is traceable.

No clinical imperatives

We report what the evidence shows. We never say "you should." Clinical decisions remain with qualified professionals.

Start searching the evidence

Ask a question. See the structured evidence. Follow every claim back to its source.