Frequently asked questions
Common questions about our platform, methodology, and how to interpret the evidence we present.
General
What is onehealth.science?
onehealth.science is an AI-powered evidence search platform for One Health research. We index metadata and abstracts from veterinary, zoonotic, wildlife, and epidemiological literature, then use automated extraction to produce structured Evidence Cards with graded claims for every paper. The result is a semantic search system where you can ask a clinical or research question and get back structured, graded evidence — not just a list of papers.
Are you an official "One Health" organization?
No. We are an independent research platform. "One Health" is a widely used scientific framework, not a proprietary term. We use it because it accurately describes our scope: evidence across the human-animal-environment interface. We have no affiliation with any government body, university, or publisher.
Who is the platform for?
Our primary audiences are veterinary clinicians looking for evidence-based answers to clinical questions, epidemiologists and public health professionals tracking evidence across species boundaries, researchers and students conducting literature reviews, and policymakers developing evidence-informed guidelines. The platform is designed to be useful whether you have five minutes between appointments or five hours for a deep review.
How much does it cost?
Basic search and evidence profiles are free. We believe that access to evidence should not be a barrier to good clinical and research decisions. Advanced features — including API access, bulk data exports, and integration tools — are available through paid plans. Details will be published when these features launch.
Evidence & Methodology
How do you grade evidence quality?
We use the GRADE (Grading of Recommendations Assessment, Development and Evaluation) framework, the international standard used by the WHO, Cochrane, and over 100 organizations worldwide. Each source receives one of four quality levels. High quality comes from systematic reviews and well-designed RCTs. Moderate quality comes from RCTs with limitations or strong cohort studies. Low quality comes from observational studies. Very Low quality comes from case series, case reports, in vitro studies, and expert opinion. Alongside the quality level, each source also receives a composite quality score (0–10) that integrates study design, journal tier, sample size, and recency into a single comparable number.
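The exact weighting behind the composite score is internal to the platform and published on the methodology page; purely as an illustration of how four such factors could collapse into one 0–10 number, here is a minimal sketch in which every point value and scale is hypothetical:

```python
import math

def composite_score(design, journal_tier, sample_size, year, current_year=2025):
    """Illustrative 0-10 composite; all weights below are hypothetical."""
    design_pts = {"systematic_review": 4.0, "rct": 3.5, "cohort": 2.5,
                  "case_control": 2.0, "case_series": 1.0, "expert_opinion": 0.5}
    tier_pts = {1: 2.0, 2: 1.5, 3: 1.0, 4: 0.5}                 # tier 1 = top journals
    size_pts = min(2.0, math.log10(max(sample_size, 1)))        # saturates at n = 100
    recency_pts = max(0.0, 2.0 - 0.1 * (current_year - year))   # fades over 20 years
    return round(design_pts.get(design, 0.5) + tier_pts.get(journal_tier, 0.5)
                 + size_pts + recency_pts, 1)
```

Under these made-up weights, an RCT with 84 participants in a top-tier journal from 2023 scores 9.2; the point is only that heterogeneous signals (design, venue, size, age) become a single comparable number.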
How reliable is the automated grading?
Our automated grading applies the GRADE framework, but it is transparent, structured triage, not a substitute for expert systematic review. It can tell you that a source is an RCT with 84 participants in a multicenter study, rated Moderate quality, which is factual and useful, but it cannot assess internal validity or methodological rigor at the level a trained reviewer can. We publish our rubric, version it, and pass every claim through triple-extraction consensus: three independent extractions, with only claims on which at least two agree surviving. You can always see why a source received its quality level and judge for yourself.
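The 2-of-3 consensus rule itself is simple to state in code. This sketch assumes claims have already been normalized into comparable (hashable) values, which in practice is the hard part of the pipeline:

```python
from collections import Counter

def consensus_claims(run_a, run_b, run_c):
    """Keep only claims found in at least 2 of 3 independent extraction runs."""
    votes = Counter()
    for run in (run_a, run_b, run_c):
        votes.update(set(run))  # each run votes at most once per claim
    return {claim for claim, n in votes.items() if n >= 2}
```

A claim extracted by only one run is treated as noise and dropped; a claim two or three runs agree on survives into the evidence graph.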
What does "mixed evidence" mean?
When semantically related claims across studies point in different directions — for example, some studies finding a positive effect and others finding no effect — we describe this as "mixed evidence." This doesn't mean the evidence is bad; it means the science is not settled on this question. Each study's Evidence Card shows its claims, polarity, and GRADE quality level so you can see the full picture.
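In polarity terms, "mixed evidence" amounts to related claims whose directions disagree. A toy illustration, where the labels and the +1/0/-1 encoding are ours rather than the platform's internal representation:

```python
def evidence_direction(polarities):
    """Summarize claim directions: +1 positive effect, 0 no effect, -1 negative."""
    pos = any(p > 0 for p in polarities)
    neg = any(p < 0 for p in polarities)
    null = any(p == 0 for p in polarities)
    if sum([pos, neg, null]) > 1:   # more than one direction present
        return "mixed"
    if pos:
        return "consistent positive"
    if neg:
        return "consistent negative"
    return "consistent null"
```

Two positive findings plus one null finding would be labeled "mixed", while three positive findings would be "consistent positive".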
How do you handle retracted papers?
We cross-reference our index with retraction databases where available. Retracted papers are flagged and excluded from active evidence profiles. If a retraction occurs after a paper has been indexed and its claims extracted, the claims are removed from the active graph and the paper's Evidence Card is updated with a retraction notice.
Do you analyze the full text of papers?
Currently, our extraction pipeline works primarily with abstracts and metadata. This means we capture the main findings of each paper but may miss subgroup analyses, detailed methodology, or nuance that appears only in the full text. For open-access papers with permissive licensing, we do ingest full text for richer extraction. We plan to expand full-text coverage in a future release.
How is this different from PubMed or Google Scholar?
PubMed and Google Scholar are search engines: they return papers that match your query. We return structured evidence. When you search our platform, you don't get a list of PDFs — you get Evidence Cards showing what types of studies address your question, what each one found, and how strong the evidence is (using the GRADE framework). Every claim links to its source. Every quality level is explained. Semantic search via PubMedBERT embeddings lets you find connections across species and disciplines that keyword search can't reveal.
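Semantic search compares dense embedding vectors rather than keywords. Stripped to its core, ranking is cosine similarity between a query vector and document vectors; a real system would produce the vectors with an encoder such as PubMedBERT and serve them from an approximate-nearest-neighbor index, so the short vectors below are stand-ins:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_rank(query_vec, doc_vecs):
    """Return document indices ordered from most to least similar to the query."""
    return sorted(range(len(doc_vecs)),
                  key=lambda i: cosine(query_vec, doc_vecs[i]),
                  reverse=True)
```

Because similarity is measured in embedding space, a query about "canine leptospirosis treatment" can surface a relevant wildlife-reservoir study that never uses the word "canine", which is exactly what keyword matching misses.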
Data & Coverage
How often is the index updated?
New papers are ingested weekly. The extraction pipeline processes new arrivals within 48 hours of ingestion. Entity dictionaries are updated monthly. Grading rubric changes are versioned and documented on the methodology page.
Why can't I find a specific paper?
There are several possible reasons: the paper may not yet have been indexed by our source feeds (new papers can take 1–2 weeks to appear), the journal may not be in our current coverage scope, the paper may be in a language we don't yet process (currently English only), or the abstract may not have been available in our metadata sources. If you believe a paper should be included, please let us know at coverage@onehealth.science.
Can I submit my own papers or data?
Not yet. We are planning an author/researcher upload feature that will allow you to submit manuscripts and datasets directly. This will be available in a future phase. For now, if your paper has been published and has a DOI, it should be picked up through our standard ingestion pipeline.
Clinical Use
Can I rely on this for clinical decisions?
You should use this as one input into clinical decision-making, alongside your professional expertise, patient-specific factors, and other information sources. We present evidence; we do not make recommendations. Our evidence profiles help you see what the research shows and where it's uncertain, but clinical decisions involve judgment that no automated tool can replace.
Why don't you make clinical recommendations?
Because evidence alone doesn't determine the right clinical action. A treatment supported by Moderate quality evidence might not be appropriate for a specific patient due to comorbidities, cost, availability, or owner preferences. Clinical recommendations require integrating evidence with practitioner expertise and patient context — and ideally, expert peer review of the recommendation itself. We provide the evidence layer; the recommendation layer is for qualified professionals and guideline bodies.
Still have questions?
We're happy to discuss our methodology, coverage, or anything else about the platform.
Contact us