How It Works
Nquiry isn't a chat window where you paste evidence and hope for the best. It's a structured workflow built on the same methodology used by oversight agencies, internal auditors, and compliance professionals.

1. Define your scope, structure your inquiry, and set up the foundation before collecting a single document.
2. Gather evidence systematically. Every document, interview transcript, and dataset gets cataloged with metadata and linked to questions.
3. The AI searches your entire evidence collection for each question, applies a professional evidence evaluation framework, and drafts findings with citations.
4. Generate professional reports where every finding traces back to supporting evidence. Edit, refine, and export.
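The workflow above implies a simple data model: topics contain questions, evidence items carry metadata and link to questions, and findings cite evidence. A minimal sketch of that structure in Python; all class and field names here are illustrative assumptions, not Nquiry's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    evidence_id: str
    title: str
    source: str                                    # e.g. "interview transcript", "dataset"
    metadata: dict = field(default_factory=dict)   # cataloging metadata

@dataclass
class Question:
    question_id: str
    text: str
    evidence_ids: list = field(default_factory=list)  # evidence linked to this question

@dataclass
class Finding:
    question_id: str
    conclusion: str
    confidence: str                                   # e.g. "Established"
    citations: list = field(default_factory=list)     # evidence_ids supporting the conclusion

@dataclass
class Topic:
    name: str
    questions: list = field(default_factory=list)

def trace(finding: Finding, catalog: dict) -> list:
    """Resolve a finding's citations back to the cataloged evidence items."""
    return [catalog[eid] for eid in finding.citations]
```

The point of the `citations` field is the traceability claim above: given any finding, `trace` recovers exactly the evidence behind it.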

Analysis Types
Each built for a different investigative need. Every type applies the same evidence evaluation framework and produces structured, citation-backed output.
Evaluates evidence against a specific investigation question. The AI searches for relevant evidence, assesses its relevance and limitations, weighs conflicting sources, and produces a finding with a confidence level and full citations.
You get: A direct answer with confidence level, the evidence that supports it, alternative explanations considered, evidence gaps identified, and recommended follow-up.

When you have multiple questions under one topic, this analysis looks across all of them to find common themes, patterns, and cross-cutting gaps.
You get: A synthesis of findings across related questions, common patterns identified, cross-cutting evidence gaps, and topic-level conclusions.

Systematically identifies where your evidence is thin, what questions lack sufficient support, and what types of evidence would strengthen your findings.
You get: Questions with insufficient evidence flagged, specific evidence types recommended, priority gaps ranked, and a roadmap for additional collection.

A high-level synthesis of the entire investigation for leadership, stakeholders, or report introductions.
You get: Key findings across all topics, overall evidence assessment, conclusions, and a clear picture of the investigation’s current state.

Looks across all existing analyses for internal consistency — do your findings contradict each other? Does evidence cited in one analysis conflict with another?
You get: Cross-analysis consistency review, flagged contradictions, and areas where conclusions may need reconciliation.
Professional Standards
Nquiry's evaluation framework draws on CIGIE Quality Standards, GAO Yellow Book, IIA Standards, ACFE guidance, PCAOB, Federal Rules of Evidence, INTOSAI, and ISO 19011.
For each piece of evidence, the AI determines whether it's relevant to the question at hand and whether it has material limitations — concerns about source credibility, provenance, timeliness, internal consistency, or factual basis.
When limitations exist, the AI describes them specifically so you can weigh them in context.
Evidence assessments feed directly into the confidence level of each finding. Strong, limitation-free evidence from multiple sources pushes toward “Established.” Evidence with material concerns is still considered but weighted accordingly.
The result: every finding comes with a documented reasoning chain you can verify and defend.
The evaluation framework is informed by quality dimensions including relevance, reliability, sufficiency, validity, competence, completeness, timeliness, objectivity, authenticity, and consistency. These principles shape how the AI reasons about your evidence — not as a checklist, but as the professional lens through which every assessment is made.
Transparency
A defined taxonomy based on the quality, quantity, and convergence of evidence, from strongest to weakest:

- Strong, sufficient, convergent evidence. Multiple independent sources agree. The conclusion would be accepted by a reasonable, objective evaluator.
- Good evidence with minor gaps. More likely than not, but some uncertainty remains.
- Some evidence supports the conclusion, but significant gaps, conflicts, or weaknesses exist.
- Evidence is too weak, conflicting, or incomplete to support any conclusion. More evidence is needed.
There's also Contradicted — when available evidence actually weighs against the proposed conclusion. That's not a failure. That's the system working.
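One way to picture how quality, quantity, and convergence could map onto this taxonomy is a simple decision function. This is an illustrative sketch: the thresholds and the unnamed middle-tier labels are assumptions; only "Established" and "Contradicted" are labels named in the text above.

```python
def confidence_level(supporting: int, independent_sources: int,
                     has_material_limitations: bool, contradicting: int) -> str:
    """Illustrative mapping from an evidence profile to a confidence label.

    `supporting`/`contradicting` count evidence items weighing for or against
    the conclusion. Thresholds are assumptions, not Nquiry's actual rules.
    """
    if contradicting > supporting:
        return "Contradicted"            # evidence weighs against the conclusion
    if supporting == 0 or (supporting <= 1 and has_material_limitations):
        return "Insufficient (more evidence needed)"
    if independent_sources >= 2 and not has_material_limitations:
        return "Established"             # strong, convergent, limitation-free
    if not has_material_limitations:
        return "Probable (minor gaps)"   # hypothetical middle-tier label
    return "Weakly supported (significant gaps or conflicts)"
```

Note how evidence with material limitations is still considered, it just lands in a lower tier rather than being discarded, matching the "weighted accordingly" behavior described earlier.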
Human in the Loop
Every analysis goes through a structured review flow before it can contribute to your report.
1. Read what the AI found — the conclusion, the evidence it considered, and the confidence level.
2. Before you can agree or disagree, you must open at least one cited evidence item and verify it yourself. This is built into the interface.
3. Agree — the analysis is sound. Disagree — something’s wrong, and you record why. Unsure — you need more before you can decide.
4. Provide investigator direction to focus the AI. “Focus on timeline inconsistencies.” “Evaluate against Section 3.2.” The AI incorporates your guidance without overriding the evaluation framework.
Every action — agree, disagree, regenerate, edit — is timestamped and logged. When someone asks “who reviewed this finding and when?” you have an answer.
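An audit trail like this reduces to appending an immutable, timestamped record per action. A minimal sketch, with the caveat that field names and the in-memory list are illustrative assumptions (real storage would be durable):

```python
from datetime import datetime, timezone

audit_log = []  # append-only in this sketch

def record_review(analysis_id: str, reviewer: str, action: str, note: str = "") -> dict:
    """Log one review action: agree, disagree, unsure, regenerate, or edit."""
    assert action in {"agree", "disagree", "unsure", "regenerate", "edit"}
    entry = {
        "analysis_id": analysis_id,
        "reviewer": reviewer,
        "action": action,
        "note": note,  # e.g. why you disagreed, or direction given
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

def who_reviewed(analysis_id: str) -> list:
    """Answer 'who reviewed this finding and when?' straight from the log."""
    return [(e["reviewer"], e["timestamp"], e["action"])
            for e in audit_log if e["analysis_id"] == analysis_id]
```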
Quality Assurance
Nquiry's quality checks aren't run by the same AI process that wrote the analysis. They're independent evaluations that catch problems the generating model might miss.
A separate AI process examines every factual claim and checks whether it’s supported by the retrieved evidence.
95%+ means nearly all claims verified. Below 85% is flagged for attention.

Breaks the question into constituent elements and verifies the analysis addresses each one. Gaps are identified explicitly.
Ensures the analysis actually answers what was asked.

Every retrieved passage comes with a relevance score. The system tracks high vs. low relevance and flags weak evidence bases.
Tells you how strong the evidence foundation is.
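Both the claim check and the relevance check reduce to simple ratios compared against thresholds. A sketch of how the numbers above could be computed; the 95% and 85% cutoffs come from the text, while the middle band, the 0.7 relevance cutoff, and the function names are assumptions:

```python
def claim_verification_status(verified: int, total: int) -> tuple:
    """Share of factual claims supported by the retrieved evidence."""
    rate = 100.0 * verified / total
    if rate >= 95:
        status = "nearly all claims verified"
    elif rate < 85:
        status = "flagged for attention"
    else:
        status = "review recommended"  # middle band: assumption, not documented
    return rate, status

def evidence_base_strength(relevance_scores: list, high_cutoff: float = 0.7) -> str:
    """Track high- vs. low-relevance passages; flag weak evidence bases.

    The 0.7 cutoff is an illustrative assumption.
    """
    high = sum(1 for s in relevance_scores if s >= high_cutoff)
    return "weak evidence base" if high < len(relevance_scores) / 2 else "solid evidence base"
```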
Evidence Evaluation Pipeline
When you run analysis, Nquiry doesn't rely on a single search method. It runs a three-stage evidence evaluation pipeline that combines precision, conceptual understanding, and AI judgment.
1. Scans all evidence for exact word matches. Catches case numbers, policy codes, names, dates, and identifiers that must be matched precisely.
2. Converts questions and evidence into meaning-based vectors, finding conceptually similar passages even when the words are completely different.
3. A separate AI model evaluates every result from both searches in context, promoting genuinely relevant evidence and filtering out false positives.
For every analysis, expand the “Evidence Considered” panel to see exactly which passages the pipeline found, how they were discovered, their relevance scores, and whether they were included or excluded. Full transparency into what the AI saw when it wrote its findings.
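The three stages compose naturally: exact matching for identifiers, vector similarity for concepts, then a relevance judgment over the union of both result sets. A toy sketch of that composition, where the scoring and the `judge` callable are stand-ins for real embedding and LLM calls, not Nquiry's implementation:

```python
import math

def keyword_hits(query: str, docs: dict) -> set:
    """Stage 1: exact word matches catch case numbers, codes, names, dates."""
    terms = set(query.lower().split())
    return {doc_id for doc_id, text in docs.items()
            if terms & set(text.lower().split())}

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def semantic_hits(query_vec, doc_vecs: dict, threshold: float = 0.5) -> set:
    """Stage 2: meaning-based vectors find conceptually similar passages."""
    return {doc_id for doc_id, v in doc_vecs.items()
            if cosine(query_vec, v) >= threshold}

def rerank(candidates: set, judge) -> list:
    """Stage 3: a separate model judges each candidate in context,
    keeping genuinely relevant evidence and filtering false positives."""
    return [doc_id for doc_id in sorted(candidates) if judge(doc_id)]
```

In a production pipeline, stage 2 would use an embedding model and stage 3 an LLM call; passing both search outputs through the reranker is what lets precise-identifier hits and conceptual hits compete on equal footing.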
Built-in self-audit catches potential errors and unsupported claims. You review every finding before it goes in the report. Your judgment, your conclusions — Nquiry just helps you get there faster.
The AI Guide panel sits beside your work, ready to help with navigation, methodology, and project status — without ever leaving the page.

A context-aware assistant available from any page. It knows where you are in your investigation, what phase you're in, and what features are available.