
Evidence Ledgers for Autonomous Research Harnesses

Draft generated: 2026-05-07

Abstract

Autonomous research systems increasingly promise end-to-end scientific workflows, but their outputs are difficult to trust when claims are not explicitly tied to evidence. This draft synthesizes evidence from the autonomous research harness literature and argues that the near-term research direction should prioritize auditable evidence ledgers, persistent research state, and reviewer-independent claim checks over fully black-box automation.

1. Introduction

The current wave of AI Scientist systems spans open-ended machine-learning research, clinical research workflows, survey-level assessments, and research-harness engineering. Across these systems, a recurring concern is not just whether agents can produce papers, but whether their claims remain grounded in inspectable evidence. This paper draft therefore treats the evidence ledger as the central product and research object.

2. Evidence base

| Paper | Role | Core claim | Evidence status |
| --- | --- | --- | --- |
| 2605.03042v1 | Anchor technical report for autonomous research harness design | Long-horizon single-agent research is unreliable by default; the central failure mode is plausible unsupported success, mitigated through persistent state, modular execution, and independent assurance via cross-family executor/reviewer separation. | Full text read from arXiv PDF. Integration implication: our harness should keep evidence_matrix/claim ledger as first-class artifacts, add a research-harnesses taxonomy, and keep reviewer-independent audits as hard invariants. |
| 2504.08066v1 | End-to-end autonomous ML research capability baseline | AI Scientist-v2 reports an end-to-end agentic system that formulates hypotheses, designs and executes experiments, analyzes and visualizes data, and writes manuscripts; compared with v1 it removes reliance on human-authored code templates and uses progressive agentic tree search. | Abstract-derived evidence row; needs full-text audit before final submission. |
| 2603.28589v1 | Domain-specific autonomous research framework | Clinical autonomous research needs domain-specific evidence grounding; Medical AI Scientist transforms surveyed literature into actionable evidence and uses clinician-engineer co-reasoning to improve traceability of generated ideas. | Abstract-derived evidence row; full text required before citing numerical comparisons. |
| 2507.23276v2 | Survey and bottleneck map for AI Scientist systems | Current AI Scientist systems have visible achievements, but the field still needs clarity on bottlenecks and critical components before scientific agents can produce ground-breaking discoveries that solve grand challenges. | Abstract-derived evidence row; needs full text for taxonomy details. |
| 2511.04583v4 | Risk-focused autonomous scientist system | Jr. AI Scientist follows a novice-student-like workflow from a baseline paper: analyze limitations, formulate hypotheses, iterate experiments until improvements, and write a result paper; the work also reports risks and limitations of current AI Scientist systems. | Abstract-derived evidence row; useful for risk and workflow comparison. |
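The ledger rows above can be made machine-checkable. The following is a minimal sketch of one possible row schema; the class name, field names, and shortened claim strings are illustrative assumptions, not the harness's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRow:
    """One ledger row: a paper, its role in the argument, and its audit status."""
    paper_id: str                        # arXiv identifier, e.g. "2605.03042v1"
    role: str                            # role the draft assigns to the paper
    core_claim: str                      # claim the draft attributes to it
    evidence_status: str                 # "full_text" or "abstract_only"
    needs_full_text_audit: bool = True   # abstract-derived rows block submission

# Two rows from the table above, abbreviated for illustration.
ledger = [
    EvidenceRow("2605.03042v1", "anchor technical report",
                "long-horizon single-agent research is unreliable by default",
                "full_text", needs_full_text_audit=False),
    EvidenceRow("2504.08066v1", "capability baseline",
                "end-to-end agentic ML research via progressive tree search",
                "abstract_only"),
]

# Rows still blocking final submission can be listed mechanically.
pending = [row.paper_id for row in ledger if row.needs_full_text_audit]
```

Keeping the ledger as structured data rather than prose is what makes the "first-class artifact" requirement enforceable: the pending-audit list is computed, not remembered.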

3. System comparison

| Paper | Workflow scope | Evidence / audit mechanism | Reported evaluation | Limitation for this draft |
| --- | --- | --- | --- | --- |
| 2605.03042v1 | System design report: ARIS uses three layers (execution, orchestration, assurance), 65+ Markdown-defined skills, five end-to-end workflows, a research wiki, an evidence-to-claim audit cascade, and cross-model review loops. | Use ARIS as evidence that autonomous research systems are moving from monolithic agents toward artifact-contract workflows with independent assurance and provenance-aware claim ledgers. | Observational metrics: reviewer score improved from 5.0 to 7.5/10 over about 8 hours, 4 review-revise rounds, and 20+ GPU experiments; no controlled causal evaluation. | No correctness guarantee; audit cascade is advisory; reviewer bias can be amplified; repository-level review can leak sensitive code to external APIs; local-only reviewer routing is planned but not implemented; controlled evaluation is future work. |
| 2504.08066v1 | Progressive agentic tree-search methodology managed by an experiment manager agent, plus a VLM feedback loop for iterative refinement of figure content and aesthetics. | Use AI Scientist-v2 as the capability frontier example for end-to-end AI-generated ML manuscripts, then contrast it with ARIS-style claim auditing. | Workshop peer-review scores; the abstract reports that one manuscript exceeded the average human acceptance threshold. | Abstract-only extraction; workshop acceptance and score details require full-text verification; the abstract emphasizes capability more than evidence-ledger assurance. |
| 2603.28589v1 | Clinical autonomous research framework with evidence-grounded ideation, structured medical manuscript drafting, ethical policies, and three modes: paper-based reproduction, literature-inspired innovation, and task-driven exploration. | Use Medical AI Scientist as evidence that autonomous research harnesses need domain-grounded evidence and conventions rather than generic research loops. | LLM and human expert evaluations; method-implementation alignment; executable experiment success rates; double-blind human expert and Stanford Agentic Reviewer manuscript evaluations. | Abstract-only extraction; clinical claims require full paper review of datasets, evaluator protocol, ethics policy, and statistical significance. |
| 2507.23276v2 | Prospect-driven review that analyzes current achievements, limitations, missing components, and ultimate goals for scientific AI. | Use this survey to position the first paper's taxonomy and open-problem framing rather than as direct system performance evidence. | No primary experimental metric in the abstract; evaluates the field through review and bottleneck analysis. | Survey-level evidence; no system implementation or controlled experiment is described in the abstract. |
| 2511.04583v4 | Autonomous workflow using modern coding agents for complex multi-file implementations, building on baseline papers rather than assuming full unconstrained automation. | Use Jr. AI Scientist as evidence that baseline-paper-grounded workflows may be more realistic than unconstrained full automation, but risk reporting is essential. | Automated AI Reviewer assessments, author-led evaluations, Agents4Science reviews, and DeepReviewer scores. | Abstract-only extraction; DeepReviewer and author-evaluation details require full-text audit; reported risks need categorization before use as claims. |

4. Findings

Finding 1: 2605.03042v1

ARIS operationalizes claim pruning, review-driven revision, artifact contracts, and claim auditing. Treat it as architecture/assurance evidence, not proof that cross-family review is superior.

**Support.** Use ARIS as evidence that autonomous research systems are moving from monolithic agents toward artifact-contract workflows with independent assurance and provenance-aware claim ledgers.

**Caveat.** No correctness guarantee; audit cascade is advisory; reviewer bias can be amplified; repository-level review can leak sensitive code to external APIs; local-only reviewer routing is planned but not implemented; controlled evaluation is future work.
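The evidence-to-claim audit cascade that Finding 1 describes can be sketched as an advisory check: every claim must cite ledger rows, and claims with missing or unresolvable citations are flagged rather than deleted (the caveat above notes the cascade is advisory, not a correctness guarantee). The function and the example claim ids are hypothetical illustrations, not ARIS code.

```python
def audit_claims(claims, ledger_ids):
    """Advisory claim audit: flag claims whose cited evidence is missing.

    claims: dict mapping claim id -> list of cited paper ids
    ledger_ids: set of paper ids that have filled ledger rows
    Returns (claim_id, reason) pairs; nothing is deleted automatically.
    """
    flags = []
    for claim_id, cited in claims.items():
        missing = [pid for pid in cited if pid not in ledger_ids]
        if not cited:
            # The central failure mode: a plausible but unsupported claim.
            flags.append((claim_id, "no evidence cited"))
        elif missing:
            flags.append((claim_id, "cites sources missing from the ledger"))
    return flags

ledger_ids = {"2605.03042v1", "2504.08066v1"}
claims = {
    "C1": ["2605.03042v1"],   # backed by a ledger row: passes
    "C2": [],                 # plausible unsupported success: flagged
    "C3": ["2599.00000v1"],   # cites a paper with no ledger row: flagged
}
flags = audit_claims(claims, ledger_ids)
```

Keeping the audit advisory preserves a human (or independent reviewer) in the loop, which matches the caveat that the cascade offers no correctness guarantee.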

Finding 2: 2504.08066v1

The system reports the first entirely AI-generated peer-review-accepted workshop paper and demonstrates growing capability for autonomous scientific workflows.

**Support.** Use AI Scientist-v2 as the capability frontier example for end-to-end AI-generated ML manuscripts, then contrast it with ARIS-style claim auditing.

**Caveat.** Abstract-only extraction; workshop acceptance and score details require full-text verification; the abstract emphasizes capability more than evidence-ledger assurance.

Finding 3: 2603.28589v1

The abstract reports higher-quality ideas than commercial LLMs, stronger method-implementation alignment, higher executable experiment success rates, and manuscripts approaching MICCAI-level quality while surpassing ISBI/BIBM baselines.

**Support.** Use Medical AI Scientist as evidence that autonomous research harnesses need domain-grounded evidence and conventions rather than generic research loops.

**Caveat.** Abstract-only extraction; clinical claims require full paper review of datasets, evaluator protocol, ethics policy, and statistical significance.

Finding 4: 2507.23276v2

The survey frames where AI Scientist systems are, what is missing, and what ultimate goals scientific AI should target.

**Support.** Use this survey to position the first paper's taxonomy and open-problem framing rather than as direct system performance evidence.

**Caveat.** Survey-level evidence; no system implementation or controlled experiment is described in the abstract.

Finding 5: 2511.04583v4

The abstract reports generated papers building on real top-venue works and higher DeepReviewer scores than existing fully automated systems, while also identifying important limitations and risks.

**Support.** Use Jr. AI Scientist as evidence that baseline-paper-grounded workflows may be more realistic than unconstrained full automation, but risk reporting is essential.

**Caveat.** Abstract-only extraction; DeepReviewer and author-evaluation details require full-text audit; reported risks need categorization before use as claims.

5. Research direction

The highest-value near-term direction is not to claim fully autonomous science, but to measure whether evidence-ledger workflows reduce unsupported claims. A local-first implementation can evaluate top-N relevance, filled-evidence coverage, supported-claim precision, citation existence, unsupported-claim detection, and time-to-brief.
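The metrics named above could be computed directly from the ledger. The sketch below is one possible operationalization under stated assumptions: the draft names the metrics but fixes no formulas, so the function name, signatures, and the simplification that every drafted claim counts as "asserted" (which reduces supported-claim precision to the fraction of claims whose every citation resolves) are all assumptions.

```python
def evaluate_ledger(claims, ledger_ids, retrieved_top_n, relevant_ids):
    """Hypothetical formulas for the draft's proposed evaluation metrics.

    claims: dict mapping claim id -> list of cited paper ids
    ledger_ids: set of paper ids with filled ledger rows
    retrieved_top_n: ordered list of paper ids the harness retrieved
    relevant_ids: set of paper ids judged relevant by a human
    """
    supported = {c for c, cited in claims.items()
                 if cited and all(pid in ledger_ids for pid in cited)}
    return {
        # Top-N relevance: share of retrieved papers judged relevant.
        "top_n_relevance": sum(p in relevant_ids for p in retrieved_top_n)
        / max(len(retrieved_top_n), 1),
        # Filled-evidence coverage: claims citing at least one filled row.
        "filled_evidence_coverage": sum(bool(set(cited) & ledger_ids)
                                        for cited in claims.values())
        / max(len(claims), 1),
        # Supported-claim precision (under the all-claims-asserted assumption):
        # claims whose every citation resolves to a ledger row.
        "supported_claim_precision": len(supported) / max(len(claims), 1),
        # Unsupported-claim detection: claim ids failing the audit.
        "unsupported_claims": sorted(set(claims) - supported),
    }

metrics = evaluate_ledger(
    claims={"C1": ["2605.03042v1"],
            "C2": ["2605.03042v1", "2599.00000v1"],
            "C3": []},
    ledger_ids={"2605.03042v1", "2504.08066v1"},
    retrieved_top_n=["2605.03042v1", "2599.00000v1", "2504.08066v1"],
    relevant_ids={"2605.03042v1", "2504.08066v1"},
)
```

Time-to-brief and citation existence are omitted here because they require timestamps and an external bibliographic lookup, not ledger contents alone.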

6. Limitations

Claim audit status: only one of the five ledger rows (2605.03042v1) is backed by a full-text read; the remaining four are abstract-derived and must pass a full-text audit before their numerical claims can be cited. The one full-text evaluation available (ARIS) reports observational metrics with no controlled comparison, so this draft cannot yet make causal claims about whether evidence-ledger workflows reduce unsupported claims.