
Program: Code Optimization Loop

Mission

Your goal is to improve the implementation quality of a codebase under a measurable validation harness.

Primary objectives may include one of:

- higher benchmark score
- lower latency
- lower memory usage
- fewer failing tests

Secondary constraints:

- preserve correctness
- keep the implementation maintainable
- avoid broad uncontrolled rewrites unless justified

Scope

You may modify only the code and tests explicitly marked in scope.

You must not modify:

- benchmark definitions unless explicitly allowed
- evaluation scripts unless explicitly allowed
- deployment or production infrastructure
- secrets, credentials, or environment configuration

Setup

Before starting:

1. read the scoped files
2. understand the current benchmark or test harness
3. identify the baseline metrics
4. identify the current best-known implementation state
5. verify how to run validation reproducibly

Baseline

Always establish or verify baseline behavior first:

- current test status
- current benchmark results
- current performance profile if relevant

Experiment Loop

For each iteration:

1. identify one bottleneck, weakness, or promising simplification
2. state the hypothesis clearly
3. make one focused implementation change
4. run the smallest trustworthy validation that tests the hypothesis
5. if promising, run the full validation harness
6. log the outcome
7. keep only changes that justify their maintenance cost
8. revert weak or harmful changes
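The steps above can be sketched as one iteration function. Every callable here (`apply_change`, `revert_change`, `run_quick_check`, `run_full_harness`, `log_outcome`) is a hypothetical hook you would bind to your own harness; the sketch also assumes higher scores are better.

```python
# Skeleton of the eight-step iteration; all hooks are hypothetical.

def iterate(hypothesis, apply_change, revert_change,
            run_quick_check, run_full_harness, log_outcome,
            best_score):
    apply_change()                     # step 3: one focused change
    if not run_quick_check():          # step 4: smallest trustworthy check
        revert_change()                # step 8: revert harmful changes
        log_outcome(hypothesis, "discard", reason="quick check failed")
        return best_score
    score = run_full_harness()         # step 5: full validation harness
    if score > best_score:             # step 7: keep only justified wins
        log_outcome(hypothesis, "keep", score=score)
        return score
    revert_change()                    # step 8: revert weak changes
    log_outcome(hypothesis, "discard", score=score)
    return best_score
```

Returning the best-known score keeps the loop honest: a discarded change can never silently lower the bar for the next iteration.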

Preferred Search Order

Default order of attack:

1. remove unnecessary complexity
2. fix obviously poor algorithms or hot paths
3. improve parameterization or caching
4. refactor local structure for clearer optimization opportunities
5. attempt larger redesigns only after local opportunities are exhausted

Keep / Discard Policy

Keep if:

- correctness is preserved and the main metric improves
- performance is similar but the code becomes materially simpler
- stability improves with neutral performance cost, when that matters to the objective

Discard if:

- the improvement is too small for the complexity added
- correctness becomes less trustworthy
- the change makes future iteration harder without clear payoff
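One way to make the keep/discard policy mechanical is a small predicate. The thresholds and sign conventions below are illustrative assumptions, not part of the original program.

```python
# Sketch of the keep/discard policy. metric_delta > 0 means the main
# metric improved; complexity_delta > 0 means the code got more complex.
# min_gain is an assumed threshold; tune it to your objective.

def should_keep(correct: bool, metric_delta: float,
                complexity_delta: int, min_gain: float = 0.5) -> bool:
    if not correct:
        return False          # correctness is non-negotiable
    if metric_delta >= min_gain:
        return True           # clear win on the main metric
    if metric_delta >= 0 and complexity_delta < 0:
        return True           # neutral metric, materially simpler code
    return False              # too small for the complexity added
```

Encoding the rule as a function is mostly useful as documentation: it forces each iteration to state its correctness, metric, and complexity judgments explicitly.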

Logging

For each run, log:

- commit or patch id
- benchmark/test result
- resource usage if relevant
- keep/discard/crash
- short description of the idea

Recommended extras:

- files touched
- risk level
- follow-up idea

Failure Handling

If tests fail unexpectedly:

- inspect the minimal failing evidence
- fix obvious implementation mistakes once
- otherwise revert and log the direction as weak or broken

If benchmark results are noisy:

- rerun within a bounded retry policy
- do not overfit to one noisy run
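A bounded retry policy can be as simple as rerunning a fixed number of times and comparing medians rather than trusting any single run. The retry count and noise threshold below are illustrative assumptions.

```python
import statistics

# Bounded-retry sketch for noisy benchmarks; max_runs and noise are
# assumed values, not part of the original program.

def stable_score(run_benchmark, max_runs: int = 5) -> float:
    """Run the benchmark max_runs times and return the median score."""
    scores = [run_benchmark() for _ in range(max_runs)]
    return statistics.median(scores)

def is_real_improvement(scores, baseline: float, noise: float = 0.5) -> bool:
    """Count a change as an improvement only if the median beats the
    baseline by more than the assumed noise floor."""
    return statistics.median(scores) - baseline > noise
```

The median discards single-run outliers, which is what "do not overfit to one noisy run" amounts to in practice.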

Operating Principle

Do not optimize for novelty. Optimize for durable, validated improvement.

Sources and References

Source file: autoresearch/programs/code-optimization-program.md

Source directory: /srv/project/harness-engineering
