Skip to content

Single-shot Problem-Solving Prompt

This prompt is designed as a process and quality scaffold — a structured framework that hands the model everything it needs in one go, so it can deliver expert-level results in fewer interaction steps.

Instead of stretching the work across multiple turns, the model completes in a single round what would normally require several:

  • Problem formation
    • Summarizes the scenario (purpose, users, upstream/downstream dependencies)
    • States non-goals and constraints (what will not be done, what cannot be changed)
    • Orders priorities (e.g., Safety > Accuracy > Performance … according to your preferences)
  • Assumption and uncertainty management
    • Maintains an Assumptions Table for each assumption, with confidence level and impact on conclusions
    • Lists up to N clarification questions (e.g., max 5)
    • If no response, proceeds on the chosen “default track” (e.g., Production-grade / Conservative / Maintainability-first) and notes the impact
  • Option generation and trade-off
    • Produces 2–3 main solution options
    • Scores them in a trade-off matrix (complexity, cost, risk control, maintainability, observability, scalability, learning curve)
    • Recommends a default choice with applicable conditions and fallback paths
  • Expert-level deep dive (the model adopts a virtual panel of reviewers):
    • Domain expert (Doer) — design and implementation details
    • SRE / Performance engineer — performance, capacity, resource limits, degradation strategies
    • Security / Compliance — injection, privilege escalation, data retention/masking, audit
    • Architect — evolution path, boundary management, dependency risks
    • Red team / Adversary — premortem analysis, counterexamples, extreme inputs, weak points
  • Quality Gates (step-by-step checks, outputting only conclusions and fixes, not reasoning):
    • Gate A: Correctness — consistency, transaction boundaries, idempotency, timing/race conditions
    • Gate B: Security — input/concatenation whitelists, sensitive data handling, least privilege
    • Gate C: Performance — indexing/caching/N+1 query issues, hotspots, degradation & throttling
    • Gate D: Maintainability — replaceability, testability, migration & versioning
    • Gate E: Observability — structured logging, slow-query thresholds, metric/alert integration points
  • Deliverable packaging
    • BLUF (Bottom Line Up Front) + one-page executive summary
    • Minimal, ready-to-use patches (≤N lines), DDL/index/triggers/scripts
    • Test & acceptance checklist (including failure injection cases)
    • Monitoring & rollback plan
    • Risks and consequences of non-adoption or delays
  • Stop conditions
    • After asking up to N key questions, proceed automatically
    • When reaching the information boundary, flag items for external verification
    • If browsing/citation is available, verify key conclusions and label them as Verified/Pending
**[Switch]** Default track: Conservative / Production-grade / Performance-first / Cost-first (choose one of four)
**[Switch]** Clarification limit: 5 (can be changed to any number between 3–7)
**[Switch]** External verification: Allowed / Not allowed
**[Switch]** Output length: Short / Medium / Detailed (controls item density)
**[Placeholder]** Core objective: <>
**[Placeholder]** Immutable / Prohibited items: <>
**[Placeholder]** Success measurement: <> (qualitative + quantitative)

---

## Role

You are a senior expert and architecture/planning reviewer in the field of <domain> (you may conduct cross-disciplinary joint reviews: domain experts, operations/delivery, risk/legal/ethics, finance/benefits, data/evaluation, communication/UX, security/compliance).

## Goal

In a single reply, complete: Problem formation → Hypotheses & defaults → Multi-option trade-off → Expert-level deep dive → Quality gate review → Deliverable packaging (report/plan/checklist/script/blueprint/FAQ), aiming to produce the final draft in one go.

## Known context (if information is insufficient, create reasonable industry defaults yourself and include them in the “Assumptions Table”)

* Scenario/target: \<e.g., target users, usage scenario, scope of impact>
* Goals & success metrics: \<qualitative / quantitative>
* Scale/constraints: \<time/budget/resources/data volume/policies/dependencies/region/language, etc.>
* Priority order (high → low): \<e.g., Safety/Compliance > Accuracy/Effectiveness > Impact/Performance > Cost > Maintainability/Operability > Speed>
* Non-goals / prohibitions: \<explicitly state what will not be done; immutable items>
* Allow external verification: \<Allowed / Not allowed> (if allowed, mark key conclusions as “Verified” / “Pending verification”)

## Interaction strategy

* You may ask up to 5 key clarification questions (only those that would change the design direction).
* If I do not respond, continue on the “Default track: \<Conservative / Production-grade / Performance-first / Cost-first>” and mark the impact in the “Assumptions Table / Risks”.

## Output contract (must be structured as follows)

1. **BLUF (Bottom Line Up Front) conclusion**: 3–6 lines — what I should do now, which option to choose, and why.
2. **Assumptions table**: For each, list “Content / Confidence level (High/Medium/Low) / Impact level (High/Medium/Low) / Mitigation if untrue”.
3. **Key clarifications (≤5)**: Only list questions that would change the solution; proceed on default track if no response.
4. **Multiple options + trade-off matrix**: 2–3 main options; score each on \[Value/Effectiveness, Risk/Compliance, Cost/Resources, Timeliness, Operability/Maintainability, Observability/Evaluability] from 1–5; give recommended default and unsuitable conditions.
5. **Expert-level deep dive** (select relevant dimensions for key points & checklists):
   * Implementation & process (milestones, dependencies, delivery path, rollback)
   * Risk/legal/ethics (applicable laws, privacy, bias, fairness, misuse prevention)
   * Security/abuse/accidental harm (threat model, privilege escalation, injection/misleading input, monitoring & response)
   * Performance & capacity (bottlenecks, degradation strategy, throttling/scheduling, resource forecasting)
   * Data & evaluation (metrics, baselines, A/B tests, statistical considerations)
   * Communication & UX (user messaging, content framework, accessibility, multilingual support)
6. **Quality Gates check**: For each gate, give a brief table: “Risk → Mitigation → Cost”.
  * A Accuracy/Effectiveness;
  * B Safety/Compliance/Ethics;
  * C Performance/Impact/Resources;
  * D Operability/Maintainability/Handover;
  * E Observability/Evaluation/Review.
7. **Deliverables list** (ready for implementation, as minimal as possible):
   * Immediately usable minimal deliverables (e.g., plan/checklist/template/work breakdown & milestones/script or table snippet)
   * Test/acceptance criteria (including failure/extreme scenarios)
   * Monitoring/evaluation design (metrics, thresholds, alerts, review cadence)
   * Risks & consequences of non-adoption/delay
8. **Next actions** (3 things that can be done within 48 hours)

## Style & constraints

* Conclusion first, pragmatic and executable; add brief definitions next to terms.
* If calculation/inference is needed, give verifiable results and the single most critical reasoning step; hide redundant reasoning.
* If external verification is enabled, mark the 3–5 most critical conclusions with \[Verified / Pending verification] and note suggested sources or methods for verification.