Course Skills

Nine course skills that take you from "I have a security problem" to "I have a working, reviewed tool" — without losing clarity under time pressure. Six drive the delivery cycle; /harness-build constructs the enforcement harness around your agents; /harness-assess diagnoses whether that harness is actually implemented; /audit-aiuc1 validates governance before elevation. Install them on Day 1. Modify them as you go.

Why This Cycle Exists

The fastest way to build the wrong thing is to start building immediately. Claude will generate code confidently in whatever direction you point it — which means if your problem is unclear, you get a well-built solution to the wrong problem. The Think → Spec → Build → Retro cycle exists to prevent that.

Each skill enforces one phase — and each phase has harness engineering baked in:

  • /think — Forces you to understand the real problem before committing to an approach. Ends with a Harness Audit: what guardrails does this work need, what constraints already exist, what's the verification strategy?
  • /build-spec — Translates your thinking into a contract the agent builds from. Includes anti-requirements (things the agent must NOT do), verifiable acceptance criteria, explicit file scope, and lessons from past retros encoded directly into the spec.
  • /worktree-setup — Isolates parallel work into separate branches. Configures pre-commit hooks and output suppression (silence passing tests, surface only failures) so the agent hits constraints during execution, not after you review.
  • /check-antipatterns — Production readiness audit before shipping. Four layers: code quality, architecture, operations, and security. Outputs BLOCKED / CONDITIONAL / READY verdict. Run after build, before merge or deployment.
  • /merge-worktrees — Sprint merge coordinator. Reads merge order from sprint-progress.md, rebases and merges PRs in sequence, validates tests between each merge, stops on failure. Run from the main session after all agents raise PRs.
  • /retro — Closes the loop with gap classification. Every problem found is labeled: spec gap, constraint gap, context gap, or process gap. Each label maps to a specific fix. Three of the same gap type triggers a permanent harness component.
  • /harness-build — The constructive skill. Reads existing context (settings.json, hooks, CLAUDE.md), classifies every control as inside vs outside the reasoning loop, scaffolds missing outside-loop enforcement, and writes or updates harnesses/*/blueprint.yaml. Run after building and after /check-antipatterns — you need to know what the agent does before you can declare its boundary.
  • /harness-assess — The diagnostic skill. Reads blueprint.yaml and checks every claim against reality. Reviews prompts, tools, permissions, state, gates, logging, and misuse resistance. Use it to score the harness /harness-build produced.
  • /audit-aiuc1 — Audit your tool or agent system against the six AIUC-1 domains before elevation to a higher autonomy mode. Produces a scored gap report that doubles as the AIUC-1 baseline evidence required for PeaRL elevation gates.

The harness lives in the connections between phases. /retro feeds /think, /think informs /build-spec, /build-spec constrains /worktree-setup, and the worktree build produces the evidence for /retro. /harness-build wires outside-loop enforcement into the project; /harness-assess scores whether those controls are real.

Distinction: Harness engineering is the discipline. /harness-build is the builder. /harness-assess is the auditor. A project with only docs is not harnessed — it is described.

How Skills Accelerate Ideation to Delivery

Without skills, every Claude Code session starts from zero. You re-explain the context, re-establish the format, re-define what "done" means. Skills eliminate that overhead.

When you type /think, Claude already knows the output structure you need, the risk categories to check, and the confidence scale you use. When you type /build-spec, Claude already knows your architecture template, your AIUC-1 control table, and your definition of done. You spend the session on the problem — not on setup.

Over time, your modified skills become institutional memory. A new team member installs your skills and immediately works at your standard — not because you trained them, but because the workflow is encoded.

What is a Claude Code Skill?

A skill is a markdown file you place in ~/.claude/commands/ (global, available in all projects) or .claude/commands/ (project-local). The filename becomes a slash command. When you type /think in a Claude Code session, Claude reads think.md and follows its instructions in the context of your current work.

Skills are plain text files — read them, edit them, fork them, share them. The nine below are starting points, not finished products. The best version of each skill is the one you've shaped for your specific work.
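A minimal sketch of creating your own (the /summarize skill here is hypothetical, not one of the nine):

```shell
# The filename becomes the slash command: summarize.md -> /summarize
mkdir -p ~/.claude/commands
cat > ~/.claude/commands/summarize.md <<'EOF'
When invoked, summarize the current session in three bullets:
what changed, what is still open, and the next concrete step.
EOF
ls ~/.claude/commands/summarize.md
```

That's the entire mechanism — a markdown file in the right directory.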

The Goal: Build Your Own Workflow

Don't install these and run them as-is indefinitely. Use them a few times, notice what's missing for your work, and modify them. Add your team's output format. Add security-specific checklists. Remove sections you never use. Over time your skills reflect your way of working — and every Claude Code session starts from that baseline.

Share what you build. Post your modified skills as GitHub gists. The security practitioner who needs a /think-threatmodel or a /retro-soc skill is out there right now, and you could already have the thing they need.

Installation (all skills)

mkdir -p ~/.claude/commands

# Download all nine skills at once (run from your terminal)
curl -o ~/.claude/commands/think.md              https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/think.md
curl -o ~/.claude/commands/build-spec.md         https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/build-spec.md
curl -o ~/.claude/commands/worktree-setup.md     https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/worktree-setup.md
curl -o ~/.claude/commands/check-antipatterns.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/check-antipatterns.md
curl -o ~/.claude/commands/merge-worktrees.md    https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/merge-worktrees.md
curl -o ~/.claude/commands/retro.md              https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/retro.md
curl -o ~/.claude/commands/harness-build.md      https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/harness-build.md
curl -o ~/.claude/commands/harness-assess.md     https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/harness.md
curl -o ~/.claude/commands/audit-aiuc1.md        https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/audit-aiuc1.md

# Verify
ls ~/.claude/commands/

/think — Critical Analysis Before Acting

Stop and think before touching code or making a decision. Surfaces assumptions, risks, alternatives, and unknowns. The starting point of every cycle.

Best used in: Unit 1 (CCT incident analysis), Unit 3 (ethical AI assessments), Unit 6 (threat modeling), Unit 8 (architecture review), any time a problem feels ambiguous.

Why /think exists

The most expensive mistake in a sprint is confidently building the wrong thing. /think is a forcing function — it makes you restate the problem in your own words, list the assumptions you're making, and identify what you don't know before you commit to an approach.

In security work this matters more than most domains. A threat model built on a wrong assumption doesn't just waste time — it creates false confidence. An incident response tool designed for the wrong attacker profile misses the actual threat. /think makes the assumptions visible so you can challenge them before they're baked into code.

The updated skill ends with a Harness Audit: three questions you answer before moving to /build-spec. What guardrails does this work need? What constraints already exist (tests, linters, CI gates)? What is the verification strategy? If you can't name a specific check, you can't yet delegate this work to an agent. Think of this phase as the manager handing off to a capable but context-less contractor.

curl -o ~/.claude/commands/think.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/think.md
Suggested modifications for security work:
  • Add a "MITRE ATLAS relevance" section that always checks if a threat is in ATLAS
  • Add "AIUC-1 domain requirements check" for ethical AI decisions
  • Create /think-incident that pre-populates CCT 5-pillar headings
  • Add a confidence-to-action rule: "If confidence is Low, always run /build-spec before any build"

/build-spec — Specification Before Building

Write a formal spec before writing a line of code. Defines agents, tools, data flow, success criteria, and scope. The shared contract for every build.

Best used in: Unit 2 (MCP server design), Unit 4 (sprint planning), Unit 5 (multi-agent architecture), Unit 8 (capstone architecture document).

Why /build-spec exists

Claude is an exceptional builder — but it builds what you describe, not what you meant. Without a spec, "build a vulnerability scanner" produces something. With a spec, it produces the right something: the right agents, the right tools, the right data flow, with security controls defined before the first line of code.

The updated skill now requires two things that most specs omit. First: anti-requirements — an explicit list of what the agent must NOT do (don't touch this module, don't add dependencies, don't refactor outside scope). Every ambiguity you leave in the spec is a decision the agent makes without you. Anti-requirements shrink that space. Second: verifiable acceptance criteria — observable checks, not prose. "The API returns 200 with a valid token" is a criterion. "Authentication should work" is a wish.
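The distinction is that a criterion can be executed. A hypothetical shape for the token example — endpoint, token, and variable names are illustrative:

```shell
# Verifiable: the check runs and yields pass/fail, not an opinion
# In a real spec: code=$(curl -s -o /dev/null -w '%{http_code}' \
#                        -H "Authorization: Bearer $TOKEN" "$API/items")
code=200   # stand-in value so the sketch is self-contained
if [ "$code" = "200" ]; then
  echo "criterion met: API returns 200 with a valid token"
else
  echo "criterion failed: got $code"; exit 1
fi
```

If the agent (or a teammate) can run the check and get an unambiguous answer, it belongs in the spec; if it needs interpretation, it's still a wish.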

The spec is also your scope enforcer. Sprint labs give you 60 minutes to build. Without a written scope, every new idea that surfaces during the build phase looks reasonable to add. The spec gives you something to point to: "That's not in scope — deferred to next sprint." That's not a restriction; it's what lets you finish.

In a team context, the spec is the shared contract. If two people are building different components in parallel worktrees, the spec is what keeps them compatible. If you're using Claude Code to build one component and doing another yourself, the spec is what keeps you aligned with your own work.

The skill also forces a cost optimization verdict per component: caching (does this component send repeated stable context?), batching (does it require real-time response, or can it run async for a 50% token discount?), and model match (is the assigned model the cheapest that meets the quality bar, or does it need a Stage 0 trial at the next tier down?). Cost becomes a design constraint at spec time — not a billing surprise at the end of a sprint.

curl -o ~/.claude/commands/build-spec.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/build-spec.md
Suggested modifications:
  • Add an AIUC-1 domain mapping table directly into your default spec template
  • Add a "Grading checklist" section for course deliverables
  • Fork into /spec-mcp (tuned for MCP servers) and /spec-agent (tuned for multi-agent systems)
  • Add a "Peer review required before build" gate that outputs a checklist for a teammate

/worktree-setup — Parallel Development Setup

Configures git worktrees for isolated parallel development. Each component gets its own branch and directory. Multiple developers or Claude Code sessions build simultaneously without conflicts.

Best used in: Unit 4 (sprint I with 3-agent SOC), Unit 5 (multi-agent framework builds), Unit 8 (capstone team development).

Why /worktree-setup exists

Git worktrees let you check out multiple branches simultaneously into separate directories. In a sprint with three components — say a triage agent, an enrichment agent, and a reporting layer — you don't have to build them sequentially. Each lives in its own directory, on its own branch, and can be developed (or delegated to a separate Claude Code session) in parallel.

This is how the sprint's 60-minute build phase stays achievable. Building three components sequentially would take three full builds back to back. Parallel worktrees compress that into the time of the longest single component.

It also keeps your main branch clean. Nothing gets merged until a component is complete and passing. If one component hits a dead end, you abandon that worktree without touching anything else. The /worktree-setup skill generates the exact shell commands for your specific spec — you don't have to remember the git worktree syntax, and every component gets consistent structure from the start.
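A sketch of what those generated commands look like for a three-component sprint. Component names are illustrative, and the throwaway repo just keeps the example self-contained; in practice you run the generated commands from your project root:

```shell
# Throwaway repo so the sketch is self-contained; normally you start
# from your project root on main
cd "$(mktemp -d)" && git init -q -b main demo && cd demo
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "init"

# One worktree per component: its own directory, its own branch
git worktree add ../triage-agent -b feat/triage-agent
git worktree add ../enrichment   -b feat/enrichment
git worktree add ../reporting    -b feat/reporting
git worktree list   # main checkout plus three isolated component checkouts
```

Each directory can now host its own Claude Code session without touching the others.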

The updated skill configures two harness elements in each worktree: pre-commit hooks (linting, type checks, fast unit tests that run during agent execution, not after you review) and output suppression (pytest configured to surface only failures — passing tests rot the context window and distract the agent). These constraints make correct behavior the easy path before code ever reaches review.
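A minimal sketch of the output-suppression hook pattern, with echo stand-ins where your real linter and test runner would go:

```shell
# Demo repo so the hook can be installed and run end to end
cd "$(mktemp -d)" && git init -q -b main .
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Run fast checks; print their output only on failure, so passing runs
# add nothing to the context window the agent is working in
run_quiet() {
  out=$("$@" 2>&1) || { echo "$out" >&2; exit 1; }
}
run_quiet echo "stand-in for: ruff check ."
run_quiet echo "stand-in for: pytest -q -x tests/fast"
EOF
chmod +x .git/hooks/pre-commit
.git/hooks/pre-commit && echo "hook passed (silently)"
```

The run_quiet wrapper is the point: success is invisible, failure is loud and immediate.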

Note: /worktree-setup is a Claude Code skill, so the automation works only in Claude Code sessions. The underlying git worktree commands work in any terminal or IDE.

curl -o ~/.claude/commands/worktree-setup.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/worktree-setup.md
Suggested modifications:
  • Add your team's standard README template to the per-worktree setup
  • Expand pre-commit hooks with your specific linters and test suites
  • Add a security-specific hook: run Bandit or detect-secrets on every commit
  • Create /worktree-agent that pairs each worktree with a named Claude Code session for a specific agent role
  • Customize TASK.md template to always include your anti-requirements section from the spec

/check-antipatterns — Production Readiness Audit

Audit code for production anti-patterns before shipping. Four layers: code quality, architecture, operations, and security. Outputs a BLOCKED / CONDITIONAL / READY verdict with file-and-line evidence for every finding.

Best used in: Unit 2 (before MCP server deliverable), Unit 4 (Sprint I and II exit gates), Unit 6 (before red team begins — fix obvious problems first), Unit 7 (production hardening), Unit 8 (capstone exit gate).

Why /check-antipatterns exists

AI-generated code has predictable failure modes. Claude prioritizes "never crash" over "fail informatively," producing silent error swallowing. It generates happy-path loops with no termination guards, module-level global state that races under concurrency, and eval()-based dispatch that becomes RCE when an attacker controls the input. These patterns look correct in isolation — they pass code review, they pass unit tests — and then they fail silently in production or get exploited in a red team exercise.

The skill audits four layers in order: L1 Code Quality (silent errors, unbounded loops, missing assertions, naive retry), L2 Architecture (connection pools, idempotency, unbounded collections, global state in tool scope), L3 Operations (structured logging, correlation IDs, health checks, graceful shutdown), and L4 Security (timing attacks, log injection, input bounds, secret rotation, eval/exec, audit trail gaps). Each layer has grep-detectable checks and code analysis checks. Findings are classified CRITICAL / HIGH / MEDIUM / LOW. CRITICAL findings block deployment.
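The grep-detectable side can be approximated in plain shell. These patterns are simplified illustrations, not the skill's actual checks:

```shell
# Plant one antipattern in a toy tree, then run sample checks against it
cd "$(mktemp -d)" && mkdir src
cat > src/agent.py <<'EOF'
try:
    risky_call()
except Exception:
    pass  # swallows every error silently
EOF

grep -rn "except Exception" src/            # L1: silent error swallowing
grep -rnE "eval\(|exec\(" src/   || true    # L4: metaprogramming dispatch
grep -rn  "while True"     src/  || true    # L1: loop without a termination guard
```

Grep finds candidates; the code-analysis checks in the skill decide which ones are real findings.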

The skill was built from the Power of 10 rules (Holzmann, NASA/JPL) adapted for Python agent code: fixed loop bounds (R2), assertion density (R5), smallest possible scope (R6), and restricted metaprogramming (R8) map directly to the four patterns most likely to cause failures in production agentic systems.

Run it before the red team finds your mistakes. CRITICAL findings discovered by Garak or PyRIT count against your blue team score — they were preventable before the exercise started.

curl -o ~/.claude/commands/check-antipatterns.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/check-antipatterns.md
Suggested modifications:
  • Add your stack's specific patterns — Node.js async pitfalls, Go goroutine leaks, Rust unsafe blocks
  • Create /check-antipatterns-mcp tuned for MCP server code — emphasizes global state (2.8), input bounds (4.3), and audit trail gaps (4.5)
  • Add a --fix mode that generates a patch file for auto-fixable findings (1.1, 4.1, 4.2)
  • Integrate with your CI pipeline: run on every PR and fail on CRITICAL findings

/merge-worktrees — Sprint Merge Coordinator

Merges worktree PRs in dependency order after a parallel sprint. Reads the merge sequence from sprint-progress.md, rebases where needed, validates tests between each merge, and stops on failure. Always run from the main session — never from inside a worktree.

Best used in: Unit 4 (Sprint I and II close), Unit 5 (multi-agent system integration), Unit 8 (capstone sprint merge before Week 16 presentations).

Why /merge-worktrees exists

Parallel worktrees compress build time — three components built simultaneously instead of sequentially. But they create a merge problem: three PRs exist, they have dependencies, and merging them in the wrong order causes regressions that are hard to attribute. Manually managing the sequence, rebasing after each merge, and running tests between merges is exactly the kind of mechanical coordination work that should be automated.

/merge-worktrees reads the merge order you defined in sprint-progress.md when you set up the sprint. It presents the plan and waits for confirmation before touching anything. Then it merges one PR at a time: check for conflicts, rebase if needed, merge, pull main, run tests. If tests fail after a merge, it stops and surfaces the failure with options — investigate, continue at risk, or revert. It never silently continues past a broken state.
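The heart of that sequence can be sketched in plain git. Branch names are illustrative, the between-merge test run is a stand-in, and the real skill reads its order from sprint-progress.md:

```shell
# Toy repo with two feature branches standing in for worktree PRs
cd "$(mktemp -d)" && git init -q -b main .
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "init"
for br in triage enrichment; do
  git checkout -q -b "feat/$br" main
  echo "$br component" > "$br.txt"
  git add "$br.txt"
  git commit -q -m "add $br"
done

# Merge in dependency order; validate between merges; stop on failure
git checkout -q main
for br in triage enrichment; do
  git rebase -q main "feat/$br" || exit 1
  git checkout -q main
  git merge -q --ff-only "feat/$br" || exit 1
  true   # stand-in for the between-merge test run, e.g. pytest -q
done
git log --oneline
```

The `|| exit 1` on every step is the "never silently continue past a broken state" rule in miniature.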

The skill ends with a cleanup prompt: run /retro, push main, remove worktree branches. This ensures the sprint is fully closed — no dangling branches, no uncommitted decisions, no context left in worktree directories that will confuse the next sprint.

Critical constraint: run this from your main Claude Code session, not from inside any worktree. Worktree panes should be closed or idle before /merge-worktrees runs.

curl -o ~/.claude/commands/merge-worktrees.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/merge-worktrees.md
Suggested modifications:
  • Change the test command from pytest to your project's test runner
  • Add a pre-merge /check-antipatterns call — catch production issues before they land on main
  • Add Slack or GitHub notification on successful sprint merge
  • Create /merge-worktrees-dry-run that previews the merge plan and conflict risk without executing

/retro — Structured Retrospective

Closes every cycle with an honest review: what was built vs. the spec, what worked, what didn't, what to carry forward. The retro output feeds the next /think.

Best used in: Unit 4 (sprint retros, Weeks 15–16), Unit 8 (capstone sprint retros), any session where something went differently than expected.

Why /retro exists

Most developers move straight from one build to the next. The retro is the step that makes the next cycle faster than the current one. Without it, you repeat the same mistakes because you never examined them. Without it, patterns that worked disappear because you never captured them.

The retro compares what you set out to build (the spec's success criteria) against what you actually built. That gap — what you cut, what you deferred, what took longer than expected — is your most valuable input for the next sprint. A retro that says "the tool detection worked but the enrichment step took 40 of 60 build minutes" tells you exactly where to focus Week 12 hardening.

The updated skill adds gap classification to every problem you find. Each problem is one of four types: a spec gap (you didn't tell the agent something), a constraint gap (nothing prevented the mistake), a context gap (the agent couldn't see what it needed), or a process gap (a phase was skipped or out of order). Each type maps to a specific fix location — spec template, CI hook, AGENTS.md, or harness. Naming the gap type tells you exactly where to prevent it next time.

The Three Strikes Rule: when the same gap category appears three times across retros, stop noting it and build a permanent harness component instead. Three spec gaps of the same type → update the spec template. Three constraint gaps → add the rule to CI. Three context gaps → build the MCP or update AGENTS.md permanently.

The retro also generates your context library. Every sprint produces reusable patterns — a tool definition that worked, a system prompt structure that produced clean output, an agent orchestration approach worth keeping. The retro captures these before you move on and they're gone. Over time, your context library becomes the institutional knowledge that makes every new tool faster to build than the last.

curl -o ~/.claude/commands/retro.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/retro.md
Suggested modifications:
  • Add your team's standard metrics (MTTI, MTTR, cost-per-run)
  • Create /retro-light — a 5-minute version for quick end-of-session capture
  • Add a "Share with community" prompt that extracts reusable patterns into a GitHub gist
  • Link retro output automatically to a CHANGELOG.md entry

/harness-build — Build an Agent Harness

The constructive harness skill. Reads your existing project context, classifies every control as inside or outside the reasoning loop, scaffolds missing outside-loop enforcement (hook stubs, deny rules), and writes or updates harnesses/*/blueprint.yaml. Run it after /build-spec, then run /harness-assess to score the result.

Best used in: At the start of any agent-building lab, after /worktree-setup, after any sprint close to advance harness maturity, and before /harness-assess so there is something concrete to evaluate.

Why /harness-build exists

The inside/outside loop distinction is the key security concept in this course. Controls inside the reasoning loop (CLAUDE.md, skills, agents, plans) are probabilistic — the agent CAN ignore them under goal pressure. Controls outside the loop (hooks, permissions.deny, CLI flags) are deterministic — they always fire regardless of what the model decides.

/harness-build makes that distinction concrete by reading what you have, showing you which column each control sits in, and scaffolding the enforcement layer that most projects skip. A project with a polished CLAUDE.md and no hooks has guidance, not a harness. /harness-build closes that gap.

The harness grows across the course: minimal in Unit 1 (one hook, one deny rule), fully declared by Unit 7 (blueprint.yaml + all fixed_steps implemented + AIUC-1 controls mapped). /harness-build advances it from wherever you are to the next level.
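A Unit 1-level scaffold might look like the following. The deny-rule pattern syntax and file locations are assumptions; verify them against the current Claude Code settings documentation before relying on them:

```shell
cd "$(mktemp -d)" && mkdir -p .claude hooks
# One deny rule: outside the loop, fires regardless of what the model decides
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": ["Bash(rm -rf:*)"]
  }
}
EOF
# One hook stub: a deterministic check that runs around tool use
cat > hooks/block-force-push.sh <<'EOF'
#!/bin/sh
echo "blocked: force push is not permitted in this project" >&2
exit 2
EOF
chmod +x hooks/block-force-push.sh
```

Both artifacts execute; neither depends on the model choosing to comply. That is the inside/outside distinction made concrete.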

curl -o ~/.claude/commands/harness-build.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/harness-build.md
Suggested modifications:
  • Extend the blueprint schema with project-specific deny rules for your domain (e.g., block all network egress except approved endpoints)
  • Add a harness maturity rubric specific to your deployment environment
  • Wire the aiuc1: block to auto-populate from your /audit-aiuc1 output
  • Create a /harness-build-multi variant that generates per-agent blueprints for multi-agent systems

/harness-assess — Assess an Agent Harness

The diagnostic skill. Use it to inspect whether an environment's harness is real, enforceable, and appropriate for the risk level — not just well described in markdown.

Best used in: Start of any new project (assess harness state), before production hardening, when reviewing a course or repo setup, after repeated failures, and before a capstone sprint to verify the environment's controls are actually in place.

Why /harness-assess exists

The course teaches harness fundamentals, which means students need a way to inspect whether a harness actually exists. A repo can have polished prompts and good docs while still being weak on permissions, state control, auditability, or misuse resistance. /harness-assess makes those gaps visible.

/harness-assess reviews the real control surface: entrypoints and orchestration, tool control, permissions and approval gates, state and memory handling, prompt and instruction control, logging and auditability, policy and environment separation, and resistance to prompt injection or tool abuse. It distinguishes what is implemented in code or config from what is only implied in prose.

The key diagnostic question is: "What is actually enforced here?" If a safeguard only lives in markdown, it is guidance, not control. That distinction is central to good harness engineering.
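One way to operationalize that question is to look for enforcement artifacts rather than prose. The paths below are common Claude Code locations, assumed for illustration:

```shell
# Guidance exists if it is written down; control exists only if something executes
cd "$(mktemp -d)"   # empty project stands in for a docs-only repo
for artifact in .claude/settings.json .git/hooks/pre-commit; do
  if [ -e "$artifact" ]; then
    echo "enforced: $artifact"
  else
    echo "guidance only: $artifact missing"
  fi
done
```

A repo that prints "guidance only" for every artifact is described, not harnessed.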

Use /harness-assess to review a course setup, repo, multi-agent workflow, or deployment environment. Use the Think → Spec → Build → Retro cycle to do the work; use /harness-assess to judge whether the surrounding environment makes safe, correct behavior easier and more auditable.

curl -o ~/.claude/commands/harness-assess.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/harness.md
Suggested modifications:
  • Add a maturity rubric tailored to your environment type: course, internal tool, or production system
  • Create /harness-assess-security for prompt injection, tool abuse, identity, and secret-boundary reviews
  • Add explicit checks for single-source-of-truth state and machine-readable progression gates
  • Add a comparison mode that scores two environments side by side
  • Pair findings with a remediation backlog so each assessment produces concrete control upgrades
  • Keep a running assessment log across sprints so repeated soft controls get promoted into real runtime constraints

/audit-aiuc1 — AIUC-1 Compliance Audit

Audit a tool or agent system against all six AIUC-1 domains. Produces a scored gap report that doubles as the AIUC-1 baseline evidence required for PeaRL elevation gates. The audit scopes itself to your system's deployment tier (1–4) so you only audit what's required.

Best used in: Unit 3 Week 9 (first formal audit), before any production deployment of an agent system, before requesting autonomy mode elevation in PeaRL, and as a required deliverable for the capstone project.

Why /audit-aiuc1 exists

Building a capable agent system and building a compliant agent system are different problems. Most practitioners discover the gap at deployment — when a stakeholder asks "is this AIUC-1 compliant?" and the answer is "I don't know." /audit-aiuc1 makes the answer knowable before that conversation happens.

The skill walks through all six AIUC-1 domains: Data & Privacy, Security, Safety, Reliability, Accountability, and Society. For each domain it asks structured audit questions, collects evidence, identifies gaps, and scores each finding using OWASP AIVSS severity levels (Critical, High, Medium, Low). Critical and High findings in the Security and Safety domains are blocking — they must be resolved before PeaRL will approve elevation to a higher autonomy mode.

The audit is tier-scoped. A Tier 1 embedded AI feature (Copilot-style, no autonomous actions) only requires domains A and F. A Tier 2 supervised agent requires A, B, C, and E. A Tier 3 delegated agent or Tier 4 multi-agent pipeline requires all six. You answer two questions — what is the system, and what is its tier — and the skill audits exactly what's in scope.
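That scoping rule is mechanical enough to sketch directly, with domain letters A–F following the order the six domains are listed in above:

```shell
# Tier -> in-scope AIUC-1 domains, per the scoping described above
tier_domains() {
  case "$1" in
    1)   echo "A F"         ;;  # embedded AI feature
    2)   echo "A B C E"     ;;  # supervised agent
    3|4) echo "A B C D E F" ;;  # delegated agent / multi-agent pipeline
    *)   echo "unknown tier" >&2; return 1 ;;
  esac
}
tier_domains 2   # prints: A B C E
```

Answering the two intake questions pins the audit to exactly one of these rows.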

The output report ends with a machine-readable Elevation Gate Status block. This is the artifact that Cedar's elevation policies parse when evaluating a request to move from ASSISTIVE to SUPERVISED_AUTONOMOUS mode, or from SUPERVISED_AUTONOMOUS to DELEGATED_AUTONOMOUS. The report is saved to reports/AUDIT-{system-name}.md in the project. Without it, elevation is blocked. With it, the gate has a dated, domain-scoped, reviewer-attested baseline to evaluate against.

As a learning tool, running the audit once against a system you built is one of the best ways to internalize what AIUC-1 actually requires. The questions surface assumptions you made during the build — about data handling, access control, output safety — and make them explicit so they can be evaluated rather than assumed.

curl -o ~/.claude/commands/audit-aiuc1.md https://raw.githubusercontent.com/r33n3/Noctua/main/docs/skills/audit-aiuc1.md
Suggested modifications:
  • Add your organization's internal controls beyond the AIUC-1 baseline to the relevant domain sections
  • Fork into /audit-aiuc1-quick — a Tier 1-only version for lightweight assessments of embedded AI features
  • Add a MASS scan integration: run MASS against the tool before scoring Domain B and import findings directly into the B-control table
  • Add a re-audit mode that diffs against a previous report and surfaces only what changed — useful for quarterly reviews and post-remediation confirmation
  • Extend the Elevation Gate Status footer with your organization's Cedar policy IDs for each gate

Build Your Own

What skill would make your workflow 10x faster?

Think about the tasks you repeat every session. Every time you find yourself doing the same thing — structuring an analysis, setting up a project, writing a certain type of report — that's a skill waiting to be built. Use this prompt:

I keep doing [task] repeatedly in my security work. Write a Claude Code skill file for it — a markdown file I can save as ~/.claude/commands/[name].md — that automates the setup and structure so I can invoke it as /[name] in any future session.

Share your skills. Post them as GitHub gists, tag them claude-code-skills and security. The security community benefits when practitioners share what works — including the small workflow automations that save 20 minutes a day.