CSEC 601: Noctua — Foundations of Agentic AI Security
Semester 1 of a Year-Long Graduate Course
Course Information
Course Title: Noctua — Foundations of Agentic AI Security
Course Number: CSEC 601
Semester: Spring 2026
Credit Hours: 3
Class Schedule: [16 weeks, includes 50-minute lectures and 110-minute labs]
Prerequisites:
- Bachelor's degree in Computer Science, Information Security, or related field — OR equivalent professional cybersecurity experience with instructor approval
- Intermediate Python proficiency (REST APIs, JSON, async, command-line scripting)
- Hands-on cybersecurity background: incident response, threat modeling, or security operations — OWASP Top 10 familiarity expected
- REST API development experience (you build MCP servers from Week 5)
- Basic Docker/container proficiency (required from Unit 4)
- Git proficiency — branching, pull requests, command-line usage
- Claude Max subscription (program-provided or course fee)
Strongly recommended: Prior LLM API experience (any provider); AWS account and basic familiarity (required for Unit 7 AgentCore deployment).
Instructor: [Instructor Name and Contact Information]
Office Hours: [Days/Times and Location/Virtual Details]
Communication: [Email, Slack, or other platform]
Course Description
CSEC 601 prepares you to assess an organization's AI security posture and deploy agentic solutions that add measurable value. You will leave this course able to walk into a company, understand where they are in their AI deployment, identify the security gaps, and start building solutions — not in theory, but in working code.
The course is built on three pillars: Collaborative Critical Thinking (CCT) to reason rigorously about AI systems and the evidence they produce; Ethical AI to evaluate and apply emerging governance frameworks critically, not just cite them; and Rapid Prototyping to turn analysis into shipped tools. Instruction runs on Claude Code and the Claude Agent SDK — a deliberate choice for their maturity and depth — while making explicit what transfers to any platform your organization uses.
70% labs, 30% theory, starting Week 1. This is not a course about writing prompts. It is a course about building systems that solve security problems responsibly, measurably, and at production scale.
Learning Objectives
By the end of this course, students will be able to:
1. Apply Collaborative Critical Thinking (CCT) systematically to security problems, integrating Evidence-Based Analysis, Inclusive Perspective, Strategic Connections, Adaptive Innovation, and Ethical Governance when working with AI-augmented tools.
2. Design and implement context-engineered solutions that move beyond prompt engineering to leverage system prompts, structured outputs, tool definitions, and memory architectures for security applications.
3. Build and deploy Model Context Protocol (MCP) servers that standardize agent-tool communication and enable auditable, secure AI agent access to external systems.
4. Evaluate AI security tools against established frameworks, including the NIST AI RMF, the OWASP Top 10 for Agentic Applications, and the AIUC-1 Standard for AI agent security, safety, and reliability.
5. Identify and mitigate bias, fairness, and explainability issues in AI-powered security systems through hands-on bias detection and fairness engineering.
6. Architect multi-agent security operations using Claude's agentic stack (subagents, worktrees, agent teams) to coordinate specialized security functions.
7. Prototype production-grade security tools in 2–3 hour sprints, from problem analysis through deployment, demonstrating mastery of rapid agentic engineering.
8. Compose and enforce AI security policies that govern data handling, model governance, agent permissions, and incident response for AI-driven security systems.
9. Measure AI-augmented security workflows using five key performance metrics: Mean Time to Triage (MTTS), Mean Time to Protect (MTTP), Mean Time to Solve (MTTSol), Mean Time to Isolate (MTTI), and Augmented Mean Time to Respond (aMTTR).
10. Defend security decisions made by AI agents with explanations grounded in evidence, audit logs, and structured reasoning that satisfy both technical and non-technical stakeholders.
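To make objectives 2 and 3 concrete before Week 4, here is a minimal sketch of the two ideas side by side: a tool definition in the JSON-schema style most LLM tool-calling APIs use, and a triage function that returns a structured, machine-readable verdict instead of free text. All names here (`lookup_cve`, `triage_alert`, the field names) are illustrative, not any vendor's actual schema.

```python
import json

# Hypothetical tool definition in the JSON-schema style used by most
# LLM tool-calling APIs. The name and fields are illustrative only.
LOOKUP_CVE_TOOL = {
    "name": "lookup_cve",
    "description": "Fetch severity and summary for a CVE identifier.",
    "input_schema": {
        "type": "object",
        "properties": {
            "cve_id": {"type": "string", "pattern": r"^CVE-\d{4}-\d{4,}$"},
        },
        "required": ["cve_id"],
    },
}

def triage_alert(alert: dict) -> dict:
    """Return a structured triage verdict for an alert.

    A fixed output shape (rather than free text) is what lets
    downstream systems and audit logs consume agent decisions reliably.
    """
    failed = alert.get("failed_logins", 0)
    severity = "high" if failed > 100 else "low"
    return {
        "alert_id": alert["id"],
        "severity": severity,
        "rationale": f"{failed} failed logins observed",
    }

if __name__ == "__main__":
    verdict = triage_alert({"id": "A-1042", "failed_logins": 250})
    print(json.dumps(verdict, indent=2))
```

The point of the sketch is the contract, not the logic: because both the tool input and the triage output have declared shapes, they can be validated, logged, and audited — themes that recur in Weeks 4–7.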
Course Structure & Delivery
In-Class Format: Each week consists of two class sessions:
- Day 1 — Theory & Foundations: Conceptual foundations, historical context, case studies, guided discussions, and key concept development
- Day 2 — Hands-On Lab: Practical building using Claude Code and the course toolstack, with immediate prototyping, iteration, and deliverable completion
Overall Course Philosophy:
- Hands-On First: Students build working tools before deep-diving into theory
- CCT Integration: All assignments require application of CCT principles
- Rapid Iteration: Single-session prototyping is a core competency
- Ethical Accountability: Every tool students build is audited for safety, fairness, and transparency
- Real-World Scenarios: All labs use realistic cybersecurity problems and data
Weekly Schedule
Unit 1: CCT Foundations & AI Landscape (Weeks 1–4)
| Week | Topic |
|---|---|
| Week 1: Welcome to the Agentic Era | Course overview, AI evolution, intro to CCT |
| Week 2: The 5 Pillars of CCT | Deep dive into CCT theory and cognitive biases |
| Week 3: Modern AI Landscape | AI models, capabilities, security implications |
| Week 4: Context Engineering | Beyond prompt engineering |
Unit 2: Agent Tool Architecture (Weeks 5–8)
| Week | Topic |
|---|---|
| Week 5: Model Context Protocol | Introduction to MCP architecture and standardization |
| Week 6: Tool Design Patterns | Building robust, secure tools |
| Week 7: Structured Outputs | Machine-readable formats for reports |
| Week 8: RAG for Security | Domain-specific knowledge systems |
Unit 3: AI Security Governance (Weeks 9–12)
| Week | Topic |
|---|---|
| Week 9: AIUC-1 Standard for AI Agents | The agent-specific security, safety, and reliability standard |
| Week 10: OWASP Top 10 for Agentic Applications | Security vulnerabilities in AI systems |
| Week 11: Bias, Fairness, and Explainability | Detecting and mitigating AI bias |
| Week 12: Privacy and AI Security Policy | Data governance and policy frameworks |
Unit 4: Rapid Prototyping with Agentic Tools (Weeks 13–16)
| Week | Topic |
|---|---|
| Week 13: Claude Code Deep Dive | Agentic stack mastery |
| Week 14: Rapid Prototyping Sprint I | Build from concept to demo in 3 hours |
| Week 15: Rapid Prototyping Sprint II | Hardening and production-ready quality |
| Week 16: Midyear Presentations | Demo and reflection |
Detailed Week Content
For detailed content including lecture notes, labs, and deliverables for each week, see the accompanying unit documentation files. The remaining content below focuses on assessment, policies, and course resources.
Assessment Breakdown
| Component | Weight | Description |
|---|---|---|
| Lab Exercises (Weekly) | 25% | Deliverables from Weeks 1–15 (13 graded labs). Evaluated on functionality, documentation, and application of course concepts. |
| CCT Journals | 10% | Weekly reflection entries (Weeks 1–15, 14 entries). Must demonstrate deepening understanding of Collaborative Critical Thinking principles. |
| Participation | 5% | In-class discussion, peer collaboration, responsiveness to instructor feedback. Attendance expected for all sessions. |
| Midyear Project | 30% | Rapid prototypes and final presentation (Weeks 14–16). Evaluated on problem significance, technical execution, demo quality, and presentation. |
| OWASP/Ethics Audits | 15% | Audit deliverables from Weeks 9–12 (four major audit reports). Evaluated on thoroughness and actionability of recommendations. |
| Peer Reviews | 5% | Quality and constructiveness of feedback provided to peers. Peer review assignments throughout semester. |
| Performance Metrics Tracking | 10% | Consistent tracking and improvement of MTTS/MTTP/MTTSol/MTTI/aMTTR across sprints. Demonstrated efficiency gains from Week 14 to Week 15. |
Grading Scale:
- A: 90–100%
- B: 80–89%
- C: 70–79%
- D: 60–69%
- F: Below 60%
Course Policies
Academic Integrity
This is an AI course. You are expected to use AI tools, including Claude, to accelerate your learning and development. The skill we are building is not "avoid AI" but "direct AI effectively using structured critical thinking."
AI Usage Requirements:
- All AI-assisted work must include a brief methodology note (50–150 words) explaining:
  - How you used AI (which tool, what prompts, how many iterations)
  - Why you chose that approach
  - How you verified the AI's output for correctness and relevance
  - What you would do differently next time
- Example: "I used Claude Opus to help design the architecture for my MCP server. I provided the tool specifications and asked Claude to propose a modular design. I validated the design against security best practices and made three iterations to improve error handling. Next time, I would provide more detailed constraints upfront."
Academic integrity violations:
- Submitting work that is not your own (e.g., using a classmate's code without attribution)
- Misrepresenting AI-assisted work as entirely your own (not including methodology note)
- Plagiarizing readings or external sources
- Cheating on assignments (e.g., submitting another's prototype)
Violations will be reported to the Dean of Students per institutional policy.
Attendance
Attendance at lectures and labs is expected. If you need to miss class:
- Notify the instructor as soon as possible
- Arrange to review recorded content or notes from peers
- Complete lab deliverables even if you miss the in-class session (though participating in lab is strongly recommended)
Excessive absences may impact your participation grade.
Late Work
- Up to 48 hours late: 10% penalty
- Up to 1 week late: 20% penalty
- After 1 week: Not accepted without instructor approval
For lab-heavy courses, timely completion is critical for peer collaboration and feedback. Contact the instructor if you're falling behind.
AI Usage Policy (Detailed)
Philosophy: AI tools are force multipliers for security professionals. The goal is not to replace human thinking but to augment it with AI capabilities while maintaining rigor, fairness, and accountability.
Expected Usage:
- Use Claude or other AI tools for brainstorming, drafting, ideation, and rapid prototyping
- Use AI to accelerate coding, documentation, and report writing
- Use AI as a thinking partner to stress-test your ideas (apply CCT via Claude)
- Use AI to learn new concepts and explore alternatives
Prohibited Usage:
- Submitting AI-generated work without verification or significant modification
- Using AI to help peers cheat (sharing answers instead of helping them think)
- Over-relying on AI to the point of losing understanding (if you can't explain the output, you haven't learned it)
Documentation:
- Include the methodology note with all major deliverables (labs, reports, projects)
- Indicate in code comments where AI assisted (e.g., `# Claude-assisted error handling`)
- If you copy code from AI, cite it (e.g., `// Generated with Claude, adapted for our use case`)
Course Expectations
Workload
This course is intensive and hands-on. Expect:
- In-class: 3 hours per week (lecture + lab)
- Outside class: 6–9 hours per week (reading, reflection, lab continuation, project work)
- Total: ~9–12 hours per week (typical for a 3-credit graduate course)
Weeks 14–16 (sprint weeks) may require additional time as you iterate on prototypes.
Technology Requirements
Required:
- Laptop with Claude Code installed (Mac, Windows, or Linux)
- Access to Claude (Claude Opus or Sonnet; Anthropic API credentials or Claude Max subscription)
- Git for version control
- A text editor or IDE (VSCode, PyCharm, or similar)
- Command-line familiarity (bash/zsh)
Recommended:
- Docker for containerizing tools and MCP servers
- Python 3.9+ for building tools and MCP servers
- Access to public vulnerability databases (NVD API, Shodan, GreyNoise) — free tiers available
Classroom Conduct
- Arrive on time and be ready to engage
- Collaborate respectfully with peers; diverse perspectives strengthen learning
- Communicate clearly and ask questions when concepts are unclear
- Support each other; security is a team sport
- Respect intellectual property and cite your sources
Course Resources
Provided Materials
- Weekly lecture slides and recordings (if virtual or hybrid)
- Lab starter code and datasets
- Reference materials (MITRE ATT&CK, NIST frameworks, security whitepapers)
- Example MCP servers and Claude agents
- Security tools and APIs (simulated when production access isn't available)
External Resources
- Claude documentation (https://claude.ai/docs)
- Model Context Protocol (https://modelcontextprotocol.io/)
- NIST Cyber AI Profile (https://csrc.nist.gov/)
- OWASP Top 10 for Agentic Applications (https://owasp.org/)
- Security frameworks and standards (ATT&CK, CIS Controls, SANS guidelines)
Getting Help
For course content questions:
- Office hours (as scheduled)
- Email the instructor
- Slack channel (if applicable)
- Peer discussion and collaboration
For mental health or personal support:
- Campus counseling services (if applicable)
- Your support network
- Dean of Students office
Course Schedule Summary
| Week | Topic | Major Deliverable |
|---|---|---|
| 1 | Welcome to the Agentic Era | Environment setup + CCT journal |
| 2 | The 5 Pillars of CCT | CCT analysis report |
| 3 | The Modern AI Landscape | Model comparison report |
| 4 | Context Engineering | Context-engineered tool |
| 5 | Model Context Protocol | First MCP server |
| 6 | Tool Design Patterns | Multi-tool MCP server |
| 7 | Structured Outputs | Report generator |
| 8 | RAG for Security | RAG security assistant |
| 9 | AIUC-1 Standard for AI Agents | Ethics audit report |
| 10 | OWASP Top 10 | Vulnerability assessment |
| 11 | Bias and Fairness | Bias analysis report |
| 12 | Privacy and AI Policy | AI Security Policy |
| 13 | Agentic Stack Deep Dive | Multi-agent prototype |
| 14 | Rapid Prototyping Sprint I | Working prototype + metrics |
| 15 | Rapid Prototyping Sprint II | Hardened prototype |
| 16 | Midyear Presentations | Final project presentation |
Final Notes
Course Philosophy
This course is designed for security professionals who want to lead in the agentic AI era. You will not just learn about AI; you will build with AI, reason critically about AI, and deploy AI responsibly. By the end of Semester 1, you will be capable of:
- Thinking critically about AI-augmented security problems using a structured framework (CCT)
- Building rapidly using modern agentic tools (Claude Code, MCP, agent teams)
- Evaluating responsibly using established frameworks (NIST, OWASP, Responsible AI Principles)
- Measuring objectively using performance metrics and fairness assessments
- Deploying confidently with security hardening and ethical safeguards
Instructor Commitment
I am committed to:
- Providing clear, actionable feedback on your work
- Creating a collaborative, safe learning environment
- Staying current with rapidly evolving AI and security landscapes
- Helping you connect course learning to your career goals
- Responding to questions and concerns promptly
Your Commitment
You are expected to:
- Engage actively in lectures and labs
- Complete assignments thoroughly and on time
- Participate constructively with peers
- Ask questions when concepts are unclear
- Apply critical thinking and ethical reasoning to all work
- Respect the diverse perspectives and backgrounds of your classmates
Appendix: Performance Metrics Definitions
MTTS (Mean Time to Triage): Time from alert generation to initial assessment of the alert's nature and severity. Measures how quickly the system (human + AI) understands the problem.
MTTP (Mean Time to Protect): Time from triage to implementation of protective measures (e.g., isolating a system, blocking traffic). Measures speed of protective response.
MTTSol (Mean Time to Solve): Time from alert generation to complete resolution (root cause addressed, normal operations restored). Measures overall incident resolution speed.
MTTI (Mean Time to Isolate): Time from alert to containment (threat isolated, spread prevented). Measures containment speed.
aMTTR (Augmented Mean Time to Respond): Overall time from alert generation to resolution, accounting for human decision time and AI analysis time. Demonstrates the efficiency gain from AI augmentation.
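All five metrics reduce to elapsed time between timestamps your incident records already capture. A minimal sketch of the computation (field names like `triaged` and `isolated` are illustrative, not a required schema; aMTTR is tracked the same way, with human and AI time logged separately):

```python
from datetime import datetime
from statistics import mean

# Each incident records the timestamps the definitions above refer to.
incidents = [
    {
        "alert": datetime(2026, 3, 1, 9, 0),
        "triaged": datetime(2026, 3, 1, 9, 12),    # initial assessment
        "isolated": datetime(2026, 3, 1, 9, 30),   # containment
        "protected": datetime(2026, 3, 1, 9, 45),  # protective measures live
        "resolved": datetime(2026, 3, 1, 11, 0),   # root cause addressed
    },
    {
        "alert": datetime(2026, 3, 2, 14, 0),
        "triaged": datetime(2026, 3, 2, 14, 8),
        "isolated": datetime(2026, 3, 2, 14, 20),
        "protected": datetime(2026, 3, 2, 14, 40),
        "resolved": datetime(2026, 3, 2, 15, 30),
    },
]

def mean_minutes(start_key: str, end_key: str) -> float:
    """Mean elapsed minutes between two recorded timestamps."""
    return mean(
        (i[end_key] - i[start_key]).total_seconds() / 60 for i in incidents
    )

metrics = {
    "MTTS": mean_minutes("alert", "triaged"),      # alert -> initial assessment
    "MTTP": mean_minutes("triaged", "protected"),  # triage -> protection
    "MTTI": mean_minutes("alert", "isolated"),     # alert -> containment
    "MTTSol": mean_minutes("alert", "resolved"),   # alert -> full resolution
}

for name, minutes in metrics.items():
    print(f"{name}: {minutes:.1f} min")
```

During the Week 14–15 sprints, computing these from logs rather than by hand is what makes the required Week 14 → Week 15 efficiency comparison credible.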
Course Last Updated: March 4, 2026
Instructor: [Name]
Contact: [Email and office information]