23% — Meetings & operational tasks
19% — Code maintenance & debugging
13% — Security & compliance
12% — Testing & code review
Only 16% — Writing new feature code
DevNexus 2026
github.com/joshkurz
Josh Kurz
84% of your week is spent on work that follows repeatable patterns.
An agent can help with most of it.
What if the tools you already have could handle the work you dread?
Let’s clear these before standup.
Average PR review: 23 minutes of focused attention
Context switching cost: 15 minutes to get back to your own work
4 PRs waiting = 2.5 hours before you write a single line of code
Review fatigue: quality drops after the 2nd or 3rd PR
What slips through: missing null checks, inconsistent naming, untested edge cases
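The "missing null checks" item above is the classic one. As a hypothetical sketch (class and method names invented for illustration), this is the kind of mechanical fix an agent flags on every pass, even on PR #4 before standup:

```java
// Hypothetical example of an issue that slips past a tired reviewer:
// a lookup method that dereferences its argument without a null check.
import java.util.Map;
import java.util.Objects;

class UserLookup {
    private final Map<String, String> emailsByUserId;

    UserLookup(Map<String, String> emailsByUserId) {
        this.emailsByUserId = emailsByUserId;
    }

    // Before: userId.trim() throws NullPointerException when userId is null.
    // After: fail fast with a descriptive message instead.
    String emailFor(String userId) {
        Objects.requireNonNull(userId, "userId must not be null");
        return emailsByUserId.getOrDefault(userId.trim(), "unknown");
    }
}
```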
VS Code Copilot
/review-code
Focus on: null safety, thread safety,
naming conventions
Reusable prompt file at .github/prompts/review-code.prompt.md
Or use @github to pull PR context directly.
Claude Code
$ claude /review
Built-in skill that reads the current branch diff,
checks for issues, and writes review comments.
Agent scans every file in the diff — no skimming
Catches mechanical issues you’d miss on review #3
You focus on: architecture, design, business logic
Result: 4 PRs reviewed before standup
Fix this before standup.
VS Code Copilot
/review-code focus=security
Prompt file targets security-specific patterns:
injection, auth bypass, input validation.
Claude Code
$ claude /security-review
Built-in skill that scans for OWASP Top 10
vulnerabilities and suggests fixes.
Before — the vulnerability:
// UserController.java — this is CVE-2026-1847
String query = "SELECT * FROM users WHERE id = '" + userId + "'";
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery(query);

After — the fix:
String query = "SELECT * FROM users WHERE id = ?";
PreparedStatement stmt = connection.prepareStatement(query);
stmt.setString(1, userId);
ResultSet rs = stmt.executeQuery();

SQL injection → parameterized query
CVE-2026-1847 resolved before standup, not after deployment
Start with the tests. Then build the feature.
VS Code Copilot
/tests
Generate tests for a sliding-window
rate limiter: 100 req/min per API key,
429 with Retry-After header.
Cover: under limit, at limit, over limit,
window reset, concurrent requests,
missing API key.
Use JUnit 5 and embedded Redis.
Claude Code
$ claude
> Generate comprehensive tests for
FEAT-4921: rate limiter, sliding window,
100 req/min per API key, 429 + Retry-After.
Cover edge cases. JUnit 5 + embedded Redis.

@Test
@DisplayName("Should return 429 with Retry-After when limit exceeded")
void shouldRejectWhenRateLimitExceeded() {
    String apiKey = "test-key-001";
    for (int i = 0; i < 100; i++) {
        mockMvc.perform(post("/api/payments").header("X-API-Key", apiKey))
            .andExpect(status().isOk());
    }
    mockMvc.perform(post("/api/payments").header("X-API-Key", apiKey))
        .andExpect(status().isTooManyRequests())
        .andExpect(header().exists("Retry-After"));
}

VS Code Copilot + MCP
@github Use the Atlassian MCP to read
FEAT-4921. Implement the rate limiter
for the payment API based on the ticket
requirements. Include integration tests
with embedded Redis.
Agent Mode reads the Jira ticket via
Atlassian Rovo MCP, then builds.
Claude Code + MCP
$ claude
> Read FEAT-4921 from Jira via MCP.
Implement the rate limiter for the
payment API. Sliding window,
100 req/min per API key, 429 with
Retry-After. Store in Redis.
Spring interceptor.
Include integration tests.
Agent reads ticket requirements from Jira via Atlassian Rovo MCP
Creates: RateLimiter.java, RateLimitInterceptor.java, WebConfig.java
Writes Redis commands using StringRedisTemplate
Adds @SpringBootTest with @EmbeddedRedis
Total: ~200 lines of production code, ~150 lines of tests
FEAT-4921: 0% → working implementation with tests
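The generated Redis-backed interceptor isn't reproduced here, but the sliding-window algorithm at its core can be sketched in plain Java. This is an in-memory stand-in for the Redis version, with all names and limits illustrative rather than taken from the generated code:

```java
// In-memory sketch of the sliding-window check the agent would implement
// against Redis. One timestamp deque per API key; old entries slide out.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class SlidingWindowRateLimiter {
    private final int maxRequests;    // e.g. 100
    private final long windowMillis;  // e.g. 60_000 (one minute)
    private final Map<String, Deque<Long>> hits = new HashMap<>();

    SlidingWindowRateLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    // Returns true if the request is allowed for this API key at nowMillis.
    synchronized boolean tryAcquire(String apiKey, long nowMillis) {
        Deque<Long> window = hits.computeIfAbsent(apiKey, k -> new ArrayDeque<>());
        // Drop timestamps that have slid out of the window.
        while (!window.isEmpty() && nowMillis - window.peekFirst() >= windowMillis) {
            window.pollFirst();
        }
        if (window.size() >= maxRequests) {
            return false;  // caller responds 429 with Retry-After
        }
        window.addLast(nowMillis);
        return true;
    }

    // Seconds the caller should wait before retrying (Retry-After value).
    synchronized long retryAfterSeconds(String apiKey, long nowMillis) {
        Deque<Long> window = hits.get(apiKey);
        if (window == null || window.isEmpty()) return 0;
        long waitMillis = Math.max(0, windowMillis - (nowMillis - window.peekFirst()));
        return (waitMillis + 999) / 1000;  // round up to whole seconds
    }
}
```

The production version swaps the `HashMap` for Redis (e.g. a sorted set per key via `StringRedisTemplate`) so the window survives restarts and is shared across instances.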
Weak Prompt
"Add rate limiting to the API"
Which endpoints?
What algorithm?
What limit?
What storage?
The agent will guess — badly
Strong Prompt (or just use MCP)
"Read FEAT-4921 from Jira via MCP.
Implement sliding window rate limiter.
100 req/min per API key.
429 + Retry-After header.
Redis via StringRedisTemplate.
Spring HandlerInterceptor.
Integration tests with embedded Redis."
Specific scope and algorithm
Clear technology choices
Defined behavior and limits
Testable acceptance criteria
While waiting for PR approvals, tackle the big stuff
$ claude
> Migrate the UserService module from Spring WebMVC
to Spring WebFlux. Convert all blocking calls to
reactive streams. Update controllers to return
Mono/Flux. Migrate RestTemplate to WebClient.
Update tests to use StepVerifier.
Analyzes 47 files in the module
Converts @RestController return types to Mono<> and Flux<>
Replaces RestTemplate calls with WebClient
Rewrites repository layer for reactive R2DBC
Updates 23 test files to use StepVerifier
Runs full test suite — iterates on 3 failures
CLAUDE.md (Claude Code)
# Project Standards
## Architecture
- Hexagonal: adapters/ ports/ domain/
- Domain objects: no framework annotations
## Naming
- Services: *Service.java
- DTOs: *Request.java, *Response.java
## Testing
- 80% branch coverage minimum
- @DisplayName on every test
## Security
- Parameterized queries only
- Validate all input with jakarta.validation

copilot-instructions.md (VS Code)
# Copilot Instructions
## When reviewing code
- Check for SQL injection patterns
- Verify null safety on all parameters
- Ensure tests use @DisplayName
## When generating code
- Use hexagonal architecture
- Place DTOs in *Request/*Response pattern
- Always include integration tests
## When fixing security issues
- Convert string concat to PreparedStatement
- Add input validation annotations
Stored at .github/copilot-instructions.md
$ claude
> FEAT-4921 needs infrastructure. Scaffold:
- Dockerfile with multi-stage build for the payment service
- docker-compose.yml with Redis and the app
- Flyway migration for the rate_limit_config table
- GitHub Actions CI pipeline with Redis service container
- Kubernetes manifest with Redis sidecar
Claude spawns subagents to work in parallel
Subagent 1: Dockerfile + docker-compose with Redis
Subagent 2: Flyway migration + schema
Subagent 3: GitHub Actions CI with Redis service
Subagent 4: Kubernetes manifests
Main agent coordinates and integrates the results
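As an analogy only, the fan-out/join pattern the main agent uses can be sketched with `CompletableFuture`: independent tasks run in parallel, then one integration step collects the results. The task names are placeholders, not output from Claude Code:

```java
// Analogy for subagent orchestration: four independent "subagent" tasks
// run in parallel; the "main agent" joins them and integrates the results.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ScaffoldOrchestrator {
    static List<String> scaffold() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Each task is independent, like one subagent's artifact.
            CompletableFuture<String> docker = CompletableFuture.supplyAsync(() -> "Dockerfile + docker-compose.yml", pool);
            CompletableFuture<String> flyway = CompletableFuture.supplyAsync(() -> "V1__rate_limit_config.sql", pool);
            CompletableFuture<String> ci     = CompletableFuture.supplyAsync(() -> ".github/workflows/ci.yml", pool);
            CompletableFuture<String> k8s    = CompletableFuture.supplyAsync(() -> "k8s/deployment.yaml", pool);
            // The coordinator waits for all four, then integrates.
            CompletableFuture.allOf(docker, flyway, ci, k8s).join();
            return List.of(docker.join(), flyway.join(), ci.join(), k8s.join());
        } finally {
            pool.shutdown();
        }
    }
}
```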
We cleared every notification. Now the hard question.
"I generated 10,000 lines of code today!"
Who reviews 10,000 lines?
Who tests 10,000 lines?
Who maintains 10,000 lines?
Who debugs 10,000 lines at 2 AM?
You do.
Not: lines of code generated
Not: number of PRs merged
Not: features shipped per sprint
But: problems solved with minimal, maintainable code
Wrong Mindset
"I can build features 5x faster"
"I don’t need to understand the code"
"The agent wrote tests so we’re covered"
"Ship it, the agent said it works"
Right Mindset
"I can build features with 5x better quality"
"I review and understand every line"
"The agent drafted tests, I verified coverage"
"I validated the behavior, then shipped"
With these tools, there’s no excuse to skip:
Comprehensive test coverage
Security review before commit
Up-to-date documentation
Consistent code style
Thorough PR reviews
The bar has been raised — meet it
Agents don’t understand your business domain
Agents don’t understand trade-offs and constraints
Agents don’t understand team dynamics and politics
Agents don’t understand what to build next
Agents can’t say "we shouldn’t build this at all"
An agent can write a perfect rate limiter for FEAT-4921. It cannot tell you whether your payment API needs one.
Start Monday morning
Week 1 — Set up your tools: create .github/prompts/review-code.prompt.md, write a CLAUDE.md, configure the Atlassian Rovo MCP server. Use /review-code on your next PR.
Week 2 — Use agent-assisted review for every PR. Run /security-review on your most critical service. Compare agent findings to your own.
Week 3 — Try Agent Mode for a real task: generate tests for an untested class, fix a security finding before standup.
Week 4 — Use Agent Mode + MCP for a feature from Jira. Let the agent read the ticket and build the implementation. Review everything.
Accepting without reading — always review generated code line by line
Skipping tests on generated code — generated code needs more testing, not less
Over-trusting the agent — it’s confident even when it’s wrong
Vague prompts — garbage in, garbage out
Ignoring the learning curve — prompt engineering is a real skill
Identify champions — 1-2 people per team who learn deeply and teach others
Share reusable prompts — commit .github/prompts/*.prompt.md files and .claude/skills/ to your repo so the whole team gets them
Standardize project instructions — a shared CLAUDE.md and copilot-instructions.md means every agent interaction follows your team’s standards
Configure MCP servers — set up Atlassian Rovo MCP once, every engineer benefits
Measure what matters — defect rates, review quality, not lines generated
MCP is here now — we just used it. Atlassian Rovo, GitHub, databases — agents already connect to your tools
Deeper MCP integration — agents that update the Jira ticket when the PR is merged, close the loop automatically
Multi-agent orchestration — teams of specialized agents collaborating on complex tasks (Claude Code subagents are the start)
CI/CD agents — agents that fix failing builds, open PRs, and iterate until green
Continuous code improvement — agents that proactively find tech debt and propose refactoring
Personalized assistance — agents that learn your coding patterns and team conventions over time
Monday: Pick one real task and try Agent Mode
This week: Write a CLAUDE.md for your project
This month: Measure your defect rate — it should drop
From now on: Raise the bar. Better code, not just more code.
Josh Kurz
github.com/joshkurz
Slides and resources: https://github.com/joshkurz/devnexus-2026-agents