Overview

These advanced techniques help you build more sophisticated and powerful automations by combining multiple capabilities.

Using Multiple MCP Servers

Complex automations can combine multiple services to create powerful workflows.
Example Workflow:
1. Check Linear for high-priority bugs (Linear MCP)
2. For each bug, check if there's a GitHub issue (GitHub MCP)
3. If no GitHub issue exists, create one (GitHub MCP)
4. Post summary to Slack (Slack MCP)
Benefits:
  • Keeps systems in sync automatically
  • Reduces manual cross-referencing
  • Ensures nothing falls through the cracks
Tips:
  • Plan the data flow between services
  • Handle API failures gracefully
  • Cache data when appropriate to reduce API calls
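
As a rough illustration of the data flow, here is a minimal Python sketch of the bug-sync workflow above. The fetch_linear_bugs, find_github_issue, create_github_issue, and post_to_slack callables are hypothetical stand-ins for the corresponding Linear, GitHub, and Slack MCP tool calls, not real APIs:

def sync_bugs(fetch_linear_bugs, find_github_issue, create_github_issue, post_to_slack):
    """Mirror high-priority Linear bugs into GitHub, then post a Slack summary."""
    created = []
    for bug in fetch_linear_bugs(priority="high"):            # Linear MCP
        if find_github_issue(title=bug["title"]) is None:     # GitHub MCP
            issue = create_github_issue(
                title=bug["title"],
                body=f"Mirrored from Linear: {bug['url']}",   # plan the data flow between services
            )
            created.append(issue)
    post_to_slack(f"Created {len(created)} GitHub issue(s) for untracked Linear bugs.")  # Slack MCP
    return created

Passing the service calls in as parameters keeps the workflow logic easy to test with stubs before wiring it to real servers.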

Conditional Logic

Use conditional language in your prompts to handle different scenarios.
Example:
If the PR has changes to the database schema:
  - Request review from @database-team
  - Run database migration checks
  - Add label "needs-migration-review"
Otherwise:
  - Follow standard review process
Common Conditions:
  • File path patterns (e.g., database files, config files)
  • PR size or complexity thresholds
  • Time-based conditions (business hours, weekends)
  • Priority or severity levels
Best Practices:
  • Keep conditions simple and clear
  • Document edge cases
  • Test all branches of your logic
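
The same branching can be expressed in code. This is a minimal sketch that assumes a hypothetical list of changed file paths; the path patterns, reviewer handle, check names, and labels are illustrative only:

from fnmatch import fnmatch

SCHEMA_PATTERNS = ("db/schema/*", "migrations/*")   # hypothetical paths; adjust to your repository

def review_plan(changed_files):
    """Choose a review route based on which files a PR touches."""
    touches_schema = any(
        fnmatch(path, pattern) for path in changed_files for pattern in SCHEMA_PATTERNS
    )
    if touches_schema:
        return {
            "reviewers": ["@database-team"],
            "checks": ["database-migration-check"],
            "labels": ["needs-migration-review"],
        }
    # Otherwise: standard review process.
    return {"reviewers": [], "checks": ["standard-ci"], "labels": []}

print(review_plan(["migrations/2024_add_index.sql", "app/models.py"]))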

Iteration and Filtering

Process collections of items with filters and transformations.
Example:
For each open PR that:
  - Is older than 7 days
  - Has no reviews
  - Is not in draft status

Do:
  - Comment asking for review
  - Tag the PR author's team lead
  - Add label "needs-review"
Filtering Strategies:
  • Time-based (age, last updated)
  • Status-based (open, closed, draft)
  • Metadata-based (labels, assignees, priority)
  • Content-based (file changes, keywords)
Performance Tips:
  • Filter early to reduce processing
  • Batch operations when possible
  • Set reasonable limits on collection sizes
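
A minimal Python sketch of the same filter, assuming each PR is a plain dict with state, draft, reviews, and created_at fields (the field names are illustrative, not a real API shape):

from datetime import datetime, timedelta, timezone

def stale_prs(prs, max_age_days=7):
    """Open PRs older than max_age_days with no reviews and not in draft."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        pr for pr in prs
        if pr["state"] == "open"          # status-based filter
        and not pr["draft"]
        and not pr["reviews"]
        and pr["created_at"] < cutoff     # time-based filter: filter early to reduce processing
    ]

example = [{"number": 42, "state": "open", "draft": False, "reviews": [],
            "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc)}]
for pr in stale_prs(example):
    print(f"PR #{pr['number']}: comment, tag the author's team lead, add 'needs-review'")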

State Management

Use automation state to remember information between runs.
Use Cases:
  • Track last processed item (tag, commit, issue)
  • Store counters or metrics
  • Remember previous actions to avoid duplicates
  • Cache expensive computations
Example:
If a new tag has been cut since {lastProcessedTag}:
  1. Generate release notes
  2. Post to Slack
  3. Update {lastProcessedTag} to the new tag value
Best Practices:
  • Initialize state with sensible defaults
  • Document what state variables mean
  • Clean up old state periodically
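
One simple way to persist state between runs is a small JSON file. This sketch assumes a hypothetical automation_state.json location and a latest_tag value that would really come from the repository:

import json
from pathlib import Path

STATE_FILE = Path("automation_state.json")            # hypothetical location

def load_state():
    """Load persisted state, falling back to sensible defaults."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"lastProcessedTag": None}                  # lastProcessedTag: newest tag already announced

def save_state(state):
    STATE_FILE.write_text(json.dumps(state, indent=2))

state = load_state()
latest_tag = "v1.4.0"                                  # placeholder; fetch the real tag in practice
if latest_tag != state["lastProcessedTag"]:
    # Generate release notes and post to Slack here, then remember the tag
    # so the next run does not announce it again.
    state["lastProcessedTag"] = latest_tag
    save_state(state)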

Error Recovery

Build automations that handle failures gracefully and recover automatically; the retry and partial-success strategies are sketched in code after the examples below.
Strategies:
  1. Retry Logic
Try up to 3 times:
  - Fetch data from API
  - If successful, proceed
  - If all attempts fail, log error and notify team
  2. Fallback Behavior
Try to post to #team-updates
If channel doesn't exist:
  - Post to #general instead
  - Notify admin about missing channel
  3. Partial Success
Process each item independently
If one fails:
  - Log the error
  - Continue with remaining items
  - Report summary at the end
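
A minimal Python sketch of the retry and partial-success strategies above; the fetch, process, and report callables are hypothetical placeholders for whatever API calls and notifications your automation makes:

import time

def fetch_with_retry(fetch, attempts=3, delay_seconds=2):
    """Call fetch() up to `attempts` times; re-raise the last error if all fail."""
    last_error = None
    for remaining in range(attempts - 1, -1, -1):
        try:
            return fetch()
        except Exception as error:            # narrow to your client's error type in practice
            last_error = error
            if remaining:
                time.sleep(delay_seconds)     # brief pause before the next attempt
    raise last_error                          # caller logs this and notifies the team

def process_all(items, process, report):
    """Process items independently; collect failures instead of stopping."""
    failures = []
    for item in items:
        try:
            process(item)
        except Exception as error:
            failures.append((item, error))    # log the error and continue with remaining items
    report(f"{len(items) - len(failures)} succeeded, {len(failures)} failed")
    return failures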

Dynamic Configuration

Make automations flexible by using variables and configuration.
Example:
Configuration:
- STALE_THRESHOLD: 14 days
- TARGET_CHANNEL: #standup
- PRIORITY_THRESHOLD: high

Use these values in your automation prompt:
"Find issues older than {STALE_THRESHOLD} days..."
Benefits:
  • Easy to adjust behavior without changing automation
  • Different configurations for different teams
  • Test and production variants
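
One way to keep such values outside the automation itself is to read them from environment variables with sensible defaults; the variable names below mirror the example configuration and are otherwise arbitrary:

import os

# Defaults mirror the example above; override per team or environment via env vars.
CONFIG = {
    "STALE_THRESHOLD": int(os.getenv("STALE_THRESHOLD", "14")),
    "TARGET_CHANNEL": os.getenv("TARGET_CHANNEL", "#standup"),
    "PRIORITY_THRESHOLD": os.getenv("PRIORITY_THRESHOLD", "high"),
}

prompt = (
    f"Find issues older than {CONFIG['STALE_THRESHOLD']} days "
    f"with priority {CONFIG['PRIORITY_THRESHOLD']} or above, "
    f"and post a summary to {CONFIG['TARGET_CHANNEL']}."
)
print(prompt)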

Performance Optimization

Optimize automations that process large amounts of data; pagination and parallel processing are sketched in code after the techniques below.
Techniques:
  1. Pagination: Process items in batches
Fetch 50 items at a time
Process each batch
Continue until all items processed
  2. Caching: Store frequently accessed data
Cache team member list for 24 hours
Use cached data instead of API calls
Refresh cache when needed
  3. Parallel Processing: Handle independent items concurrently
For each repository (process in parallel):
  - Check status
  - Generate report
Combine reports at the end
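
The pagination and parallel-processing techniques above can be sketched with nothing beyond the standard library; fetch_page, check_status, and generate_report are hypothetical callables standing in for your API calls:

from concurrent.futures import ThreadPoolExecutor

def paginate(fetch_page, page_size=50):
    """Yield items a page at a time until fetch_page returns an empty batch."""
    page = 0
    while True:
        batch = fetch_page(page=page, per_page=page_size)
        if not batch:
            return
        yield from batch
        page += 1

def report_all(repositories, check_status, generate_report, max_workers=4):
    """Build per-repository reports in parallel, then combine them at the end."""
    def build(repo):
        return generate_report(repo, check_status(repo))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(build, repositories))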

Testing and Debugging

Use these strategies to test complex automations; the dry-run approach is sketched in code after the list below.
Testing Approaches:
  1. Dry Run Mode: Add a flag to skip actual actions
If DRY_RUN mode:
  - Log what would be done
  - Don't make actual changes
Otherwise:
  - Execute normally
  2. Test Data: Use specific test markers
Only process items with label "automation-test"
This isolates testing from production
  3. Incremental Testing: Test one part at a time
Phase 1: Just fetch and log data
Phase 2: Add filtering logic
Phase 3: Add action logic
Phase 4: Enable for production
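
A minimal sketch of the dry-run approach, assuming a DRY_RUN environment variable and a hypothetical apply_label action:

import os

DRY_RUN = os.getenv("DRY_RUN", "true").lower() == "true"   # default to the safe mode

def apply_label(item_id, label):
    """Hypothetical action; in dry-run mode it only logs what would be done."""
    if DRY_RUN:
        print(f"[dry-run] would add label '{label}' to {item_id}")
        return
    # Execute normally, e.g. call the real labeling API here.
    print(f"added label '{label}' to {item_id}")

apply_label("PR-42", "automation-test")   # test marker keeps testing isolated from production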
Debugging Tips:
  • Log key decision points
  • Include timestamps in logs
  • Save raw API responses for analysis
  • Test with small datasets first

Security Considerations

Protect sensitive data and prevent security issues.
Best Practices:
  1. Never Log Secrets: Don’t include tokens or passwords in logs
  2. Validate Input: Check data before using it
  3. Limit Scope: Only request necessary permissions
  4. Audit Actions: Log who/what triggered the automation
Example:
Before posting message:
- Check message doesn't contain API keys
- Validate URLs are from trusted domains
- Ensure user has permission for the action
Then post to channel
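
As an illustration of the pre-post checks, here is a sketch with made-up secret patterns and an example allow-list of trusted domains; a real automation would extend both:

import re
from urllib.parse import urlparse

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # illustrative pattern only
    re.compile(r"ghp_[A-Za-z0-9]{36}"),            # example token shape
]
TRUSTED_DOMAINS = {"github.com", "linear.app"}     # example allow-list

def safe_to_post(message):
    """True only if the message has no obvious secrets and links only to trusted domains."""
    if any(pattern.search(message) for pattern in SECRET_PATTERNS):
        return False
    urls = re.findall(r"https?://\S+", message)
    return all(urlparse(url).hostname in TRUSTED_DOMAINS for url in urls)

print(safe_to_post("Release notes: https://github.com/acme/app/releases/tag/v1.4.0"))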

Monitoring and Alerting

Track automation performance and catch issues early.
Key Metrics:
  • Success/failure rate
  • Execution time
  • Items processed
  • API calls made
Alert Conditions:
  • Failure rate above threshold
  • No successful runs in X hours
  • Unusual execution time
  • API rate limit approached
Implementation:
At end of automation:
- Record success/failure
- Log execution time
- If failure rate > 20% in last 24h:
  - Alert team
  - Include recent error messages
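
The implementation above can be approximated with a simple append-only run log; the file name, 20% threshold, and 24-hour window come from the example, and the alert is a print statement standing in for a real notification:

import json
import time
from pathlib import Path

RUNS_FILE = Path("automation_runs.jsonl")       # hypothetical run log, one JSON record per line

def record_run(success, duration_seconds, items_processed):
    """Record success/failure, execution time, and items processed for one run."""
    record = {"ts": time.time(), "success": success,
              "duration": duration_seconds, "items": items_processed}
    with RUNS_FILE.open("a") as handle:
        handle.write(json.dumps(record) + "\n")

def failure_rate(window_hours=24):
    """Fraction of failed runs within the last window_hours (0.0 if no runs)."""
    cutoff = time.time() - window_hours * 3600
    recent = [run for run in map(json.loads, RUNS_FILE.read_text().splitlines())
              if run["ts"] >= cutoff]
    if not recent:
        return 0.0
    return sum(1 for run in recent if not run["success"]) / len(recent)

record_run(success=True, duration_seconds=12.5, items_processed=40)
if failure_rate() > 0.20:
    print("ALERT: failure rate above 20% in the last 24 hours")  # send recent errors to the team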