Creating Your First Agent
Learn how to create and configure powerful AI agents that can handle specialized tasks for your team.
Table of Contents
- What You'll Learn
- Prerequisites
- Understanding Agents
- Creating an Agent
- Configuring System Prompts
- Choosing the Right Model
- Setting Visibility
- Testing Your Agent
- Advanced Configuration
- Best Practices
- Troubleshooting
What You'll Learn
By the end of this guide, you'll know how to:
- ✅ Create AI agents via UI and API
- ✅ Write effective system prompts
- ✅ Select the appropriate model for your use case
- ✅ Configure visibility and access control
- ✅ Test and iterate on agent behavior
- ✅ Set up specialized agents for different tasks
Time to complete: 20-30 minutes
Prerequisites
Before starting, make sure you have:
- ✅ A TeamDay account (Sign up guide)
- ✅ An organization set up (Org setup guide)
- ✅ A Personal Access Token (PAT guide)
- ✅ Basic understanding of AI capabilities
Understanding Agents
What Are Agents?
Agents are AI assistants that:
- Execute tasks based on natural language instructions
- Have specialized knowledge and capabilities
- Can use tools and access data
- Maintain context across conversations
- Work autonomously or interactively
Agent Components
1. Name
- Identifier for the agent
- Descriptive and specific
- Examples: "Code Reviewer", "Content Writer", "Data Analyst"
2. System Prompt
- Instructions that define the agent's behavior
- Establishes role, expertise, and guidelines
- Affects how the agent responds to tasks
3. Model
- The AI model powering the agent
- Different models have different capabilities
- Choice affects speed, cost, and quality
4. Visibility
- Who can see and use the agent
- Controls access and collaboration
- Options: private, organization, public
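Together, these four components make up an agent's configuration. As a rough sketch (a Python dict whose field names mirror the create-agent API payload shown later in this guide; the exact schema may vary), an agent definition looks like this:
agent_config = {
    "name": "Python Code Reviewer",             # 1. Name: descriptive and specific
    "systemPrompt": "You are an expert Python code reviewer...",  # 2. System prompt: role, expertise, guidelines
    "model": "claude-3-5-sonnet-20241022",      # 3. Model: trade-off between speed, cost, and quality
    "visibility": "organization",               # 4. Visibility: private, organization, or public
}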

Creating an Agent
Method 1: Via UI (Recommended for Beginners)
Step 1: Navigate to Agents
- Log in to TeamDay
- Click "Agents" in the sidebar
- Click "+ New Agent" button

Step 2: Enter Basic Information
Name your agent:
- Be specific and descriptive
- Use a name that reflects the agent's purpose
- Examples:
- ✅ Good: "Python Code Reviewer", "Marketing Content Writer"
- ❌ Bad: "Agent 1", "My Agent", "Test"
Add a description (optional but recommended):
- Brief overview of what the agent does
- Helps team members understand when to use it
- Example: "Reviews Python code for best practices, security issues, and optimization opportunities"

Step 3: Select a Model
Choose the AI model that powers your agent:
Available Models:
- Claude 3.5 Sonnet (Recommended)
  - ID: claude-3-5-sonnet-20241022
  - Best balance of speed, quality, and cost
  - 200K token context window
  - Excellent for general tasks
- Claude 3 Opus
  - ID: claude-3-opus-20240229
  - Most capable model
  - Best for complex reasoning
  - Higher cost, slower responses
- Claude 3.5 Haiku
  - ID: claude-3-5-haiku-20241022
  - Fastest and most cost-effective
  - Great for simple tasks
  - 200K context window
Model Selection Guide:
| Use Case | Recommended Model |
|---|---|
| Code review, analysis | Claude 3.5 Sonnet |
| Content writing | Claude 3.5 Sonnet |
| Simple Q&A | Claude 3.5 Haiku |
| Complex research | Claude 3 Opus |
| Data analysis | Claude 3.5 Sonnet |
| Quick responses | Claude 3.5 Haiku |

Step 4: Configure System Prompt
The system prompt defines your agent's personality, expertise, and behavior.
Click "Edit System Prompt" and enter your instructions.
See Configuring System Prompts below for detailed guidance.

Step 5: Set Visibility
Choose who can access this agent:
Options:
- Private (Default)
- Only you can see and use this agent
- Best for personal experiments or sensitive work
- Organization
- All members of your organization can use it
- Best for team collaboration
- Public
- Anyone with the link can view (read-only)
- Execution requires organization membership
- Best for showcasing or sharing demos

Step 6: Create Agent
Click "Create Agent" to finalize.
You'll be redirected to the agent detail page where you can:
- Test the agent
- View execution history
- Modify configuration
- Add tools and plugins

Method 2: Via API
Create Agent Request:
curl -X POST "https://cc.teamday.ai/api/v1/agents" \
  -H "Authorization: Bearer $TEAMDAY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Python Code Reviewer",
    "description": "Reviews Python code for best practices and security",
    "systemPrompt": "You are an expert Python developer with 10+ years of experience. Review code for:\n\n1. Best practices and PEP 8 compliance\n2. Security vulnerabilities\n3. Performance optimizations\n4. Code readability and maintainability\n\nProvide specific, actionable feedback with code examples.",
    "model": "claude-3-5-sonnet-20241022",
    "visibility": "organization"
  }'
Response:
{
  "id": "char_abc123xyz",
  "name": "Python Code Reviewer",
  "description": "Reviews Python code for best practices and security",
  "systemPrompt": "You are an expert Python developer...",
  "model": "claude-3-5-sonnet-20241022",
  "visibility": "organization",
  "organizationId": "org_xyz789",
  "createdAt": "2025-01-15T10:00:00Z",
  "updatedAt": "2025-01-15T10:00:00Z",
  "metadata": {}
}
Save the agent ID (char_abc123xyz) for future API calls.
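If you prefer Python to curl, the same create request can be made with the requests library. This is a minimal sketch assuming the endpoint shown above and a TEAMDAY_API_TOKEN environment variable; add error handling as needed:
import os
import requests

token = os.environ["TEAMDAY_API_TOKEN"]  # same token used in the curl examples

response = requests.post(
    "https://cc.teamday.ai/api/v1/agents",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "name": "Python Code Reviewer",
        "description": "Reviews Python code for best practices and security",
        "systemPrompt": "You are an expert Python developer...",
        "model": "claude-3-5-sonnet-20241022",
        "visibility": "organization",
    },
)
response.raise_for_status()
agent = response.json()
print(agent["id"])  # e.g. char_abc123xyz - keep this for later API calls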
Configuring System Prompts
The system prompt is the most important part of your agent configuration. It defines personality, expertise, and behavior.
Anatomy of a Good System Prompt
1. Role and Expertise
You are a senior software engineer specializing in Python and backend development with 10+ years of experience.
2. Core Responsibilities
Your primary responsibilities are:
- Reviewing code for best practices
- Identifying security vulnerabilities
- Suggesting performance optimizations
- Ensuring code maintainability
3. Communication Style
When providing feedback:
- Be constructive and encouraging
- Provide specific examples
- Explain the "why" behind suggestions
- Prioritize issues by severity
4. Constraints and Guidelines
Guidelines:
- Focus on Python 3.10+ features
- Follow PEP 8 style guide
- Prioritize readability over cleverness
- Consider team coding standards
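If you manage prompts in code, one way to keep these four parts easy to revise independently is to assemble the system prompt from them before creating or updating the agent. A small illustrative sketch (the helper name and structure are not part of the TeamDay API):
def build_system_prompt(role, responsibilities, style, guidelines):
    # Join the four sections described above, separated by blank lines
    sections = [
        role,
        "Your primary responsibilities are:\n" + "\n".join(f"- {r}" for r in responsibilities),
        "When providing feedback:\n" + "\n".join(f"- {s}" for s in style),
        "Guidelines:\n" + "\n".join(f"- {g}" for g in guidelines),
    ]
    return "\n\n".join(sections)

prompt = build_system_prompt(
    role="You are a senior software engineer specializing in Python and backend development.",
    responsibilities=["Reviewing code for best practices", "Identifying security vulnerabilities"],
    style=["Be constructive and encouraging", "Explain the \"why\" behind suggestions"],
    guidelines=["Focus on Python 3.10+ features", "Follow the PEP 8 style guide"],
)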
Example System Prompts
Example 1: Code Reviewer Agent
You are an expert code reviewer with deep knowledge of software engineering best practices, security, and performance optimization.
When reviewing code:
1. **Security**: Identify vulnerabilities (SQL injection, XSS, authentication issues)
2. **Performance**: Spot inefficient algorithms, unnecessary loops, memory leaks
3. **Maintainability**: Check for clear naming, proper abstraction, documentation
4. **Best Practices**: Ensure adherence to language conventions and patterns
Provide feedback in this format:
- 🔴 Critical: Security vulnerabilities or breaking issues
- 🟡 Important: Performance problems or maintainability concerns
- 🟢 Suggestions: Nice-to-have improvements
Always include code examples showing the fix.
Example 2: Content Writer Agent
You are a professional content writer specializing in technical blog posts and marketing copy.
Your writing style:
- Clear and concise
- Engaging and conversational
- Technically accurate but accessible
- SEO-optimized with natural keyword usage
When creating content:
1. Start with a compelling hook
2. Use short paragraphs (2-3 sentences)
3. Include relevant examples and analogies
4. End with a clear call-to-action
Target audience: Software developers and technical decision-makers
Tone: Professional yet friendly, authoritative but approachable
Example 3: Data Analyst Agent
You are a senior data analyst with expertise in statistical analysis, data visualization, and business intelligence.
When analyzing data:
1. Start with exploratory data analysis (EDA)
2. Identify patterns, trends, and anomalies
3. Perform statistical tests when appropriate
4. Create clear, actionable visualizations
5. Provide business recommendations
Present findings in this structure:
- **Summary**: Key insights in 2-3 sentences
- **Analysis**: Detailed breakdown with supporting data
- **Visualization**: Suggest appropriate charts/graphs
- **Recommendations**: Actionable next steps
Use Python (pandas, matplotlib, seaborn) for data work.
System Prompt Best Practices
Do:
- ✅ Be specific about the agent's role and expertise
- ✅ Define clear responsibilities and priorities
- ✅ Specify output format and structure
- ✅ Include examples of desired behavior
- ✅ Set boundaries and constraints
- ✅ Define communication style and tone
Don't:
- ❌ Be too vague ("You are helpful")
- ❌ Include contradictory instructions
- ❌ Make prompts unnecessarily long (keep under 500 words)
- ❌ Forget to specify output format
- ❌ Assume context that isn't provided
Testing and Iterating
Initial Test:
- Create agent with your prompt
- Run 5-10 test queries
- Evaluate responses
Iterate:
- Identify issues (too verbose, missing key info, wrong tone)
- Update system prompt
- Test again
- Repeat until satisfied
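A lightweight way to rerun the same checks after each prompt change is a small script against the execute endpoint (described under API Testing later in this guide). The agent ID and queries below are placeholders:
import os
import requests

AGENT_ID = "char_abc123xyz"  # replace with your agent's ID
TOKEN = os.environ["TEAMDAY_API_TOKEN"]

test_queries = [
    "Hello! What are you designed to do?",
    "Review this code: def add(a, b): return a + b",
    "Can you help me with JavaScript?",  # edge case: outside the agent's specialty
]

for query in test_queries:
    resp = requests.post(
        f"https://cc.teamday.ai/api/v1/agents/{AGENT_ID}/execute",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"message": query},
    )
    resp.raise_for_status()
    print(f"Q: {query}\nA: {resp.json()['message'][:300]}\n")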
Version Control: Keep track of prompt changes:
# Version 1.0 (2025-01-15)
- Initial prompt
# Version 1.1 (2025-01-16)
- Added output format specification
- Clarified tone and style
# Version 1.2 (2025-01-17)
- Reduced verbosity
- Added priority levels for feedback
Choosing the Right Model
Model Comparison
| Model | Context | Speed | Cost | Best For |
|---|---|---|---|---|
| Claude 3.5 Sonnet | 200K | Fast | Medium | General tasks, balanced quality |
| Claude 3 Opus | 200K | Slow | High | Complex reasoning, critical tasks |
| Claude 3.5 Haiku | 200K | Fastest | Low | Simple tasks, high volume |
When to Use Each Model
Claude 3.5 Sonnet (Default - Best for Most Cases)
- Code review and generation
- Content writing
- Data analysis
- Customer support
- General automation
Claude 3 Opus (Premium - Complex Tasks)
- Advanced research and analysis
- Critical decision-making
- Complex problem-solving
- Legal or medical analysis
- High-stakes content
Claude 3.5 Haiku (Economy - High Volume)
- Simple Q&A
- Data classification
- Content moderation
- Quick summaries
- Routing and triage
Switching Models
You can change the model anytime:
Via UI:
- Open agent settings
- Select "Model" dropdown
- Choose new model
- Click "Save"
Via API:
curl -X PATCH "https://cc.teamday.ai/api/v1/agents/char_abc123" \
  -H "Authorization: Bearer $TEAMDAY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-opus-20240229"
  }'
Setting Visibility
Visibility controls who can access and use your agent.
Visibility Options
1. Private
- Who can access: Only you
- Use case: Personal experiments, sensitive work
- Sharing: Cannot be shared
2. Organization
- Who can access: All organization members
- Use case: Team collaboration, shared missions
- Sharing: Automatic for org members
3. Public
- Who can access: Anyone with link (view only)
- Use case: Demos, showcasing, public tools
- Sharing: Link-based sharing
- Note: Execution still requires org membership
Changing Visibility
Via UI:
- Open agent settings
- Select "Visibility" dropdown
- Choose new level
- Click "Save"
Via API:
curl -X PATCH "https://cc.teamday.ai/api/v1/agents/char_abc123" \
  -H "Authorization: Bearer $TEAMDAY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "visibility": "organization"
  }'
Testing Your Agent
Interactive Testing
Via UI - Chat Interface:
- Navigate to agent detail page
- Click "Chat" tab
- Enter test message
- Review response
Test Scenarios:
Scenario 1: Basic Capability
Message: "Hello! What are you designed to do?"
Expected: Agent describes its role and capabilities
Scenario 2: Specific Task
Message: "Review this Python function: [paste code]"
Expected: Detailed code review with specific feedback
Scenario 3: Edge Case
Message: "Can you help me with JavaScript?"
Expected: Agent either helps or explains it's specialized in Python

API Testing
Execute Agent:
curl -X POST "https://cc.teamday.ai/api/v1/agents/char_abc123/execute" \
  -H "Authorization: Bearer $TEAMDAY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Review this code: def get_user(id): return db.query(\"SELECT * FROM users WHERE id=\" + id)"
  }'
Response:
{
  "executionId": "exec_xyz789",
  "message": "🔴 **Critical Security Issue**: SQL Injection Vulnerability\n\nYour code is vulnerable to SQL injection attacks...\n\n**Fix:**\n```python\ndef get_user(id):\n    return db.query(\"SELECT * FROM users WHERE id=?\", (id,))\n```",
  "status": "completed",
  "usage": {
    "inputTokens": 156,
    "outputTokens": 234
  }
}
Evaluation Criteria
Quality Checklist:
- ✅ Responses are accurate and relevant
- ✅ Tone matches system prompt
- ✅ Output format is consistent
- ✅ Agent stays within its defined role
- ✅ Handles edge cases gracefully
Advanced Configuration
Adding Metadata
Store additional information with your agent:
curl -X PATCH "https://cc.teamday.ai/api/v1/agents/char_abc123" \
  -H "Authorization: Bearer $TEAMDAY_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "metadata": {
      "version": "1.2",
      "author": "engineering-team",
      "lastReviewed": "2025-01-15",
      "tags": ["code-review", "python", "security"]
    }
  }'
Temperature and Sampling
Control response randomness (coming soon):
{
  "model": "claude-3-5-sonnet-20241022",
  "temperature": 0.7,
  "topP": 0.9,
  "maxTokens": 2000
}
Temperature:
- 0.0 - Deterministic, focused
- 0.7 - Balanced (default)
- 1.0 - Creative, varied
Custom Instructions
Add context-specific instructions:
{
  "customInstructions": {
    "codeReview": "Focus on security and performance",
    "outputFormat": "markdown with code blocks",
    "prioritization": "critical issues first"
  }
}
Best Practices
1. Start Simple, Iterate
Initial Agent:
You are a code reviewer. Review code for bugs and best practices.
After Testing:
You are an expert Python code reviewer specializing in security and performance.
When reviewing code:
1. Check for security vulnerabilities (SQL injection, XSS, etc.)
2. Identify performance bottlenecks
3. Ensure PEP 8 compliance
4. Verify proper error handling
Provide feedback with:
- Issue severity (🔴 Critical, 🟡 Important, 🟢 Suggestion)
- Specific location in code
- Fixed code example
- Brief explanation
2. Use Descriptive Names
Good Names:
- β "Python Security Auditor"
- β "Marketing Blog Writer"
- β "Customer Support Triager"
Bad Names:
- β "Agent 1"
- β "General Helper"
- β "Test Bot"
3. Document Your Agents
Add descriptions and metadata:
{
  "name": "Python Code Reviewer",
  "description": "Automated code review focusing on security, performance, and best practices for Python 3.10+ codebases",
  "metadata": {
    "purpose": "code-review",
    "languages": ["python"],
    "focusAreas": ["security", "performance", "style"],
    "team": "engineering",
    "version": "2.0"
  }
}
4. Test Thoroughly
Test Matrix:
| Test Type | Example |
|---|---|
| Happy path | Normal, expected input |
| Edge cases | Empty input, very long input |
| Invalid input | Wrong format, missing data |
| Boundary cases | Max length, special characters |
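One way to automate this matrix is a parameterized test suite against the execute endpoint. A hedged sketch using pytest (the cases and assertions are illustrative; how the API responds to empty or malformed input may differ in practice):
import os
import pytest
import requests

AGENT_ID = "char_abc123xyz"  # replace with your agent's ID
TOKEN = os.environ["TEAMDAY_API_TOKEN"]

TEST_CASES = [
    ("happy path", "Review this code: def add(a, b): return a + b"),
    ("edge case", "x = 1\n" * 2000),                 # very long input
    ("invalid input", "{not: valid, python]"),        # wrong format
    ("boundary case", "s = 'é' * 10_000  # special characters"),
]

@pytest.mark.parametrize("label, message", TEST_CASES)
def test_agent_handles_input(label, message):
    resp = requests.post(
        f"https://cc.teamday.ai/api/v1/agents/{AGENT_ID}/execute",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"message": message},
    )
    assert resp.status_code == 200, label
    assert resp.json().get("status") == "completed", label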
5. Version Control
Track changes to your agents:
# Agent Evolution Log
## v1.0 (2025-01-10)
- Initial creation
- Basic code review capabilities
## v1.1 (2025-01-12)
- Added security focus
- Improved output format
## v1.2 (2025-01-15)
- Added severity levels
- Included code examples in feedback
- Optimized for Python 3.10+
6. Monitor Performance
Track key metrics:
- Average response time
- Token usage per execution
- User satisfaction ratings
- Common failure patterns
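A simple starting point is to log the usage block returned by each execution (field names as in the execute response shown earlier; the aggregation below is only an illustrative sketch):
from collections import defaultdict

totals = defaultdict(int)

def record_usage(execution):
    # "execution" is the JSON body returned by the execute endpoint
    usage = execution.get("usage", {})
    totals["executions"] += 1
    totals["inputTokens"] += usage.get("inputTokens", 0)
    totals["outputTokens"] += usage.get("outputTokens", 0)

# Call record_usage(resp.json()) after each execution, then review totals periodically.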
7. Create Specialized Agents
Instead of one general agent, create specialized ones:
Instead of:
- β "General Purpose Helper"
Create:
- β "Code Reviewer" (for code tasks)
- β "Content Writer" (for content tasks)
- β "Data Analyst" (for data tasks)
Benefits:
- Better performance on specific tasks
- Clearer system prompts
- Easier to maintain and improve
Troubleshooting
Agent Not Responding as Expected
Problem: Agent gives generic responses
Solutions:
- Make system prompt more specific
- Add concrete examples of desired output
- Test with different phrasings
- Consider switching to more capable model
Responses Too Verbose
Problem: Agent writes too much text
Solution: Add to system prompt:
Keep responses concise (under 200 words unless more detail is requested).
Use bullet points for lists.
Agent Goes Off-Topic
Problem: Agent doesn't stay within defined role
Solution: Add clear boundaries to system prompt:
IMPORTANT: You ONLY review Python code. If asked about other topics, politely redirect: "I'm specialized in Python code review. For other topics, please consult a different agent."
Inconsistent Output Format
Problem: Agent responses vary in structure
Solution: Specify exact format in system prompt:
Always structure your response as:
## Summary
[1-2 sentence overview]
## Issues Found
[List with severity markers]
## Recommendations
[Specific action items]
High Token Usage
Problem: Executions using too many tokens
Solutions:
- Optimize system prompt (remove unnecessary text)
- Set max tokens limit
- Switch to more efficient model (Haiku)
- Break complex tasks into smaller steps
Next Steps
Now that you've created your first agent, explore:
1. Set Up a Workspace
- Create a space for your agent to work with files
- Guide: Space Setup
2. Enable Git Integration
- Let your agent work with repositories
- Guide: Git Integration
3. Install MCP Plugins
- Add tools and integrations to extend capabilities
- Guide: MCP Plugins
4. Create Sub-Agents
- Build specialized agents that work together
- Guide:
5. Schedule Automated Tasks
- Set up recurring missions
- Guide: Automation
Learning Resources
- API Reference - Complete agent API docs
- Prompts & Instructions - Advanced prompt engineering
Happy agent building! 🤖