A comprehensive TypeScript framework for building sophisticated AI crew orchestration systems. Create specialized agents that work together to solve complex problems through modular, composable workflows.
The Ninja Agents SDK follows a hierarchical architecture inspired by martial arts concepts:
Persona-driven orchestrators that manage multiple specialized agents (Kata) and coordinate complex workflows. Each Shinobi embodies a unique role with a rich backstory and domain expertise.
const travelExpert = new Shinobi(runtime, {
role: 'Expert Travel Assistant',
description: 'Knowledgeable travel expert with comprehensive planning abilities',
backstory: '15+ years in travel industry, helped thousands plan perfect trips...',
katas: [weatherAnalyst, costCalculator, destinationAdvisor]
});
Specialized AI agents focused on specific tasks and workflows. Each Kata encapsulates domain expertise and can use multiple tools (Shuriken) to accomplish its goals.
const weatherAnalyst = new Kata(runtime, {
model: 'gpt-4o-mini',
title: 'Weather Analysis Specialist',
description: 'Analyze weather conditions and provide travel recommendations',
shurikens: [weatherShuriken, calculatorShuriken]
});
Atomic, reusable capabilities that agents can invoke. Each Shuriken wraps a specific function, API call, or capability behind validated parameters.
const weatherShuriken = new Shuriken(
'get_weather',
'Get current weather information for a specific city',
z.object({
city: z.string(),
unit: z.enum(['celsius', 'fahrenheit']).optional()
}),
async (params) => {
// Implementation logic
return { temperature: 25, condition: 'Sunny' };
}
);
Workflow orchestration system with a fluent API for building complex agent workflows with conditional branching and parallel execution.
const analysisWorkflow = new Dojo(runtime)
.start(dataCollector)
.then(primaryAnalyst)
.parallel([technicalReviewer, businessReviewer])
.if(needsDeepDive, deepAnalysisSpecialist)
.then(synthesizer);
Agent networks for coordinating multiple Shinobi with different execution strategies: sequential, parallel, competitive, collaborative, or conditional.
const analysisTeam = new Clan(runtime, {
name: 'Market Analysis Team',
description: 'Comprehensive market analysis from multiple perspectives',
strategy: 'collaborative',
shinobi: [technicalAnalyst, businessAnalyst, userResearcher]
});
The SDK internally leverages a sophisticated reasoning system (the Thought System) that is fully encapsulated and never exposed externally. This internal architecture lets agents perform complex reasoning while keeping the external API clean and simple.
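For illustration, here is a minimal sketch of how that separation looks in practice; it reuses the thoughtModule options shown later in this README, and the runtime, title, and query are placeholder values. Reasoning is configured declaratively, and the only external surface is the usual execute() call.
import { Kata, KataRuntime } from 'ninja-agents';

// Assumes a runtime constructed as in the Quick Start section below.
declare const runtime: KataRuntime;

// The internal reasoning layer is configured declaratively via thoughtModule;
// it is never imported or invoked directly.
const reasoningKata = new Kata(runtime, {
  model: 'gpt-4o-mini',
  title: 'Reasoning Specialist',
  description: 'Answer questions using internal chain-of-thought reasoning',
  thoughtModule: {
    strategy: 'chain-of-thought',
    maxSteps: 5,
    temperature: 0.6
  }
});

// All reasoning happens behind this single call.
const answer = await reasoningKata.execute('Compare supervised and unsupervised learning');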
# Core installation
npm install ninja-agents zod openai
# Full installation with optional logging and memory features
npm install ninja-agents zod openai @supabase/supabase-js chalk dotenv
# Clone the repository
git clone https://github.com/ninja-agents/ninja-agents
cd ninja-agents
# Install dependencies
npm install
# Build the package
npm run build
# Run tests
npm test
Create a .env file in your project root:
# Required
OPENAI_API_KEY=your_openai_api_key_here
# Optional - for logging and memory features
SUPABASE_URL=your_supabase_project_url
SUPABASE_ANON_KEY=your_supabase_anon_key
ENABLE_DATABASE_LOGGING=true
ENABLE_FILE_LOGGING=false
DEFAULT_LOG_FILE_PATH=./logs/execution.jsonl
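If you installed the optional dotenv dependency from the full install command above, a minimal way to load this file before constructing any SDK services is shown below; the fail-fast check mirrors the security best practice later in this README.
// Load .env into process.env before any SDK services are constructed.
import 'dotenv/config';

// Fail fast if the one required variable is missing.
if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY environment variable is required');
}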
import { Shinobi, Kata, Shuriken, KataRuntime, Logger, Memory } from 'ninja-agents';
import { z } from 'zod';
import OpenAI from 'openai';
// Initialize core services
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const memory = new Memory({
supabaseUrl: process.env.SUPABASE_URL,
supabaseKey: process.env.SUPABASE_ANON_KEY,
enableDatabaseLogging: true,
enableFileLogging: false
});
const logger = new Logger('info', 'MyApp', memory);
const runtime = new KataRuntime(openai, logger, memory);
// Create specialized capabilities
const calculatorShuriken = new Shuriken(
'calculate',
'Perform mathematical calculations',
z.object({
operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
a: z.number(),
b: z.number()
}),
(params) => {
switch (params.operation) {
case 'add': return params.a + params.b;
case 'subtract': return params.a - params.b;
case 'multiply': return params.a * params.b;
case 'divide': return params.a / params.b;
}
}
);
// Create a math expert agent
const mathExpert = new Shinobi(runtime, {
role: 'Mathematics Expert',
description: 'Expert mathematician and problem solver',
backstory: 'PhD in Mathematics with 20 years of teaching experience',
shurikens: [calculatorShuriken],
katas: [
{
model: 'gpt-4o-mini',
title: 'Problem Solver',
description: 'Solve mathematical problems step by step'
},
{
model: 'gpt-4o-mini',
title: 'Concept Explainer',
description: 'Explain mathematical concepts clearly'
}
]
});
// Execute complex problem solving
const result = await mathExpert.execute('What is 15% of 240, then multiply by 3?');
console.log(result.result.finalAnswer);
// Create a research kata with thought integration
const researchKata = new Kata(runtime, {
model: 'gpt-4o-mini',
title: 'Research Specialist',
description: 'Conduct comprehensive research with advanced reasoning',
thoughtModule: {
strategy: 'chain-of-thought',
maxSteps: 5,
temperature: 0.6
},
shurikens: [webSearchShuriken],
parameters: {
temperature: 0.6,
max_tokens: 1500
}
});
const research = await researchKata.execute('Research the latest AI developments');
console.log('Enhanced reasoning:', research.result.enhancedReasoning);
console.log('Thought process:', research.result.reasoning);
// Define multiple expert agents
const technicalAnalyst = {
role: 'Technical Analyst',
description: 'Expert in technical analysis and system architecture',
backstory: 'Senior technical architect with 15+ years of experience',
thoughtModules: [
{ strategy: 'step-by-step', maxSteps: 4 }
],
katas: [
{
model: 'gpt-4o-mini',
title: 'Technical Evaluator',
description: 'Evaluate technical feasibility and architecture'
}
]
};
const businessAnalyst = {
role: 'Business Analyst',
description: 'Strategic business analysis expert',
backstory: 'MBA with 12+ years in business strategy and market analysis',
thoughtModules: [
{ strategy: 'multi-perspective', perspectives: ['cost', 'benefit', 'risk'] }
],
katas: [
{
model: 'gpt-4o-mini',
title: 'Market Researcher',
description: 'Conduct market analysis and competitive research'
}
]
};
// Create a collaborative clan
const analysisTeam = new Clan(runtime, {
name: 'Multi-Perspective Analysis Team',
description: 'Collaborative team providing comprehensive analysis',
strategy: 'collaborative',
shinobi: [technicalAnalyst, businessAnalyst],
maxConcurrency: 2,
timeout: 300000 // 5 minutes
});
// Execute collaborative analysis
const teamResult = await analysisTeam.execute('Analyze the market opportunity for AI-powered customer service');
console.log('Collaborative synthesis:', teamResult.result.synthesis);
// Create a structured workflow
const researchWorkflow = new Dojo(runtime, {
name: 'Comprehensive Research Pipeline',
description: 'Multi-stage research and analysis workflow',
errorHandling: 'continue',
maxRetries: 2
});
// Build the workflow with fluent API
const workflow = researchWorkflow
.start(dataCollector)
.then(primaryResearcher)
.parallel([
technicalAnalyst,
marketAnalyst,
competitorAnalyst
])
.if(
(context) => context.complexity > 0.8,
deepAnalysisSpecialist
)
.then(synthesizer);
// Execute the complete workflow
const workflowResult = await workflow.execute('Comprehensive analysis of emerging AI technologies');
console.log('Workflow steps:', workflowResult.result.steps.length);
console.log('Final result:', workflowResult.result.finalResult);
Clan class for multi-agent coordination
Dojo class for workflow orchestration
Shinobi and Kata with Thought System integration
thoughtModule configuration for individual Kata
thoughtModules configuration for Shinobi orchestrators
// Advanced runtime setup
// Configure memory with custom settings
const memory = new Memory({
supabaseUrl: process.env.SUPABASE_URL,
supabaseKey: process.env.SUPABASE_ANON_KEY,
enableDatabaseLogging: true,
enableFileLogging: false,
defaultFilePath: './logs/ninja-agents.jsonl'
});
// Configure logger with tracking
const logger = new Logger('info', 'MyApp', memory);
// Create the runtime from the configured services
const runtime = new KataRuntime(openai, logger, memory);
// Configure different models for different tasks
const researchKata = new Kata(runtime, {
model: 'gpt-4o', // Use GPT-4 for complex research
title: 'Senior Researcher',
description: 'Advanced research with GPT-4',
parameters: {
temperature: 0.7,
max_tokens: 2000,
top_p: 0.9
}
});
const summaryKata = new Kata(runtime, {
model: 'gpt-4o-mini', // Use mini for summaries
title: 'Summary Writer',
description: 'Efficient summarization',
parameters: {
temperature: 0.3,
max_tokens: 500
}
});
// Configure advanced reasoning
const advancedShinobi = new Shinobi(runtime, {
role: 'Strategic Analyst',
description: 'Multi-perspective strategic analysis',
backstory: 'Expert in complex reasoning and analysis',
thoughtModules: [
{
strategy: 'chain-of-thought',
maxSteps: 5,
temperature: 0.6,
validation: true
},
{
strategy: 'reflection',
maxSteps: 3,
temperature: 0.4
},
{
strategy: 'multi-perspective',
perspectives: ['technical', 'business', 'user'],
temperature: 0.7
}
],
katas: [/* kata configurations */]
});
// Get comprehensive execution statistics
const stats = await memory.getExecutionStats();
console.log(`Total executions: ${stats.total_executions}`);
console.log(`Total cost: $${stats.total_cost.toFixed(6)}`);
console.log(`Average execution time: ${stats.avg_execution_time}ms`);
// Get agent-specific statistics
const agentStats = await shinobi.getTotalBillingInfo();
console.log(`Agent cost: $${shinobi.getTotalEstimatedCost().toFixed(6)}`);
console.log(`Token usage:`, shinobi.getTotalTokenUsage());
// Query specific logs
const recentErrors = await memory.queryLogs({
level: 'error',
limit: 10,
start_time: new Date(Date.now() - 24 * 60 * 60 * 1000) // Last 24 hours
});
// Get logs for specific agent
const shinobiLogs = await memory.queryLogs({
shinobi_id: shinobi.getId(),
limit: 50
});
// Get performance logs
const performanceLogs = await memory.queryLogs({
level: 'info',
limit: 100
});
// Set up real-time monitoring
const logger = new Logger('info', 'Production', memory);
// Track execution metrics
logger.timing('agent_execution', executionTime);
logger.billing('$0.001234', 'gpt-4o-mini');
logger.tokenUsage({ prompt: 150, completion: 75, total: 225 });
// Use environment variables
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
// Validate API key presence
if (!process.env.OPENAI_API_KEY) {
throw new Error('OPENAI_API_KEY environment variable is required');
}
// Always validate Shuriken inputs
const secureShuriken = new Shuriken(
'secure_operation',
'Secure operation with validation',
z.object({
input: z.string().min(1).max(1000),
type: z.enum(['safe_type_1', 'safe_type_2'])
}),
async (params) => {
// Additional validation
if (containsUnsafeContent(params.input)) {
throw new Error('Unsafe input detected');
}
return processSecurely(params);
}
);
// Configure secure memory
const secureMemory = new Memory({
supabaseUrl: process.env.SUPABASE_URL,
supabaseKey: process.env.SUPABASE_ANON_KEY,
enableDatabaseLogging: true,
enableFileLogging: false // Avoid file logging in production
});
// Use appropriate models for tasks
const efficientWorkflow = new Dojo(runtime)
.start(new Kata(runtime, {
model: 'gpt-4o-mini', // Fast for initial processing
title: 'Quick Analyzer'
}))
.if(
(context) => context.needsDeepAnalysis,
new Kata(runtime, {
model: 'gpt-4o', // Powerful for complex analysis
title: 'Deep Analyzer'
})
);
// Use Clan for parallel processing
const parallelTeam = new Clan(runtime, {
name: 'Parallel Analysis Team',
strategy: 'parallel',
shinobi: [analyst1, analyst2, analyst3],
maxConcurrency: 3
});
// Implement result caching
const cachedResults = new Map();
const cachingShuriken = new Shuriken(
'cached_operation',
'Operation with caching',
schema,
async (params) => {
const key = JSON.stringify(params);
if (cachedResults.has(key)) {
return cachedResults.get(key);
}
const result = await expensiveOperation(params);
cachedResults.set(key, result);
return result;
}
);
import { describe, it, expect } from 'vitest';
import { Shuriken, Kata, Shinobi } from 'ninja-agents';
describe('Ninja Agents', () => {
it('should create and execute a Shuriken', async () => {
const testShuriken = new Shuriken(
'test_operation',
'Test operation',
z.object({ value: z.number() }),
(params) => params.value * 2
);
const result = await testShuriken.execute({ value: 5 });
expect(result.result).toBe(10);
});
it('should execute a Kata with Shuriken', async () => {
const kata = new Kata(runtime, {
model: 'gpt-4o-mini',
title: 'Test Kata',
description: 'Testing kata execution',
shurikens: [testShuriken]
});
const result = await kata.execute('Test query');
expect(result.result).toBeDefined();
});
});
describe('Integration Tests', () => {
it('should orchestrate multiple agents', async () => {
const clan = new Clan(runtime, {
name: 'Test Clan',
strategy: 'collaborative',
shinobi: [testShinobi1, testShinobi2]
});
const result = await clan.execute('Integration test query');
expect(result.result.strategy).toBe('collaborative');
});
});
describe('Performance Tests', () => {
it('should complete execution within time limit', async () => {
const startTime = Date.now();
const result = await shinobi.execute('Performance test');
const executionTime = Date.now() - startTime;
expect(executionTime).toBeLessThan(30000); // 30 seconds
expect(result.executionTime).toBeDefined();
});
});
// Check API key configuration
if (!process.env.OPENAI_API_KEY) {
console.error('Missing OPENAI_API_KEY environment variable');
}
// Test API connection
try {
const response = await openai.models.list();
console.log('API connection successful');
} catch (error) {
console.error('API connection failed:', error.message);
}
// Test memory configuration
const memory = new Memory(config);
if (!memory.isDatabaseLoggingEnabled()) {
console.warn('Database logging is disabled');
}
// Test database connection
try {
await memory.log({
level: 'info',
message: 'Test log entry'
});
console.log('Memory logging successful');
} catch (error) {
console.error('Memory logging failed:', error.message);
}
// Debug Shuriken validation
const validation = shuriken.validate(params);
if (!validation.success) {
console.error('Validation failed:', validation.error);
// Fix parameters based on error message
}
// Configure timeouts
const clan = new Clan(runtime, {
name: 'Timeout Test',
strategy: 'parallel',
shinobi: [shinobi1, shinobi2],
timeout: 60000 // 1 minute timeout
});
// Enable debug logging
const debugLogger = new Logger('debug', 'DebugMode', memory);
const debugRuntime = new KataRuntime(openai, debugLogger, memory);
// Use debug runtime for troubleshooting
const debugShinobi = new Shinobi(debugRuntime, config);
We welcome contributions! Please see our Contributing Guide for details.
# Clone the repository
git clone https://github.com/ninja-agents/ninja-agents
cd ninja-agents
# Install dependencies
npm install
# Build the package
npm run build
# Run tests
npm test
# Generate documentation
npm run docs:generate
npm test
MIT License - see LICENSE file for details.
Built with ❤️ for the AI development community
Transform your ideas into intelligent, collaborative AI workflows with the Ninja Agents SDK.