AI Integration
Complete guide to using LaraDashboard's AI-powered content generation with OpenAI and Claude, including configuration and the Agentic CMS command center.
LaraDashboard features an Agentic CMS with AI-powered content generation capabilities. This guide covers configuring AI providers, using the command center, and integrating AI into your workflows.
Overview
What is Agentic CMS?
LaraDashboard's Agentic CMS is an AI-powered command center that allows you to:
- Generate content using natural language commands
- Enhance and modify existing content
- Automate repetitive content tasks
- Get AI assistance for writing and editing
Supported Providers
| Provider | Models | Best For |
|---|---|---|
| OpenAI | GPT-4, GPT-4 Turbo, GPT-3.5 | General content generation |
| Anthropic | Claude 3 Opus, Sonnet, Haiku | Long-form content, analysis |
Configuration
Environment Variables
Add AI credentials to your .env file:
```bash
# Default AI Provider
AI_DEFAULT_PROVIDER=openai

# OpenAI Configuration
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_MAX_TOKENS=2000
OPENAI_TEMPERATURE=0.7

# Claude Configuration
ANTHROPIC_API_KEY=sk-ant-...
CLAUDE_MODEL=claude-3-opus-20240229
CLAUDE_MAX_TOKENS=2000
```
Config File
Detailed configuration in config/ai.php:
```php
<?php

return [
    /*
    |--------------------------------------------------------------------------
    | Default AI Provider
    |--------------------------------------------------------------------------
    */
    'default' => env('AI_DEFAULT_PROVIDER', 'openai'),

    /*
    |--------------------------------------------------------------------------
    | OpenAI Configuration
    |--------------------------------------------------------------------------
    */
    'openai' => [
        'api_key' => env('OPENAI_API_KEY'),
        'model' => env('OPENAI_MODEL', 'gpt-4'),
        'max_tokens' => (int) env('OPENAI_MAX_TOKENS', 2000),
        'temperature' => (float) env('OPENAI_TEMPERATURE', 0.7),
        'organization' => env('OPENAI_ORGANIZATION'),
    ],

    /*
    |--------------------------------------------------------------------------
    | Claude Configuration
    |--------------------------------------------------------------------------
    */
    'claude' => [
        'api_key' => env('ANTHROPIC_API_KEY'),
        'model' => env('CLAUDE_MODEL', 'claude-3-opus-20240229'),
        'max_tokens' => (int) env('CLAUDE_MAX_TOKENS', 2000),
    ],

    /*
    |--------------------------------------------------------------------------
    | Command Center Settings
    |--------------------------------------------------------------------------
    */
    'commands' => [
        'enabled' => env('AI_COMMANDS_ENABLED', true),
        'rate_limit' => env('AI_RATE_LIMIT', 20), // per minute
        'log_commands' => env('AI_LOG_COMMANDS', true),
    ],
];
```
Using the Command Center
Accessing the Command Center
The AI Command Center is available when editing content:
- Navigate to any post/page editor
- Click the AI Assistant button (🤖)
- The command center panel opens
Command Syntax
Enter natural language commands:
```text
Generate a 500-word blog post about Laravel best practices
Summarize this article in 3 bullet points
Rewrite this paragraph to be more engaging
```
Available Commands
Content Generation
| Command | Description |
|---|---|
| Generate [description] | Create new content |
| Write [description] | Alias for generate |
| Create [type] about [topic] | Generate specific content type |
Examples:
```text
Generate a product description for a wireless mouse
Write an introduction for a blog about AI
Create a headline for an article about climate change
```
Content Enhancement
| Command | Description |
|---|---|
| Improve [selection] | Enhance selected text |
| Rewrite [selection] | Rewrite in a different style |
| Expand [selection] | Make content longer |
| Summarize [selection] | Create a summary |
| Simplify [selection] | Make content easier to read |
Examples:
```text
Improve the introduction paragraph
Rewrite this in a professional tone
Expand the features section
Summarize this article in 100 words
```
SEO Optimization
| Command | Description |
|---|---|
| Generate meta description | Create SEO description |
| Suggest keywords for [topic] | Get keyword ideas |
| Optimize title for SEO | Improve title |
Translation
| Command | Description |
|---|---|
| Translate to [language] | Translate content |
| Localize for [region] | Adapt for region |
Streaming Responses
AI responses stream in real-time for better UX:
```text
// Response appears progressively
"Generating... ▌"
"Here is your content: Lorem ipsum... ▌"
"Here is your content: Lorem ipsum dolor sit amet..."
```
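Conceptually, streamed output is a sequence of chunks handed to the consumer as they arrive. A minimal plain-PHP sketch using a generator (the chunking function is illustrative, not LaraDashboard's actual implementation):

```php
<?php

// Illustrative sketch: stream content chunk-by-chunk using a PHP generator.
// A real provider client would yield chunks as they arrive over the network.
function streamChunks(string $content, int $chunkSize = 10): \Generator
{
    foreach (str_split($content, $chunkSize) as $chunk) {
        yield $chunk;
    }
}

// The consumer appends each chunk to the visible output progressively.
$output = '';
foreach (streamChunks('Here is your content: Lorem ipsum dolor sit amet.') as $chunk) {
    $output .= $chunk; // in the UI, this would re-render after each chunk
}

echo $output;
```

The same pattern underlies the `generateStream()` callback shown later: the UI re-renders after every chunk instead of waiting for the full response.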
Programmatic Usage
AiContentGeneratorService
Use AI in your code:
```php
use App\Services\AiContentGeneratorService;
use Illuminate\Http\Request;

class ContentController extends Controller
{
    public function __construct(
        private AiContentGeneratorService $aiService
    ) {}

    public function generateContent(Request $request)
    {
        $result = $this->aiService->generate([
            'prompt' => $request->prompt,
            'max_tokens' => 1000,
            'temperature' => 0.7,
        ]);

        return response()->json([
            'content' => $result->content,
            'usage' => $result->usage,
        ]);
    }
}
```
Available Methods
```php
// Generate content
$result = $aiService->generate([
    'prompt' => 'Write a product description',
    'context' => 'E-commerce website selling electronics',
]);

// Modify existing text
$result = $aiService->modify([
    'text' => $existingContent,
    'instruction' => 'Make this more concise',
]);

// Generate with streaming
$aiService->generateStream([
    'prompt' => 'Generate a long article',
    'callback' => function ($chunk) {
        echo $chunk;
    },
]);

// Use specific provider
$result = $aiService->withProvider('claude')->generate([
    'prompt' => 'Analyze this code...',
]);
```
API Endpoints
```http
# Get available providers
GET /api/v1/ai/providers

# Generate content
POST /api/v1/ai/generate-content
{
    "prompt": "Write a blog post about...",
    "provider": "openai",
    "options": {
        "max_tokens": 1000
    }
}

# Modify text
POST /api/v1/ai/modify-text
{
    "text": "Original text here...",
    "instruction": "Make it shorter"
}

# Stream generation
POST /admin/ai/command/process-stream
{
    "command": "Generate article about AI",
    "context": {}
}
```
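From PHP, the generate-content request body is plain JSON. A small sketch of building it (the field names follow the request shape above; the prompt value is illustrative):

```php
<?php

// Build the JSON body for POST /api/v1/ai/generate-content.
// Field names follow the request shape documented above.
$payload = [
    'prompt' => 'Write a blog post about Laravel queues',
    'provider' => 'openai',
    'options' => [
        'max_tokens' => 1000,
    ],
];

$json = json_encode($payload, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES);

// In a Laravel app you would typically send this with the Http facade
// or any HTTP client, with your API token in the Authorization header.
echo $json;
```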
Command Logging
Viewing Logs
All AI commands are logged for auditing:
- Navigate to Settings → AI → Command History
- View command, response, user, and timestamp
- Filter by date, user, or command type
Log Structure
```php
// ai_command_logs table
[
    'user_id' => 1,
    'command' => 'Generate blog post...',
    'provider' => 'openai',
    'model' => 'gpt-4',
    'parameters' => ['max_tokens' => 1000],
    'response_tokens' => 850,
    'duration_ms' => 2500,
    'status' => 'success',
    'created_at' => '2024-01-15 10:30:00',
]
```
Cost Management
Token Usage
Monitor token usage:
```php
// Get usage statistics
$stats = $aiService->getUsageStats([
    'user_id' => auth()->id(),
    'period' => 'month',
]);

// Returns
[
    'total_tokens' => 50000,
    'total_requests' => 150,
    'estimated_cost' => 1.50,
]
```
Rate Limiting
Configure rate limits per user:
```php
// config/ai.php
'rate_limits' => [
    'default' => 20,  // requests per minute
    'premium' => 100, // for premium users
],
```
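Conceptually, a per-minute limit is a counter per user per time window. A minimal fixed-window sketch in plain PHP (LaraDashboard's actual enforcement goes through Laravel's rate limiter; this class is purely illustrative):

```php
<?php

// Illustrative fixed-window rate limiter: at most $limit requests per
// user per 60-second window. Real Laravel apps should use the
// RateLimiter facade, which handles storage and expiry for you.
class FixedWindowLimiter
{
    private array $counters = [];

    public function __construct(private int $limit) {}

    public function attempt(int $userId, int $timestamp): bool
    {
        $window = intdiv($timestamp, 60); // current one-minute window
        $key = "$userId:$window";
        $this->counters[$key] = ($this->counters[$key] ?? 0) + 1;

        return $this->counters[$key] <= $this->limit;
    }
}

$limiter = new FixedWindowLimiter(limit: 2);
$allowed = [
    $limiter->attempt(1, 0),  // 1st request in window: allowed
    $limiter->attempt(1, 10), // 2nd request, same window: allowed
    $limiter->attempt(1, 20), // 3rd request, same window: denied
    $limiter->attempt(1, 60), // new window: allowed again
];
```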
Cost Estimation
```php
// Estimate before generating
$estimate = $aiService->estimateCost([
    'prompt' => $prompt,
    'max_tokens' => 1000,
]);

// Returns
[
    'input_tokens' => 150,
    'output_tokens' => 1000,
    'estimated_cost' => 0.045,
]
```
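The arithmetic behind such an estimate is straightforward: tokens divided by 1,000 times the per-1K rate, summed for input and output. A sketch with made-up placeholder rates (check your provider's current pricing; these numbers are not OpenAI's or Anthropic's):

```php
<?php

// Illustrative cost arithmetic: cost = tokens / 1000 * price-per-1K.
// The rates below are made-up placeholders, not real provider pricing.
function estimateCost(int $inputTokens, int $outputTokens, float $inputPricePer1K, float $outputPricePer1K): float
{
    $cost = ($inputTokens / 1000) * $inputPricePer1K
          + ($outputTokens / 1000) * $outputPricePer1K;

    return round($cost, 4);
}

$estimated = estimateCost(
    inputTokens: 150,
    outputTokens: 1000,
    inputPricePer1K: 0.03,  // placeholder rate
    outputPricePer1K: 0.06, // placeholder rate
);
```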
Prompt Engineering
System Prompts
Configure default system prompts:
```php
// In service or config
$systemPrompts = [
    'content_writer' => "You are a professional content writer...",
    'seo_expert' => "You are an SEO specialist...",
    'translator' => "You are a professional translator...",
];
```
Context Injection
Provide context for better results:
```php
$result = $aiService->generate([
    'prompt' => 'Write a product description',
    'context' => [
        'brand' => 'TechCorp',
        'product_type' => 'Wireless Mouse',
        'features' => ['ergonomic', 'long battery', 'multi-device'],
        'tone' => 'professional',
        'target_audience' => 'business professionals',
    ],
]);
```
Template Prompts
Create reusable prompt templates:
```php
// Store templates
$templates = [
    'blog_post' => [
        'system' => 'You are a skilled blog writer...',
        'template' => "Write a {length} word blog post about {topic}. Include {sections} sections. Tone: {tone}.",
    ],
    'product_description' => [
        'system' => 'You are an e-commerce copywriter...',
        'template' => "Write a compelling product description for {product}. Highlight: {features}. Target audience: {audience}.",
    ],
];

// Use template
$result = $aiService->generateFromTemplate('blog_post', [
    'length' => 800,
    'topic' => 'Laravel Performance',
    'sections' => 5,
    'tone' => 'informative',
]);
```
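Filling a template is simple placeholder substitution. A plain-PHP sketch of how `{placeholder}` interpolation might work (`generateFromTemplate` handles this internally; the `fillTemplate` helper here is hypothetical):

```php
<?php

// Illustrative {placeholder} interpolation for prompt templates.
function fillTemplate(string $template, array $values): string
{
    return preg_replace_callback(
        '/\{(\w+)\}/',
        // Substitute known placeholders; leave unknown ones intact.
        fn (array $m) => (string) ($values[$m[1]] ?? $m[0]),
        $template
    );
}

$prompt = fillTemplate(
    'Write a {length} word blog post about {topic}. Tone: {tone}.',
    ['length' => 800, 'topic' => 'Laravel Performance', 'tone' => 'informative']
);
```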
Error Handling
Common Errors
```php
try {
    $result = $aiService->generate($params);
} catch (AiProviderException $e) {
    // Provider API error
    Log::error('AI Provider Error: ' . $e->getMessage());
} catch (RateLimitException $e) {
    // Rate limit exceeded
    return response()->json([
        'error' => 'Too many requests. Please wait.',
        'retry_after' => $e->retryAfter,
    ], 429);
} catch (InsufficientCreditsException $e) {
    // No API credits
    return response()->json([
        'error' => 'API credits exhausted.',
    ], 402);
}
```
Fallback Providers
Configure fallback when primary fails:
```php
// config/ai.php
'fallback' => [
    'enabled' => true,
    'order' => ['openai', 'claude'],
    'on_errors' => ['rate_limit', 'timeout', 'server_error'],
],
```
Security Considerations
API Key Security
- Never commit API keys to version control
- Use environment variables
- Rotate keys periodically
- Use separate keys for production/development
Content Moderation
Enable content filtering:
```php
// config/ai.php
'moderation' => [
    'enabled' => true,
    'block_categories' => ['hate', 'violence', 'sexual'],
    'log_blocked' => true,
],
```
User Permissions
Control AI access:
```php
// Permissions
'ai.use'       // Can use AI features
'ai.settings'  // Can configure AI
'ai.view_logs' // Can view command logs
```
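In practice, gate AI features on the `ai.use` permission before invoking the service. A minimal plain-PHP sketch of the check (in a Laravel app you would call `$user->can('ai.use')` instead; this only shows the shape):

```php
<?php

// Illustrative permission gate. Real code would use Laravel's
// authorization layer ($user->can('ai.use')) rather than raw arrays.
function canUseAi(array $userPermissions): bool
{
    return in_array('ai.use', $userPermissions, true);
}

$editor = ['posts.edit', 'ai.use'];
$viewer = ['posts.view'];

$editorAllowed = canUseAi($editor); // true
$viewerAllowed = canUseAi($viewer); // false
```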
Hooks
AI Hooks
```php
use TorMorten\Eventy\Facades\Events as Eventy;

// Before AI generation
Eventy::action('ai.before_generate', function ($params, $user) {
    // Log or modify params
});

// After AI generation
Eventy::action('ai.after_generate', function ($result, $params, $user) {
    // Post-process result
});

// Filter AI prompt
$prompt = Eventy::filter('ai.prompt', $prompt, $context);

// Filter AI result
$content = Eventy::filter('ai.result', $content, $command);
```
Best Practices
Prompt Design
- Be specific and clear
- Provide context
- Specify format requirements
- Set appropriate length limits
- Include examples when helpful
Performance
- Use streaming for long content
- Cache repeated queries
- Batch similar requests
- Monitor token usage
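Caching repeated queries can be as simple as keying responses by a hash of the prompt and its options. A plain-PHP memoization sketch (a Laravel app would likely use `Cache::remember`; the `PromptCache` class and generator callable here are illustrative):

```php
<?php

// Illustrative prompt-level cache: identical prompt + options hit the
// in-memory store instead of calling the provider again.
class PromptCache
{
    private array $store = [];
    public int $misses = 0;

    public function remember(string $prompt, array $options, callable $generate): string
    {
        $key = md5($prompt . json_encode($options)); // cache key from inputs

        if (!isset($this->store[$key])) {
            $this->misses++;
            $this->store[$key] = $generate();
        }

        return $this->store[$key];
    }
}

$cache = new PromptCache();
$generate = fn () => 'Generated content'; // stand-in for a provider call

$first  = $cache->remember('Write a tagline', ['max_tokens' => 50], $generate);
$second = $cache->remember('Write a tagline', ['max_tokens' => 50], $generate);
// Only one cache miss: the second call was served from the store.
```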
Quality
- Review AI-generated content
- Edit and personalize
- Check facts and accuracy
- Maintain brand voice
Troubleshooting
No Response
- Check API key is valid
- Verify network connectivity
- Check rate limits
- Review error logs
Poor Quality Output
- Improve prompt specificity
- Provide more context
- Adjust temperature (lower = more focused)
- Try different model
Slow Generation
- Reduce max_tokens
- Use a faster model (e.g., GPT-3.5 instead of GPT-4)
- Enable streaming
- Check network latency
Next Steps
- Content Management - Use AI in content
- Email Templates - AI for emails
- Module Development - Build AI modules