Setting Up AI Models
Available Models
WPiko Chatbot supports various OpenAI models, each with different capabilities and pricing:
Primary Models
- GPT-4.1
  - Full GPT-4.1 capability
  - Best for complex tasks
  - Highest quality responses
  - Most resource-intensive
- GPT-4.1 Mini
  - Balanced performance
  - Good for most use cases
  - Efficient resource usage
  - Recommended for general chatbot operations
- GPT-4.1 Nano
  - Efficient and cost-effective
  - Fast response times
  - Suitable for basic interactions
  - Best for high-volume, straightforward queries
- GPT-4o
  - Full-size version of the GPT-4o model
  - Enhanced capabilities
  - Best for complex interactions
- GPT-4o Mini
  - Optimized for business use
  - Balanced performance and cost
  - Recommended for most use cases
- GPT-4-Turbo
  - Advanced language understanding
  - More nuanced and detailed responses
  - Higher accuracy but more expensive
- GPT-3.5-Turbo
  - Fast and cost-effective
  - Good for general conversations
  - Basic understanding and responses
Model Selection
Choosing the Right Model
- Navigate to AI Configuration > Edit Assistant
- Find the “Model” dropdown menu
- Select from available models based on your needs (a minimal API-level sketch of what this choice maps to follows this list):
  - Consider complexity of tasks
  - Balance cost vs. performance
  - Account for response time requirements
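WPiko Chatbot handles the API call for you once a model is selected, but it can help to see what the dropdown choice corresponds to. The sketch below uses the official OpenAI Python SDK; the client setup and the example prompt are illustrative and not part of the plugin.

```python
# Minimal sketch: what a "Model" dropdown choice corresponds to at the
# OpenAI API level. WPiko Chatbot makes this call internally; the prompt
# and client setup here are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # the model chosen in the dropdown
    messages=[
        {"role": "system", "content": "You are a helpful website assistant."},
        {"role": "user", "content": "What are your shipping options?"},
    ],
)

print(response.choices[0].message.content)
```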
Model Capabilities
- GPT-4.1: Choose when you need:
  - Maximum accuracy
  - Complex reasoning
  - Advanced language understanding
- GPT-4.1 Mini: Ideal for:
  - General website support
  - Product inquiries
  - Customer service
- GPT-4.1 Nano: Best for:
  - Quick responses
  - Basic information retrieval
  - High-traffic websites
- GPT-4o/GPT-4o Mini: Perfect for:
  - Balanced performance
  - Cost-effective operation
  - Consistent response quality
- GPT-4-Turbo: Suitable for:
  - Real-time chat
  - Fast-paced interactions
  - Dynamic responses
- GPT-3.5-Turbo: Good for:
  - Basic chat functionality
  - Cost-sensitive operations
  - Simple Q&A
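The use-case guide above boils down to a lookup from query type to model. The sketch below is illustrative only: the tier names and the mapping are hypothetical, not WPiko Chatbot settings, and the model IDs are the OpenAI identifiers behind the display names used in this guide.

```python
# Illustrative mapping from use-case tier to OpenAI model ID, mirroring the
# capability guide above. Tier names are hypothetical, not plugin settings.
MODEL_BY_TIER = {
    "complex_reasoning": "gpt-4.1",        # maximum accuracy, complex tasks
    "general_support":   "gpt-4.1-mini",   # product inquiries, customer service
    "high_volume":       "gpt-4.1-nano",   # quick answers, high-traffic sites
    "balanced":          "gpt-4o-mini",    # cost-effective, consistent quality
    "simple_qa":         "gpt-3.5-turbo",  # basic chat functionality
}

def pick_model(tier: str) -> str:
    """Return a model ID for a tier, falling back to the balanced option."""
    return MODEL_BY_TIER.get(tier, "gpt-4o-mini")

print(pick_model("general_support"))  # gpt-4.1-mini
```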
Performance Considerations
Response Speed
- Faster models: GPT-4-Turbo, GPT-4.1 Nano
- Balanced: GPT-4.1 Mini, GPT-4o Mini
- More thorough but slower: GPT-4.1
Cost Efficiency
- Consider your usage volume
- Monitor token consumption (the sketch below shows where these numbers come from)
- Balance quality vs. cost
- Start with lower-tier models and upgrade as needed
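Token consumption is reported on every OpenAI API response, which is what usage monitoring ultimately counts. A minimal sketch with the OpenAI Python SDK; the prompt and model are illustrative.

```python
# Sketch: reading token usage from a single API response.
# The prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize your return policy."}],
)

usage = response.usage
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")
```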
File Search Compatibility
Important Note: File search functionality is not available on the GPT-4 model. For file search capabilities:
- Recommended models: GPT-4o Mini, GPT-4.1 Mini, and GPT-4.1 Nano (an example of enabling file search on one of these models is sketched below).
- Always test the model to ensure it meets your needs.
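For reference, file search is an OpenAI assistant tool, and the model passed in must support it. Below is a hedged sketch of enabling the file_search tool through OpenAI's Assistants API on one of the recommended models; the assistant name and instructions are illustrative, and the plugin's own internal setup may differ.

```python
# Sketch: creating an OpenAI assistant with the file_search tool enabled on a
# file-search-compatible model. Name and instructions are illustrative; this
# is not necessarily how WPiko Chatbot configures its assistant internally.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Example support assistant",
    instructions="Answer questions using the attached knowledge files.",
    model="gpt-4o-mini",               # a model that supports file search
    tools=[{"type": "file_search"}],
)

print(assistant.id)
```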
Best Practices
Model Selection Tips
- Start with GPT-4o Mini for most use cases
- Test different models with your specific content (a quick side-by-side test is sketched below)
- Monitor performance and costs
- Adjust based on user feedback
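One practical way to test models against your own content is to send the same question to two candidates and compare the answers and token counts. A minimal sketch, run outside the plugin, using the OpenAI Python SDK; the question and the model pair are illustrative.

```python
# Sketch: comparing two candidate models on the same question.
# The question and the model pair are illustrative.
from openai import OpenAI

client = OpenAI()
QUESTION = "A customer asks whether you ship internationally. How do you reply?"

for model in ("gpt-4o-mini", "gpt-4.1-mini"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
    print(f"total tokens: {response.usage.total_tokens}\n")
```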
Optimization Strategies
- Use the simplest model that meets your needs
- Review performance regularly
- Monitor usage patterns
- Adjust model selection based on:
  - Response quality
  - Speed requirements
  - Budget constraints
  - User satisfaction
Troubleshooting
Common Issues
- Slow Responses
  - Consider switching to a faster model
  - Check server resources
- Quality Issues
  - Upgrade to a more capable model
  - Review system instructions
  - Test with different prompts
- Cost Management
  - Monitor usage patterns
  - Set usage alerts (a simple budget check is sketched after this list)
  - Consider model downgrades during low-traffic periods
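A usage alert can be as simple as comparing a running token count against a daily budget. The sketch below is hypothetical: the budget value and the notify() helper are placeholders, not WPiko Chatbot features.

```python
# Sketch: a simple daily token budget check. The threshold and the notify()
# helper are hypothetical placeholders, not plugin features.
DAILY_TOKEN_BUDGET = 200_000  # illustrative limit

def notify(message: str) -> None:
    """Stand-in for email/Slack alerting."""
    print(f"ALERT: {message}")

def check_usage(tokens_used_today: int) -> None:
    if tokens_used_today >= DAILY_TOKEN_BUDGET:
        notify(f"Token budget exceeded: {tokens_used_today}/{DAILY_TOKEN_BUDGET}")
    elif tokens_used_today >= 0.8 * DAILY_TOKEN_BUDGET:
        notify(f"80% of token budget used: {tokens_used_today}/{DAILY_TOKEN_BUDGET}")

check_usage(165_000)  # prints the 80% warning
```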
Updates and Maintenance
- Regularly check for new model versions (the sketch below lists the models available to your API key)
- Test new models in a staging environment
- Keep track of model performance metrics
- Update documentation with model changes
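New model versions can be spotted by listing the models your OpenAI API key has access to and comparing the result with the plugin's Model dropdown. A minimal sketch with the OpenAI Python SDK; the name filter is illustrative.

```python
# Sketch: listing models available to your API key, filtered to GPT families.
# The prefix filter is illustrative.
from openai import OpenAI

client = OpenAI()

for model in client.models.list():
    if model.id.startswith(("gpt-4", "gpt-3.5")):
        print(model.id)
```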