Groq Setup

Configure Groq LLM for ultra-fast AI inference.

Overview

Groq provides the LLM infrastructure for ClawChan, offering sub-second response times through its LPU (Language Processing Unit) technology. This guide covers setup and configuration.

Getting Your API Key

  • Visit console.groq.com and create an account
  • Navigate to the API Keys section
  • Generate a new API key
  • Copy and store the key securely
  • Never share or commit your API key
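
To confirm the key is accepted before wiring it into ClawChan, a quick sanity check against Groq's OpenAI-compatible models endpoint works; this is a minimal sketch, assuming a Node.js/TypeScript environment with the key already exported:

  // Quick one-off check that a Groq API key is accepted.
  // Reads the key from the environment rather than hard-coding it.
  const res = await fetch("https://api.groq.com/openai/v1/models", {
    headers: { Authorization: `Bearer ${process.env.GROQ_API_KEY}` },
  });
  console.log(res.ok ? "API key accepted" : `Key check failed: HTTP ${res.status}`);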

Configuration

Add your Groq API key to the environment:

  • Open your .env file in the project root
  • Add GROQ_API_KEY with your key value
  • Save the file and restart the development server
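
A minimal sketch of what this looks like, assuming a Node.js/TypeScript project (the dotenv import is an assumption; use whatever mechanism your setup already has for loading .env):

  # .env (project root - never commit this file)
  GROQ_API_KEY=your-groq-api-key

  // At startup, fail fast if the key is missing.
  import "dotenv/config";

  const apiKey = process.env.GROQ_API_KEY;
  if (!apiKey) {
    throw new Error("GROQ_API_KEY is not set; add it to your .env file");
  }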

Available Models

Groq supports several models for different use cases:

  • llama-3.1-70b-versatile - Recommended for general use, excellent reasoning
  • llama-3.1-8b-instant - Faster responses for simple tasks
  • mixtral-8x7b-32768 - Alternative with longer context window
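
A basic chat completion against Groq's OpenAI-compatible REST endpoint looks roughly like the sketch below; the helper name is made up, and the model string can be swapped for any of the options above:

  // Send a single-turn prompt to Groq and return the assistant's reply.
  async function askGroq(prompt: string): Promise<string> {
    const response = await fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
      },
      body: JSON.stringify({
        model: "llama-3.1-70b-versatile", // or llama-3.1-8b-instant for simpler tasks
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (!response.ok) {
      throw new Error(`Groq request failed: HTTP ${response.status}`);
    }
    const data = await response.json();
    return data.choices[0].message.content;
  }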

Performance Optimization

  • Use streaming for real-time response display (see the streaming sketch after this list)
  • Cache common prompts to reduce latency
  • Manage context window size for efficiency
  • Batch requests when possible
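
For the streaming point above, one way to do it is with the groq-sdk npm package, which mirrors the OpenAI client interface; treat the package and call shape as assumptions if ClawChan talks to the REST API directly:

  import Groq from "groq-sdk";

  const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

  // Print tokens as they arrive instead of waiting for the full completion.
  async function streamReply(prompt: string): Promise<void> {
    const stream = await groq.chat.completions.create({
      model: "llama-3.1-8b-instant",
      messages: [{ role: "user", content: prompt }],
      stream: true,
    });
    for await (const chunk of stream) {
      process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
    }
  }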

Rate Limits

  • Free tier: 30 requests per minute, 14,400 per day
  • Monitor usage in the Groq console
  • Implement retry logic for rate limit errors (a backoff sketch follows this list)
  • Contact Groq for higher limits if needed
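
The retry bullet above can be handled with exponential backoff on HTTP 429; this is a rough sketch, and the Retry-After handling assumes the standard header rather than anything Groq-specific:

  // Retry a request when the API answers 429, backing off between attempts.
  async function withRetry(send: () => Promise<Response>, maxAttempts = 5): Promise<Response> {
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
      const response = await send();
      if (response.status !== 429) return response;
      // Prefer the server's Retry-After hint, otherwise back off exponentially.
      const waitSeconds = Number(response.headers.get("retry-after")) || 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
    }
    throw new Error("Rate limit retries exhausted");
  }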

Troubleshooting

  • Invalid API key - Verify the key is correct and active
  • Rate limit errors - Reduce request frequency
  • Timeout errors - Check network connection
  • Model unavailable - Try an alternative model
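
For handling these cases in code, a small helper can map HTTP status codes to the notes above; the specific codes are typical for OpenAI-compatible APIs and are an assumption here rather than documented Groq behavior:

  // Rough mapping of common HTTP statuses to the troubleshooting notes above.
  function describeGroqError(status: number): string {
    switch (status) {
      case 401:
        return "Invalid API key - verify the key is correct and active";
      case 429:
        return "Rate limit hit - reduce request frequency or retry with backoff";
      case 404:
        return "Model unavailable - try an alternative model";
      default:
        return `Unexpected error (HTTP ${status}) - check your network connection and try again`;
    }
  }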