Overview
Refactron v1.0.15+ integrates Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) to provide context-aware, intelligent code refactoring and documentation generation.
Core Components
LLM Orchestrator
Coordinates between analyzer, retriever, and LLM backends
RAG System
Vector database (ChromaDB) for code indexing and retrieval
LLM Backends
Support for Groq, local models, and custom providers
Safety Gate
Validates LLM-generated code for syntax and safety
Setup
Prerequisites
AI features require additional dependencies (already included with Refactron):
- ChromaDB for vector storage
- Sentence Transformers for embeddings
- Groq API access (or alternative LLM provider)
Configuration
Set your LLM API key in .refactron.yaml:
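A minimal sketch of the relevant section follows; the key names here are assumptions for illustration, not Refactron's documented schema. Alternatively, export GROQ_API_KEY in your shell (see Troubleshooting below).

```
ai:
  provider: groq             # see "Available LLM Providers" below
  model: llama3-70b-8192
  api_key_env: GROQ_API_KEY  # the key itself stays in your environment
```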
RAG Indexing
Index Your Codebase
Before using AI features, index your project:
Check Index Status
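Both steps use the rag subcommand. `refactron rag index` is the indexing command referenced elsewhere in this guide; the `rag status` subcommand shown for checking the index is an assumption.

```
refactron rag index     # build the index
refactron rag status    # inspect the index (subcommand name is an assumption)
```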
Search Codebase
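A semantic query over the index might look like this; the `rag search` subcommand is an assumption, patterned on `refactron rag index`:

```
refactron rag search "retry logic for failed HTTP requests"
```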
Semantic search retrieves code by meaning rather than exact text.
AI-Powered Refactoring
Enable AI Suggestions
Use the --ai flag to enable LLM-powered refactoring.
How It Works
Example
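A typical session might look like this. The `refactor` subcommand name is an assumption; `--ai` and `--preview` are the flags described in this guide.

```
refactron refactor src/app.py --ai --preview   # review suggestions first
refactron refactor src/app.py --ai             # apply after review
```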
AI Documentation Generation
Generate Docstrings
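Docstring generation is driven from the CLI; the `docgen` subcommand in this sketch is an assumption, not a command confirmed by the docs.

```
refactron docgen src/app.py --preview   # subcommand name is an assumption
```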
Automatically create docstrings using AI.
Line-Specific Suggestions
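A line-targeted request might look like this; the `suggest` subcommand and `--line` flag are assumptions shown only for illustration.

```
refactron suggest src/app.py --line 42   # subcommand and flag are assumptions
```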
Get AI suggestions for specific lines.
Available LLM Providers
Groq (Recommended)
Fast, cloud-based LLM provider with a free tier. Models:
- llama3-70b-8192 - Best quality
- llama3-8b-8192 - Faster, good quality
- mixtral-8x7b-32768 - Long context window
Custom Providers
Bring your own LLM provider. Configure it in .refactron.yaml.
RAG Workflow
How RAG Enhances Refactoring
- Parsing: Code files split into semantic chunks (classes, methods)
- Embedding: Chunks converted to vector representations
- Indexing: Vectors stored in ChromaDB with metadata
- Retrieval: Relevant chunks retrieved when analyzing issues
- Context: LLM receives project-specific context for better suggestions
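The five steps can be sketched with a toy, dependency-free stand-in. Refactron itself uses ChromaDB and Sentence Transformers; the bag-of-words "embedding" and cosine ranking below are purely illustrative of the pipeline shape, not the real implementation.

```python
import math
import re

def chunk(source: str) -> list[str]:
    """Step 1 (parsing): split a module into top-level def/class chunks."""
    parts = re.split(r"\n(?=def |class )", source)
    return [p.strip() for p in parts if p.strip()]

def embed(text: str) -> dict[str, float]:
    """Step 2 (embedding): toy bag-of-words vector. Real systems use a
    sentence-transformer model producing dense vectors."""
    counts: dict[str, float] = {}
    for token in re.findall(r"\w+", text.lower()):
        counts[token] = counts.get(token, 0.0) + 1.0
    return counts

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Similarity between two sparse vectors."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(source: str) -> list[tuple[dict, str]]:
    """Step 3 (indexing): store (vector, chunk) pairs."""
    return [(embed(c), c) for c in chunk(source)]

def retrieve(index: list[tuple[dict, str]], query: str, k: int = 1) -> list[str]:
    """Step 4 (retrieval): return the k most similar chunks. These become
    the project-specific context handed to the LLM (step 5)."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(item[0], qv), reverse=True)
    return [c for _, c in ranked[:k]]

module = '''
def parse_config(path):
    """Read a YAML config file."""

class UserRepository:
    """Data access for users."""
'''
index = build_index(module)
print(retrieve(index, "read yaml configuration file")[0])
```

The natural-language query matches the `parse_config` chunk even though it shares no exact phrase with the code, which is the property that makes retrieved context useful to the LLM.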
Update Index
Re-index after significant code changes.
Feedback and Learning
Provide feedback on AI suggestions to improve quality.
AI suggestions integrate with Pattern Learning to improve over time.
Configuration Options
- LLM provider (groq, custom)
- Model identifier for the provider
- Temperature for generation (0.0 - 1.0, lower = more deterministic)
- Maximum tokens in the LLM response
- Enable the RAG system
- Directory for RAG index storage
- Sentence transformer model for embeddings
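Put together, the options above might look like this in .refactron.yaml. Every key name here is an assumption shown only to illustrate the options; Refactron's actual schema may differ.

```
ai:
  provider: groq                      # LLM provider: groq or custom
  model: llama3-70b-8192              # model identifier
  temperature: 0.2                    # 0.0 - 1.0, lower = more deterministic
  max_tokens: 1024                    # cap on the LLM response
rag:
  enabled: true                       # toggle the RAG system
  index_dir: .refactron/rag           # where the index is stored
  embedding_model: all-MiniLM-L6-v2   # sentence-transformers model
```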
Best Practices
Keep Index Updated
Re-run refactron rag index after significant code changes for accurate context.
Use Higher-Parameter Models
Models like Llama 3 70B provide better refactoring logic than smaller models
Always Preview AI Suggestions
Use --preview to review AI-generated code before applying.
Provide Feedback
Record feedback to improve AI suggestions over time
Test After Applying
Run your test suite after applying AI refactorings
Troubleshooting
API Key Not Found
Ensure GROQ_API_KEY is set; export it in your shell profile for persistence.
RAG Index Not Found
Create index first:
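Use the indexing command from this guide:

```
refactron rag index
```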
Slow AI Responses
- Use smaller models (llama3-8b-8192)
- Reduce max_tokens
- Limit context retrieval chunks
