Amigo 1.0 (Coding Specialist)
Your AI Coding Partner
Created by Jan Francis Israel
Part of the Swordfish Project
Model Details
Model Description
Amigo 1.0 is a specialized AI assistant fine-tuned for code generation and debugging. Built on CodeLlama-7B with LoRA adapters, Amigo excels at:
- Writing clean, efficient code in multiple programming languages
- Debugging complex codebases
- Explaining code logic and algorithms
- Providing code examples and implementations
Amigo means "Friend" in Spanish - your reliable coding companion.
- Developed by: Jan Francis Israel (The Swordfish)
- Model type: Causal Language Model with LoRA fine-tuning (PEFT)
- Language(s): English (code-focused)
- License: MIT
- Finetuned from: CodeLlama-7B-hf
Model Sources
- Repository: Part of the Swordfish Project
- Demo: Amihan 1.0 Space (Ensemble with Bandila)
- Sister Model: Bandila 1.0 (Reasoning Specialist)
Uses
Direct Use
Amigo 1.0 is designed for:
- Code Generation: Writing functions, classes, and complete programs
- Debugging Assistance: Identifying and fixing bugs in code
- Code Explanation: Understanding complex algorithms and patterns
- Programming Education: Learning best practices and patterns
Recommended Use with Amihan Ensemble
For best results, use Amigo alongside Bandila 1.0 through the Amihan Ensemble, which intelligently routes queries to the appropriate specialist.
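As an illustration only, a router like the sketch below could send code-flavored queries to Amigo and everything else to Bandila. This is a hypothetical keyword heuristic, not the actual Amihan implementation; `route_query` and the Bandila repository ID are assumptions.

# Hypothetical router sketch: the real Amihan ensemble may route differently.
CODE_KEYWORDS = {"code", "function", "debug", "bug", "implement", "python", "class"}

def route_query(query: str) -> str:
    """Return the model repo ID a query should be sent to."""
    words = set(query.lower().split())
    if words & CODE_KEYWORDS:
        return "swordfish7412/Amigo_1.0"    # coding specialist
    return "swordfish7412/Bandila_1.0"      # reasoning specialist (assumed repo ID)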
Out-of-Scope Use
- Not suitable for: Production-critical code without human review
- Limitations: May generate code with bugs or security vulnerabilities
- Important: Always review and test generated code before deployment
How to Get Started
Installation
pip install transformers peft torch bitsandbytes accelerate
(bitsandbytes and accelerate are required for the 4-bit loading and device_map="auto" used below.)
Basic Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model (4-bit quantized)
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    load_in_4bit=True,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Load Amigo LoRA adapter
model = PeftModel.from_pretrained(base_model, "swordfish7412/Amigo_1.0")
tokenizer = AutoTokenizer.from_pretrained("swordfish7412/Amigo_1.0")

# Generate code
prompt = "Instruction: Write a Python function to calculate factorial\nInput: \nOutput: "
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
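The decode above prints the prompt together with the completion. If you want only the generated text, one common post-processing step (our addition, not part of the model card's pipeline) is to slice off the prompt tokens before decoding:

# Continues from the snippet above; keeps only the newly generated tokens.
prompt_len = inputs["input_ids"].shape[1]
generated = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(generated)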
Training Details
Training Data
Amigo 1.0 was trained on:
- HumanEval (164 samples): Python code generation tasks
- Identity Dataset (495 samples): Custom identity and capability descriptions
- Total: 659 training samples across 3.62 epochs (see the formatting sketch after this list)
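The inference prompt in the usage section ("Instruction: ...\nInput: ...\nOutput: ...") suggests the training samples were serialized with a matching template. The helper below is a hypothetical reconstruction of that formatting, not the actual preprocessing script:

def format_example(instruction: str, inp: str, output: str) -> str:
    # Hypothetical template, inferred from the inference prompt shown earlier.
    return f"Instruction: {instruction}\nInput: {inp}\nOutput: {output}"

# Example: a HumanEval-style task rendered as training text.
text = format_example("Write a Python function to calculate factorial", "", "def factorial(n): ...")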
Training Procedure
Training Configuration:
- Method: LoRA (Low-Rank Adaptation) fine-tuning with 4-bit quantization
- Base Model: codellama/CodeLlama-7b-hf
- Training Steps: 300
- Training Time: ~17 minutes on RTX A5000 (24GB)
- Hardware: RunPod Cloud GPU (RTX A5000)
- Framework: HuggingFace Transformers + PEFT
Hyperparameters (sketched in code after this list):
- Batch Size: 2
- Gradient Accumulation: 4
- Learning Rate: 2e-4
- Max Length: 512 tokens
- LoRA Rank: 32
- LoRA Alpha: 64
- Optimizer: paged_adamw_8bit
- FP16: True
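As a rough illustration, these settings map onto a Transformers + PEFT setup like the sketch below. The target modules and any detail not listed above are assumptions, so read this as an approximation of the configuration rather than the actual training script.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
import torch

# 4-bit NF4 quantization, as described under Technical Specifications.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# LoRA rank/alpha from the hyperparameter list; target_modules is an assumption.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, lora_config)

args = TrainingArguments(
    output_dir="amigo-lora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    max_steps=300,
    fp16=True,
    optim="paged_adamw_8bit",
)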
Training Results
- Initial Loss: 13.81
- Final Loss: 12.68
- Training Speed: 3.35s/step
- Model Size: 129MB (LoRA adapter only)
Identity & Capabilities
Amigo 1.0 knows its identity and purpose:
Name: Amigo 1.0
Creator: Jan Francis Israel (The Swordfish)
Role: Super Debugger AI - Coding Specialist
Specialties: Code generation, debugging, algorithm implementation
Evaluation
Testing Results
Amigo successfully generates:
- ✓ Python functions with proper docstrings
- ✓ Recursive algorithms (factorial, Fibonacci)
- ✓ Clean, readable code
- ✓ Correct identity responses
Example Outputs
Query: "Write a Python function to calculate factorial"
Amigo's Response:
def factorial(n):
    """
    This function returns the factorial of a given number.
    """
    return 1 if n == 0 else n * factorial(n - 1)
Bias, Risks, and Limitations
Known Limitations
- Code Quality: May generate inefficient or buggy code
- Security: Does not guarantee secure code practices
- Context: Limited to 512 tokens per query
- Languages: Primarily trained on Python; output in other languages may be less accurate
Recommendations
- Always review generated code
- Test code thoroughly before production use
- Use appropriate security scanning tools
- Treat this model as a coding assistant, not a replacement for developers
Environmental Impact
- Hardware: RunPod RTX A5000 (24GB)
- Training Time: ~17 minutes
- Power Consumption: Minimal (single GPU, short training)
- Carbon Footprint: Negligible due to short training duration
Technical Specifications
Model Architecture
- Base: CodeLlama-7B (7 billion parameters)
- Adapter: LoRA with rank 32
- Quantization: 4-bit (nf4) via bitsandbytes
- Adapter Size: 129MB
- Total Parameters (with base): ~7B (an optional merge sketch follows this list)
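Because the adapter is only 129MB on top of the shared base weights, you can optionally merge it into the base model for standalone deployment. A minimal sketch, assuming the base is loaded in fp16 (merging into 4-bit quantized weights is not straightforward); the output path is illustrative:

from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "swordfish7412/Amigo_1.0")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("amigo-merged")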
Compute Infrastructure
- Provider: RunPod Cloud
- GPU: NVIDIA RTX A5000 (24GB VRAM)
- Training Framework: PyTorch + HuggingFace Transformers
- Quantization: bitsandbytes 4-bit
Filipino AI Squad
Amigo is part of a trio of specialized AI models:
- **Amigo 1.0** (You are here) - Coding Specialist
- **Bandila 1.0** - Reasoning Specialist
- **Amihan 1.0** - Intelligent Ensemble
Citation
@misc{amigo2024,
  author       = {Jan Francis Israel},
  title        = {Amigo 1.0: Super Debugger AI - Coding Specialist},
  year         = {2024},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/swordfish7412/Amigo_1.0}},
  note         = {Part of the Swordfish Project}
}
Model Card Authors
Jan Francis Israel (The Swordfish)
License
MIT License - Free to use with attribution
Part of the Swordfish Project