Makandal: Continued Pre-training from qwen3-0.6b

Model Details

This model was produced by Palmis Labs AI by continuing the pre-training of qwen3-0.6b.

Model Description

  • Developed by: Palmis Labs AI
  • Funded by: Jean Sauvenel Beaudry
  • Model type: GPT (Generative Pre-trained Transformer)
  • Language(s) (NLP): Haitian Creole
  • License: MIT
  • Model size: 0.6B parameters
  • Architecture: qwen3 (a quick config and parameter-count check is sketched below)

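As a quick sanity check on the architecture and size listed above, the checkpoint's configuration and parameter count can be inspected after loading. This is a minimal sketch; it assumes the jsbeaudry/makandal-v2 checkpoint referenced in the Direct Use section below and only reads publicly available metadata.

from transformers import AutoConfig, AutoModelForCausalLM

# Inspect the architecture reported in the checkpoint config
config = AutoConfig.from_pretrained("jsbeaudry/makandal-v2")
print(config.model_type)  # expected to report "qwen3"

# Count parameters to confirm the ~0.6B size
model = AutoModelForCausalLM.from_pretrained("jsbeaudry/makandal-v2")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
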
Direct Use

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

def generate(model, tokenizer, prompt, device):
    # Tokenize the prompt and move the input tensors to the target device
    inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(device)
    # Sample a continuation; repetition penalties reduce looping output
    output = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        repetition_penalty=1.2,
        no_repeat_ngram_size=3,
        temperature=0.9,
        top_k=40,
        top_p=0.85,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("jsbeaudry/makandal-v2")
model = AutoModelForCausalLM.from_pretrained("jsbeaudry/makandal-v2")

# Set device
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Generate text
prompt = "matematik"
response = generate(model, tokenizer, prompt, device)
print(response)

# Example output:
# Matematik se yon disiplin matematik ki konsantre sou kalkil, estatistik, ak analiz matematik.
# Li pèmèt nou konprann enfòmasyon ak fòmèlman analize done pou jwenn pwopriyete oswa fòmèlman verifye yon konpreyansyon.
# (English: Mathematics is a discipline focused on calculation, statistics, and mathematical analysis.
#  It lets us understand information and formally analyze data to find properties or formally verify an understanding.)

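For quick experiments, the same checkpoint can also be driven through the transformers text-generation pipeline. This is a minimal sketch using the same sampling settings as above, not a required interface.

import torch
from transformers import pipeline

# Text-generation pipeline; use the GPU if one is available
generator = pipeline(
    "text-generation",
    model="jsbeaudry/makandal-v2",
    device=0 if torch.cuda.is_available() else -1,
)

result = generator(
    "matematik",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.9,
    top_k=40,
    top_p=0.85,
    repetition_penalty=1.2,
)
print(result[0]["generated_text"])
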
Out-of-Scope Use

This model should NOT be used for:

  • Critical decision-making systems
  • Any application requiring reliable or factual outputs
  • Commercial deployment without significant additional training

Bias, Risks, and Limitations

  • Insufficient training data: Only 4.7 MB of training data used
  • Limited training time: Only 4.5 hours of training
  • High hallucination rate: Model frequently generates inaccurate or nonsensical content
  • Language coverage: Limited Haitian Creole language understanding due to minimal dataset
  • Bias: May reflect biases present in the small training dataset

Recommendations

  • Do not rely on outputs for factual information
  • Supervise usage in educational settings

Training Infrastructure

  • GPU: Tesla T4 (15GB)
  • Framework: Transformers/PyTorch (a hedged training-setup sketch follows below)
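
The continued pre-training run itself is not reproduced here, but a setup along these lines fits the Tesla T4 / Transformers / PyTorch stack listed above. The base checkpoint id, the corpus file creole_corpus.txt, and all hyperparameters below are placeholders, not the values actually used for Makandal.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the base checkpoint and continue causal-LM pre-training
base_id = "Qwen/Qwen3-0.6B"  # assumed base checkpoint id
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Hypothetical plain-text Haitian Creole corpus; not the actual Makandal data
dataset = load_dataset("text", data_files={"train": "creole_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Causal language modeling objective, so no masking
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="makandal-continued-pretrain",
    per_device_train_batch_size=2,    # small batch to fit a 15 GB T4
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    fp16=True,                        # mixed precision supported on a Tesla T4
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()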

Citation

@misc{makandal2025,
  title={Makandal-pretrain: An Educational Haitian Creole Language Model},
  author={Jean Sauvenel Beaudry},
  year={2025},
  howpublished={\url{https://huggingface.co/jsbeaudry/makandal-pre-trained}},
  note={Educational demonstration model}
}

Glossary

Makandal: Named after François Makandal, an 18th-century Haitian revolutionary leader, symbolizing the model's connection to Haitian culture and education.
