Create README.md
README.md
ADDED
This is a CLIP model fine-tuned with LoRA for the Turkish language.

You can get more information (and code 🎉) on how to train or use the model on my [github].

[github]: https://github.com/kesimeg/LORA-turkish-clip
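The actual training code lives in the linked repository. Purely as an illustrative sketch, a LoRA adapter can be attached to CLIP with the `peft` library roughly as below; the rank, alpha, and target modules shown are assumptions for illustration, not the settings used for this model:

```python
# Illustrative sketch only: attaching LoRA adapters to CLIP with peft.
# The hyperparameters (r, lora_alpha, target_modules) are assumptions,
# not the values used to train kesimeg/lora-turkish-clip.
from transformers import CLIPModel
from peft import LoraConfig, get_peft_model

base = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections in both towers
    lora_dropout=0.1,
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```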
# How to use the model?

You can use the model as shown below:

```python
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

# Load the base CLIP model and attach the Turkish LoRA adapter
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
model.load_adapter("kesimeg/lora-turkish-clip")
model.eval()

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

img = Image.open("dog.png")  # A dog image
# Turkish captions: "A dog in the grass.", "A dog.", "A bird in the grass."
inputs = processor(
    text=["Çimenler içinde bir köpek.", "Bir köpek.", "Çimenler içinde bir kuş."],
    images=img,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)      # probability of each caption matching the image
print(probs)
```
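The printed `probs` tensor contains one probability per caption; the highest entry corresponds to the caption that best matches the image. As a small follow-up (not part of the original example), you could pick out that caption like this:

```python
# Illustrative follow-up: select the caption with the highest probability.
texts = ["Çimenler içinde bir köpek.", "Bir köpek.", "Çimenler içinde bir kuş."]
best = texts[probs.argmax(dim=1).item()]
print(best)  # for a dog photo, expected to be one of the dog captions
```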