---
license: mit
task_categories:
- object-detection
language:
- en
tags:
- food
- ingredient
- recipe
- object-detection
size_categories:
- 1K<n<10K
---

"Eyes on Eats" aims to address the challenge many individuals face when they are unsure of what to cook with the ingredients they have available. This uncertainty often leads to wasted time contemplating meal options or unnecessary spending on ordering food. "Eyes on Eats" offers a solution by employing deep learning techniques for object detection and text generation. By analyzing images of ingredients, the system generates personalized recipes tailored to the user's available ingredients. This approach not only streamlines the cooking process but also encourages culinary creativity. With "Eyes on Eats," users can confidently embark on their culinary journey without the stress of meal planning, ultimately saving time and potentially reducing unnecessary expenses.

### **Objectives:**

- Develop a robust object detection model capable of accurately identifying various ingredients depicted in images.
- Implement an efficient text generation model to seamlessly translate detected ingredients into personalized recipe recommendations.
- Ensure the scalability and adaptability of the system to accommodate a wide range of ingredients and recipes.

# Datasets

We need two types of data for this project: image data of the ingredients to train the object detection model, and textual recipe data to train the second model, which generates a recipe from the detected ingredients.

## Object Detection Data

We explored various ingredient datasets on the internet, but they did not meet our requirements and were too small for training a complex model. So we scraped the web ourselves using the Bing Image Downloader. However, the images it returns come in inconsistent formats, and the tool is limited to downloading one class at a time, so we modified it for our needs and scraped 100 classes of images with it.

<aside>
🔗 Access the tool [here!](https://github.com/REDDITARUN/Snap_Swift)

</aside>

Here's the list of ingredients we scraped using the tool:

| all_purpose_flour | almonds | apple | apricot | asparagus |
| --- | --- | --- | --- | --- |
| avocado | bacon | banana | barley | basil |
| basmati_rice | beans | beef | beets | bell_pepper |
| berries | biscuits | blackberries | black_pepper | blueberries |
| bread | bread_crumbs | bread_flour | broccoli | brownie_mix |
| brown_rice | butter | cabbage | cake | cardamom |
| carrot | cashews | cauliflower | celery | cereal |
| cheese | cherries | chicken | chickpeas | chocolate |
| chocolate_chips | chocolate_syrup | cilantro | cinnamon | clove |
| cocoa_powder | coconut | cookies | corn | cucumber |
| dates | eggplant | eggs | fish | garlic |
| ginger | grapes | honey | jalapeno | kidney_beans |
| lemon | mango | marshmallows | milk | mint |
| muffins | mushroom | noodles | nuts | oats |
| okra | olive | onion | orange | oreo_cookies |
| pasta | pear | pepper | pineapple | pistachios |
| pork | potato | pumpkin | radishes | raisins |
| red_chilies | rice | rosemary | salmon | salt |
| shrimp | spinach | strawberries | sugar | sweet_potato |
| tomato | vanilla_ice_cream | walnuts | watermelon | yogurt |

After scraping, the data is stored in a structured directory layout: each category has its own subdirectory containing the images for that class. At this point we have image classification data, but we need object detection data; before converting it, we first clean and verify what was collected.

## Correcting the initial data

The class names in the table above contain underscores, but we can't use such names to scrape the web, as that can lead to less accurate results. The search keywords are therefore written without underscores so they don't affect the search, as the example below shows.

```python
# imports assumed for this snippet; the original tool is a modified bing_image_downloader
import sys
from bing_image_downloader import downloader

queries = ["baking powder", "basil", "cereal", "cheese", "chicken"]

for query in queries:
    if len(sys.argv) == 3:
        filter = sys.argv[2]
    else:
        filter = ""

    downloader.download(
        query,
        limit=50,
        output_dir="dataset_dem",
        adult_filter_off=True,
        force_replace=False,
        timeout=120,
        filter=filter,
        verbose=True,
    )
```

The scraping process above creates directory names like "baking powder", which can cause inconsistencies in later steps. We therefore applied the following steps to ensure consistency:

- **Convert Spaces in Directory Names to Underscores**: Rename directories to replace spaces with underscores to avoid inconsistencies. For example, rename "all purpose flour" to "all_purpose_flour" (a small sketch of this step follows the log below).

```
Renamed 'all purpose flour' to 'all_purpose_flour'
Renamed 'basmati rice' to 'basmati_rice'
Renamed 'bell pepper' to 'bell_pepper'
Renamed 'black pepper' to 'black_pepper'
```
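
The renaming can be scripted in a few lines. Below is a minimal sketch of how this step might look; the function name and path are ours, not necessarily the exact script we used.

```python
import os

def rename_dirs_with_underscores(dataset_dir):
    # Replace spaces with underscores in every class directory name
    for name in os.listdir(dataset_dir):
        old_path = os.path.join(dataset_dir, name)
        if os.path.isdir(old_path) and " " in name:
            new_name = name.replace(" ", "_")
            os.rename(old_path, os.path.join(dataset_dir, new_name))
            print(f"Renamed '{name}' to '{new_name}'")

# rename_dirs_with_underscores(r'C:\path\to\initial_data')  # hypothetical path
```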

- **Verify Folder Names Against Class List**: Ensure all folder names match exactly with the classes listed in a "Final_classes.txt" file. This step checks for both missing directories and extra directories not listed in the class list (a sketch follows the output below).

```
All classes in 'Final_classes.txt' have corresponding directories in the dataset.
No extra directories in the dataset that are not listed in 'Final_classes.txt'.
```
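
A minimal sketch of this verification, assuming `Final_classes.txt` lists one class name per line (the helper name is ours):

```python
import os

def verify_against_class_list(dataset_dir, classes_file="Final_classes.txt"):
    with open(classes_file) as f:
        expected = {line.strip() for line in f if line.strip()}
    found = {d for d in os.listdir(dataset_dir) if os.path.isdir(os.path.join(dataset_dir, d))}

    missing = expected - found   # classes with no directory
    extra = found - expected     # directories not in the class list
    if not missing and not extra:
        print("All classes have corresponding directories and no extras were found.")
    else:
        print("Missing directories:", sorted(missing))
        print("Extra directories:", sorted(extra))
```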

- **Remove Non-JPG Files**: Execute a script to traverse the dataset directories and remove any files that are not in .jpg format. This is crucial for maintaining a consistent file format across the dataset.

```python
import os

def remove_non_jpg_images(dataset_dir):
    removed_files = []
    for root, dirs, files in os.walk(dataset_dir):
        for file in files:
            # Check if the file extension is not .jpg
            if not file.lower().endswith('.jpg'):
                file_path = os.path.join(root, file)
                os.remove(file_path)  # Remove the non-JPG file
                removed_files.append(file_path)
    return removed_files

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data'
removed_files = remove_non_jpg_images(dataset_dir)

if removed_files:
    print(f"Removed {len(removed_files)} non-JPG files:")
    for file in removed_files:
        print(file)
else:
    print("No non-JPG files found in the dataset.")
```

- **Check for Class Image Count**: Ensure that each class directory contains exactly 50 images. If a class has more than 50 images, randomly remove the excess images to limit each class to 50 (a sketch of this step follows the counts below).

```
all_purpose_flour: 50 images
almonds: 50 images
apple: 50 images
apricot: 50 images
asparagus: 50 images
avocado: 50 images
bacon: 50 images
..
..
```
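
A minimal sketch of the counting and random-trim step described above (the helper name is ours):

```python
import os
import random

def cap_class_images(dataset_dir, max_images=50):
    for class_name in sorted(os.listdir(dataset_dir)):
        class_dir = os.path.join(dataset_dir, class_name)
        if not os.path.isdir(class_dir):
            continue
        images = [f for f in os.listdir(class_dir) if f.lower().endswith('.jpg')]
        print(f"{class_name}: {len(images)} images")
        if len(images) > max_images:
            # Randomly drop the excess so exactly max_images remain
            for excess in random.sample(images, len(images) - max_images):
                os.remove(os.path.join(class_dir, excess))
```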

- **Augment Images for Underrepresented Classes**: For classes with fewer than 50 images, perform image augmentation to increase the total to 50 images per class. This ensures uniformity in the number of images across all classes (see the sketch after the log below).

```
Completed augmentation for class 'all_purpose_flour'.
Completed augmentation for class 'almonds'.
Completed augmentation for class 'apple'.
Completed augmentation for class 'apricot'.
Completed augmentation for class 'asparagus'.
Completed augmentation for class 'avocado'.
Completed augmentation for class 'bacon'.
```
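
The exact augmentation pipeline is not shown in this card; the following is a minimal sketch using Pillow that tops a class up to 50 images with simple flips and small rotations (the function name and transforms are illustrative):

```python
import os
import random
from PIL import Image

def augment_class_to_target(class_dir, target=50):
    images = [f for f in os.listdir(class_dir) if f.lower().endswith('.jpg')]
    count = len(images)
    while count < target and images:
        src = random.choice(images)
        img = Image.open(os.path.join(class_dir, src))
        # Apply a simple random transform: horizontal flip or small rotation
        if random.random() < 0.5:
            img = img.transpose(Image.FLIP_LEFT_RIGHT)
        else:
            img = img.rotate(random.choice([10, -10, 15, -15]), expand=True)
        count += 1
        img.convert("RGB").save(os.path.join(class_dir, f"aug_{count}_{src}"))
    print(f"Completed augmentation for class '{os.path.basename(class_dir)}'.")
```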

---

## Annotating the object detection data

The dataset is now ready: 100 classes with 50 samples each. However, it is in image classification format, which would force the user to photograph one ingredient at a time before the encoded results could be turned into a recipe through text generation. That is inconvenient, but manually annotating the images for object detection would also have been very tedious. We therefore turned to Grounding DINO, a zero-shot object detection model, to generate the annotations automatically.

### Step 1: Check GPU Availability

Use `!nvidia-smi` to check if a GPU is available for faster processing.

### Step 2: Set Home Directory

Define a `HOME` constant to manage datasets, images, and models easily:

```python
import os
HOME = os.getcwd()
print(HOME)
```

### Step 3: Install Grounding DINO

Clone the Grounding DINO repository, switch to a specific feature branch (if necessary), and install the dependencies:

```python
%cd {HOME}
!git clone https://github.com/IDEA-Research/GroundingDINO.git
%cd {HOME}/GroundingDINO

# we use the latest Grounding DINO model API, which is not official yet
!git checkout feature/more_compact_inference_api

!pip install -q -e .
!pip install -q roboflow dataclasses-json onemetric
```

### Step 4: Additional Dependencies & Verify CUDA and PyTorch

Ensure CUDA and PyTorch are correctly installed and compatible:

```python
import torch
!nvcc --version
TORCH_VERSION = ".".join(torch.__version__.split(".")[:2])
CUDA_VERSION = torch.__version__.split("+")[-1]
print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION)

import roboflow
import supervision

print(
    "roboflow:", roboflow.__version__,
    "; supervision:", supervision.__version__
)
```

```python
# confirm that the configuration file exists

import os

CONFIG_PATH = os.path.join(HOME, "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py")
print(CONFIG_PATH, "; exist:", os.path.isfile(CONFIG_PATH))
```

### Step 5: Download Configuration and Weights

Ensure the configuration file exists within the cloned repository and download the model weights:

```python
# download weights file

%cd {HOME}
!mkdir {HOME}/weights
%cd {HOME}/weights

!wget -q https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth

# confirm that the weights file exists

import os

WEIGHTS_PATH = os.path.join(HOME, "weights", "groundingdino_swint_ogc.pth")
print(WEIGHTS_PATH, "; exist:", os.path.isfile(WEIGHTS_PATH))
```

### Step 6: Download and Prepare Your Dataset

If your dataset is zipped in your drive, unzip it to a local directory:

```python
import zipfile

# Path to the zip file
zip_file_path = "/content/drive/MyDrive/....[your file path]"

# Directory to extract the contents of the zip file
extract_dir = "/content/data"

# Unzip the file
with zipfile.ZipFile(zip_file_path, 'r') as zip_ref:
    zip_ref.extractall(extract_dir)

print("Extraction complete.")
```

### Step 7: Load the Grounding DINO Model

Load the model using the configuration and weights path:

```python
%cd {HOME}/GroundingDINO

from groundingdino.util.inference import Model

model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)
```
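
Before annotating the whole dataset, it can help to sanity-check the loaded `model` on a single image. The snippet below is a minimal sketch (the image path and the prompt class are placeholders); `predict_with_classes` returns a `supervision` detections object whose `xyxy` boxes we simply print.

```python
import cv2

# Hypothetical test image; replace with any ingredient photo from the dataset
test_image = cv2.imread("/content/data/ingredients_images_dataset/apple/Image_1.jpg")

detections = model.predict_with_classes(
    image=test_image,
    classes=["all apples"],  # same "all <class>s" phrasing used for annotation below
    box_threshold=0.35,
    text_threshold=0.25,
)

print(len(detections.xyxy), "boxes found")
print(detections.xyxy)  # (N, 4) array of [xmin, ymin, xmax, ymax]
```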

### Step 8: Annotate Dataset and Save to Pascal VOC

Use the model to annotate images. You can run inference in different modes like `caption`, `classes`, or `enhanced classes` depending on your needs. After inference, use the detections and labels to annotate images using your preferred method or the provided utility functions.

Automate the annotation process for your entire dataset by iterating over your images, running the model to detect objects, and saving both the annotated images and their Pascal VOC XML files.

```python
import os
import cv2
import xml.etree.ElementTree as ET
from groundingdino.util.inference import Model
from tqdm import tqdm

# Define the home directory and the path to the dataset
HOME = "/content"
DATASET_DIR = os.path.join(HOME, "data", "ingredients_images_dataset")

# Load the Grounding DINO model
MODEL_CONFIG_PATH = os.path.join(HOME, "GroundingDINO", "groundingdino", "config", "GroundingDINO_SwinT_OGC.py")
WEIGHTS_PATH = os.path.join(HOME, "weights", "groundingdino_swint_ogc.pth")
model = Model(model_config_path=MODEL_CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)

# Load class labels from the file
LABELS_FILE_PATH = "[ txt file path containing your images labels one per line]"
with open(LABELS_FILE_PATH, "r") as f:
    CLASSES = [line.strip() for line in f.readlines()]

# Define annotation thresholds
BOX_THRESHOLD = 0.35
TEXT_THRESHOLD = 0.25

# Function to enhance class names
def enhance_class_name(class_names):
    return [f"all {class_name}s" for class_name in class_names]

# Function to create Pascal VOC format XML annotation
def create_pascal_voc_xml(image_filename, image_shape, boxes, labels):
    annotation = ET.Element("annotation")

    folder = ET.SubElement(annotation, "folder")
    folder.text = "ingredient_annotations"  # Folder name for annotations

    filename = ET.SubElement(annotation, "filename")
    filename.text = image_filename

    source = ET.SubElement(annotation, "source")
    database = ET.SubElement(source, "database")
    database.text = "Unknown"

    size = ET.SubElement(annotation, "size")
    width = ET.SubElement(size, "width")
    height = ET.SubElement(size, "height")
    depth = ET.SubElement(size, "depth")

    width.text = str(image_shape[1])
    height.text = str(image_shape[0])
    depth.text = str(image_shape[2])

    segmented = ET.SubElement(annotation, "segmented")
    segmented.text = "0"

    for box, label in zip(boxes, labels):
        object = ET.SubElement(annotation, "object")
        name = ET.SubElement(object, "name")
        pose = ET.SubElement(object, "pose")
        truncated = ET.SubElement(object, "truncated")
        difficult = ET.SubElement(object, "difficult")
        bndbox = ET.SubElement(object, "bndbox")
        xmin = ET.SubElement(bndbox, "xmin")
        ymin = ET.SubElement(bndbox, "ymin")
        xmax = ET.SubElement(bndbox, "xmax")
        ymax = ET.SubElement(bndbox, "ymax")

        name.text = label
        pose.text = "Unspecified"
        truncated.text = "0"
        difficult.text = "0"
        xmin.text = str(int(box[0]))
        ymin.text = str(int(box[1]))
        xmax.text = str(int(box[2]))
        ymax.text = str(int(box[3]))

    # Format the XML for better readability
    xml_string = ET.tostring(annotation, encoding="unicode")

    return xml_string

# Function to annotate images in a directory and save annotated images in Pascal VOC format
def annotate_images_in_directory(directory):
    for class_name in CLASSES:
        class_dir = os.path.join(directory, class_name)
        annotated_dir = os.path.join(directory, f"{class_name}_annotated")
        os.makedirs(annotated_dir, exist_ok=True)

        print("Processing images in directory:", class_dir)
        if os.path.isdir(class_dir):
            for image_name in tqdm(os.listdir(class_dir)):
                image_path = os.path.join(class_dir, image_name)
                image = cv2.imread(image_path)
                if image is None:
                    print("Failed to load image:", image_path)
                    continue

                detections = model.predict_with_classes(
                    image=image,
                    classes=enhance_class_name([class_name]),
                    box_threshold=BOX_THRESHOLD,
                    text_threshold=TEXT_THRESHOLD
                )
                # Drop potential detections with phrase not part of CLASSES set
                detections = detections[detections.class_id != None]
                # Drop potential detections with area close to area of the whole image
                detections = detections[(detections.area / (image.shape[0] * image.shape[1])) < 0.9]
                # Drop potential double detections
                detections = detections.with_nms()

                # Create the Pascal VOC XML annotation for this image (one label per detected box)
                xml_annotation = create_pascal_voc_xml(
                    image_filename=image_name,
                    image_shape=image.shape,
                    boxes=detections.xyxy,
                    labels=[class_name] * len(detections.xyxy)
                )

                # Save the Pascal VOC XML annotation to a file
                xml_filename = os.path.join(annotated_dir, f"{os.path.splitext(image_name)[0]}.xml")
                with open(xml_filename, "w") as xml_file:
                    xml_file.write(xml_annotation)

                # Save the annotated image
                annotated_image_path = os.path.join(annotated_dir, image_name)
                cv2.imwrite(annotated_image_path, image)

# Annotate images in the dataset directory
annotate_images_in_directory(DATASET_DIR)
```

Now we use this to automate annotating the dataset in Pascal VOC format: each image in a class gets a matching XML file, as in the example below.

```xml
<annotation>
    <folder>ingredient_annotations</folder>
    <filename>Image_1.jpg</filename>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>1920</width>
        <height>1280</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>almonds</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>252</xmin>
            <ymin>650</ymin>
            <xmax>803</xmax>
            <ymax>920</ymax>
        </bndbox>
    </object>
</annotation>
```
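
For downstream use it is handy to read these annotations back. Below is a minimal sketch that parses one Pascal VOC XML file into (label, box) pairs with the standard library; the helper name and example path are ours.

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    # Returns a list of (label, (xmin, ymin, xmax, ymax)) tuples from a Pascal VOC file
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.findall("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        box = tuple(int(bb.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((label, box))
    return objects

# Example: read_voc_annotation("almonds/Image_1.xml") -> [('almonds', (252, 650, 803, 920))]
```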

## Verifying the Annotated Data

- Check that every image is annotated by verifying that each image file has a corresponding XML file; if one is missing, we remove the image or manually create a new annotated sample.

```python
import os

def check_dataset_integrity(dataset_directory):
    for class_name in os.listdir(dataset_directory):
        class_path = os.path.join(dataset_directory, class_name)
        if os.path.isdir(class_path):
            jpg_files = set()
            xml_files = set()
            other_files = set()

            # Collect file names for each extension
            for file_name in os.listdir(class_path):
                if file_name.endswith('.jpg'):
                    jpg_files.add(os.path.splitext(file_name)[0])
                elif file_name.endswith('.xml'):
                    xml_files.add(os.path.splitext(file_name)[0])
                else:
                    other_files.add(file_name)

            # Check for discrepancies
            missing_xmls = jpg_files - xml_files
            missing_jpgs = xml_files - jpg_files
            is_perfect = len(missing_xmls) == 0 and len(missing_jpgs) == 0 and len(other_files) == 0

            # Report
            print(f"Class '{class_name}':", "Perfect" if is_perfect else "Discrepancies Found")
            if missing_xmls:
                print(f"  Missing XML files for: {', '.join(sorted(missing_xmls))}")
            if missing_jpgs:
                print(f"  Missing JPG files for: {', '.join(sorted(missing_jpgs))}")
            if other_files:
                print(f"  Non-JPG/XML files: {', '.join(sorted(other_files))}")
        else:
            print(f"'{class_name}' is not a directory. Skipping.")

# Specify the path to the dataset directory
dataset_directory = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data_annotated'
check_dataset_integrity(dataset_directory)
```

```
# Output Sample
Class 'all_purpose_flour_annotated': Perfect
Class 'almonds_annotated': Perfect
Class 'apple_annotated': Perfect
Class 'apricot_annotated': Perfect
Class 'asparagus_annotated': Perfect
```

- Renamed all the directories containing samples: as the output above shows, the directory names changed after annotation to `<class name>_annotated`. We strip that suffix so the directory names again match the class names in the class list text file (a small sketch follows this list).
- After these changes, we checked once more that every image has its annotation and that the directory names match the class list text file.
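
A minimal sketch of the suffix-stripping rename described in the first bullet above (the path is a placeholder):

```python
import os

def strip_annotated_suffix(dataset_dir):
    # Remove the "_annotated" suffix so directory names match Final_classes.txt again
    for name in os.listdir(dataset_dir):
        if name.endswith("_annotated"):
            src = os.path.join(dataset_dir, name)
            dst = os.path.join(dataset_dir, name[:-len("_annotated")])
            os.rename(src, dst)
            print(f"Renamed '{name}' to '{os.path.basename(dst)}'")

# strip_annotated_suffix(r'C:\path\to\initial_data_annotated')  # hypothetical path
```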

This completes our dataset preparation, the major part of the project. Reaching this level of consistency took a lot of time and several rounds of trial and error, but the result is a dataset ready for object detection training.

## Text Generation Data

The RecipeNLG dataset, available in RecipeNLG_dataset.csv, encompasses 2,231,142 cooking recipes sourced from RecipeNLG. This extensive dataset, totaling 2.14 GB, contains crucial recipe details such as titles, ingredients, directions, links, sources, and Named Entity Recognition (NER) labels. With label distribution categorized into various ranges and a vast array of unique values, the dataset showcases a diverse and comprehensive collection of cooking recipes. It serves as a valuable resource for training and evaluating models on a multitude of natural language processing tasks, particularly generating cooking-related text.

<aside>
🔗

[Access Here!](https://recipenlg.cs.put.poznan.pl/)

</aside>

**Sample**

| Title | Ingredients | Link | Directions | NER |
| --- | --- | --- | --- | --- |
| No-Bake Nut Cookies | ["1 c. firmly packed brown sugar", "1/2 c. evaporated milk", "1/2 tsp. vanilla", "1/2 c. broken nuts... | www.cookbooks.com/Recipe-Details.aspx?id=44874 | ["In a heavy 2-quart saucepan, mix brown sugar, nuts, evaporated milk and butter or margarine.", "St... | ["brown sugar", "milk", "vanilla", "nuts", "butter", "bite size shredded rice biscuits"] |

For training the BART transformer model, we need to prepare the tokenized data. First, the dataset is extracted with the unzip command to access the recipe data. Next, we import libraries such as pandas, transformers, tqdm, numpy, and TensorFlow.

```python
!unzip '/user/bhanucha/recipe_data.zip' -d '/user/bhanucha/data'
import pandas as pd
from transformers import BartTokenizer
from tqdm import tqdm
import numpy as np
import tensorflow as tf
from transformers import TFBartForConditionalGeneration
```

The BART tokenizer is initialized from the pretrained BART model, and if the tokenizer lacks a padding token, one is added. The dataset is then loaded into a pandas DataFrame.

```python
model_checkpoint = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(model_checkpoint)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})

data = pd.read_csv('/user/bhanucha/data/dataset/full_dataset.csv')
```

Subsequently, the ingredients and directions from each recipe are concatenated into text strings and tokenized using the BART tokenizer. The tokenized data is then processed to ensure consistency in length and format, and the tokenized inputs are saved for training.

```python
texts = ["Ingredients: " + row['ingredients'] + " Directions: " + row['directions'] for _, row in data.iterrows()]

tokenized_inputs = []
for texts_text in tqdm(texts, desc="Tokenizing Data"):
    tokenized_input = tokenizer(
        texts_text,
        padding="max_length",
        truncation=True,
        max_length=512,
        return_tensors="np"
    )
    tokenized_inputs.append(tokenized_input['input_ids'])

train_data = np.concatenate(tokenized_inputs, axis=0)
np.save('/user/bhanucha/train_data.npy', train_data)
```
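
The saved array can then be loaded to fine-tune the imported `TFBartForConditionalGeneration` model. The snippet below is only a rough sketch of that step under our own assumptions (tokenized text reused as both inputs and labels, illustrative batch size and epoch count), not the exact training setup.

```python
import numpy as np
import tensorflow as tf
from transformers import TFBartForConditionalGeneration

# Load the tokenized recipes saved above; shape is (num_recipes, 512)
train_data = np.load('/user/bhanucha/train_data.npy')

model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Reuse the tokenized text as both inputs and labels (illustrative only)
dataset = tf.data.Dataset.from_tensor_slices(
    {"input_ids": train_data, "labels": train_data}
).shuffle(10_000).batch(8)

# With no loss passed, the model falls back to its built-in loss computed from `labels`
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
model.fit(dataset, epochs=1)
```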