updated analysis of LILA data (#2)
- Updated CSV downloaded from LILA for updated analysis (c7db1bab4d496332ef9723f04158321ca7917c12)
- Add paired py file for notebook from first analysis (e2e760ec3065e81274ee8a7ab98a37ae7496308a)
- Add the taxonomy mapping from original download (f2d596714c46bf30edf1f45efe88b3a09b3c5f81)
- Update taxonomy mapping file to match LILA data in c7db1bab4d496332ef9723f04158321ca7917c12 (c11297c2b2f54fece6996097a57beb841d6a931e)
- remove entries labeled as empty (0835cc581789a47e2efe8fdd58bd7e00ee268974)
- Remove other non-creature images (24 unique original labels with no corresponding scientific name) (15fccf29242d51b7b0aefc91d26fb034c59d6a04)
- analysis and generation of metadata in commit 15fccf29 (010ecf0c6a2e0c99c9481cea793d8b1556b5c71e)
- Describe latest analysis of lila metadata (9a3ceb2654393632f4fc3d7072aead6cb10dc57a)
- .gitattributes +1 -0
- README.md +70 -28
- data/lila-taxonomy-mapping_release.csv +0 -0
- data/lila_image_urls_and_labels.csv +2 -2
- data/lila_image_urls_and_labels_wHumans.csv +3 -0
- notebooks/lilabc_CT.ipynb +0 -0
- notebooks/lilabc_CT.py +375 -0
.gitattributes

@@ -55,3 +55,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
 lila_image_urls_and_labels.csv filter=lfs diff=lfs merge=lfs -text
 data/lila_image_urls_and_labels_species.csv filter=lfs diff=lfs merge=lfs -text
+data/lila_image_urls_and_labels_wHumans.csv filter=lfs diff=lfs merge=lfs -text
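The CSVs added here are tracked through these Git LFS filters, so a plain (non-LFS) clone only contains pointer files. A minimal sketch of fetching one of the LFS-tracked CSVs directly with `huggingface_hub` instead (the `repo_id` is taken from the URLs in this commit; the pandas call mirrors the notebook below):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the LFS-tracked metadata CSV from the dataset repo (sketch, not part of this commit).
csv_path = hf_hub_download(
    repo_id="imageomics/lila-bc-camera",
    repo_type="dataset",
    filename="data/lila_image_urls_and_labels.csv",
)
df = pd.read_csv(csv_path, low_memory=False)
print(df.shape)
```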
README.md

@@ -61,15 +61,17 @@ Escape underscores ("_") with a "\". Example: image\_RGB
 
 <!--This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).-->
 
-This dataset contains the LILA BC full camera trap information with notebook exploring available data.
-
-
-
-This was potentially to use for testingi BioCLIP, but data had been processed elsewhere.
+This dataset contains the LILA BC full camera trap information with a notebook ([`lilabc_CT.ipynb`](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/main/notebooks/lilabc_CT.ipynb)) exploring the available data. The last run of this notebook (in [commit 010ecf0](https://huggingface.co/datasets/imageomics/lila-bc-camera/commit/010ecf0c6a2e0c99c9481cea793d8b1556b5c71e)) uses and produces the LILA CSVs found [here](https://huggingface.co/datasets/imageomics/lila-bc-camera/tree/010ecf0c6a2e0c99c9481cea793d8b1556b5c71e/data).
+More details are given below in [Data Instances](#data-instances).
+
+**Repo file description at [commit 87e2e4d](https://huggingface.co/datasets/imageomics/lila-bc-camera/tree/87e2e4d46cf1e8daadd74b7738856a1e30754de3), when we were considering it for BioCLIP v1 testing:**
+
+Images have been deduplicated and reduced to species designation, with the main CSV filtered to just those with species labels and only one animal per image. This was done by pulling the first instance of an animal so that there are no repeat images of the same animal from essentially the same time.
+
+The deduplicated collection ([lila_image_urls_and_labels_species.csv](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/f2d596714c46bf30edf1f45efe88b3a09b3c5f81/data/lila_image_urls_and_labels_species.csv)) has 6,365,985 images (compared to the full dataset of 16,833,848 at time of download), along with its [associated taxonomy mapping release](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/f2d596714c46bf30edf1f45efe88b3a09b3c5f81/data/lila-taxonomy-mapping_release.csv).
+
+See the [LILA BC HF Dataset](https://huggingface.co/datasets/society-ethics/lila_camera_traps) for more information and updated data.
 
 
 ### Supported Tasks and Leaderboards

@@ -80,37 +82,77 @@ This was potentially to use for testingi BioCLIP, but data had been processed el
 
 ## Dataset Structure
 
-<!-- Provide format of the dataset, ex:
-
 ```
 /dataset/
-    Folder_1/
-        File_1
-        File_2
-        ...
-        File_n
-    Folder_2/
-        File_1
-        File_2
-        ...
-        File_n
-    ...
-    Folder_N/
-        File_1
-        File_2
-        ...
-        File_n
-    metadata.csv
+    data/
+        lila-taxonomy-mapping_release.csv
+        lila_image_urls_and_labels.csv
+        lila_image_urls_and_labels_species.csv  # Outdated
+        lila_image_urls_and_labels_wHumans.csv
+    notebooks/
+        lilabc_CT.ipynb
+        lilabc_CT.py
 ```
 
--->
 
 ### Data Instances
-[More Information Needed]
 
+The [`data/lila_image_urls_and_labels.csv`](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/010ecf0c6a2e0c99c9481cea793d8b1556b5c71e/data/lila_image_urls_and_labels.csv) has all images with non-taxa original labels (identified by `scientific_name`, `common_name`, and `kingdom` all being null) or `human` original labels filtered out, leaving 10,104,328 images.
+Of these, 7,521,712 have a full 7-rank taxonomy, with 891 unique 7-tuple strings (908 unique including subranks) and 890 unique scientific names -- these counts are from before humans were removed (there are 257,159 images labeled as human, and they do have full 7-rank taxa).
+The final version at this stage has 9,849,119 images and 907 unique scientific names.
+
+**annotation_level**
+```
+sequence    4156306
+image       2892394
+unknown     2886844
+```
+
+**non-taxa labels:**
+```
+original_label
+problem                   288579
+blurred                   184620
+ignore                    177546
+vehicle                    26445
+unknown                    26170
+snow on lens               17552
+foggy lens                 15832
+vegetation obstruction      6994
+malfunction                 5640
+unclassifiable              3484
+motorcycle                  3423
+misdirected                 2832
+other                       2474
+unidentifiable              1472
+foggy weather               1380
+lens obscured                866
+sun                          835
+end                          616
+fire                         578
+misfire                      400
+eye_shine                    328
+start                        321
+tilted                        56
+unidentified                  39
+```
+
+**Datasets with the non-taxa labels:**
+```
+dataset_name
+SWG Camera Traps                    650745
+Idaho Camera Traps                   66339
+NACTI                                26015
+WCS Camera Traps                     18320
+Wellington Camera Traps               3484
+Orinoquia Camera Traps                1280
+Island Conservation Camera Traps      1269
+Snapshot Serengeti                     568
+ENA24                                  293
+Channel Islands Camera Traps           159
+Snapshot Mountain Zebra                  7
+Snapshot Camdeboo                        3
+```
 
 ### Data Fields
 [More Information Needed]
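A minimal sketch of how breakdowns like the ones above are computed with pandas (mirroring the notebook below; note the non-taxa label tables were taken before those rows were dropped, so they will not reproduce from the final, filtered CSV):

```python
import pandas as pd

# The released metadata CSV (LFS-tracked); column names as used in notebooks/lilabc_CT.py.
df = pd.read_csv("data/lila_image_urls_and_labels.csv", low_memory=False)

# sequence / image / unknown breakdown
print(df["annotation_level"].value_counts())

# Rows with no taxon at all: scientific_name, common_name, and kingdom all null.
no_taxa = df[df["scientific_name"].isna() & df["common_name"].isna() & df["kingdom"].isna()]
print(no_taxa["original_label"].value_counts())
print(no_taxa["dataset_name"].value_counts())
```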
data/lila-taxonomy-mapping_release.csv: diff too large to render (see raw diff).
data/lila_image_urls_and_labels.csv (Git LFS pointer)

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1cb6c5e264db95cbeb52634763a5581ebe705461f22316c9deadf8c6ad20c84d
+size 7452744161
data/lila_image_urls_and_labels_wHumans.csv (new Git LFS pointer)

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfd1599bec3da3c252810b7bef4a3162a439677cbdd49735b1788f39adf1af62
+size 7631553713
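The LFS pointers above record the expected SHA-256 (`oid`) and byte size of each CSV, so a downloaded copy can be verified against them. A small sketch using the values from the `wHumans` pointer shown above:

```python
import hashlib
from pathlib import Path

# Values copied from the Git LFS pointer above.
EXPECTED_OID = "dfd1599bec3da3c252810b7bef4a3162a439677cbdd49735b1788f39adf1af62"
EXPECTED_SIZE = 7631553713

path = Path("data/lila_image_urls_and_labels_wHumans.csv")
assert path.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"

sha256 = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
assert sha256.hexdigest() == EXPECTED_OID, "checksum does not match the LFS pointer"
print("file matches its LFS pointer")
```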
notebooks/lilabc_CT.ipynb: diff too large to render (see raw diff).
notebooks/lilabc_CT.py (new file)

@@ -0,0 +1,375 @@
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#     jupytext_version: 1.16.0
#   kernelspec:
#     display_name: std
#     language: python
#     name: python3
# ---

# %%
import pandas as pd
import seaborn as sns

sns.set_style("whitegrid")

# %%
df = pd.read_csv("../data/lila_image_urls_and_labels.csv", low_memory=False)
df.head()

# %%
df.columns

# %%
df.annotation_level.value_counts()

# %% [markdown]
# Annotation level indicates image vs. sequence (or unknown); it is not analogous to `taxonomy_level` from lila-taxonomy-mapping_release.csv. It seems `original_label` may be the analogous column.
#
# We'll likely want to pull out the image-level annotations before doing any sequence checks and such, since those should be "clean" images. Though we will want to label them with how many distinct species are in the image first.
#
# We now have 66 fewer sequence-level annotations and 2,517,374 more image-level ones! That's quite the update! The unknown count has not changed.
#
# ### Check Dataset Counts
#
# 1. Make sure we have all datasets expected.
# 2. Check which/how many datasets are labeled to the image level (and check for a match to [Andrey's spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?usp=drive_link)).

# %%
df.dataset_name.value_counts()

# %%
df.groupby(["dataset_name"]).annotation_level.value_counts()

# %% [markdown]
# It seems all the unknown annotation level images are in NACTI (North American Camera Trap Images). At first glance I don't see annotation level information on HF or on [their LILA page](https://lila.science/datasets/nacti)--will require more looking.
#
# Desert Lion Conservation Camera Traps & Trail Camera Images of New Zealand Animals are _not_ included in the [Hugging Face dataset](https://huggingface.co/datasets/society-ethics/lila_camera_traps).
#
# There are definitely more in [Andrey's spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?usp=drive_link) that aren't included here. We'll have him go through those too.

# %%
df.sample(10)

# %% [markdown]
# Observe that we also now get multiple URL options; `url_aws` will likely be best/fastest for use with [`distributed-downloader`](https://github.com/Imageomics/distributed-downloader) to get the images.

# %%
df.info(show_counts=True)

# %% [markdown]
# The overall dataset has grown by about 3 million images; we'll see how much of this is non-empty. I'm encouraged by the number of non-null `scientific_name` values seeming to also grow by about 3 million; most of these also seem to have genus now.
#
# We'll definitely want to check on the scientific name choices where genus and species aren't available, and similarly for other ranks, as it is only guaranteed as far as kingdom (which is hopefully aligned with all non-empty images).
#
# No licensing info; we'll get that from HF or the datasets themselves (Andrey can check this; most seem to be [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/)).

# %%
df.nunique()

# %% [markdown]
# We have 739 unique species indicated, though the 908 unique `scientific_name` values is likely more indicative of the diversity.
#
# Interesting also to note that there are duplicate URLs here; these would be the indicators of multiple species in an image, as they correspond to the number of unique image IDs. We'll check this out once we remove the images labeled as "empty".

# %%
# check for humans
df.loc[df.species == "homo sapien"]

# %% [markdown]
# Let's start by removing entries with `original_label`: `empty`.

# %%
df_cleaned = df.loc[df.original_label != "empty"].copy()

# %% [markdown]
# ## Save the Reduced Data (no more "empty" labels)

# %%
df_cleaned.to_csv("../data/lila_image_urls_and_labels.csv", index=False)

# %% [markdown]
# Let's check where we are with annotations now that we've removed all the images labeled as empty.

# %%
df.groupby(["dataset_name"]).annotation_level.value_counts()

# %% [markdown]
# We started with 19,351,156 entries and are left with 10,965,902 after removing all labeled as `empty`, so more than half the images now; it's an increase of about 2.5M from the last version.
#
# Note that there are still about 3.4 million that don't have the species label and 1.5 million that are missing the genus designation. 10,192,703 of them have scientific and common name, though! That's nearly all of them.

# %%
df_cleaned.info(show_counts=True)

# %%
df_cleaned.nunique()

# %%
print(df_cleaned.phylum.value_counts())
print()
print(df_cleaned["class"].value_counts())

# %% [markdown]
# We have 10,965,902 total - 10,864,013 unique URLs, suggesting at most 101,889 images have more than one species in them. That's only 1% of our images here and even smaller at the scale we're looking at for the next ToL dataset. It is interesting to note, though, and we should explore this more.
#
# I'm curious about the single "variety", since I thought that was more of a plant label and these are all animals.
#
# All images are in Animalia, as expected; we have 2 phyla represented and 8 classes:
# - Predominantly Chordata, and within that phylum, Mammalia is the vast majority, though Aves is about 10%.
# - Note that not every image with a phylum label has a class label.
# - Insecta, Malacostraca, Arachnida, and Diplopoda are all classes in the phylum Arthropoda.
#
# ### Label Multi-Species Images
# We'll go by both the URL and image ID, which do seem to correspond to the same images (for uniqueness).

# %%
df_cleaned["multi_species"] = df_cleaned.duplicated(subset=["url_aws", "image_id"], keep=False)

df_cleaned.loc[df_cleaned["multi_species"]].nunique()

# %% [markdown]
# We've got just under 100K images that have multiple species. We can figure out how many each of them have, and then move on to looking at images per sequence and other labeling info.

# %%
multi_sp_imgs = list(df_cleaned.loc[df_cleaned["multi_species"], "image_id"].unique())

# %%
for img in multi_sp_imgs:
    df_cleaned.loc[df_cleaned["image_id"] == img, "num_species"] = df_cleaned.loc[df_cleaned["image_id"] == img].shape[0]

df_cleaned.head()
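
# %% [markdown]
# (Sketch, not part of the committed notebook: the loop above re-scans the frame once per
# multi-species image. A groupby transform over the same keys yields the same `num_species`
# values in a single pass, with single-animal images coming out as 1 directly.)

# %%
df_cleaned["num_species"] = (
    df_cleaned.groupby(["url_aws", "image_id"])["image_id"].transform("size").astype(float)
)
df_cleaned.num_species.value_counts()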

# %% [markdown]
# #### Save this to CSV now that we have those counts

# %%
df_cleaned.to_csv("../data/lila_image_urls_and_labels.csv", index=False)

# %%
df_cleaned.loc[df_cleaned["multi_species"]].head()

# %% [markdown]
# How many different species do we generally have when we have multiple species in an image?

# %%
df_cleaned.num_species.value_counts()

# %% [markdown]
# We have 97,567 images with 2 different species (most multi-species instances), 2,023 with 3 different species, and 92 with 4.
#
# We will want to dedicate some more time to exploring some of these taxonomic counts, but we'll first look at the number of unique taxa (by Linnean 7-rank (`unique_7_tuple`) and then by all taxonomic labels (`unique_taxa`) available). We'll compare these to the number of unique scientific and common names, then perhaps add a count of the number of creatures based on one of those labels. At that point we may save another copy of this CSV and start a new analysis notebook.

# %%
df_cleaned.annotation_level.value_counts()

# %% [markdown]
# We've got ~3M labeled to the image and another 3M with unknown labeling (all from NACTI, which Andrey will check on), leaving ~5M labeled only at the sequence level. This _should_ give Jianyang something to work with to start exploring near-duplicate de-duplication.
#
# Let's update the non-multi species images to show 1 in the `num_species` column, then move on to checking the taxonomy strings.

# %%
df_cleaned.loc[df_cleaned["num_species"].isna(), "num_species"] = 1.0

df_cleaned.num_species.value_counts()

# %% [markdown]
# ### Taxonomic String Exploration

# %%
lin_taxa = ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']
all_taxa = ['kingdom',
            'phylum',
            'subphylum',
            'superclass',
            'class',
            'subclass',
            'infraclass',
            'superorder',
            'order',
            'suborder',
            'infraorder',
            'superfamily',
            'family',
            'subfamily',
            'tribe',
            'genus',
            'species',
            'subspecies',
            'variety']

# %% [markdown]
# #### How many have all 7 Linnean ranks?

# %%
df_all_taxa = df_cleaned.dropna(subset=lin_taxa)
df_all_taxa[all_taxa].info(show_counts=True)

# %% [markdown]
# That's pretty good coverage: 7,521,712 out of 10,965,902. It looks like many of them also have the other taxonomic ranks too. Now how many different 7-tuples are there?
#
# #### How many unique 7-tuples?

# %%
# number of unique 7-tuples in the full dataset
df_cleaned['lin_duplicate'] = df_cleaned.duplicated(subset=lin_taxa, keep='first')
df_unique_lin_taxa = df_cleaned.loc[~df_cleaned['lin_duplicate']].copy()
df_unique_lin_taxa.info(show_counts=True)

# %% [markdown]
# Interesting, we have 891 unique 7-tuple taxonomic strings, but 1 scientific and common name seem to be missing.
# What's the uniqueness count here?

# %%
df_unique_lin_taxa.nunique()

# %% [markdown]
# They're across all datasets. We have 890 unique scientific names and 886 unique common names (from 885 original labels).

# %%
df_unique_lin_taxa.loc[(df_unique_lin_taxa["scientific_name"].isna()) | (df_unique_lin_taxa["common_name"].isna())]

# %% [markdown]
# It's a car... We need to remove cars...

# %%
df_cleaned.loc[df_cleaned["original_label"] == "car"].shape

# %%
df_cleaned.loc[df_cleaned["original_label"] == "car", "dataset_name"].value_counts()

# %% [markdown]
# #### How many unique full taxa (sub-ranks included)?

# %%
# number of unique full taxonomic strings (sub-ranks included) in the full dataset
df_cleaned['full_duplicate'] = df_cleaned.duplicated(subset=all_taxa, keep='first')
df_unique_all_taxa = df_cleaned.loc[~df_cleaned['full_duplicate']].copy()
df_unique_all_taxa.info(show_counts=True)

# %% [markdown]
# When we consider the sub-ranks as well, we wind up with 909 unique taxa (still with one scientific and common name missing--the car!).

# %%
df_unique_all_taxa.nunique()

# %% [markdown]
# We have now captured all 908 unique scientific names, but only 901 of the 999 unique common names.

# %%
df_unique_all_taxa.loc[(df_unique_all_taxa["scientific_name"].isna()) | (df_unique_all_taxa["common_name"].isna())]

# %% [markdown]
# #### Let's remove those cars

# %%
df_cleaned = df_cleaned[df_cleaned["original_label"] != "car"].copy()
df_cleaned[["original_label", "scientific_name", "common_name", "kingdom"]].info(show_counts=True)

# %% [markdown]
# Now we have 10,961,185 instead of 10,965,902 images; they all have `original_label`, but only 10,192,703 of them have `scientific_name`, `common_name`, and `kingdom`. What are the `original_label`s for those ~800K images?

# %%
no_taxa = df_cleaned.loc[(df_cleaned["scientific_name"].isna()) & (df_cleaned["common_name"].isna()) & (df_cleaned["kingdom"].isna())].copy()

print(no_taxa[["dataset_name", "original_label"]].nunique())
no_taxa[["dataset_name", "original_label"]].info(show_counts=True)

# %% [markdown]
# What are these 24 other labels, and how are the 768,482 images with them distributed across these 12 datasets?

# %%
no_taxa["original_label"].value_counts()

# %%
no_taxa["dataset_name"].value_counts()

# %%
no_taxa.groupby(["dataset_name"])["original_label"].value_counts()

# %% [markdown]
# Interesting. It seems like all of these should also be removed. Vegetation obstruction could of course be labeled in Plantae, but we're not going to be labeling 7K images for this project.
#
# Let's remove them; then we should have 10,192,703 images.

# %%
non_taxa_labels = list(no_taxa["original_label"].unique())

# %%
df_clean = df_cleaned.loc[~df_cleaned["original_label"].isin(non_taxa_labels)].copy()
df_clean.info(show_counts=True)

# %%
df_clean.nunique()

# %% [markdown]
# Let's check out our top ten labels, scientific names, and common names. Then we'll save this cleaned metadata file.

# %%
df_clean["original_label"].value_counts()[:10]

# %%
df_clean["scientific_name"].value_counts()[:10]

# %%
df_clean["common_name"].value_counts()[:10]

# %% [markdown]
# There are also 257,159 humans in here! Glad the number agrees across labels. We'll probably need to remove the humans, though I may save a copy with them still on the HF repo (it is just our dev repo). Which datasets have them? I thought humans were filtered out previously (though I could be mistaken, as they seem to be in 15 of the 20 datasets).

# %%
df_clean.loc[df_clean["original_label"] == "human", "dataset_name"].value_counts()

# %% [markdown]
# What do human labels look like (as in, do they have the full taxa structure)?

# %%
df_clean.loc[df_clean["original_label"] == "human"].sample(5)

# %% [markdown]
# It does seem to have full taxa... interesting.

# %%
df_clean.to_csv("../data/lila_image_urls_and_labels_wHumans.csv", index=False)

# %%
df_clean.loc[df_clean["original_label"] != "human"].to_csv("../data/lila_image_urls_and_labels.csv", index=False)

# %%
taxa = [col for col in list(df_clean.columns) if col in all_taxa or col == "original_label"]

df_taxa = df_clean[taxa].copy()
df_taxa.loc[df_taxa["original_label"] == "human"].sample(7)

# %%
df_clean.loc[df_clean["original_label"] != "human"].info(show_counts=True)

# %%
df_clean.loc[df_clean["original_label"] != "human"].nunique()

# %% [markdown]
# We have 1,198,696 distinct sequence IDs for the 9,849,119 unique image IDs, suggesting an average of about 8 images per sequence.

# %%
df_clean.loc[df_clean["original_label"] != "human", "annotation_level"].value_counts()

# %% [markdown]
# #### Check Number of Images per Scientific Name?

# %%

# %%

# %%
sns.histplot(df_clean.loc[df_clean["original_label"] != "human"], y='class')

# %%
sns.histplot(df_clean.loc[df_clean["original_label"] != "human"], y='order')

# %%
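The "Check Number of Images per Scientific Name?" cells at the end of the notebook were left empty in this commit. A hedged sketch of what that check could look like, reading the cleaned CSV written out above (`scientific_name` is the column used throughout the notebook):

```python
import pandas as pd
import seaborn as sns

# Sketch only; not part of the committed notebook.
df_clean = pd.read_csv("data/lila_image_urls_and_labels.csv", low_memory=False)

imgs_per_name = df_clean["scientific_name"].value_counts()
print(imgs_per_name.head(20))      # most heavily photographed taxa
print(imgs_per_name.describe())    # spread across all scientific names

# Log-scaled histogram of the per-name image counts.
ax = sns.histplot(imgs_per_name, log_scale=True)
ax.set_xlabel("images per scientific name")
```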