---
license: apache-2.0
datasets:
  - keremberke/license-plate-object-detection
language:
  - ru
  - pl
  - en
  - zh
  - tk
  - ar
  - es
  - el
  - fr
  - ae
metrics:
  - bertscore
base_model:
  - yainage90/fashion-object-detection
new_version: yainage90/fashion-object-detection
pipeline_tag: zero-shot-classification
library_name: transformers
tags:
  - Suno
---

# felguk-suno-or-people


This model classifies audio clips into two categories: "suno" (music generated with Suno) or "people" (music made by people). It was trained on a dataset containing examples of both types of music and can be used for applications such as music recommendation, genre classification, and more.


## Model Details

- **Model Name:** felguk-suno-or-people
- **Task:** Audio Classification
- **Input:** Audio clip (WAV format)
- **Output:** Classification label (`suno` or `people`); the exact mapping is stored in the model config (see the snippet after this list)
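
If you want to confirm the label mapping programmatically, it can be read from the checkpoint's configuration without downloading the full weights. A minimal sketch, assuming the repository id `Felguk/Felguk-suno-or-people` used later in this card:

```python
from transformers import AutoConfig

# Fetch only the configuration file from the Hub
config = AutoConfig.from_pretrained("Felguk/Felguk-suno-or-people")
print(config.id2label)  # expected to map class ids to "suno" / "people"
```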

## Usage

This model is not currently available via third-party inference providers or the Hugging Face Inference API. However, you can easily use it locally by following the steps below.

### Step 1: Install Required Libraries

Make sure you have the `transformers`, `datasets`, and `torch` libraries installed:

```bash
pip install transformers datasets torch
```

### Step 2: Load the Model and Feature Extractor

```python
from transformers import AutoModelForAudioClassification, AutoFeatureExtractor
import torch

# Load the model and feature extractor
model = AutoModelForAudioClassification.from_pretrained("Felguk/Felguk-suno-or-people")
feature_extractor = AutoFeatureExtractor.from_pretrained("Felguk/Felguk-suno-or-people")
```
### Step 3: Run Inference on an Audio Clip

```python
from datasets import load_dataset, Audio

# Load an example audio file (replace with your own file)
dataset = load_dataset("common_voice", "en", split="train", streaming=True)
# Resample to the rate the feature extractor expects
dataset = dataset.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
audio_sample = next(iter(dataset))["audio"]

# Preprocess the audio
inputs = feature_extractor(
    audio_sample["array"],
    sampling_rate=audio_sample["sampling_rate"],
    return_tensors="pt",
)

# Perform inference
with torch.no_grad():
    logits = model(**inputs).logits

# Get the predicted label
predicted_class_id = logits.argmax(-1).item()
label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {label}")
```
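
The `common_voice` sample above is only a placeholder. To classify one of your own WAV files, any loader that yields a float array at the feature extractor's sampling rate will do; the sketch below uses `librosa` (an extra dependency, `pip install librosa`) and a hypothetical file name `my_song.wav`:

```python
import librosa
import torch

# Replace "my_song.wav" with the path to your own recording
waveform, sr = librosa.load("my_song.wav", sr=feature_extractor.sampling_rate, mono=True)

inputs = feature_extractor(waveform, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_id])  # expected: "suno" or "people"
```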
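
If the checkpoint's configuration is compatible with the `audio-classification` pipeline task, the high-level `pipeline` API may be an even shorter route. This is a sketch under that assumption (again using the hypothetical `my_song.wav`), not something verified against the repository:

```python
from transformers import pipeline

# Assumes the checkpoint works with the audio-classification pipeline
classifier = pipeline("audio-classification", model="Felguk/Felguk-suno-or-people")

# Accepts a path to a local audio file; returns labels with scores
print(classifier("my_song.wav"))
```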