How to recognize a music sample using Python and Gracenote?

I recently discovered the GNSDK (Gracenote SDK), which appears to provide examples in several programming languages for recognizing music samples by their fingerprints and then querying the Gracenote audio database for the matching artist and song title.

But the documentation is terrible.

How can I use Python and the GNSDK to recognize a sample audio file? There are no examples or walkthroughs in the bundled documentation.

Edit: I really do want to use the GNSDK with Python. Please do not post unrelated suggestions; you will be wasting your time.

3 answers

I ended up using ACRCloud, which works very well. Everyone who tries to use Gracenote seems to end up switching to ACRCloud for a reason... and now I know why.

Python example:

from acrcloud.recognizer import ACRCloudRecognizer

# Host and keys come from your project in the ACRCloud console
config = {
    'host': 'eu-west-1.api.acrcloud.com',
    'access_key': 'access key',
    'access_secret': 'secret key',
    'debug': True,
    'timeout': 10  # seconds
}

acrcloud = ACRCloudRecognizer(config)

# Recognize from the start of the file (offset 0 seconds);
# returns a JSON string describing the match
print(acrcloud.recognize_by_file('sample of a track.wav', 0))
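recognize_by_file returns a JSON string rather than a parsed object. Here is a minimal sketch of pulling the artist and title out of it, assuming the metadata/music layout ACRCloud responses use; the sample reply below is illustrative, not real output:

```python
import json

# Illustrative ACRCloud-style reply (heavily truncated); a real response
# carries many more fields, such as score, album, and external IDs.
raw = '''{
  "status": {"code": 0, "msg": "Success"},
  "metadata": {"music": [{"title": "Example Song",
                          "artists": [{"name": "Example Artist"}]}]}
}'''

result = json.loads(raw)
if result['status']['code'] == 0:
    track = result['metadata']['music'][0]
    artist = track['artists'][0]['name']
    title = track['title']
    print(f'{artist} - {title}')  # Example Artist - Example Song
else:
    print('No match:', result['status']['msg'])
```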

Key words: beat spectrum analysis and rhythm detection.

This well-known Python library may offer a solution to your question: https://github.com/aubio/aubio

I also recommend that you check this page for other libraries: https://wiki.python.org/moin/PythonInMusic

Finally, this project offers a Python-friendly solution and an easy way to get started: https://github.com/librosa/librosa

An example from the Librosa documentation:

# Beat tracking example
from __future__ import print_function
import librosa

# 1. Get the file path to the included audio example
filename = librosa.util.example_audio_file()

# 2. Load the audio as a waveform `y`
#    Store the sampling rate as `sr`
y, sr = librosa.load(filename)

# 3. Run the default beat tracker
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

print('Estimated tempo: {:.2f} beats per minute'.format(tempo))

# 4. Convert the frame indices of beat events into timestamps
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print('Saving output to beat_times.csv')
librosa.output.times_csv('beat_times.csv', beat_times)
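The beat timestamps already contain enough information to estimate the tempo yourself. A pure-Python sketch of that step, using made-up beat times rather than real librosa output:

```python
from statistics import median

# Hypothetical beat timestamps in seconds, of the kind
# librosa.frames_to_time returns for a roughly 120 BPM track.
beat_times = [0.5, 1.0, 1.52, 2.01, 2.5, 3.02, 3.5]

# Inter-beat intervals; the median is robust to the odd missed beat.
intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
tempo_bpm = 60.0 / median(intervals)

print(f'Estimated tempo: {tempo_bpm:.2f} BPM')
```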

If you still want to use Gracenote itself, note that it also offers a Web API:

https://developer.gracenote.com/web-api#python

And there is a Python client for it:

https://github.com/cweichen/pygn

It is not well documented either, but there is also a Ruby client whose usage you could translate from Ruby to Python.

Ruby:

https://github.com/JDiPierro/tmsapi

require 'tmsapi'

# Create an instance of the API
tms = TMSAPI::API.new :api_key => 'API_KEY_HERE'

# Get all movie showtimes for Austin Texas
movie_showings = tms.movies.theatres.showings({ :zip => "78701" })

# Print out the movie name, theatre name, and date/time of the showing.
movie_showings.each do |movie|
  movie.showtimes.each do |showing|
    puts "#{movie.title} is playing at '#{showing.theatre.name}' at #{showing.date_time}."
  end
end

# 12 Years a Slave is playing at 'Violet Crown Cinema' at 2013-12-23T12:45.
# A Christmas Story is playing at 'Alamo Drafthouse at the Ritz' at 2013-12-23T16:00.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T11:00.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T13:40.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T16:20.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T19:00.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T21:40.

If you are familiar with Ruby or Ruby on Rails, porting this to Python should be straightforward.
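A hedged sketch of what the same showtimes call could look like in Python. The base URL and parameter names are assumptions based on the endpoint the tmsapi gem wraps, so check them against the official TMSAPI reference before relying on this:

```python
from urllib.parse import urlencode

# Assumed TMSAPI base URL; verify against the official documentation.
BASE = 'http://data.tmsapi.com/v1.1'

def showings_url(api_key, zip_code, start_date):
    """Build the movie-showings request URL for a given ZIP code."""
    params = urlencode({'zip': zip_code,
                        'startDate': start_date,
                        'api_key': api_key})
    return f'{BASE}/movies/showings?{params}'

url = showings_url('API_KEY_HERE', '78701', '2013-12-23')

# Fetching and printing would then look roughly like this
# (field names are assumptions mirroring the Ruby example):
# import requests
# for movie in requests.get(url).json():
#     for showing in movie['showtimes']:
#         print(f"{movie['title']} is playing at "
#               f"'{showing['theatre']['name']}' at {showing['dateTime']}.")
```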


If you are not strictly tied to the GNSDK, take a look at this project:

DejaVu

An audio fingerprinting and recognition algorithm implemented in Python; see the explanation here:

Dejavu can memorize audio by listening to it once and fingerprinting it. Then, by playing a song and recording microphone input, Dejavu attempts to match the audio against the fingerprints held in the database, returning the song being played.

https://github.com/worldveil/dejavu
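Dejavu's core trick is combinatorial hashing of spectrogram peak pairs. A toy sketch of just that hashing step, to show why the fingerprints are invariant to where the recording starts; the peak list is made up, and this is an illustration of the idea, not Dejavu's actual code:

```python
import hashlib

def fingerprint(peaks, fan_out=3):
    """Hash pairs of (time, freq) spectrogram peaks into fingerprints.

    Each hash covers (freq1, freq2, time_delta), so the same audio
    yields the same hashes regardless of when the recording begins;
    the stored t1 offset is used later to align candidate matches.
    """
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            key = f'{f1}|{f2}|{t2 - t1}'.encode()
            hashes.append((hashlib.sha1(key).hexdigest()[:20], t1))
    return hashes

# Made-up spectrogram peaks as (time_frame, freq_bin) pairs.
peaks = [(0, 40), (1, 83), (3, 40), (4, 120), (6, 83)]
print(fingerprint(peaks)[:2])
```

Shifting every peak later in time produces the identical hash set, which is what lets a clip recorded mid-song still match the stored fingerprints.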

It seems like a good fit for what you are asking.

