Polskie Forum Emule - Lossless Music Download, Audio HQ Releases
Tau Analyzer
Saturday, 4 September 2010
Tau Analyzer - Project Description


The aim of the CD authenticity detector project is to verify the authenticity of music recordings when purchasing musical discs: to determine automatically whether a record is an authentic one or has been recovered from lossy-encoded data (for example, MP3). The need for such a program is widely known; the solution presented here is an original one.

The Aim of the Project


It is no secret that the Internet and file-sharing networks are literally overloaded with music. Lossy formats - like MP3 - offer the highest degree of compression, and are easily the most popular means of file-sharing. When played through relatively low-fidelity audio systems, the sound offered by such files is often indistinguishable from CD quality audio - to the average human ear. However, lossy-encoded audio tracks fare poorly when played on Hi-Fi systems, or when enjoyed through a pair of quality stereo headphones.

Since the untrained ear is an unreliable detector, the purchase of traditional audio CDs has given rise to a new concern - the authenticity of the audio data they contain. Here is a sample collection of audio CDs, purchased between 2000 and 2004 in various music stores. As you can see, approximately 25% turned out to be so-called "fake" audio CDs, produced from lossy audio sources - most likely from low-quality MP3s.

One method for auditing such CDs is searching for MPEG artifacts. The simplest and most useful artifact to look for is the frequency cut-off: above roughly 18 kHz the signal is scarcely present, or missing entirely. The cut-off (produced by the psychoacoustic ear model used by the MPEG algorithm) is usually quite sharp at one or more high frequencies (16-20 kHz). Music produced from such files is less dynamic, exhibiting fewer sound distinctions (as with drums and other sharp sounds).
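As an illustration, the cut-off can be estimated from a spectrum in a few lines of NumPy. This is only a sketch - the function name, the -90 dB floor, and the single-frame analysis are choices of this example, not part of any particular tool:

```python
import numpy as np

def estimate_cutoff(samples, rate=44100, floor_db=-90.0):
    """Return the highest frequency whose level exceeds `floor_db`
    relative to the spectral peak. A sharp result well below 20 kHz
    (typically 16-18 kHz) hints at a lossy or low-pass-filtered source."""
    windowed = samples * np.hanning(len(samples))
    power_db = 20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-30)
    power_db -= power_db.max()                      # normalize peak to 0 dB
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    above = freqs[power_db > floor_db]
    return float(above.max()) if above.size else 0.0
```

For a genuine CD rip the estimate usually sits near the Nyquist frequency; for an MP3-sourced disc it clusters around the encoder's low-pass setting.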

Other artifacts of MPEG coding add a specific type of noise, corresponding to MPEG coding errors (numerical noise) and Fourier transformations, and a decreased correlation between channels (known as "sound center fluctuations").
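The channel-correlation artifact can likewise be measured directly. A minimal sketch (plain Pearson correlation over whole channels; a real analyzer would restrict this to the high band and track how it fluctuates over time):

```python
import numpy as np

def channel_correlation(left, right):
    """Pearson correlation between the two stereo channels.
    Joint-stereo lossy coding couples the channels, so unusually high
    or fluctuating correlation is one symptom of a lossy source."""
    l = left - left.mean()
    r = right - right.mean()
    denom = np.linalg.norm(l) * np.linalg.norm(r) + 1e-30
    return float(np.dot(l, r) / denom)
```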

Some studios remove live-recording noise very crudely, by simply cutting off all frequencies above 16-18 kHz with a digital filter. The resulting music suffers from sound artifacts similar to those exhibited by MPEG-coded audio recordings.

MPEG-coding errors can be decreased using a dithering technique (a smart smoothing of the audio). Sounds can be made to seem more distinct through the addition of noise and the smoothing of the signal spectrum (a technique known as noise shaping). Still, despite such techniques, data from the original music is irretrievably lost; one is left with music of inferior quality, after a few processing tricks and the addition of old and new artifacts.
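For reference, plain TPDF dithering before requantization looks roughly like this. This is a textbook sketch, not any mastering tool's actual code; true noise shaping would additionally feed the rounding error back through a filter:

```python
import numpy as np

def quantize_with_dither(samples, bits=16, rng=None):
    """Quantize floats in [-1.0, 1.0] to signed integers, first adding
    triangular (TPDF) dither of about one LSB so the rounding error
    becomes noise uncorrelated with the signal."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = 2 ** (bits - 1) - 1                     # 32767 for 16-bit
    tpdf = (rng.uniform(-0.5, 0.5, len(samples))
            + rng.uniform(-0.5, 0.5, len(samples)))
    codes = np.round(samples * scale + tpdf)
    return np.clip(codes, -scale - 1, scale).astype(np.int32)
```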

The Aucdtect algorithm was developed to detect whether MPEG artifacts are present in a recording. The program could be used, for example, to detect inaccurate numerical processing of a music recording that results in a loss of sound quality.




The Aucdtect (CD authenticity detector) algorithm has been developed to help determine the authenticity of musical CDs. By evaluating the character of the audio data a CD contains, Aucdtect can distinguish between original studio-based recordings and those that have been "reconstructed" from a lossy audio source, such as MP3.


The mathematical algorithm analyzes the Fourier spectra of signal time segments throughout a recording, calculating the characteristic bound frequency and phase on every time segment. The ultimate determination of the recording's source is made through analysis of the spectral statistical properties. The testing protocol employs both modern and vintage recordings of varying quality levels. Aucdtect's algorithm has proved extremely accurate in its analyses and in its conclusions on recordings' origins.


The suggested algorithm for addressing the question of a recording's authenticity is the following:

1. On each time segment of data the logarithm of the spectral power [2], with a small additive constant, is calculated. The small constant ε is used to avoid the logarithm's singularity at zero:

s(f,T) = log( P(f,T) + ε )

where P(f,T) is the spectral power at frequency f on time segment T.

2. The scatter of the logarithm-of-spectrum values is calculated. The maximum frequency at which the scatter sharply increases is taken as the bound frequency: the highest characteristic frequency in the audio signal spectrum not attributable to numerical or statistical noise σ [3]:

f_bound(T) = max { f : || s(f,T,Δf) || > σ }

where || s(f,T,Δf) || is the scatter of the s(f,T) values in the frequency interval [f, f + Δf].
3. Based on the statistical distribution of the bound frequencies, a set of characteristic frequencies is determined: the average bound frequency and the most probable bound frequency.

4. Phase characteristics of the signal spectra are statistically processed to obtain the phase distribution of the high-band part of the signal.

5. Aucdtect's algorithm uses the calculated signal characteristics as input to a specially created neural network with multilevel processing, trained by a genetic-algorithm technique, and uses the network's output to conclude statistically whether or not the artifacts exist (by Bayes-like algorithms).

6. The conclusion for the whole audio disc can be made by the maximum-likelihood method (providing the most accurate solution), using the neural-network outputs for all audio tracks and the statistical distributions obtained in the algorithm's learning mode.
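Steps 1-3 above can be sketched in a few lines of NumPy. Everything here - segment length, the noise level σ, the width of the frequency interval, the median across segments - is an illustrative assumption, not the actual auCDtect implementation:

```python
import numpy as np

def bound_frequency(samples, rate=44100, seg=4096, eps=1e-6, sigma=0.5, band=8):
    """Estimate the bound frequency per time segment (steps 1-3).
    s(f,T) = log(P(f,T) + eps) is computed per segment; the scatter
    (standard deviation) of s over each `band`-bin frequency interval
    is compared with the noise level `sigma`; the highest interval
    exceeding it gives that segment's bound frequency."""
    freqs = np.fft.rfftfreq(seg, d=1.0 / rate)
    window = np.hanning(seg)
    bounds = []
    for start in range(0, len(samples) - seg + 1, seg):
        power = np.abs(np.fft.rfft(samples[start:start + seg] * window)) ** 2
        s = np.log(power + eps)                              # step 1
        scatter = np.array([s[i:i + band].std()              # step 2
                            for i in range(len(s) - band)])
        active = np.nonzero(scatter > sigma)[0]              # step 3
        bounds.append(freqs[active.max()] if active.size else 0.0)
    return float(np.median(bounds))  # crude whole-track summary
```

On a genuine recording the estimate sits near the Nyquist frequency; on material that was low-passed or MPEG-encoded at, say, 16 kHz, it drops toward that cut-off.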

Our testing has demonstrated the stability of the Aucdtect algorithm in recognizing as authentic both modern and older recordings, spanning genres from classical to pop. Test discs deliberately reconstructed from lossy encodings were detected with equally stable accuracy and reliability.

References

1. R. E. Blahut, Fast Algorithms for Digital Signal Processing (1985), 446 pp.
2. G. A. Korn, T. M. Korn, Mathematical Handbook for Scientists and Engineers (1968), 832 pp.

Download from: http://en.true-audio.com/Free_Downloads