
# From Amplitude or FFT to dB

By : K'Opiyo
Date : November 22 2020, 04:01 AM
A decibel meter measures a signal's mean power, so from your time-signal recording you can calculate the mean signal power per chunk with:
code :
``````chunk_size = 44100
num_chunk  = len(signal) // chunk_size
sn = []
for chunk in range(num_chunk):
    sn.append(np.mean(signal[chunk*chunk_size:(chunk+1)*chunk_size]**2))
``````
``````logsn = 10*np.log10(sn)
``````
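As a self-contained sketch of the same per-chunk calculation (the 440 Hz sine and its 0.5 amplitude here are made-up example inputs, not from the question):

```python
import numpy as np

# Hypothetical input: three seconds of a 440 Hz sine at 44.1 kHz.
fs = 44100
t = np.arange(0, 3, 1 / fs)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)

# One-second chunks, mean power per chunk, then dB.
chunk_size = fs
num_chunk = len(signal) // chunk_size
sn = [np.mean(signal[i*chunk_size:(i+1)*chunk_size]**2) for i in range(num_chunk)]
logsn = 10 * np.log10(sn)
# A sine of amplitude A has mean power A**2/2, so each chunk is
# close to 10*log10(0.125) ≈ -9.03 dB.
```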


## Mean amplitude of a .wav in C#

By : Rob
Date : March 29 2020, 07:55 AM
Here is a snippet that reads in a stereo WAV file and puts the data into two arrays. It's untested because I had to remove some code (converting to mono and calculating a moving average).
code :
``````/// <summary>
///  Read in wav file and put into left and right arrays
/// </summary>
/// <param name="fileName"></param>
static void ReadWav(string fileName)
{
    // Read the whole file into a byte array
    var fa = File.ReadAllBytes(fileName);

    // Locate the start of the "data" chunk
    int startByte = 0;
    {
        var x = 0;
        while (x < fa.Length - 3)
        {
            if (fa[x]     == 'd' && fa[x + 1] == 'a' &&
                fa[x + 2] == 't' && fa[x + 3] == 'a')
            {
                startByte = x + 8;  // skip the chunk id and size fields
                break;
            }
            x++;
        }
    }

    // Split out channels from sample (16-bit stereo, 4 bytes per frame)
    var sLeft = new short[fa.Length / 4];
    var sRight = new short[fa.Length / 4];

    {
        var x = 0;
        var length = fa.Length;
        for (int s = startByte; s + 3 < length; s = s + 4)
        {
            sLeft[x] = (short)(fa[s + 1] * 0x100 + fa[s]);
            sRight[x] = (short)(fa[s + 3] * 0x100 + fa[s + 2]);
            x++;
        }
    }

    // do something with the wav data in sLeft and sRight
}
``````

## How to get the Amplitude of TTS

By : Peter B
Date : March 29 2020, 07:55 AM
If you need to synchronize the audio with visual actions, you'll have to set a TextToSpeech.OnUtteranceCompletedListener (or, since ICS, an UtteranceProgressListener) on the TTS engine. That way you can determine when a specific piece of text has been synthesized.
Alternatively, you can synthesize the text to an audio file using the synthesizeToFile(String text, HashMap params, String filename) method.

## Scaling Amplitude After Windowing FFT to Recover Correct Amplitude

By : Arno Agten
Date : March 29 2020, 07:55 AM
The mean value of the von Hann window is approximately 0.5; for N = 1000 you have:
code :
``````>>> N = 1000; print(sum(np.hanning(N)) / N)
0.4995
``````
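This mean value is why dividing by `np.mean(window)`, in addition to the usual N/2 DFT factor, approximately recovers the original amplitude. A quick sketch (the tone frequency and amplitude here are made up for illustration):

```python
import numpy as np

# Recovering a sine's amplitude after applying a Hann window.
N = 1000
A = 3.0
n = np.arange(N)
x = A * np.sin(2 * np.pi * 50 * n / N)   # 50 exact cycles over N samples

window = np.hanning(N)
spectrum = np.abs(np.fft.rfft(x * window))

# The raw peak is scaled by N/2 (DFT convention) and by the window's
# mean (~0.5); dividing by both approximately recovers A.
recovered = spectrum.max() / (N / 2) / np.mean(window)
```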

## Amplitude and phase spectrum. Shifting the phase leaving amplitude untouched

By : Nirina Rakotobe
Date : March 29 2020, 07:55 AM
Here is your code with some modifications. We apply a Fourier transform, phase-shift the transformed signal, and then perform the inverse Fourier transform to produce the phase-shifted time-domain signal.
Notice that the transforms are done with rfft() and irfft(), and that the phase shift is done by simply multiplying the transformed data by cmath.rect(1.,phase). The phase shift is equivalent to multiplying the complex transformed signal by exp( i * phase ).
code :
``````#!/usr/bin/python

import matplotlib.pyplot as plt
import numpy as np
import cmath

# Generate a model signal
t0 = 1250.0
dt = 0.152
freq = (1./dt)/128

t = np.linspace( t0, t0+1024*dt, 1024, endpoint=False )
signal = np.sin( t*(2*np.pi)*freq )

## Fourier transform of real valued signal
signalFFT = np.fft.rfft(signal)

## Get Power Spectral Density
signalPSD = np.abs(signalFFT) ** 2
signalPSD /= len(signalFFT)**2

## Get Phase
signalPhase = np.angle(signalFFT)

## Phase Shift the signal +90 degrees
newSignalFFT = signalFFT * cmath.rect( 1., np.pi/2 )

## Reverse Fourier transform
newSignal = np.fft.irfft(newSignalFFT)

## Uncomment this line to restore the original baseline
# newSignal += signalFFT[0].real/len(signal)

# And now, the graphics -------------------

## Get frequencies corresponding to signal
fftFreq = np.fft.rfftfreq(len(signal), dt)

plt.figure( figsize=(10, 4) )

ax1 = plt.subplot( 1, 2, 1 )
ax1.plot( t, signal, label='signal')
ax1.plot( t, newSignal, label='new signal')
ax1.set_ylabel( 'Signal' )
ax1.set_xlabel( 'time' )
ax1.legend()

ax2 = plt.subplot( 1, 2, 2 )
ax2.plot( fftFreq, signalPSD )
ax2.set_ylabel( 'Power' )
ax2.set_xlabel( 'frequency' )

ax2b = ax2.twinx()
ax2b.plot( fftFreq, signalPhase, alpha=0.25, color='r' )
ax2b.set_ylabel( 'Phase', color='r' )

plt.tight_layout()

plt.show()
``````

## Amplitude of input wave is massively different to Fourier coefficient amplitude

By : user2572308
Date : March 29 2020, 07:55 AM
As I described in another answer, there is an approximate relationship between the amplitude in the time domain and the frequency domain, which I stated under the usual Discrete Fourier Transform definition. Since R's fft follows the same definition (see the documentation), you may expect a similar approximate 0.5*N scaling of the amplitude when going from the time domain to the frequency domain.
Note that since you clearly do not have a pure sinusoidal signal, the different frequency components may start to interfere and make the relationship more approximate than exact, but it should still be in the right order of magnitude.
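The 0.5*N scaling can be sketched in NumPy, which uses the same unnormalized DFT convention as R's fft (the tone frequency and amplitude below are arbitrary examples):

```python
import numpy as np

# For a pure cosine of amplitude A at an exact DFT bin, the peak
# magnitude of the unnormalized transform is 0.5 * N * A.
N = 1024
A = 2.0
n = np.arange(N)
x = A * np.cos(2 * np.pi * 10 * n / N)   # 10 exact cycles

X = np.fft.fft(x)
peak = np.abs(X[10])
# peak ≈ 0.5 * N * A = 1024.0
```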