- Remove unnecessary window member from lapped_transform.
- Add a comment indicating that Blocker does not take ownership of
  the window passed to its constructor.
- Streamline the LappedTransform constructor so members can be const
  (see the sketch after this list). Also use a range-based for loop in
  audio_processing_impl.cc for clarity.
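As a sketch of the ownership comment and the const-member pattern described above (illustrative only; the real Blocker and LappedTransform constructors take more parameters than shown here):

  #include <cstddef>

  // Illustrative only: the real constructors take more parameters; the point
  // is the ownership comment and the const members.
  class Blocker {
   public:
    // Note: Blocker does NOT take ownership of |window|. The caller must
    // keep it alive for the lifetime of this object.
    Blocker(size_t block_size, const float* window)
        : block_size_(block_size), window_(window) {}

   private:
    // Initializing everything in the initializer list lets members be const.
    const size_t block_size_;
    const float* const window_;  // Not owned.
  };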
R=aluebs@webrtc.org, andrew@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/41229004
Cr-Commit-Position: refs/heads/master@{#8708}
git-svn-id: http://webrtc.googlecode.com/svn/trunk@8708 4adac7df-926f-26a2-2b94-8c16560cd09d
Clang version changed 223108:230914
Details: e144d30..6fdb142/tools/clang/scripts/update.sh
Removes the OVERRIDE macro defined in:
* webrtc/base/common.h
* webrtc/typedefs.h
The majority of the source changes were done by running this in src/:
perl -0pi -e "s/virtual\s([^({;]*(\([^({;]*\)[^({;]*))(OVERRIDE|override)/\1override/sg" `find {talk,webrtc} -name "*.h" -o -name "*.cc*" -o -name "*.mm*"`
which converted all:
virtual Foo() OVERRIDE
functions to:
Foo() override
Then I manually edited:
* talk/media/webrtc/fakewebrtccommon.h
* webrtc/test/fake_common.h
Remaining uses of OVERRIDE were fixed by search and replace.
Manual edits were done to fix virtual destructors that were
overriding inherited ones.
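For illustration, one shape of the destructor case that needed hand-editing looks like this (hypothetical classes, not code from the tree):

  // Hypothetical classes, showing the shape of the manual destructor fix;
  // not code from the WebRTC tree.
  struct Base {
    virtual ~Base() {}
  };

  struct Derived : Base {
    // Before: virtual ~Derived() OVERRIDE {}
    // After the macro removal, the overriding destructor is spelled:
    ~Derived() override {}
  };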
Finally a build error related to the pure virtual definitions of
Read, Write and Rewind in common_types.h required a bit of
refactoring in:
* webrtc/common_types.cc
* webrtc/common_types.h
* webrtc/system_wrappers/interface/file_wrapper.h
* webrtc/system_wrappers/source/file_impl.cc
This roll should make it possible for us to finally re-enable deadlock
detection for TSan on the buildbots.
BUG=4106
R=pbos@webrtc.org, tommi@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/41069004
Cr-Commit-Position: refs/heads/master@{#8596}
git-svn-id: http://webrtc.googlecode.com/svn/trunk@8596 4adac7df-926f-26a2-2b94-8c16560cd09d
Now the ChannelBuffer has two separate arrays, one for the full-band data and one for the split bands. The corresponding accessors are added to the ChannelBuffer.
This is done to avoid having to refresh the band pointers in the AudioBuffer. It will also allow a general accessor like data()[band][channel][sample].
All the files using the ChannelBuffer needed to be refactored.
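A rough sketch of the two-array layout and the band/channel/sample access pattern, with made-up names (the real ChannelBuffer differs in its storage details):

  #include <cstddef>
  #include <vector>

  // Illustrative sketch only; not the real webrtc::ChannelBuffer interface.
  // Full-band data and split-band data live in two separate arrays, and the
  // split data can be addressed as split_data()[band][channel][sample].
  class TwoArrayChannelBuffer {
   public:
    TwoArrayChannelBuffer(size_t num_frames, size_t num_channels,
                          size_t num_bands)
        : full_band_(num_channels, std::vector<float>(num_frames)),
          split_bands_(num_bands,
                       std::vector<std::vector<float>>(
                           num_channels,
                           std::vector<float>(num_frames / num_bands))) {}

    // Full-band accessor: one array per channel.
    float* channel(size_t ch) { return full_band_[ch].data(); }

    // Split-band accessors.
    float* band(size_t band, size_t ch) { return split_bands_[band][ch].data(); }
    const std::vector<std::vector<std::vector<float>>>& split_data() const {
      return split_bands_;
    }

   private:
    std::vector<std::vector<float>> full_band_;
    std::vector<std::vector<std::vector<float>>> split_bands_;
  };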
Tested with modules_unittests, common_audio_unittests, audioproc, audioproc_f, voe_cmd_test.
R=andrew@webrtc.org, kwiberg@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/36999004
Cr-Commit-Position: refs/heads/master@{#8318}
git-svn-id: http://webrtc.googlecode.com/svn/trunk@8318 4adac7df-926f-26a2-2b94-8c16560cd09d
Previously, only the mic level calculated by the legacy AGC was logged in AEC debug dumps.
Now we log it for any AGC.
In addition, it is now possible to turn debug recording on and off in the test tool voe_cmd_test.
BUG=4274
TESTED=verified using voe_cmd_test
R=andrew@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/39839004
Cr-Commit-Position: refs/heads/master@{#8274}
git-svn-id: http://webrtc.googlecode.com/svn/trunk@8274 4adac7df-926f-26a2-2b94-8c16560cd09d
Take the 50% quantile (median) of the mask and compare it to a certain threshold to determine whether the desired signal is present. A hold is applied to avoid fast switching between states.
is_signal_present_ has been plotted and looks as expected. The AGC adaptation sounds promising, especially for the cases where the speaker fades in and out of the beam direction.
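A minimal sketch of the decision rule, with made-up threshold and hold values (the actual constants and plumbing in the beamformer are not reproduced here):

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  // Illustrative decision rule; threshold and hold length are made-up values.
  class SignalPresenceDetector {
   public:
    bool Update(std::vector<float> mask) {  // Mask values in [0, 1].
      if (mask.empty())
        return is_signal_present_;
      // 50% quantile (median) of the mask.
      const size_t mid = mask.size() / 2;
      std::nth_element(mask.begin(), mask.begin() + mid, mask.end());
      const float median = mask[mid];
      if (median > kThreshold) {
        hold_counter_ = kHoldFrames;  // Desired signal detected: (re)start hold.
      } else if (hold_counter_ > 0) {
        --hold_counter_;  // Keep reporting "present" while the hold lasts.
      }
      is_signal_present_ = hold_counter_ > 0;
      return is_signal_present_;
    }

   private:
    static constexpr float kThreshold = 0.5f;  // Hypothetical.
    static constexpr int kHoldFrames = 10;     // Hypothetical.
    int hold_counter_ = 0;
    bool is_signal_present_ = false;
  };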
R=andrew@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/28329005
git-svn-id: http://webrtc.googlecode.com/svn/trunk@8078 4adac7df-926f-26a2-2b94-8c16560cd09d
This provides more flexibility if some component in AudioProcessing wants to operate before downmixing.
Now AudioProcessing only tracks the processing rate, not the number of processing channels. That is tracked by the AudioBuffer itself and can be changed at any time to a value smaller than or equal to the input number of channels. For each chunk it is reset to the input number of channels, and by the end it should equal the output number of channels.
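A minimal sketch of that invariant, with illustrative names (not the real AudioBuffer interface):

  #include <cassert>
  #include <cstddef>

  // Minimal sketch of the invariant described above; not the real AudioBuffer.
  class AudioBufferSketch {
   public:
    explicit AudioBufferSketch(size_t input_num_channels)
        : input_num_channels_(input_num_channels),
          num_channels_(input_num_channels) {}

    // At the start of each chunk: reset to the input channel count.
    void ResetForNewChunk() { num_channels_ = input_num_channels_; }

    // Components may shrink the channel count at any time (e.g. after
    // downmixing), but never grow it beyond the input count. By the end of
    // the chunk it should equal the output channel count.
    void set_num_channels(size_t num_channels) {
      assert(num_channels <= input_num_channels_);
      num_channels_ = num_channels;
    }

    size_t num_channels() const { return num_channels_; }

   private:
    const size_t input_num_channels_;
    size_t num_channels_;
  };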
R=andrew@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/28169004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@7879 4adac7df-926f-26a2-2b94-8c16560cd09d
This doesn't change the behavior at all.
The logic behind this is having one class which manages all the splitting filters, because in the future we plan to add a 3-band one for 48 kHz support.
It also breaks the dependency of the AudioBuffer on the filter states of these filters (which are going to be different for the 3-band one). The AudioBuffer is complicated enough and is going to need changes to support 3 bands in the future, so any simplification is a good idea.
On top of that, it eliminates repeated code in the APM, which previously not only iterated over the channels but also decided how many bands to split into. That decision should be managed by the AudioBuffer directly.
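The shape of the design, as an illustrative sketch only (names, state sizes and the band-count rule are placeholders, not the actual SplittingFilter):

  #include <cstddef>
  #include <vector>

  // Shape of the design only: one object owns the per-channel filter states
  // and decides how many bands to split into, so neither the AudioBuffer nor
  // the APM loop has to. Names, state sizes and the band-count rule are
  // placeholders.
  class SplittingFilterSketch {
   public:
    SplittingFilterSketch(size_t num_channels, int sample_rate_hz)
        : num_bands_(sample_rate_hz == 48000 ? 3 : 2),  // 3-band is future work.
          states_(num_channels) {}

    size_t num_bands() const { return num_bands_; }

   private:
    struct State {
      int analysis[6] = {0};   // Per-channel filter state, owned here rather
      int synthesis[6] = {0};  // than by the AudioBuffer.
    };
    const size_t num_bands_;
    std::vector<State> states_;
  };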
BUG=webrtc:3146
R=bjornv@webrtc.org, kwiberg@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/32469004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@7705 4adac7df-926f-26a2-2b94-8c16560cd09d
In practice, we have been doing this since time immemorial, but have
relied on the user to do the downmixing (first voice engine then
Chromium). It's more logical for this burden to fall on AudioProcessing,
however, which can be expected to know that this is a reasonable approach
for AEC. Permitting two render channels results in running two AECs
serially.
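For reference, the downmixing itself is just an average of the two render channels; a simple sketch (not the exact WebRTC routine):

  #include <cstddef>
  #include <cstdint>

  // Stereo-to-mono downmix by averaging, as a sketch of the kind of
  // downmixing described above (not the exact WebRTC routine).
  void DownmixToMono(const int16_t* left, const int16_t* right,
                     size_t samples_per_channel, int16_t* mono) {
    for (size_t i = 0; i < samples_per_channel; ++i) {
      mono[i] =
          static_cast<int16_t>((static_cast<int32_t>(left[i]) + right[i]) / 2);
    }
  }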
Critically, in my recent change to have Chromium adopt the float
interface:
https://codereview.chromium.org/420603004
I removed the downmixing by Chromium, forgetting that we hadn't yet
enabled this feature in AudioProcessing. This corrects that oversight.
The change in paths hit by production users is very minor. As commented,
it required adding downmixing to the int16_t path to satisfy
bit-exactness tests.
For reference, find the ApmTest.Process errors here:
https://paste.googleplex.com/6372007910309888
BUG=webrtc:3853
TESTED=listened to the files output from the Process test, and verified
that they sound as expected: higher echo while the AEC is adapting, but
afterwards very close.
R=aluebs@webrtc.org, bjornv@webrtc.org, kwiberg@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/31459004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@7292 4adac7df-926f-26a2-2b94-8c16560cd09d
This attenuates the noise pumping generated from the NS adapting to the AEC comfort noise.
When there is echo present, the AEC suppresses it and adds comfort noise. This comfort noise is underestimated on purpose to avoid adding more than the original background noise. The NS has to be called after the AEC, because any non-linear processing applied before it can degrade its performance. As a result, the noise estimate can adapt to this comfort noise, making the NS less aggressive and generating noise pumping.
By moving the noise estimation analysis stage of the NS before the AEC, this effect can be avoided. This has been tested manually on recordings where noise pumping was present: two long recordings made by bjornv and kwiberg on our team, plus the five noisiest recordings in the QA set.
On the other hand, one risk of doing this is to not adapt to the comfort noise and therefore suppress too much. As verified in the tested files, this is not a problem in practice.
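A sketch of the resulting capture-side call order, with hypothetical component interfaces (the real APM plumbing differs):

  #include <vector>

  // Hypothetical interfaces, to illustrate the call order only.
  struct AudioFrame { std::vector<float> samples; };

  struct NoiseSuppressor {
    void AnalyzeCaptureAudio(const AudioFrame&) { /* update noise estimate */ }
    void ProcessCaptureAudio(AudioFrame*) { /* apply suppression */ }
  };

  struct EchoCanceller {
    void ProcessCaptureAudio(AudioFrame*) { /* remove echo, add comfort noise */ }
  };

  // The NS analysis stage runs before the AEC, so the noise estimate never
  // adapts to the AEC's comfort noise; the suppression itself still runs
  // after the AEC.
  void ProcessCaptureSketch(NoiseSuppressor* ns, EchoCanceller* aec,
                            AudioFrame* capture) {
    ns->AnalyzeCaptureAudio(*capture);
    aec->ProcessCaptureAudio(capture);
    ns->ProcessCaptureAudio(capture);
  }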
BUG=webrtc:3763
R=andrew@webrtc.org, bjornv@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/24679004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@7289 4adac7df-926f-26a2-2b94-8c16560cd09d
Select "processing" rates based on the input and output sampling rates.
Resample the input streams to those rates, and if necessary to the
output rate.
- Remove deprecated stream format APIs.
- Remove deprecated device sample rate APIs.
- Add a ChannelBuffer class to help manage deinterleaved channels.
- Clean up the splitting filter state.
- Add a unit test which verifies the output against known-working
native format output.
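As a rough sketch of the rate-selection idea described at the top of this change, assuming the 8/16/32 kHz native rates of this era (the exact rules in audio_processing_impl.cc may differ):

  #include <algorithm>

  // Closest native rate not above the given rate; assumes rate_hz >= 8000.
  int ClosestNativeRateNotAbove(int rate_hz) {
    const int kNativeRates[] = {8000, 16000, 32000};
    int best = kNativeRates[0];
    for (int native : kNativeRates) {
      if (native <= rate_hz) best = native;
    }
    return best;
  }

  int ChooseProcessingRate(int input_rate_hz, int output_rate_hz) {
    // Don't process at a higher rate than the input provides or the output
    // needs.
    return ClosestNativeRateNotAbove(std::min(input_rate_hz, output_rate_hz));
  }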
BUG=2894
R=aluebs@webrtc.org, bjornv@webrtc.org, xians@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/9919004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@5959 4adac7df-926f-26a2-2b94-8c16560cd09d
- Add an Initialize() overload to allow specification of format
parameters. This is mainly useful for testing, but could be used in
the cases where a consumer knows the format before the streams arrive.
- Add a reverse_sample_rate_hz_ parameter to prepare for mismatched
capture and render rates. There is no functional change as it is
currently constrained to match the capture rate.
- Fix a bug in the float dump: we need to use add_ rather than set_ (see
  the sketch after this list).
- Add a debug dump test for both int and float interfaces.
- Enable unpacking of float dumps.
- Enable audioproc to read float dumps.
- Move more shared functionality to test_utils.h, and generally tidy up
a bit by consolidating repeated code.
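A sketch of the add_ vs. set_ fix, using a hypothetical message in place of the real debug.proto schema:

  #include <string>
  #include <vector>

  // Hypothetical message, standing in for the real debug.proto schema:
  //
  //   message Stream {
  //     repeated bytes channel = 1;
  //   }
  //
  // For a protobuf `repeated` field, the generated set_channel(index, value)
  // only overwrites an element that already exists; add_channel(value) is
  // what appends a new element. Dumping one buffer per channel therefore
  // needs add_:
  template <typename StreamProto>
  void DumpChannels(StreamProto* msg,
                    const std::vector<std::string>& channels) {
    for (const std::string& data : channels) {
      msg->add_channel(data);  // Append, rather than set_channel(i, data).
    }
  }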
BUG=2894
TESTED=Verified that the output produced by the float debug dump test is
correct. Processed the resulting debug dump file with audioproc and
ensured that we get identical output. (This is crucial, as we need to
be able to exactly reproduce online results offline.)
R=aluebs@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/9489004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@5676 4adac7df-926f-26a2-2b94-8c16560cd09d
This is mainly to support the native audio format in Chrome. Although
this implementation just moves the float->int conversion under the hood,
we will transition AudioProcessing towards supporting this format
throughout.
- Add a test which verifies we get identical output with the float and
int interfaces.
- The float and int wrappers are tasked with conversion to the
AudioBuffer format. A new shared Process/Analyze method does most of
the work.
- Add a new field to the debug.proto to hold deinterleaved data.
- Add helpers to audio_utils.cc, and start using numeric_limits (see the
  conversion sketch after this list).
- Note that there was no performance difference between numeric_limits
and a literal value when measured on Linux using gcc or clang.
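A sketch of float <-> int16_t conversion helpers built on numeric_limits, as mentioned in the list above (names are illustrative, not necessarily those in audio_util):

  #include <cstdint>
  #include <limits>

  // Floats in [-1, 1] are scaled to the full int16_t range via numeric_limits.
  int16_t FloatToS16(float v) {
    if (v > 0.f) {
      return v >= 1.f ? std::numeric_limits<int16_t>::max()
                      : static_cast<int16_t>(
                            v * std::numeric_limits<int16_t>::max() + 0.5f);
    }
    return v <= -1.f ? std::numeric_limits<int16_t>::min()
                     : static_cast<int16_t>(
                           -v * std::numeric_limits<int16_t>::min() - 0.5f);
  }

  float S16ToFloat(int16_t v) {
    constexpr float kMaxInt16Inverse = 1.f / std::numeric_limits<int16_t>::max();
    constexpr float kMinInt16Inverse = 1.f / std::numeric_limits<int16_t>::min();
    return v * (v > 0 ? kMaxInt16Inverse : -kMinInt16Inverse);
  }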
BUG=2894
R=aluebs@webrtc.org, bjornv@webrtc.org, henrikg@webrtc.org, tommi@webrtc.org, turaj@webrtc.org, xians@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/9179004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@5641 4adac7df-926f-26a2-2b94-8c16560cd09d
Instead have ProcessStream transparently handle changes to the stream
audio parameters (sample rate and channels). This removes two locks
per 10 ms ProcessStream call taken by VoiceEngine (four total with the
audio level indicator).
Also, prepare future improvements by having the splitting filter take
a length parameter. This will allow it to work at different sample
rates. Remove the useless splitting_filter wrapper.
TESTED=voe_cmd_test with audio processing enabled and switching between
codecs; unit tests.
R=aluebs@webrtc.org, bjornv@webrtc.org, turaj@webrtc.org, xians@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/3949004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@5346 4adac7df-926f-26a2-2b94-8c16560cd09d