Normally, DownsamplePadding skips scaling if the target
size is the same as the source size, assuming that the caller
will use the source data pointer in that case. This is true
for the base layer (the first call to DownsamplePadding in
SingleLayerPreprocess), but when downsampling the other layers,
there is no special handling for the case when the target
is the same size as the source.
Previously, the encoding of such spatial layers would use
completely uninitialized data, encoding complete garbage.
Instead, force DownsamplePadding to make a copy for the dependency
layers when no scaling is required. The base layer still
avoids a copy unless scaling of that layer is required.
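As a rough illustration, the skip/copy decision could look like the
sketch below; the picture type, helper functions and the bForceCopy
flag are hypothetical stand-ins, not the actual DownsamplePadding
signature.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the encoder's picture type and helpers.
struct Picture {
  int32_t iWidth = 0, iHeight = 0;
  std::vector<uint8_t> data;
};

static void CopyPixels (Picture& dst, const Picture& src) { dst = src; }
static void ScalePixels (Picture& dst, const Picture& src,
                         int32_t iWidth, int32_t iHeight) {
  dst.iWidth  = iWidth;
  dst.iHeight = iHeight;
  dst.data.assign (static_cast<size_t> (iWidth) * iHeight, 0); // placeholder scaler
}

// Sketch of the skip/copy decision; bForceCopy would be set when downsampling
// the dependency layers, but not for the base layer.
static const Picture* DownsampleOrCopy (const Picture& src, Picture& dst,
                                        int32_t iTargetWidth, int32_t iTargetHeight,
                                        bool bForceCopy) {
  const bool bSameSize = src.iWidth == iTargetWidth && src.iHeight == iTargetHeight;
  if (bSameSize && !bForceCopy)
    return &src;            // base layer: keep using the source data pointer
  if (bSameSize)
    CopyPixels (dst, src);  // dependency layer: copy so dst isn't left uninitialized
  else
    ScalePixels (dst, src, iTargetWidth, iTargetHeight);
  return &dst;
}
```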
Whether it actually makes sense to have lower spatial layers of the
same size as the original one is a different question though;
currently the code allows it, and
EncodeDecodeTestAPI.SetOptionEncParamExt will try to use it.
If the calling test hasn't set m_iPicResSize, it is set to the
maximum frame size, which takes much longer to initialize than the
current actual frame size.
This change reduces the runtime of EncoderInterfaceTest.SkipFrameCheck
under valgrind from 229 seconds to 8 seconds, and the total runtime
of all the test cases in EncoderInterfaceTest from 405 seconds
to 89 seconds.
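For illustration only, the kind of defaulting involved could look like
the sketch below; the member names, the sizes, and the assumption that
the fix defaults to the actual frame size are mine, not taken from the
real test code.

```cpp
#include <cstdint>

// Hypothetical sketch; EncoderInterfaceTest's real members and constants differ.
struct EncoderTestSketch {
  int32_t m_iWidth = 320, m_iHeight = 192;   // current actual frame size
  int32_t m_iPicResSize = 0;                 // 0: not set by the calling test

  static const int32_t kMaxWidth = 3840, kMaxHeight = 2160;

  void PrepareBuffers () {
    if (m_iPicResSize == 0) {
      // Old default: size (and initialize) a buffer for the maximum frame
      // size, which is what made initialization under valgrind so slow.
      // m_iPicResSize = kMaxWidth * kMaxHeight * 3 / 2;

      // Assumed new default: size the buffer for the current frame instead.
      m_iPicResSize = m_iWidth * m_iHeight * 3 / 2;
    }
  }
};
```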
They are still used slightly differently in the encoder and decoder;
the decoder uses plain functions, while the encoder uses one object
that keeps track of both the number of allocated bytes and the
requested alignment.
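A minimal sketch of what such a tracking allocator object could look
like follows; the class name, the rounding policy and the use of
C++17 std::aligned_alloc are assumptions, not the encoder's actual
implementation.

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Illustrative allocator that tracks allocated bytes and a requested alignment.
class TrackingAllocator {
 public:
  explicit TrackingAllocator (size_t uiAlign = 16)
      : m_uiAlign (uiAlign), m_uiAllocated (0) {}

  void* Mallocz (size_t uiSize) {
    // std::aligned_alloc needs the size to be a multiple of the alignment.
    size_t uiRounded = (uiSize + m_uiAlign - 1) / m_uiAlign * m_uiAlign;
    void* p = std::aligned_alloc (m_uiAlign, uiRounded);
    if (p == nullptr)
      return nullptr;
    std::memset (p, 0, uiRounded);      // zero-initialize the allocation
    m_uiAllocated += uiRounded;
    return p;
  }

  void Free (void* p, size_t uiSize) {
    if (p != nullptr) {
      std::free (p);
      m_uiAllocated -= (uiSize + m_uiAlign - 1) / m_uiAlign * m_uiAlign;
    }
  }

  size_t AllocatedBytes () const { return m_uiAllocated; }

 private:
  size_t m_uiAlign;      // requested alignment for all allocations
  size_t m_uiAllocated;  // running total of allocated bytes
};
```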
When generating a new version of the header that includes the
actual git hash, don't overwrite the file that is tracked by git.
Instead, create a new file and include it only if the build system
indicates that it exists (by setting a define). This allows the
untouched source tree to be built from within an IDE even if make
has not been run.
This also avoids the hassle of a generated file that needs to be
ignored in the git configuration.
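A sketch of the conditional include described above; the define and
the file names are made up for illustration and will not match the
actual build system.

```cpp
// Illustrative names only; the real macro and header names differ.
#ifdef HAVE_GENERATED_VERSION_H
// Generated by make, not tracked by git; contains the actual git hash.
#include "version_gen.h"
#else
// Fallback that is tracked by git, used e.g. when building from an IDE
// without having run make.
#define VERSION_GIT_HASH "unknown"
#endif
```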
The downside is that the generated file isn't used when building
from within an IDE if the header has been updated by running make
earlier (since the IDE configuration doesn't know whether the user
actually has run make). Since users of the IDE might not build via
make on the command line at all (in the same source checkout, at least),
this should not be an issue in practice. With the previous approach,
the version hash (generated by make) could actually be outdated and
misleading when used in an IDE.
This function actually zero-initializes the allocated memory, so
make this clear in the function name.
This makes the function name match the one used for the same
behaviour in the encoder.
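A minimal sketch of the behaviour the new name is meant to convey;
the function name and signature here are illustrative, not the actual
ones in the codebase.

```cpp
#include <cstdlib>
#include <cstring>

// Illustrative zero-initializing allocation helper; the trailing 'z' in the
// name signals that the returned memory is zeroed.
static void* MalloczSketch (size_t uiSize) {
  void* p = std::malloc (uiSize);
  if (p != nullptr)
    std::memset (p, 0, uiSize);
  return p;
}
```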