This unifies the event creation interface, even if the event
name itself is unused on Windows, allowing the exact same code
to initialize events regardless of the actual platform.
Some ifdefs still remain in the event initialization code, since
some events are only used on Windows.
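As a minimal sketch of the unified shape (the signature and the
bodies here are illustrative, not the exact implementation in
the tree):

    #ifdef _WIN32
    #include <windows.h>
    typedef HANDLE WELS_EVENT;
    #else
    #include <fcntl.h>      // O_CREAT
    #include <semaphore.h>
    typedef sem_t* WELS_EVENT;
    #endif

    int WelsEventOpen (WELS_EVENT* p, const char* event_name) {
    #ifdef _WIN32
      // The name parameter is accepted but simply ignored here.
      *p = CreateEvent (NULL, FALSE, FALSE, NULL);
      return (*p != NULL) ? 0 : 1;
    #else
      // Named semaphores also work on platforms without unnamed
      // ones (e.g. OS X); real code may need a '/' name prefix.
      *p = sem_open (event_name, O_CREAT, 0644, 0);
      return (*p != SEM_FAILED) ? 0 : 1;
    #endif
    }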
There is no point in doing a timed wait here - there's no work
we can do if the wait times out, and sleeping for 1 ms in
between doesn't help; it only adds potential extra latency in
reacting to threads that need more work to do.
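Roughly, the change amounts to the following (helper names as
used in the threadlib; the before-shape is paraphrased):

    // Before: wake up every millisecond just to check again.
    while (WelsEventWaitWithTimeOut (&event, 1) ==
           WELS_THREAD_ERROR_WAIT_TIMEOUT) {
      // a timeout gives us nothing to act on
    }

    // After: block until the event actually is signaled.
    WelsEventWait (&event);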
Typedefing WELS_EVENT as sem_t* makes the typedef behave
similarly to the Windows version (typedefed as HANDLE), unifying
the code that allocates and uses these event objects (getting
rid of most of the need for separate codepaths and ifdefs).
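That is, approximately:

    #ifdef _WIN32
    typedef HANDLE WELS_EVENT;  // opaque handle, created by the OS
    #else
    typedef sem_t* WELS_EVENT;  // a pointer, so it acts like a handle too
    #endif

    // Either way, callers declare, pass and store events
    // uniformly, without distinguishing sem_t storage from
    // HANDLE values:
    WELS_EVENT events[4];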
The caller of the function should not need to know exactly which
implementation of it is being used.
For the variants that don't support detecting the number of
cores, the pNumberOfLogicProcessors parameter can be left
untouched, and the caller will use a higher level API to find it
out.
This simplifies all the calling code, and simplifies adding more
implementations of CPU feature detection.
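A sketch of the resulting contract (the fallback name is a
hypothetical placeholder):

    #include <stdint.h>

    // Returns a bitmask of WELS_CPU_* feature flags;
    // implementations that can detect the core count write it
    // through the pointer, others leave the parameter untouched.
    uint32_t WelsCPUFeatureDetect (int32_t* pNumberOfLogicProcessors);

    // Caller side:
    int32_t iCoreNum = 0;
    uint32_t uiCpuFlags = WelsCPUFeatureDetect (&iCoreNum);
    if (iCoreNum == 0)
      iCoreNum = GetCoreCountFromOS (); // hypothetical higher level API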
The two variants of the threadlib are essentially win32 and
unix - use _WIN32 to check for this consistently, instead of
occasionally using __GNUC__ to enable the unix codepath.
(__GNUC__ is also defined on MinGW, which is still a Windows
platform and should use the _WIN32 code.)
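i.e., the platform check is now uniformly:

    #ifdef _WIN32
    // win32 implementation (also chosen on MinGW, even though
    // MinGW defines __GNUC__ as well)
    #else
    // unix implementation (pthreads/POSIX semaphores)
    #endif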
The iFrameWidth/iFrameHeight fields are already aligned by the
SetActualPicResolution() function. Previously, when iFrameWidth
was aligned directly in ParamBaseTranscode, the aligned value
was used to set iActualWidth/iActualHeight - losing the
original, cropped size.
This makes sure the output bitstream from the test of encoding
res/Static_152_100.yuv actually is cropped as it should be.
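A sketch of the ordering problem (variable names abbreviated,
the alignment macro illustrative):

    // Before: aligning up front left only the aligned size around.
    pParam->iFrameWidth  = WELS_ALIGN (iWidth, 16);  // e.g. 152 -> 160
    pParam->iActualWidth = pParam->iFrameWidth;      // cropped width lost

    // After: keep the cropped size here; SetActualPicResolution()
    // aligns iFrameWidth/iFrameHeight later on its own.
    pParam->iFrameWidth  = iWidth;                   // 152
    pParam->iActualWidth = iWidth;                   // crop preserved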
On processors without HTT, WelsCPUFeatureDetect can't return a
number of cores, but might still return a nonzero set of CPU
feature flags. Previously, a nonzero set of feature flags was
taken to mean that cpuid worked, so the encoder wouldn't use the
higher level API for getting the number of cores, even though
the number of cores had been left at 1.
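Sketch of the corrected interpretation (WELS_CPU_HTT as in the
existing flag set; logic simplified):

    int32_t iCoreNum = 1;
    uint32_t uiCpuFlags = WelsCPUFeatureDetect (&iCoreNum);
    // Nonzero flags only prove that cpuid worked; the core count
    // is only meaningful if HTT detection actually filled it in.
    if (!(uiCpuFlags & WELS_CPU_HTT))
      iCoreNum = 0; // makes the caller use the higher level API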
Previously, the loop filter was unconditionally enabled,
regardless of what the encoder parameter was set to. If
SEncParamBase was used instead, the loop filter was always
disabled.
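In effect (field name as in SEncParamExt; the exact assignment
is a sketch, not the literal diff):

    // Before: hardcoded regardless of configuration.
    sWelsPara.iLoopFilterDisableIdc = 0;  // 0 = loop filter enabled

    // After: propagate the caller's setting through the
    // transcode step.
    sWelsPara.iLoopFilterDisableIdc = pParam->iLoopFilterDisableIdc;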
Previously, these fields kept whatever value was set by
FillDefault. The corresponding fields were set properly within
sSpatialLayers, but the fields within the main struct were left
with the default values.
This doesn't change the hashes in the unit test, since these
fields don't seem to be used in the produced bitstream at all.
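A sketch of the kind of assignment this adds (the bitrate fields
are a hypothetical example, not necessarily the fields in
question):

    // Mirror the configured value into the top-level struct too,
    // instead of leaving it at the FillDefault() value;
    // previously only the per-layer copy in sSpatialLayers[]
    // was updated.
    pParam->iTargetBitrate                    = kiTargetBitrate;
    pParam->sSpatialLayers[0].iSpatialBitrate = kiTargetBitrate;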