Channel: Uncategorized – Fooling Around

Reference Signal Source: Hello, UWP!


Media Foundation version of Reference Signal Source is UWP friendly.

It can be played using the stock MediaElement control (whether it is a C# or a JS project), it is also Direct3D aware and friendly and as such integrates well with the MediaElement. On top of that, the whole bundle is even WACK friendly and ready for new adventures.


Video Processor MFT pixel format conversion bug


Not the first, not the last. A Direct3D 11 enabled Media Foundation transform fails to transfer sample attributes while doing a conversion.

Why are attributes important in the first place? Because we can associate data with samples/frames and have it passed through, attached to a specific frame, as the conversion goes and as the data transits through the pipeline.

There is no strict rule on whether a transform needs to copy attributes from input to output samples. Attributes are flexible, and in this case so flexible that it is not clear what the transforms actually do. Microsoft attempted to bring some order with the MFPKEY_EXATTRIBUTE_SUPPORTED property. Let us have a look at what the documentation says about the processing model:

The input samples might have attributes that must be copied to the corresponding output samples.

  • If the MFT returns VARIANT_TRUE for the MFPKEY_EXATTRIBUTE_SUPPORTED property, the MFT must copy the attributes.
  • If the MFPKEY_EXATTRIBUTE_SUPPORTED property is either VARIANT_FALSE or is not set, the client must copy the attributes.

The words “client must copy the attributes” should be read like this: the MFT does not give a damn about the attributes, so go copy them yourself the way you like.

Needless to say, Video Processor MFT itself has not the faintest idea about this MFPKEY_EXATTRIBUTE_SUPPORTED attribute in the first place, nor about the behavior it defines.

Microsoft designed Video Processor MFT as a Swiss army knife for basic conversions. The MFT offers zero customization and has multiple code paths inside to perform this or that conversion.

Altogether it means that small bugs inside are endless and the MFT behavior is not consistent across different conversions.

Now to the bug itself: unlike in other scenarios, when the MFT does a pixel format conversion it fails to copy the sample attributes. I feed in a sample with attributes attached and I get output with zero attributes.

In my case the workaround is a wrapper MFT that intercepts IMFTransform::ProcessInput and IMFTransform::ProcessOutput calls and copies the missing attributes.
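The idea of the wrapper can be sketched without the COM plumbing. The simplified model below uses plain C++ containers standing in for IMFSample/IMFAttributes (all names are hypothetical); the real wrapper would call IMFAttributes::CopyAllItems between the input and output samples instead:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Simplified stand-in for IMFSample: just a keyed attribute store
// (in Media Foundation, IMFSample inherits these from IMFAttributes).
struct Sample
{
	std::map<std::string, std::uint64_t> Attributes;
};

// Stand-in for the buggy conversion: produces an output sample and, like the
// pixel format conversion path of Video Processor MFT, loses the attributes.
inline Sample Convert(Sample const& Input)
{
	Sample Output;
	(void) Input; // pixel data would be converted here; attributes are NOT copied
	return Output;
}

// The wrapper pattern: remember attributes at ProcessInput time and copy the
// missing ones onto the sample obtained from ProcessOutput.
inline Sample ConvertWithAttributeFixup(Sample const& Input)
{
	Sample Output = Convert(Input);
	for(auto const& Pair : Input.Attributes)
		Output.Attributes.insert(Pair); // keeps values the MFT did set, adds missing ones
	return Output;
}
```

Using std::map::insert rather than assignment preserves any attributes the transform did manage to produce on its own, and only fills in the lost ones.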

UWP Media Element fullscreen playback bug


New platform, Universal Windows Platform (UWP), offers new Media Foundation bugs.

UWP Media Element embeds a Media Foundation video renderer to present the video frames of played back media. Under certain conditions, the Media Element control fails to present video inline: the presentation time keeps incrementing, indicating playback in progress, but no frames are shown. Nevertheless, once expanded to full screen, the video frames are presented as expected.

Xbox Live outage


So Xbox Live fails to log in certain users (for the third day!) and I happen to be located “near the epicenter”.

No problem description that makes sense, just a 0x80a40010 code, no outage announcement, and support is ridiculous. Development is blocked because application deployment to the device fails with this message:

—————————
Microsoft Visual Studio
—————————
Unable to activate Windows Store app […]. The activation request failed with error ‘The device you are trying to deploy to requires a signed-in user to run the app. Please sign into the device and try again’.

See help for advice on troubleshooting the issue.
—————————
OK Help
—————————

Xbox One Dev Mode indicates the problem by removing the check from the item under Test Account, and the only symptom that something is wrong is the inability to get that check back in (with, again, an undescriptive message).

So what the hell is going on?

This thread, Cant sign in error 0x80a40010, is perhaps one of the best proofs for someone like me to stop trying factory resets and the like, because it is a Microsoft service failure.

Also, the thread references a really smart temporary solution: set up a VPN connection and share it over a wireless network with the Xbox One device, to move it out of the banned geolocations. It is really smart and it does work. One needs a wireless LAN adapter with hosted network support (“netsh wlan show drivers” should print “Hosted network supported: Yes”). This might actually be a problem, but I happen to have an old laptop with the capability. Follow the instructions here: How to share a VPN connection over Wi-Fi on Windows 10. I additionally had to go to the app settings and check “Disable IPv6 connections outside VPN”: before I changed the setting, the VPN driver’s Ethernet LAN adapter icon displayed a red cross overlay and there was no connectivity.

UPDATE 2018-09-12: After days of downtime, sign-in still fails. However, people have already figured out how to fake DNS and “fix” the problem without waiting for Microsoft. The numbers below need to be entered as manual DNS settings under Network, Advanced settings:

For a developer, the DNS solution has the advantage that the box keeps working well in developer mode. That is not the case with the other, VPN based solution (unless you figure out how the VPN software would route internal traffic between clients without sending it outward).

Injecting raw audio data into media pipeline (Russian)


I am reposting a Q+A from elsewhere on injecting raw audio data obtained externally into the Windows API media pipeline.


Q: … what is the simplest way to turn chunks of PCM bytes into a compressed format, for example WMA, using only the Windows SDK? […] As I understand it, without writing my own DirectShow (DS) filter – source or capture? – a byte stream cannot be converted. With Media Foundation (MF) I hoped to find an example on the web, but for some reason there are good examples of recording loopback into a WAV file or of converting WAV to WMA, yet going through an intermediate file is very inefficient, especially since the next task will be streaming this audio over the network in parallel with writing it to a file. Right now I am trying to figure out IMFTransform::ProcessInput, but it takes an IMFSample rather than bytes as input, and I have not yet found concrete examples of getting bytes into an IMFSample. I am simply under the impression that for such a seemingly simple task both DS and MF require creating COM objects and even registering them in the system. Is there really no simpler way?

A: There is no ready-made solution for pushing data into a DS or MF pipeline. Building the necessary glue yourself is quite a feasible task, which is presumably why Microsoft refrained from providing a ready-made solution that would not suit everyone anyway, for various reasons.

An audio stream is never just a stream of bytes: there is also a format and timing, which is why the components that work with bytes usually operate on multiplexed formats (such as .WAV, for example). Since you have exactly chunks of PCM data, this is indeed a task for either a custom DirectShow source filter or a custom Media Foundation media/stream source. Implementing one gives you the necessary glue and, generally speaking, this is the simple way. In particular, it is much simpler than trying to do it through a file.

Neither DS nor MF requires registration in the system. You can register, of course, but it is optional. Once you have implemented the necessary class, you can use it directly while building the topology, without inserting it into the topology through system registration.

In the DS case you need to build your own audio source filter. The hard part of the task is that you will have to rely on a rather old code base (the DirectShow base classes) and that, whatever the case, the DirectShow API is at the end of its life cycle. Nevertheless, older SDKs include the Synth Filter Sample, the Ball Filter Sample for video, and others that show how to build a source filter and, frankly, they are quite compact. The filter you need will be fairly simple once you figure out what is what. On using a filter without registration you can also find information, for example, here: Using a DirectShow filter without registering it, via a private CoCreateInstance.

In the MF case the situation is similar to some extent. You could, of course, build a .WAV format stream in memory and hand it to an MF topology as a byte stream. The API has this capability and flexibility, but I would again recommend a custom media source that generates a PCM data stream from the chunks you feed into it. The advantages of MF are that it is the newer and current API, with wider coverage on modern platforms. You might also be able to write the necessary code in C#, if there is a need for that. The bad news is that, structurally, such a COM class is definitely more complex and you will have to dig a bit deeper into the API. There is little information and few samples, and besides that MF itself hardly offers better and/or clearer options for the stock codecs, for sending data to file and network, or for development tools. The closest SDK sample is probably the MPEG1Source Sample and, it seems to me, it is not easy to grasp right away.

If you have no specific API preference, then for this task and given the situation you described I would suggest DirectShow. However, if beyond the described question you have reasons, constraints or grounds that require Media Foundation, then it may be preferable to develop the audio data processing within that API as well. Either way, building data sources for both APIs is, as I wrote at the beginning, quite a feasible task, and they will work reliably and efficiently.

Where is ID3D11DeviceChild::GetPrivateDataInterface?


ID3D11DeviceChild, similarly to a few other related interfaces, offers methods including SetPrivateData, SetPrivateDataInterface and GetPrivateData.

The SetPrivateDataInterface method extends SetPrivateData by adding COM reference management to application defined data. However, there is no GetPrivateDataInterface… A reasonable assumption is that there is a single collection of keyed application defined data, so it is possible to read interface values using the GetPrivateData method. The behavior should have been documented to avoid confusion.

I would perhaps not have posted this if there were not an additional aspect. If I can read interface values attached by SetPrivateDataInterface using the GetPrivateData method, should I expect the returned values to be IUnknown::AddRef‘ed or not?

ID3D11DeviceChild* p = …
IUnknown* pUnknownA = …
p->SetPrivateDataInterface(__uuidof(MyKey), pUnknownA);
…
IUnknown* pUnknownB;
UINT nDataSize = sizeof pUnknownB;
p->GetPrivateData(__uuidof(MyKey), &nDataSize, &pUnknownB);
// QUES: pUnknownB is AddRef'ed or not?

It is indeed possible to retrieve the interface data. With no documented behavior, I would expect no IUnknown::AddRef to be done. Rationale: after all, I am using the method of the pair which is not supposed to deal with interface pointers. An argument against is that even though taking a raw copy of a pointer is not a big deal, in a multi-threaded environment it might so happen that the returned unreferenced pointer is invalidated: a concurrent thread replaces the collection value, and the internal IUnknown::Release on the pointer results in object disposal.

My guess regarding the COM referencing behavior was incorrect: the API does do IUnknown::AddRef. This behavior is also documented in the DXGI section, for the IDXGIObject::GetPrivateData method:

If the data returned is a pointer to an IUnknown, or one of its derivative classes, previously set by IDXGIObject::SetPrivateDataInterface, you must call ::Release() on the pointer before the pointer is freed to decrement the reference count.

Presumably the same behavior applies to the APIs with no explicitly documented behavior, including the ID3D11Device::GetPrivateData and ID3D12Object::GetPrivateData methods and others.
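The resulting contract can be modeled in a few lines. The simplified sketch below (hypothetical names; a std::map stands in for the private data collection, a plain struct for IUnknown) captures the reference counting behavior the DXGI documentation describes: the store holds its own reference, and every successful get hands the caller an extra reference that the caller must release:

```cpp
#include <map>
#include <string>

// Minimal stand-in for a COM object with IUnknown-style reference counting.
struct RefCounted
{
	unsigned long m_ReferenceCount = 1;
	unsigned long AddRef() { return ++m_ReferenceCount; }
	unsigned long Release() { return --m_ReferenceCount; } // real code would delete at zero
};

// Stand-in for the device child private data store.
class PrivateDataStore
{
public:
	void SetPrivateDataInterface(std::string const& Key, RefCounted* Value)
	{
		Value->AddRef(); // the store keeps its own reference
		m_Map[Key] = Value;
	}
	RefCounted* GetPrivateData(std::string const& Key)
	{
		RefCounted* Value = m_Map.at(Key);
		Value->AddRef(); // documented DXGI behavior: the returned pointer is AddRef'ed
		return Value;
	}
private:
	std::map<std::string, RefCounted*> m_Map;
};
```

The extra AddRef on the get path is exactly what makes the concurrent replacement scenario above safe: the caller's pointer stays valid even if another thread swaps the collection value in the meantime.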

AMD’s three ways to spell “enhancement”


From AMD AMF SDK documentation, AMF_Video_Encode_API.pdf:

The typos are not a big deal for development, even though the symbol with the typo is a shortcut to a string with the same typo:

#define AMF_VIDEO_ENCODER_NUM_TEMPORAL_ENHANCMENT_LAYERS L"NumOfTemporalEnhancmentLayers"

The SDK offers a good frontend to AMD VCE hardware encoders, however there are a few unfortunate problems:

  • documentation is incomplete: covers most important but skips too many details
    • as a small example, the use of the important AMF_VIDEO_ENCODER_EXTRADATA is not covered by the documentation; those needing it are on their own to figure out the answer
  • the SDK is good in its structure, convenient and “transparent” – its debug mode is pretty helpful
  • an alternative method (this post remains in good standing) to consume the hardware encoders is via Windows Media Foundation Transforms (MFT), which are stable and efficient, but not much documented either and lacking flexibility; additionally, it seems they are not in active development and do not directly rely on this SDK

Taking no compromises, our experimental AMF SDK based H.264 encoding MFT eventually came out slightly more efficient than the vendor’s, though not significantly.

AMF SDK in AMD Video Encoder MFTs


I had a wrong assumption that AMD’s H.264 video encoder MFT (and other Media Foundation primitives) were not based on the AMF SDK. There were some references to ATI Technologies in the binary (AMDh264Enc64.dll) and, most importantly, I was unable to change the tracing level of the MFT component at run time. The guess was that if the AMF runtime is shared when the MFT is loaded, then a change of tracing level would affect the MFT, which was not the case (or I did it wrong). Also, the MFT DLL has no direct reference to the AMF runtime amfrt64.dll.

However, an attempt to use the AMD hardware video encoder incorrectly (feeding an unsupported surface format) revealed its AMF base:

2018-10-02 11:14:52.434 7128 [AMFEncoderVCE] Error: …\runtime\src\components\EncoderVCE\EncoderVCEImpl.cpp(3057):Assertion failed:Surface format is not supported
2018-10-02 11:14:52.434 7128 [AMF MFT AMFEngine] Error: …\runtime\src\mft\mft-framework\Engine.cpp(348):AMFEngine(0)::SubmitInput(): SubmitInput() failed, AMF_RESULT(AMF_SURFACE_FORMAT_NOT_SUPPORTED)
2018-10-02 11:14:52.434 7128 [AMFAsyncMFTBase] Error: …\runtime\src\mft\mft-framework\AsyncMFTBase.cpp(1103):AsyncMFTBase(0)::ProcessInput(): SubmitInput() failed, AMF_RESULT(AMF_SURFACE_FORMAT_NOT_SUPPORTED)

Apparently the encoder MFT implementation is built over AMF and VCE. With a static link to AMF runtime perhaps.

Bonus picture (from True Audio Next and Multimedia AMD APIs in game and VR application development) suggests that AMD MFT shares the runtime with other consumers, which seems to be not exactly accurate:


Runtime H.264 encoder setting changes with AMD H.264 hardware MFT


One more AMD MFT related post for now. Some time ago I mentioned that Intel’s implementation of the hardware H.264 video encoder Media Foundation Transform (MFT) does not correctly implement runtime changes of encoding settings. The respective Intel Developer Zone submission has received no follow-up and, presumably, no attention over time. It was a good moment to check how AMD is doing when it comes to adjusting encoding settings on an active session.

Let us recap:

  • Microsoft: software encoder supports the feature as documented;
  • Intel: fails to change settings;
  • Nvidia: settings change is supported in minimal documented extent;
  • AMD: ?

AMD H.264 hardware encoder fails to support the feature that MSDN documentation mentions as required. The respective request fails with an 0x80004001 E_NOTIMPL “Not implemented” error.

MediaFoundationDxgiCapabilities: with AMF SDK H.264 encoder related data


Yet another post on AMD AMF SDK and hopefully a helpful tool reference. I updated one of the capability discovery applications (MediaFoundationDxgiCapabilities) so that it includes a printout of AMFVideoEncoderVCE_AVC related properties, similarly to how they are printed for Nvidia video adapters.

Information includes:

  • runtime version (and its availability in the first place!)
  • maximal resolution, profile and level supported
  • formats with respect to capabilities reported on Direct3D 11 initialized component; specifically the data show which surface formats the encoding component has internal capability to convert on the way to hardware encoder

It looks like this tool has not been described in detail earlier; it also reports other DXGI related information (such as, for example, the order of enumeration of DXGI adapters depending on whether an app runs on the iGPU or dGPU of a hybrid system, and DXGI desktop duplication related information).

This is reported directly from AMF, as opposed to information received from the Media Foundation API (which is also partially included, though). On video encoders reported via Media Foundation, not just H.264 ones, see MediaFoundationVideoEncoderTransforms: Detecting support for hardware H.264 video encoders.

# Display Devices

 * Radeon RX 570 Series
  * Instance: PCI\VEN_1002&DEV_67DF&SUBSYS_E3871DA2&REV_EF\4&2D78AB8F&0&0008
  * DEVPKEY_Device_Manufacturer: Advanced Micro Devices, Inc.
  * DEVPKEY_Device_DriverVersion: 24.20.13017.5001
  * DEVPKEY_Undocumented_LUID: 0.0x0000D1B8

[...]

##### AMD AMF SDK Specific

 * AMF SDK Version: 1.4.9.0 // https://gpuopen.com/gaming-product/advanced-media-framework/
 * AMF Runtime Version: 1.4.9.0

###### AMFVideoEncoderVCE_AVC

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_ENCODER_CAP_MAX_BITRATE: 100,000,000
 * AMF_VIDEO_ENCODER_CAP_NUM_OF_STREAMS: 16
 * AMF_VIDEO_ENCODER_CAP_MAX_PROFILE: AMF_VIDEO_ENCODER_PROFILE_HIGH
 * AMF_VIDEO_ENCODER_CAP_MAX_LEVEL: 52
 * AMF_VIDEO_ENCODER_CAP_BFRAMES: 0
 * AMF_VIDEO_ENCODER_CAP_MIN_REFERENCE_FRAMES: 1
 * AMF_VIDEO_ENCODER_CAP_MAX_REFERENCE_FRAMES: 16
 * AMF_VIDEO_ENCODER_CAP_MAX_TEMPORAL_LAYERS: 1
 * AMF_VIDEO_ENCODER_CAP_FIXED_SLICE_MODE: 0
 * AMF_VIDEO_ENCODER_CAP_NUM_OF_HW_INSTANCES: 1

####### Input

 * Width Range: 64 - 4,096
 * Height Range: 64 - 2,160
 * Vertical Alignment: 32
 * Format Count: 6
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_YUV420P 
 * Format: AMF_SURFACE_YV12 
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Format: AMF_SURFACE_ARGB 
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

####### Output

 * Width Range: 64 - 4,096
 * Height Range: 64 - 2,160
 * Vertical Alignment: 32
 * Format Count: 1
  * Format: AMF_SURFACE_NV12 Native
 * Memory Type Count: 4
  * Memory Type: AMF_MEMORY_DX11 Native
  * Memory Type: AMF_MEMORY_OPENCL 
  * Memory Type: AMF_MEMORY_OPENGL 
  * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

Note that more detailed information can be obtained using amf\public\samples\CPPSamples\CapabilityManager application from the SDK itself, if you build and run it.

Download links

Getting MF_E_TRANSFORM_NEED_MORE_INPUT from Video Processor MFT’s ProcessOutput just to let it take next input


Another example of how Microsoft Media Foundation can be annoying in small things. So we have this Video Processor MFT transform which addresses multiple video conversion tasks:

The video processor MFT is a Microsoft Media Foundation transform (MFT) that performs colorspace conversion, video resizing, deinterlacing, frame rate conversion, rotation, cropping, spatial left and right view unpacking, and mirroring.

It is easy to see that Microsoft does not offer a lot of DSPs, and even fewer of them are GPU friendly. Video Processor MFT is a “swiss army knife” tool: it takes care of video fitting tasks, in an efficient way, in task combinations, and is able to take advantage of GPU processing with a fallback to a software code path, covering what was earlier known from the Color Converter DSP and similar components.

Now the main question is, if you are offering just one thing, is there any chance you can do it right?

First of all, the API is not feature rich; it offers just the basics via the IMFVideoProcessorControl interface. Okay, some functionality might not be available in the fallback software code path, but this is still the only Direct3D 11 aware conversion component on offer, so you could still provide more options for those who want to take advantage of GPU-enabled conversions.

With its internal affiliation to the Direct3D 11 Video Processor API, it might be worth mentioning how exactly the Direct3D API is utilized internally, the limitations, and perhaps some advanced options to customize the conversion: the underlying API is more flexible than the MFT.

The documentation is not just scarce, it is also incomplete and inconsistent. The MFT documentation does not mention the implementation of the IMFVideoProcessorControl2 interface, while the interface itself is described as belonging to the MFT. Although, as I wrote before, this interface is known for giving some trouble.

The MFT is designed to work in Media Foundation pipelines, such as hosted by Media Session and others. However, it does not take a rocket scientist to realize that if you offer just one thing to developers for a broad range of tasks, the API will be used in various scenarios, including, for example, as a standalone conversion API.

They should have mentioned in the documentation that the MFT behavior is significantly different in GPU and CPU modes, for example in the way output samples are produced: CPU mode requires the caller to supply a buffer for the output to be generated into; GPU mode, on the contrary, provides its own output textures with data, from an internally managed pool (this can be changed, but it is the default behavior). This is fine for the Media Session API and the like, but those are also poorly documented, so it is not very helpful overall.

I am finally getting to the reason which inspired me to write this post in the first place: doing input and output with Video Processor MFT. This is such a fundamental task that it has to have a few words on MSDN:

When you have configured the input type and output type for an MFT, you can begin processing data samples. You pass samples to the MFT for processing by using the IMFTransform::ProcessInput method, and then retrieve the processed sample by calling IMFTransform::ProcessOutput. You should set accurate time stamps and durations for all input samples passed. Time stamps are not strictly required but help maintain audio/video synchronization. If you do not have the time stamps for your samples it is better to leave them out than to use uncertain values.

If you use the MFT for conversion and you set the time stamps accurately, it is easy to achieve “one input – one output” behavior. The MFT is additionally synchronous, so there is no need to implement the asynchronous processing model: it is possible to consume the API in a really straightforward way: media types set, input 1, output 1, input 2, output 2, etc., everything within a single thread, linearly. Note that the MFT does not necessarily produce one output sample for every input sample; it is just possible to manage it this way.

Now the linear code snippet looks this way:

However, even in such a simple scenario the MFT finds a way to horse around. Even when it has finished producing output for given input, it still requires an additional IMFTransform::ProcessOutput call, returning MF_E_TRANSFORM_NEED_MORE_INPUT, just to unlock itself for further input. A failure to make this unnecessary call and receive the failure status results in being unable to feed new input in, with MF_E_NOTACCEPTING from IMFTransform::ProcessInput respectively. Even though this sort of matches the documented behavior (for example, in ASF related documentation: Processing Data in the Encoder), where the MFT host is expected to request output until it is no longer available, nothing in the documented contract prevents the MFT from being friendlier on its end. Given the state and the role of this API, it should have been made super friendly to developers, and Microsoft failed to reach a minimally acceptable level of friendliness here.

ATLENSURE_SUCCEEDED(pTransform->ProcessInput(0, pSample, 0));
MFT_OUTPUT_DATA_BUFFER OutputDataBuffer = { };
// …
DWORD nStatus;
ATLENSURE_SUCCEEDED(pTransform->ProcessOutput(0, 1, &OutputDataBuffer, &nStatus));
// …
// NOTE: Kick the transform to unlock its input
const HRESULT nProcessOutputResult = pTransform->ProcessOutput(0, 1, &OutputDataBuffer, &nStatus);
ATLASSERT(nProcessOutputResult == MF_E_TRANSFORM_NEED_MORE_INPUT);

AMD H.264 Video Encoder MFT buggy processing of synchronization-enabled input textures


Even though AMD H.264 Video Encoder Media Foundation Transform (MFT) AKA AMDh264Encoder is, generally, not such a badly done piece of software, it still has a few awkward bugs worth mentioning. This time I am going to show this one: the video encoder transform fails to acquire synchronization on input textures.

The problem comes up when keyed mutex aware textures knock on the input door of the transform. The Media Foundation samples carry textures created with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag, which MSDN describes this way:

[…] You can retrieve a pointer to the IDXGIKeyedMutex interface from the resource by using IUnknown::QueryInterface. The IDXGIKeyedMutex interface implements the IDXGIKeyedMutex::AcquireSync and IDXGIKeyedMutex::ReleaseSync APIs to synchronize access to the surface. The device that creates the surface, and any other device that opens the surface by using OpenSharedResource, must call IDXGIKeyedMutex::AcquireSync before they issue any rendering commands to the surface. When those devices finish rendering, they must call IDXGIKeyedMutex::ReleaseSync. […]

The video encoder MFT is supposed to pay attention to the flag and acquire synchronization before the video frame is taken for encoding. The AMD implementation fails to do so, and it is a bug, a pretty important one, and it has been around for a while.

The following code snippet (see also text at the bottom of the post) demonstrates the incorrect behavior of the transform.

Execution reaches the breakpoint position and produces an H.264 sample even though the input texture fed into the transform is made inaccessible by the AcquireSync call in line 104.

By contrast, Microsoft’s H.264 Video Encoder implementation AKA CLSID_MSH264EncoderMFT implements the correct behavior and triggers a DXGI_ERROR_INVALID_CALL (0x887A0001) failure in line 112.
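The protocol in question can be modeled in a few lines of portable C++ (all names hypothetical; the real interface is IDXGIKeyedMutex): access to the surface is only valid between a successful AcquireSync and the matching ReleaseSync, so a correct encoder must fail when it cannot acquire the key:

```cpp
#include <cstdint>

// Simplified model of IDXGIKeyedMutex: the surface may be touched only
// between a successful AcquireSync(Key) and the matching ReleaseSync(Key).
class KeyedMutex
{
public:
	bool AcquireSync(std::uint64_t Key)
	{
		if(m_Acquired || Key != m_Key)
			return false; // the real API would block or time out here
		m_Acquired = true;
		return true;
	}
	void ReleaseSync(std::uint64_t Key)
	{
		m_Acquired = false;
		m_Key = Key; // the next AcquireSync has to use this key
	}
private:
	bool m_Acquired = false;
	std::uint64_t m_Key = 0;
};

// A correct consumer (like Microsoft's encoder MFT) acquires before reading;
// the AMD bug is, effectively, touching the texture without acquiring it.
inline bool EncodeFrameCorrectly(KeyedMutex& Mutex)
{
	if(!Mutex.AcquireSync(0))
		return false; // cannot access the surface; fail, like DXGI_ERROR_INVALID_CALL
	// ... read the texture and submit it for encoding here ...
	Mutex.ReleaseSync(0);
	return true;
}
```

In the repro, the application holds the acquisition on the texture, so a conforming encoder has no legal way to read the frame and must report a failure instead of producing output.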

In the process of preparing the SSCCE above and writing the blog post I hit another AMD MFT bug, which is perhaps less important but still shows the internal implementation inaccuracy.

An attempt to send the MFT_MESSAGE_NOTIFY_START_OF_STREAM message in line 96 above without input and output media types set triggers a memory access violation:

‘Application.exe’ (Win32): Loaded ‘C:\Windows\System32\DriverStore\FileRepository\c0334550.inf_amd64_cd83b792de8abee9\B334365\atiumd6a.dll’. Symbol loading disabled by Include/Exclude setting.
‘Application.exe’ (Win32): Loaded ‘C:\Windows\System32\DriverStore\FileRepository\c0334550.inf_amd64_cd83b792de8abee9\B334365\atiumd6t.dll’. Symbol loading disabled by Include/Exclude setting.
‘Application.exe’ (Win32): Loaded ‘C:\Windows\System32\DriverStore\FileRepository\c0334550.inf_amd64_cd83b792de8abee9\B334365\amduve64.dll’. Symbol loading disabled by Include/Exclude setting.
Exception thrown at 0x00007FF81FC0E24B (AMDh264Enc64.dll) in Application.exe: 0xC0000005: Access violation reading location 0x0000000000000000.

Better code snippet for the screenshot above:

#pragma region DXGI Adapter
DXGI::CFactory2 pFactory2;
pFactory2.DebugCreate();
DXGI::CAdapter1 pAdapter1;
__C(pFactory2->EnumAdapters1(0, &pAdapter1));
#pragma endregion 
#pragma region D3D11 Device
CComPtr<ID3D11Device> pDevice;
CComPtr<ID3D11DeviceContext> pDeviceContext;
UINT nFlags = 0;
#if defined(_DEBUG)
	nFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif // defined(_DEBUG)
static const D3D_FEATURE_LEVEL g_pFeatureLevels[] = 
{
	D3D_FEATURE_LEVEL_12_1,
	D3D_FEATURE_LEVEL_12_0,
	D3D_FEATURE_LEVEL_11_1,
	D3D_FEATURE_LEVEL_11_0,
	D3D_FEATURE_LEVEL_10_1,
	D3D_FEATURE_LEVEL_10_0,
	D3D_FEATURE_LEVEL_9_3,
	D3D_FEATURE_LEVEL_9_2,
	D3D_FEATURE_LEVEL_9_1,
};
D3D_FEATURE_LEVEL FeatureLevel;
__C(D3D11CreateDevice(pAdapter1, D3D_DRIVER_TYPE_UNKNOWN, NULL, nFlags, g_pFeatureLevels, DIM(g_pFeatureLevels), D3D11_SDK_VERSION, &pDevice, &FeatureLevel, &pDeviceContext));
const CComQIPtr<ID3D11Multithread> pMultithread = pDevice;
__D(pMultithread, E_NOINTERFACE);
pMultithread->SetMultithreadProtected(TRUE);
#pragma endregion 
MF::CStartup Startup;
MF::CDxgiDeviceManager pDeviceManager;
pDeviceManager.Create();
pDeviceManager.Reset(pDevice);
MF::CTransform pTransform;
#if TRUE
	__C(pTransform.m_p.CoCreateInstance(__uuidof(AmdH264Encoder)));
	{
		MF::CAttributes pAttributes;
		__C(pTransform->GetAttributes(&pAttributes));
		_A(pAttributes);
		_A(pAttributes.GetUINT32(MF_TRANSFORM_ASYNC));
		pAttributes[MF_TRANSFORM_ASYNC_UNLOCK] = (UINT32) 1;
	}
	_W(pTransform.ProcessSetD3dManagerMessage(pDeviceManager));
#else
	__C(pTransform.m_p.CoCreateInstance(CLSID_MSH264EncoderMFT));
#endif
static const UINT32 g_nRateNumerator = 50, g_nRateDenominator = 1;
#pragma region Media Type
MF::CMediaType pInputMediaType;
pInputMediaType.Create();
pInputMediaType[MF_MT_MAJOR_TYPE] = MFMediaType_Video;
pInputMediaType[MF_MT_SUBTYPE] = MFVideoFormat_NV12;
pInputMediaType[MF_MT_ALL_SAMPLES_INDEPENDENT] = (UINT32) 1;
pInputMediaType[MF_MT_FRAME_SIZE].SetSize(1280, 720);
pInputMediaType[MF_MT_INTERLACE_MODE] = (UINT32) MFVideoInterlace_Progressive;
pInputMediaType[MF_MT_FRAME_RATE].SetRatio(g_nRateNumerator, g_nRateDenominator);
pInputMediaType[MF_MT_FIXED_SIZE_SAMPLES] = (UINT32) 1;
MF::CMediaType pOutputMediaType;
pOutputMediaType.Create();
pOutputMediaType[MF_MT_MAJOR_TYPE] = MFMediaType_Video;
pOutputMediaType[MF_MT_SUBTYPE] = MFVideoFormat_H264;
pOutputMediaType.CopyFrom(pInputMediaType, MF_MT_FRAME_SIZE);
pOutputMediaType.CopyFrom(pInputMediaType, MF_MT_INTERLACE_MODE);
pOutputMediaType.CopyFrom(pInputMediaType, MF_MT_FRAME_RATE);
pOutputMediaType[MF_MT_AVG_BITRATE] = (UINT32) 1000 * 1000;
pTransform.SetOutputType(pOutputMediaType);
pTransform.SetInputType(pInputMediaType);
#pragma endregion
_W(pTransform.ProcessStartOfStreamNotifyMessage());
CD3D11_TEXTURE2D_DESC TextureDescription(DXGI_FORMAT_NV12, 1280, 720);
TextureDescription.MipLevels = 1;
TextureDescription.MiscFlags |= D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
CComPtr<ID3D11Texture2D> pTexture;
__C(pDevice->CreateTexture2D(&TextureDescription, NULL, &pTexture));
_A(D3D11::IsKeyedMutexAware(pTexture));
DXGI::CKeyedMutexLock KeyedMutexLock(pTexture);
_W(KeyedMutexLock.AcquireSync(0));
for(UINT nIndex = 0; nIndex < 20; nIndex++)
{
	MF::CSample pSample;
	pSample.Create();
	pSample.AddTextureBuffer(pTexture);
	pSample->SetSampleTime(MFllMulDiv(nIndex * 1000 * 10000i64, g_nRateDenominator, g_nRateNumerator, 0));
	pSample->SetSampleDuration(MFllMulDiv(1, g_nRateDenominator, g_nRateNumerator, 0));
	__C(pTransform->ProcessInput(0, pSample, 0));
	_A(pTransform.GetOutputStreamInformation().dwFlags & MFT_OUTPUT_STREAM_PROVIDES_SAMPLES);
	MFT_OUTPUT_DATA_BUFFER OutputDataBuffer = { };
	DWORD nStatus;
	if(SUCCEEDED(pTransform->ProcessOutput(0, 1, &OutputDataBuffer, &nStatus)))
	{
		_A(OutputDataBuffer.pSample);
	reinterpret_cast<MF::CSample&>(OutputDataBuffer.pSample).Trace();
		break;
	}
}
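As an aside, the sample times above are in 100 ns units and MFllMulDiv computes (a · b + d) / c. A portable sketch of the same math (LlMulDiv is a simplified stand-in for the real MFllMulDiv, without its overflow protection):

```cpp
#include <cstdint>

// Sketch of MFllMulDiv-style math: (a * b + d) / c computed in 64 bits.
// The real function guards against intermediate overflow; for the small
// values used here plain 64-bit arithmetic is sufficient.
inline std::int64_t LlMulDiv(std::int64_t a, std::int64_t b, std::int64_t c, std::int64_t d)
{
	return (a * b + d) / c;
}

// Media Foundation sample times are expressed in 100 ns units, so a frame
// index converts to a timestamp as Index * 10,000,000 * Denominator / Numerator.
inline std::int64_t FrameTime100ns(std::uint32_t Index, std::uint32_t Numerator, std::uint32_t Denominator)
{
	return LlMulDiv(static_cast<std::int64_t>(Index) * 1000 * 10000, Denominator, Numerator, 0);
}
```

With the 50/1 frame rate from the snippet, consecutive frames land 200,000 units (20 ms) apart.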

Minefields of Intel Media SDK


When it comes to vendor specific SDKs for hardware assisted video encoding, Intel Media SDK is perhaps the oldest one among the current vendors: Intel, NVIDIA, AMD. And also, surprisingly, the worst one. All three vendors offer their SDKs for really similar capabilities, however the kits are designed and structured differently. While NVIDIA and AMD are close in terms of developer convenience, Intel Media SDK is clearly the outsider here.

Debug output to facilitate debugging and troubleshooting? No. Working with this while having memories of AMF SDK's AMFDebug and AMFTrace eventually makes you cry.

Trying to initialize a session against a non-Intel GPU, which is apparently not going to work? No failure until you hit something weird later in an unrelated call. What is MFX_IMPL_HARDWARE2 in the first place? In exactly which enumeration is this device second, or otherwise how do I understand which device it is when I select it by Intel's ordinal number? The MFX_IMPL_HARDWAREn flags are not defined to be sequential. A documentation typo references the non-existing MFX_IMPL_HARDWARE1 flag. NVIDIA and AMD clearly offer this in a more convenient way.
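The session-initialization gripe above can be illustrated with a short sketch. This assumes the standard dispatcher headers (mfxvideo.h) and linking against libmfx; MFX_IMPL_VIA_D3D11 availability depends on the API level:

```cpp
#include <cstdio>
#include "mfxvideo.h" // Intel Media SDK dispatcher headers, link against libmfx

int main()
{
	mfxVersion Version = { { 0, 1 } }; // request minimum API 1.0 (Minor, Major)
	mfxSession Session = nullptr;
	mfxStatus Status = MFXInit(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11, &Version, &Session);
	if(Status != MFX_ERR_NONE)
	{
		// On a non-Intel GPU this may still "succeed" with software fallback,
		// or fail only later in an unrelated call
		printf("MFXInit failed, status %d\n", Status);
		return 1;
	}
	mfxIMPL Implementation = MFX_IMPL_AUTO;
	MFXQueryIMPL(Session, &Implementation);
	// MFX_IMPL_BASETYPE yields MFX_IMPL_HARDWARE, MFX_IMPL_HARDWARE2 etc.;
	// there is no documented mapping from this ordinal to a specific DXGI adapter
	printf("Implementation 0x%X\n", MFX_IMPL_BASETYPE(Implementation));
	MFXClose(Session);
	return 0;
}
```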

Forgot to attach an allocator? You get a meaningless failure code when trying to initialize the encoding context.

Trying to identify the maximum supported resolution for the encoder? Oopsy.

How do I identify whether the runtime/driver is capable of implicitly handling ARGB32 (RGB4) to NV12 conversion? No way without an actual attempt to initialize the context. In which runtime version was the capability introduced? Not documented.

mfxExtVPPDoNotUse and mfxExtVPPDoUse… Seriously? Not documented well. An attempt to initialize the structures “differently”, in a way that still makes sense, results in a meaningless error code.

The asynchronous MFXVideoENCODE_EncodeFrameAsync requires that the lifetime of the mfxFrameSurface1 argument be extended to the completion of the asynchronous call… Things like these just have to be documented! One would hate to find this out while troubleshooting unstable operation of the API.

The hardware encoders have been out there for years, and decent ones. It is surprising that the SDK is not equally polished and friendly.

Getting started with WASAPI


Reader’s question:

… Audio field is very new to me. But I must use WASAPI for one of my project. Would you mind give me a direction where should I start learning in order to be able to implement WASAPI to my project .

WASAPI basics are straightforward:

  • enumerate devices
  • capture audio
  • play (render) audio back

To start WASAPI development I recommend looking at Microsoft’s SDK samples here. The samples include both capture and playback tasks in simple scenarios.
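The three basics above map onto a small amount of COM code. As a minimal sketch of the first step, endpoint enumeration goes through IMMDeviceEnumerator of the MMDevice API; device friendly names are read from the endpoint property store:

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>
#include <atlbase.h>
#include <cstdio>

// Enumerate active render (playback) endpoints and print their friendly names;
// use eCapture instead of eRender for capture endpoints
int main()
{
	CoInitializeEx(nullptr, COINIT_MULTITHREADED);
	{
		CComPtr<IMMDeviceEnumerator> pDeviceEnumerator;
		if(FAILED(pDeviceEnumerator.CoCreateInstance(__uuidof(MMDeviceEnumerator))))
			return 1;
		CComPtr<IMMDeviceCollection> pDeviceCollection;
		pDeviceEnumerator->EnumAudioEndpoints(eRender, DEVICE_STATE_ACTIVE, &pDeviceCollection);
		UINT nCount = 0;
		pDeviceCollection->GetCount(&nCount);
		for(UINT nIndex = 0; nIndex < nCount; nIndex++)
		{
			CComPtr<IMMDevice> pDevice;
			pDeviceCollection->Item(nIndex, &pDevice);
			CComPtr<IPropertyStore> pPropertyStore;
			pDevice->OpenPropertyStore(STGM_READ, &pPropertyStore);
			PROPVARIANT Value;
			PropVariantInit(&Value);
			pPropertyStore->GetValue(PKEY_Device_FriendlyName, &Value);
			wprintf(L"%u: %s\n", nIndex, Value.pwszVal);
			PropVariantClear(&Value);
		}
	}
	CoUninitialize();
	return 0;
}
```

The next step in a capture or playback scenario would be to activate IAudioClient on the chosen IMMDevice and initialize it with a shared or exclusive mode format.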

A few more samples for WASAPI on UWP:

You will need MSDN documentation for Windows Audio Session API (WASAPI) to get details on API calls.

Related MSDN API links:

Once you have more specific questions I suggest that you search StackOverflow and MSDN Forums and ask on StackOverflow if you still need help.

AV1 video makes its way with Media Foundation


Microsoft released AV1 Video Extension (Beta) via their store:

Play AV1 videos on your Windows 10 device. This extension is an early beta version of the AV1 software decoder that lets you play videos that have been encoded using the AV1 video coding standard developed by the Alliance for Open Media. Since this is an early release, you might see some performance issues when playing AV1 videos. We’re continuing to improve this extension. If you allow apps to be updated automatically, you should get the latest updates and improvements when we release them.

The extension installs the Media Foundation decoder AV1VideoExtension for the MFVideoFormat_AV1 video media subtype, dually interfaced for desktop and store (UWP) applications. The decoder is software-only, without hardware (GPU) acceleration. Let us hope we will see compatible hardware soon, along with vendor-specific implementations with hardware-assisted decoding.
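To check whether the decoder is visible to a desktop application, one can enumerate video decoders by input type. A hedged sketch follows; MFVideoFormat_AV1 ships with recent SDK headers (FOURCC 'AV01'), and the enumeration flags are a reasonable default rather than the only valid choice:

```cpp
#include <windows.h>
#include <mfapi.h>
#include <mftransform.h>
#include <cstdio>
#pragma comment(lib, "mfplat.lib")

// Enumerate decoders that accept MFVideoFormat_AV1 input and print friendly names
int main()
{
	MFStartup(MF_VERSION);
	MFT_REGISTER_TYPE_INFO InputTypeInformation = { MFMediaType_Video, MFVideoFormat_AV1 };
	IMFActivate** ppActivates = nullptr;
	UINT32 nActivateCount = 0;
	if(SUCCEEDED(MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER, MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_SORTANDFILTER,
		&InputTypeInformation, nullptr, &ppActivates, &nActivateCount)))
	{
		for(UINT32 nIndex = 0; nIndex < nActivateCount; nIndex++)
		{
			// The store decoder reports "AV1VideoExtension" here
			WCHAR pszName[256] = {};
			ppActivates[nIndex]->GetString(MFT_FRIENDLY_NAME_Attribute, pszName, _countof(pszName), nullptr);
			wprintf(L"%s\n", pszName);
			ppActivates[nIndex]->Release();
		}
		CoTaskMemFree(ppActivates);
	}
	MFShutdown();
	return 0;
}
```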

## AV1VideoExtension

13 Attributes:

 * MF_TRANSFORM_FLAGS_Attribute: MFT_ENUM_FLAG_SYNCMFT
 * MFT_INPUT_TYPES_Attributes: MFVideoFormat_AV1
 * MFT_OUTPUT_TYPES_Attributes: MFVideoFormat_NV12, MFVideoFormat_IYUV, MFVideoFormat_420O, MFVideoFormat_P010
 * {3C0FBE52-D034-4115-995D-95B356B9855C}: 1 (Type VT_UI4)
 * {7347C815-79FC-4AD9-877D-ACDF5F46685E}: C:\Program Files\WindowsApps\Microsoft.AV1VideoExtension_1.1.13377.0_x64__8wekyb3d8bbwe\build\x64\av1decodermft_store.dll (Type VT_LPWSTR)
 * {957193AD-9029-4835-A2F2-3EC9AE9BB6C8}: Microsoft.AV1VideoExtension_1.1.13377.0_x64__8wekyb3d8bbwe (Type VT_LPWSTR)
 * {9D8B61A8-6BC8-4BFF-B31F-3A31060AFA3D}: Microsoft.AV1VideoExtension_8wekyb3d8bbwe (Type VT_LPWSTR)
 * {BB49BC51-1810-4C3A-A9CF-D59C4E5B9622}: {4AFB1971-030E-47F7-B991-C8E3BEBB9094} (Type VT_CLSID)
 * {DE106D30-42FB-4767-808D-0FCC6811B0B9}: AV1DecMft (Type VT_LPWSTR)
 * {F9542F80-D069-4EFE-B30D-345536F76AAA}: 0 (Type VT_UI4)
 * {F9A1EF38-F61E-42E6-87B3-309438F9AC67}: 1 (Type VT_UI4)

### IMFTransform

 * Stream Limits: Input 1..1, Output 1..1
 * Streams: Input 1, Output 1

#### Attributes

 * MF_SA_D3D11_AWARE: 0 (Type VT_UI4)
 * CODECAPI_AVDecVideoThumbnailGenerationMode: 0 (Type VT_UI4)
 * {592A2A5A-E797-491A-9738-C0007BE28C52}: ??? (Type VT_UNKNOWN, 0x00000280DCE59790)
 * CODECAPI_AVDecNumWorkerThreads: 0 (Type VT_UI4)
 * MF_SA_D3D_AWARE: 0 (Type VT_UI4)
 * MF_TRANSFORM_ASYNC: 0 (Type VT_UI4)

Nasty bugs in Intel Media SDK


It might be an “old” version of the Intel Media SDK runtime, but software is still expected to run fine in older environments as well.

The MFXVideoVPP_Reset API has been available since SDK API 1.0; there is nothing new about it. In a certain scenario I use the API to change the resolution of the processed video: I update the mfxVideoParam structure accordingly, then MFXVideoVPP_Reset, MFXVideoVPP_Query and MFXVideoVPP_QueryIOSurf all succeed – nice.

The system is an i7-3571U laptop, equipped with a 3rd Gen Intel CPU. The MFX version reported is 1.11.

The reset sequence succeeds as expected, however a further MFXVideoVPP_GetVideoParam reports unchanged properties… Hey, come on!

Below is feedback from Intel that I just found on a seemingly similar matter; enjoy:

for some algorithm, MFXVideoVPP_Reset is not supported, and it will return MFX_ERR_NONE but no effect.
you can try replace MFXVideoVPP_Reset with MFXVideoVPP_Close & MFXVideoVPP_Init to make it work.
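The suggested workaround can be sketched as follows. This is a hedged sketch, assuming the standard dispatcher headers; the helper name and the Width/Height check are mine, not Intel's:

```cpp
#include "mfxvideo.h" // Intel Media SDK dispatcher headers

// Hypothetical fallback: if MFXVideoVPP_Reset "succeeds" without taking effect,
// verify the parameters and fall back to a full Close/Init cycle
mfxStatus ResetOrReinitializeVpp(mfxSession Session, mfxVideoParam& VideoParam)
{
	mfxStatus Status = MFXVideoVPP_Reset(Session, &VideoParam);
	if(Status >= MFX_ERR_NONE)
	{
		// Verify the reset actually applied the new output resolution
		mfxVideoParam CurrentVideoParam = {};
		if(MFXVideoVPP_GetVideoParam(Session, &CurrentVideoParam) == MFX_ERR_NONE &&
			CurrentVideoParam.vpp.Out.Width == VideoParam.vpp.Out.Width &&
			CurrentVideoParam.vpp.Out.Height == VideoParam.vpp.Out.Height)
			return Status;
	}
	// Reset is not supported for this configuration: re-initialize from scratch
	MFXVideoVPP_Close(Session);
	return MFXVideoVPP_Init(Session, &VideoParam);
}
```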

Just for the record, the runtime there produces weird D3D11 errors with a pretty straightforward texture scaling operation:

D3D11_VIDEO_PROCESSOR_DEVICE_CAPS:
RateConversionCapsCount: 1
MaxInputStreams: 8
MaxStreamStates: 8
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_BRIGHTNESS: -1000 – 1000, 0, 0.100000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_CONTRAST: 0 – 1000, 100, 0.010000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_HUE: -1800 – 1800, 0, 0.100000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_SATURATION: 0 – 1000, 100, 0.010000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_NOISE_REDUCTION: 0 – 64, 0, 1.000000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_EDGE_ENHANCEMENT: 0 – 64, 44, 1.000000
D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS:
CustomRateCount: 0
FutureFrames: 1
PastFrames: 1
103 is supported as input
103 is supported as output
107 is supported as input
107 is supported as output
106 is supported as input
106 is supported as output
87 is supported as input
87 is supported as output
DESCRIPTION
InputFrameFormat: 0
InputFrameRate: -16843010 -16843010
InputWidth: 352 InputHeight: 288
OutputFrameRate: -16843010 -16843010
OutputWidth: 1920 OutputHeight: 1080
Usage: 0
RATE CONV CAPS
PastFrames: 1
FutureFrames: 1
ProcessorCaps: 31
ITelecineCaps: 511
CustomRateCount: 0
D3D11_VIDEO_PROCESSOR_DEVICE_CAPS:
RateConversionCapsCount: 1
MaxInputStreams: 8
MaxStreamStates: 8
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_BRIGHTNESS: -1000 – 1000, 0, 0.100000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_CONTRAST: 0 – 1000, 100, 0.010000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_HUE: -1800 – 1800, 0, 0.100000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_SATURATION: 0 – 1000, 100, 0.010000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_NOISE_REDUCTION: 0 – 64, 0, 1.000000
D3D11_VIDEO_PROCESSOR_FILTER_CAPS_EDGE_ENHANCEMENT: 0 – 64, 44, 1.000000
D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS:
CustomRateCount: 0
FutureFrames: 1
PastFrames: 1
103 is supported as input
103 is supported as output
107 is supported as input
107 is supported as output
106 is supported as input
106 is supported as output
87 is supported as input
87 is supported as output
DESCRIPTION
InputFrameFormat: 0
InputFrameRate: 50000 1000
InputWidth: 1280 InputHeight: 720
OutputFrameRate: 50000 1000
OutputWidth: 1920 OutputHeight: 1080
Usage: 0
RATE CONV CAPS
PastFrames: 1
FutureFrames: 1
ProcessorCaps: 31
ITelecineCaps: 511
CustomRateCount: 0
D3D11 ERROR: ID3D11DeviceContext::CreateVideoProcessorEnumerator: Invalid input frame rate specified.  The numerator is non-zero, but the denominator is zero! [ STATE_CREATION ERROR #3145795: CREATEVIDEOPROCESSORENUMERATOR_INVALIDINPUTFRAMERATE]

1920×1080 and 1280×720 are mine, that is okay. But where does 352×288 come from? With a frame rate built out of values from uninitialized memory… It is not only debug output, it is also a _com_error E_INVALIDARG exception in d3d11.dll, yet the MFX runtime plays this game internally and produces no failure in the application-requested activity.

A really good thing in Intel Media SDK is the MFX_ERR_GPU_HANG status code. “Oh yeah, it’s our bug, sorry for this”. I appreciate it.

Direct3D 11 Video Processors


ID3D11VideoContext::VideoProcessorSetOutputTargetRect method:

The target rectangle is the area within the destination surface where the output will be drawn. The target rectangle is given in pixel coordinates, relative to the destination surface. If this method is never called, or if the Enable parameter is FALSE, the video processor writes to the entire destination surface.

Okay, let us try it out by deflating the output rectangle, “creating a margin”.

OutputPosition.SetRect(0, 0, OutputTextureDescription.Width, OutputTextureDescription.Height);
OutputPosition.DeflateRect(OutputTextureDescription.Width / 8, OutputTextureDescription.Height / 8);
pVideoContext->VideoProcessorSetOutputTargetRect(pVideoProcessor, TRUE, OutputPosition);
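The target rectangle only takes effect at the actual blit. For context, here is a sketch of the surrounding calls, in the article's own style; pVideoDevice, pVideoContext, pInputView, pOutputView, OutputTextureDescription and OutputPosition are assumed to exist already, and __C is the failure-checking helper used above:

```cpp
// Describe the conversion; note the frame rates must be valid rationals,
// or CreateVideoProcessorEnumerator reports an invalid-frame-rate error
D3D11_VIDEO_PROCESSOR_CONTENT_DESC ContentDescription = { };
ContentDescription.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
ContentDescription.InputFrameRate = { 30, 1 };
ContentDescription.InputWidth = 1280;
ContentDescription.InputHeight = 720;
ContentDescription.OutputFrameRate = { 30, 1 };
ContentDescription.OutputWidth = OutputTextureDescription.Width;
ContentDescription.OutputHeight = OutputTextureDescription.Height;
ContentDescription.Usage = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL;
CComPtr<ID3D11VideoProcessorEnumerator> pVideoProcessorEnumerator;
__C(pVideoDevice->CreateVideoProcessorEnumerator(&ContentDescription, &pVideoProcessorEnumerator));
CComPtr<ID3D11VideoProcessor> pVideoProcessor;
__C(pVideoDevice->CreateVideoProcessor(pVideoProcessorEnumerator, 0, &pVideoProcessor));
// Apply the deflated target rectangle, then blit input view to output view
pVideoContext->VideoProcessorSetOutputTargetRect(pVideoProcessor, TRUE, OutputPosition);
D3D11_VIDEO_PROCESSOR_STREAM Stream = { };
Stream.Enable = TRUE;
Stream.pInputView = pInputView;
__C(pVideoContext->VideoProcessorBlt(pVideoProcessor, pOutputView, 0, 1, &Stream));
```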

Ability to take care of destination rectangle, Radeon RX 570 Series vs. Intel(R) UHD Graphics 630

Why worry, maybe it is just one small bug for today? Oh, no. Forget SetOutputTargetRect; now it is just a plain texture-to-texture operation with the same DXGI format. These two are produced on the same system, just on different GPUs. NVIDIA GeForce GTX 1080 Ti adds a purple tint to the output when it is basically not expected to:

Ability to keep colors right, NVIDIA GeForce GTX 1080 Ti vs. Intel(R) UHD Graphics 630

This one does not even look like a bug compared to those mentioned above. Even though it was an “optimal quality” request, Radeon’s optimal quality is not really impressive:

Text downscaling, Radeon RX 570 Series vs. Intel(R) UHD Graphics 630

Hardware video encoding in Radeon RX Vega M GH Graphics


If you are curious what video encoding capabilities Radeon RX Vega M GH Graphics offers for a Media Foundation application, here are the details. Some introductory information for starters:

The AMD Radeon RX Vega M GH is an integrated GPU in the fastest Intel Kaby-Lake-G SoC. It combines a Kaby-Lake processor, a Vega graphics card and 4 GB HBM2 memory on a single package. The graphics card offers 24 CUs (1536 shaders) and is clocked from 1063 – 1190 MHz.

The quoted source has enough benchmarks related to high resolution gaming; I am, however, interested in the hardware codecs on the chip. The system enumerates two DXGI adapters, so both are present on the chip:

Display Devices

  • Intel(R) HD Graphics 630
    • Instance: PCI\VEN_8086&DEV_591B&SUBSYS_20738086&REV_04\3&11583659&0&10
    • DEVPKEY_Device_Manufacturer: Intel Corporation
    • DEVPKEY_Device_DriverVersion: 24.20.100.6286
  • Radeon RX Vega M GH Graphics
    • Instance: PCI\VEN_1002&DEV_694C&SUBSYS_20738086&REV_C0\4&2BF2E4F6&0&0008
    • DEVPKEY_Device_Manufacturer: Advanced Micro Devices, Inc.
    • DEVPKEY_Device_DriverVersion: 24.20.11026.2001

Then it is interesting that both integrated GPUs have their own video encoders:

Category MFT_CATEGORY_VIDEO_ENCODER

  • Intel® Quick Sync Video H.264 Encoder MFT (MFT_ENUM_FLAG_HARDWARE)
  • Intel® Hardware H265 Encoder MFT (MFT_ENUM_FLAG_HARDWARE)
  • AMDh264Encoder (MFT_ENUM_FLAG_HARDWARE)
  • AMDh265Encoder (MFT_ENUM_FLAG_HARDWARE)

That is, both the Intel and AMD hardware parts come with their own video encoding ASICs, nothing was cut, and together they basically provide an excess of video encoding capability.

Below is a quote of the AMF SDK capabilities of the hardware. The data looks pretty similar to that of another Radeon RX 570 Series system of mine:

## Radeon RX Vega M GH Graphics

 * AMF SDK Version: 1.4.9.0 // https://gpuopen.com/gaming-product/advanced-media-framework/
 * AMF Runtime Version: 1.4.7.0

AMF_Context_DeviceType	AMF_VARIANT_INT64	0

### AMFVideoDecoderUVD_MJPEG

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS: 16

CodecId	AMF_VARIANT_INT64	7
DPBSize	AMF_VARIANT_INT64	1

NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 32 - 7,680
 * Height Range: 32 - 4,320
 * Vertical Alignment: 32
 * Format Count: 0
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_HOST Native
 * Interlace Support: 1

#### Output

 * Width Range: 32 - 7,680
 * Height Range: 32 - 4,320
 * Vertical Alignment: 32
 * Format Count: 4
 * Format: AMF_SURFACE_YUY2 
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_DX11 Native
 * Interlace Support: 1

### AMFVideoDecoderUVD_MPEG4

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS: 16

CodecId	AMF_VARIANT_INT64	2
DPBSize	AMF_VARIANT_INT64	4

NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 32 - 1,920
 * Height Range: 32 - 1,080
 * Vertical Alignment: 32
 * Format Count: 0
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_HOST Native
 * Interlace Support: 1

#### Output

 * Width Range: 32 - 1,920
 * Height Range: 32 - 1,080
 * Vertical Alignment: 32
 * Format Count: 3
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_DX11 Native
 * Interlace Support: 1

### AMFVideoDecoderUVD_H264_AVC

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS: 16

CodecId	AMF_VARIANT_INT64	5
DPBSize	AMF_VARIANT_INT64	16

NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 32 - 4,096
 * Height Range: 32 - 4,080
 * Vertical Alignment: 32
 * Format Count: 0
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_HOST Native
 * Interlace Support: 1

#### Output

 * Width Range: 32 - 4,096
 * Height Range: 32 - 4,080
 * Vertical Alignment: 32
 * Format Count: 3
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_DX11 Native
 * Interlace Support: 1

### AMFVideoDecoderUVD_MPEG2

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS: 16

CodecId	AMF_VARIANT_INT64	1
DPBSize	AMF_VARIANT_INT64	4

NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 32 - 1,920
 * Height Range: 32 - 1,080
 * Vertical Alignment: 32
 * Format Count: 0
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_HOST Native
 * Interlace Support: 1

#### Output

 * Width Range: 32 - 1,920
 * Height Range: 32 - 1,080
 * Vertical Alignment: 32
 * Format Count: 3
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_DX11 Native
 * Interlace Support: 1

### AMFVideoDecoderHW_H265_HEVC

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS: 16

CodecId	AMF_VARIANT_INT64	1002
DPBSize	AMF_VARIANT_INT64	16

NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 32 - 4,096
 * Height Range: 32 - 4,096
 * Vertical Alignment: 32
 * Format Count: 0
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_HOST Native
 * Interlace Support: 1

#### Output

 * Width Range: 32 - 4,096
 * Height Range: 32 - 4,096
 * Vertical Alignment: 32
 * Format Count: 3
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Memory Type Count: 1
 * Memory Type: AMF_MEMORY_DX11 Native
 * Interlace Support: 1

### AMFVideoDecoderHW_H265_MAIN10

 * Acceleration Type: AMF_ACCEL_NOT_SUPPORTED
 * AMF_VIDEO_DECODER_CAP_NUM_OF_STREAMS: 16

CodecId	AMF_VARIANT_INT64	1005
DPBSize	AMF_VARIANT_INT64	16

NumOfStreams	AMF_VARIANT_INT64	16

### AMFVideoEncoderVCE_AVC

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_ENCODER_CAP_MAX_BITRATE: 100,000,000
 * AMF_VIDEO_ENCODER_CAP_NUM_OF_STREAMS: 16
 * AMF_VIDEO_ENCODER_CAP_MAX_PROFILE: AMF_VIDEO_ENCODER_PROFILE_HIGH
 * AMF_VIDEO_ENCODER_CAP_MAX_LEVEL: 52
 * AMF_VIDEO_ENCODER_CAP_BFRAMES: 0
 * AMF_VIDEO_ENCODER_CAP_MIN_REFERENCE_FRAMES: 1
 * AMF_VIDEO_ENCODER_CAP_MAX_REFERENCE_FRAMES: 16
 * AMF_VIDEO_ENCODER_CAP_MAX_TEMPORAL_LAYERS: 1
 * AMF_VIDEO_ENCODER_CAP_FIXED_SLICE_MODE: 0
 * AMF_VIDEO_ENCODER_CAP_NUM_OF_HW_INSTANCES: 1

AspectRatio	AMF_VARIANT_RATIO	num 1	den 1
BPicturesDeltaQP	AMF_VARIANT_INT64	0
BPicturesPattern	AMF_VARIANT_INT64	0
BReferenceEnable	AMF_VARIANT_BOOL	0
CABACEnable	AMF_VARIANT_INT64	0
CodecId	AMF_VARIANT_INT64	5
ConstraintSetFlags	AMF_VARIANT_INT64	0
DeBlockingFilter	AMF_VARIANT_BOOL	1
EnableGOPAlignment	AMF_VARIANT_BOOL	1
EnableVBAQ	AMF_VARIANT_BOOL	0
EncoderMaxInstances	AMF_VARIANT_INT64	1
EncoderOutputCallback	AMF_VARIANT_EMPTY
EncoderOutputCallbackSupport	AMF_VARIANT_BOOL	1
EnforceHRD	AMF_VARIANT_BOOL	0
EngineType	AMF_VARIANT_INT64	0
ExtraData	AMF_VARIANT_EMPTY
FillerDataEnable	AMF_VARIANT_BOOL	0
FrameRate	AMF_VARIANT_RATE	num 30	den 1
FrameSize	AMF_VARIANT_SIZE	width 1920	height 1080
FullRangeColor	AMF_VARIANT_BOOL	0
GOPSize	AMF_VARIANT_INT64	60
HalfPixel	AMF_VARIANT_BOOL	1
HeaderInsertionSpacing	AMF_VARIANT_INT64	0
IDRPeriod	AMF_VARIANT_INT64	30
InitialVBVBufferFullness	AMF_VARIANT_INT64	64
InstanceID	AMF_VARIANT_INT64	-1
IntraRefreshMBsNumberPerSlot	AMF_VARIANT_INT64	0
IntraRefreshMode	AMF_VARIANT_INT64	0
IntraRefreshNumOfStripes	AMF_VARIANT_INT64	2147483647
IsUVE	AMF_VARIANT_BOOL	0
LowLatencyInternal	AMF_VARIANT_BOOL	0
MGSKeyPicturePeriod	AMF_VARIANT_INT64	0
MGSVector0	AMF_VARIANT_INT64	0
MGSVector1	AMF_VARIANT_INT64	0
MGSVector2	AMF_VARIANT_INT64	0
MGSVector3	AMF_VARIANT_INT64	0
MGSVectorMode	AMF_VARIANT_BOOL	0
MaxAUSize	AMF_VARIANT_INT64	0
MaxDecFrameBuffering	AMF_VARIANT_INT64	-1
MaxMBPerSec	AMF_VARIANT_INT64	581441
MaxNumRefFrames	AMF_VARIANT_INT64	4
MaxOfLTRFrames	AMF_VARIANT_INT64	0
MaxQP	AMF_VARIANT_INT64	51
MaxSliceSize	AMF_VARIANT_INT64	2147483647
MinQP	AMF_VARIANT_INT64	0
MultiInstanceCurrentQueue	AMF_VARIANT_INT64	0
MultiInstanceMode	AMF_VARIANT_BOOL	0
NumOfQualityLayers	AMF_VARIANT_INT64	0
NumOfTemporalEnhancmentLayers	AMF_VARIANT_INT64	0
PeakBitrate	AMF_VARIANT_INT64	30000000
Profile	AMF_VARIANT_INT64	77
ProfileLevel	AMF_VARIANT_INT64	42
QPB	AMF_VARIANT_INT64	22
QPI	AMF_VARIANT_INT64	22
QPP	AMF_VARIANT_INT64	22
QualityEnhancementMode	AMF_VARIANT_INT64	0
QualityPreset	AMF_VARIANT_INT64	0
QuarterPixel	AMF_VARIANT_BOOL	1
RateControlMethod	AMF_VARIANT_INT64	2
RateControlPreanalysisEnable	AMF_VARIANT_INT64	0
RateControlSkipFrameEnable	AMF_VARIANT_BOOL	0
ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	0
ScanType	AMF_VARIANT_INT64	0
SliceControlMode	AMF_VARIANT_INT64	0
SliceControlSize	AMF_VARIANT_INT64	0
SliceMode	AMF_VARIANT_INT64	1
SlicesPerFrame	AMF_VARIANT_INT64	1
TL0.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL0.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL0.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL0.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL0.QL0.GOPSize	AMF_VARIANT_INT64	60
TL0.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL0.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL0.QL0.MaxQP	AMF_VARIANT_INT64	51
TL0.QL0.MinQP	AMF_VARIANT_INT64	0
TL0.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL0.QL0.QPB	AMF_VARIANT_INT64	22
TL0.QL0.QPI	AMF_VARIANT_INT64	22
TL0.QL0.QPP	AMF_VARIANT_INT64	22
TL0.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL0.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL0.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL0.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL0.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TL1.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL1.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL1.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL1.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL1.QL0.GOPSize	AMF_VARIANT_INT64	60
TL1.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL1.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL1.QL0.MaxQP	AMF_VARIANT_INT64	51
TL1.QL0.MinQP	AMF_VARIANT_INT64	0
TL1.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL1.QL0.QPB	AMF_VARIANT_INT64	22
TL1.QL0.QPI	AMF_VARIANT_INT64	22
TL1.QL0.QPP	AMF_VARIANT_INT64	22
TL1.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL1.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL1.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL1.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL1.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TL2.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL2.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL2.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL2.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL2.QL0.GOPSize	AMF_VARIANT_INT64	60
TL2.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL2.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL2.QL0.MaxQP	AMF_VARIANT_INT64	51
TL2.QL0.MinQP	AMF_VARIANT_INT64	0
TL2.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL2.QL0.QPB	AMF_VARIANT_INT64	22
TL2.QL0.QPI	AMF_VARIANT_INT64	22
TL2.QL0.QPP	AMF_VARIANT_INT64	22
TL2.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL2.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL2.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL2.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL2.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TL3.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL3.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL3.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL3.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL3.QL0.GOPSize	AMF_VARIANT_INT64	60
TL3.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL3.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL3.QL0.MaxQP	AMF_VARIANT_INT64	51
TL3.QL0.MinQP	AMF_VARIANT_INT64	0
TL3.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL3.QL0.QPB	AMF_VARIANT_INT64	22
TL3.QL0.QPI	AMF_VARIANT_INT64	22
TL3.QL0.QPP	AMF_VARIANT_INT64	22
TL3.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL3.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL3.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL3.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL3.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TargetBitrate	AMF_VARIANT_INT64	20000000
UniqueInstance	AMF_VARIANT_INT64	0
Usage	AMF_VARIANT_INT64	0
VBVBufferSize	AMF_VARIANT_INT64	20000000
WaitForTask	AMF_VARIANT_BOOL	0

BFrames	AMF_VARIANT_BOOL	0
FixedSliceMode	AMF_VARIANT_BOOL	0
MaxBitrate	AMF_VARIANT_INT64	100000000
MaxLevel	AMF_VARIANT_INT64	52
MaxProfile	AMF_VARIANT_INT64	100
MaxReferenceFrames	AMF_VARIANT_INT64	16
MaxTemporalLayers	AMF_VARIANT_INT64	1
MinReferenceFrames	AMF_VARIANT_INT64	1
NumOfHwInstances	AMF_VARIANT_INT64	1
NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 64 - 4,096
 * Height Range: 64 - 2,160
 * Vertical Alignment: 32
 * Format Count: 6
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_YUV420P 
 * Format: AMF_SURFACE_YV12 
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Format: AMF_SURFACE_ARGB 
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

#### Output

 * Width Range: 64 - 4,096
 * Height Range: 64 - 2,160
 * Vertical Alignment: 32
 * Format Count: 1
 * Format: AMF_SURFACE_NV12 Native
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

### AMFVideoEncoderVCE_SVC

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_ENCODER_CAP_MAX_BITRATE: 100,000,000
 * AMF_VIDEO_ENCODER_CAP_NUM_OF_STREAMS: 16
 * AMF_VIDEO_ENCODER_CAP_MAX_PROFILE: AMF_VIDEO_ENCODER_PROFILE_HIGH
 * AMF_VIDEO_ENCODER_CAP_MAX_LEVEL: 52
 * AMF_VIDEO_ENCODER_CAP_BFRAMES: 0
 * AMF_VIDEO_ENCODER_CAP_MIN_REFERENCE_FRAMES: 1
 * AMF_VIDEO_ENCODER_CAP_MAX_REFERENCE_FRAMES: 16
 * AMF_VIDEO_ENCODER_CAP_MAX_TEMPORAL_LAYERS: 3
 * AMF_VIDEO_ENCODER_CAP_FIXED_SLICE_MODE: 0
 * AMF_VIDEO_ENCODER_CAP_NUM_OF_HW_INSTANCES: 1

AspectRatio	AMF_VARIANT_RATIO	num 1	den 1
BPicturesDeltaQP	AMF_VARIANT_INT64	0
BPicturesPattern	AMF_VARIANT_INT64	0
BReferenceEnable	AMF_VARIANT_BOOL	0
CABACEnable	AMF_VARIANT_INT64	0
CodecId	AMF_VARIANT_INT64	5
ConstraintSetFlags	AMF_VARIANT_INT64	0
DeBlockingFilter	AMF_VARIANT_BOOL	1
EnableGOPAlignment	AMF_VARIANT_BOOL	1
EnableVBAQ	AMF_VARIANT_BOOL	0
EncoderMaxInstances	AMF_VARIANT_INT64	1
EncoderOutputCallback	AMF_VARIANT_EMPTY
EncoderOutputCallbackSupport	AMF_VARIANT_BOOL	1
EnforceHRD	AMF_VARIANT_BOOL	0
EngineType	AMF_VARIANT_INT64	0
ExtraData	AMF_VARIANT_EMPTY
FillerDataEnable	AMF_VARIANT_BOOL	0
FrameRate	AMF_VARIANT_RATE	num 30	den 1
FrameSize	AMF_VARIANT_SIZE	width 1920	height 1080
FullRangeColor	AMF_VARIANT_BOOL	0
GOPSize	AMF_VARIANT_INT64	60
HalfPixel	AMF_VARIANT_BOOL	1
HeaderInsertionSpacing	AMF_VARIANT_INT64	0
IDRPeriod	AMF_VARIANT_INT64	30
InitialVBVBufferFullness	AMF_VARIANT_INT64	64
InstanceID	AMF_VARIANT_INT64	-1
IntraRefreshMBsNumberPerSlot	AMF_VARIANT_INT64	0
IntraRefreshMode	AMF_VARIANT_INT64	0
IntraRefreshNumOfStripes	AMF_VARIANT_INT64	2147483647
IsUVE	AMF_VARIANT_BOOL	0
LowLatencyInternal	AMF_VARIANT_BOOL	0
MGSKeyPicturePeriod	AMF_VARIANT_INT64	0
MGSVector0	AMF_VARIANT_INT64	0
MGSVector1	AMF_VARIANT_INT64	0
MGSVector2	AMF_VARIANT_INT64	0
MGSVector3	AMF_VARIANT_INT64	0
MGSVectorMode	AMF_VARIANT_BOOL	0
MaxAUSize	AMF_VARIANT_INT64	0
MaxDecFrameBuffering	AMF_VARIANT_INT64	-1
MaxMBPerSec	AMF_VARIANT_INT64	0
MaxNumRefFrames	AMF_VARIANT_INT64	4
MaxOfLTRFrames	AMF_VARIANT_INT64	0
MaxQP	AMF_VARIANT_INT64	51
MaxSliceSize	AMF_VARIANT_INT64	2147483647
MinQP	AMF_VARIANT_INT64	0
MultiInstanceCurrentQueue	AMF_VARIANT_INT64	0
MultiInstanceMode	AMF_VARIANT_BOOL	0
NumOfQualityLayers	AMF_VARIANT_INT64	0
NumOfTemporalEnhancmentLayers	AMF_VARIANT_INT64	0
PeakBitrate	AMF_VARIANT_INT64	30000000
Profile	AMF_VARIANT_INT64	77
ProfileLevel	AMF_VARIANT_INT64	42
QPB	AMF_VARIANT_INT64	22
QPI	AMF_VARIANT_INT64	22
QPP	AMF_VARIANT_INT64	22
QualityEnhancementMode	AMF_VARIANT_INT64	0
QualityPreset	AMF_VARIANT_INT64	0
QuarterPixel	AMF_VARIANT_BOOL	1
RateControlMethod	AMF_VARIANT_INT64	2
RateControlPreanalysisEnable	AMF_VARIANT_INT64	0
RateControlSkipFrameEnable	AMF_VARIANT_BOOL	0
ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	0
ScanType	AMF_VARIANT_INT64	0
SliceControlMode	AMF_VARIANT_INT64	0
SliceControlSize	AMF_VARIANT_INT64	0
SliceMode	AMF_VARIANT_INT64	1
SlicesPerFrame	AMF_VARIANT_INT64	1
TL0.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL0.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL0.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL0.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL0.QL0.GOPSize	AMF_VARIANT_INT64	60
TL0.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL0.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL0.QL0.MaxQP	AMF_VARIANT_INT64	51
TL0.QL0.MinQP	AMF_VARIANT_INT64	0
TL0.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL0.QL0.QPB	AMF_VARIANT_INT64	22
TL0.QL0.QPI	AMF_VARIANT_INT64	22
TL0.QL0.QPP	AMF_VARIANT_INT64	22
TL0.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL0.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL0.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL0.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL0.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TL1.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL1.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL1.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL1.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL1.QL0.GOPSize	AMF_VARIANT_INT64	60
TL1.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL1.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL1.QL0.MaxQP	AMF_VARIANT_INT64	51
TL1.QL0.MinQP	AMF_VARIANT_INT64	0
TL1.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL1.QL0.QPB	AMF_VARIANT_INT64	22
TL1.QL0.QPI	AMF_VARIANT_INT64	22
TL1.QL0.QPP	AMF_VARIANT_INT64	22
TL1.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL1.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL1.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL1.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL1.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TL2.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL2.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL2.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL2.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL2.QL0.GOPSize	AMF_VARIANT_INT64	60
TL2.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL2.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL2.QL0.MaxQP	AMF_VARIANT_INT64	51
TL2.QL0.MinQP	AMF_VARIANT_INT64	0
TL2.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL2.QL0.QPB	AMF_VARIANT_INT64	22
TL2.QL0.QPI	AMF_VARIANT_INT64	22
TL2.QL0.QPP	AMF_VARIANT_INT64	22
TL2.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL2.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL2.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL2.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL2.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TL3.QL0.BPicturesDeltaQP	AMF_VARIANT_INT64	4
TL3.QL0.EnforceHRD	AMF_VARIANT_BOOL	0
TL3.QL0.FillerDataEnable	AMF_VARIANT_BOOL	0
TL3.QL0.FrameRate	AMF_VARIANT_RATE	num 30	den 1
TL3.QL0.GOPSize	AMF_VARIANT_INT64	60
TL3.QL0.InitialVBVBufferFullness	AMF_VARIANT_INT64	64
TL3.QL0.MaxAUSize	AMF_VARIANT_INT64	0
TL3.QL0.MaxQP	AMF_VARIANT_INT64	51
TL3.QL0.MinQP	AMF_VARIANT_INT64	0
TL3.QL0.PeakBitrate	AMF_VARIANT_INT64	30000000
TL3.QL0.QPB	AMF_VARIANT_INT64	22
TL3.QL0.QPI	AMF_VARIANT_INT64	22
TL3.QL0.QPP	AMF_VARIANT_INT64	22
TL3.QL0.RateControlMethod	AMF_VARIANT_INT64	2
TL3.QL0.RateControlSkipFrameEnable	AMF_VARIANT_BOOL	1
TL3.QL0.ReferenceBPicturesDeltaQP	AMF_VARIANT_INT64	2
TL3.QL0.TargetBitrate	AMF_VARIANT_INT64	20000000
TL3.QL0.VBVBufferSize	AMF_VARIANT_INT64	2000000
TargetBitrate	AMF_VARIANT_INT64	20000000
UniqueInstance	AMF_VARIANT_INT64	0
Usage	AMF_VARIANT_INT64	0
VBVBufferSize	AMF_VARIANT_INT64	20000000
WaitForTask	AMF_VARIANT_BOOL	0

BFrames	AMF_VARIANT_BOOL	0
FixedSliceMode	AMF_VARIANT_BOOL	0
MaxBitrate	AMF_VARIANT_INT64	100000000
MaxLevel	AMF_VARIANT_INT64	52
MaxProfile	AMF_VARIANT_INT64	100
MaxReferenceFrames	AMF_VARIANT_INT64	16
MaxTemporalLayers	AMF_VARIANT_INT64	3
MinReferenceFrames	AMF_VARIANT_INT64	1
NumOfHwInstances	AMF_VARIANT_INT64	1
NumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 64 - 4,096
 * Height Range: 64 - 2,160
 * Vertical Alignment: 32
 * Format Count: 6
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_YUV420P 
 * Format: AMF_SURFACE_YV12 
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Format: AMF_SURFACE_ARGB 
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

#### Output

 * Width Range: 64 - 4,096
 * Height Range: 64 - 2,160
 * Vertical Alignment: 32
 * Format Count: 1
 * Format: AMF_SURFACE_NV12 Native
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

### AMFVideoEncoder_HEVC

 * Acceleration Type: AMF_ACCEL_HARDWARE
 * AMF_VIDEO_ENCODER_HEVC_CAP_MAX_BITRATE: 2,147,483,647
 * AMF_VIDEO_ENCODER_HEVC_CAP_NUM_OF_STREAMS: 16
 * AMF_VIDEO_ENCODER_HEVC_CAP_MAX_PROFILE: AMF_VIDEO_ENCODER_HEVC_PROFILE_MAIN
 * AMF_VIDEO_ENCODER_HEVC_CAP_MAX_PROFILE: AMF_VIDEO_ENCODER_HEVC_TIER_HIGH
 * AMF_VIDEO_ENCODER_HEVC_CAP_MAX_LEVEL: AMF_LEVEL_6_2
 * AMF_VIDEO_ENCODER_HEVC_CAP_MIN_REFERENCE_FRAMES: 1
 * AMF_VIDEO_ENCODER_HEVC_CAP_MAX_REFERENCE_FRAMES: 16

BPicturesPattern	AMF_VARIANT_INT64	0
CABACEnable	AMF_VARIANT_INT64	1
CommonLowLatencyInternal	AMF_VARIANT_BOOL	0
EnableGOPAlignment	AMF_VARIANT_BOOL	1
EngineType	AMF_VARIANT_INT64	0
GOPSizeMax	AMF_VARIANT_INT64	16
GOPSizeMin	AMF_VARIANT_INT64	0
GOPType	AMF_VARIANT_INT64	0
HevcAspectRatio	AMF_VARIANT_RATIO	num 1	den 1
HevcDeBlockingFilter	AMF_VARIANT_BOOL	0
HevcEnableVBAQ	AMF_VARIANT_BOOL	0
HevcEndOfSequence	AMF_VARIANT_BOOL	0
HevcEndOfStream	AMF_VARIANT_BOOL	0
HevcEnforceHRD	AMF_VARIANT_BOOL	0
HevcExtraData	AMF_VARIANT_EMPTY
HevcFillerDataEnable	AMF_VARIANT_BOOL	0
HevcForceLTRReferenceBitfield	AMF_VARIANT_INT64	0
HevcForcePictureType	AMF_VARIANT_INT64	0
HevcFrameRate	AMF_VARIANT_RATE	num 30	den 1
HevcFrameSize	AMF_VARIANT_SIZE	width 1920	height 1080
HevcGOPSPerIDR	AMF_VARIANT_INT64	1
HevcGOPSize	AMF_VARIANT_INT64	30
HevcHalfPixel	AMF_VARIANT_BOOL	1
HevcHeaderInsertionMode	AMF_VARIANT_INT64	1
HevcInitialVBVBufferFullness	AMF_VARIANT_INT64	64
HevcInputQueueSize	AMF_VARIANT_INT64	16
HevcInsertAUD	AMF_VARIANT_BOOL	0
HevcInsertHeader	AMF_VARIANT_BOOL	0
HevcMarkCurrentWithLTRIndex	AMF_VARIANT_INT64	0
HevcMaxAUSize	AMF_VARIANT_INT64	0
HevcMaxMBPerSec	AMF_VARIANT_INT64	61200
HevcMaxNumOfTemporalLayers	AMF_VARIANT_INT64	1
HevcMaxNumRefFrames	AMF_VARIANT_INT64	1
HevcMaxOfLTRFrames	AMF_VARIANT_INT64	0
HevcMaxQP_I	AMF_VARIANT_INT64	51
HevcMaxQP_P	AMF_VARIANT_INT64	46
HevcMinQP_I	AMF_VARIANT_INT64	0
HevcMinQP_P	AMF_VARIANT_INT64	18
HevcNumOfTemporalLayers	AMF_VARIANT_INT64	1
HevcPeakBitrate	AMF_VARIANT_INT64	30000000
HevcProfile	AMF_VARIANT_INT64	1
HevcProfileLevel	AMF_VARIANT_INT64	186
HevcQualityPreset	AMF_VARIANT_INT64	0
HevcQuarterPixel	AMF_VARIANT_BOOL	1
HevcRateControlMethod	AMF_VARIANT_INT64	2
HevcRateControlPreAnalysisEnable	AMF_VARIANT_BOOL	0
HevcRateControlSkipFrameEnable	AMF_VARIANT_BOOL	0
HevcSlicesPerFrame	AMF_VARIANT_INT64	1
HevcTargetBitrate	AMF_VARIANT_INT64	20000000
HevcTemporalLayerSelect	AMF_VARIANT_INT64	0
HevcTier	AMF_VARIANT_INT64	0
HevcUsage	AMF_VARIANT_INT64	0
HevcVBVBufferSize	AMF_VARIANT_INT64	20000000
InstanceID	AMF_VARIANT_INT64	-1
IntraRefreshMode	AMF_VARIANT_INT64	0
IntraRefreshNumOfStripes	AMF_VARIANT_INT64	1
LowLatencyInternal	AMF_VARIANT_BOOL	1
NominalRange	AMF_VARIANT_BOOL	0
PerformanceCounter	AMF_VARIANT_EMPTY
QPCBOFFSET	AMF_VARIANT_INT64	0
QPCROFFSET	AMF_VARIANT_INT64	0
SliceControlMode	AMF_VARIANT_INT64	0
SliceControlSize	AMF_VARIANT_INT64	2176
TL0.HevcQP_I	AMF_VARIANT_INT64	26
TL0.HevcQP_P	AMF_VARIANT_INT64	26
TL1.HevcQP_I	AMF_VARIANT_INT64	26
TL1.HevcQP_P	AMF_VARIANT_INT64	26
TL2.HevcQP_I	AMF_VARIANT_INT64	26
TL2.HevcQP_P	AMF_VARIANT_INT64	26
TL3.HevcQP_I	AMF_VARIANT_INT64	26
TL3.HevcQP_P	AMF_VARIANT_INT64	26
UniqueInstance	AMF_VARIANT_INT64	0

HevcBFrames	AMF_VARIANT_INT64	0
HevcMaxBitrate	AMF_VARIANT_INT64	2147483647
HevcMaxLevel	AMF_VARIANT_INT64	186
HevcMaxProfile	AMF_VARIANT_INT64	1
HevcMaxReferenceFrames	AMF_VARIANT_INT64	16
HevcMaxTier	AMF_VARIANT_INT64	1
HevcMinReferenceFrames	AMF_VARIANT_INT64	1
HevcNumOfStreams	AMF_VARIANT_INT64	16

#### Input

 * Width Range: 192 - 4,096
 * Height Range: 128 - 2,176
 * Vertical Alignment: 32
 * Format Count: 6
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_YUV420P 
 * Format: AMF_SURFACE_YV12 
 * Format: AMF_SURFACE_BGRA 
 * Format: AMF_SURFACE_RGBA 
 * Format: AMF_SURFACE_ARGB 
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

#### Output

 * Width Range: 192 - 4,096
 * Height Range: 128 - 2,176
 * Vertical Alignment: 32
 * Format Count: 1
 * Format: AMF_SURFACE_NV12 Native
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL 
 * Memory Type: AMF_MEMORY_OPENGL 
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

### AMFVideoConverter

 * Acceleration Type: AMF_ACCEL_GPU

#### Input

 * Width Range: 32 - 4,096
 * Height Range: 32 - 4,096
 * Vertical Alignment: 2
 * Format Count: 6
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_YV12 Native
 * Format: AMF_SURFACE_BGRA Native
 * Format: AMF_SURFACE_ARGB Native
 * Format: AMF_SURFACE_RGBA Native
 * Format: AMF_SURFACE_YUV420P Native
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL Native
 * Memory Type: AMF_MEMORY_OPENGL Native
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

#### Output

 * Width Range: 32 - 4,096
 * Height Range: 32 - 4,096
 * Vertical Alignment: 2
 * Format Count: 6
 * Format: AMF_SURFACE_NV12 Native
 * Format: AMF_SURFACE_YV12 Native
 * Format: AMF_SURFACE_BGRA Native
 * Format: AMF_SURFACE_ARGB Native
 * Format: AMF_SURFACE_RGBA Native
 * Format: AMF_SURFACE_YUV420P Native
 * Memory Type Count: 4
 * Memory Type: AMF_MEMORY_DX11 Native
 * Memory Type: AMF_MEMORY_OPENCL Native
 * Memory Type: AMF_MEMORY_OPENGL Native
 * Memory Type: AMF_MEMORY_HOST 
 * Interlace Support: 0

Intel H.264 Video Encoder MFT is ignoring texture synchronization too

Some time ago I wrote about a bug in AMD’s H.264 Video Encoder MFT, where the implementation fails to synchronize access to a Direct3D 11 texture. It turns out Intel’s implementation has exactly the same problem: the Intel® Quick Sync Video H.264 Encoder MFT processes input textures/frames without acquiring synchronization and can lose actual content.

It is pretty hard to reproduce this problem because it is hardware dependent: in most cases the data arrives into the texture before the encoder starts processing it, so the problem remains hidden. On certain systems, however, the bug comes up easily and produces a stuttering effect. Since input textures are pooled, when new data is late to arrive into a texture the H.264 encoder encodes an old video frame. The H.264 output is technically valid; it just stutters on playback because the wrong content was encoded.

For a Media Foundation API consumer it is not easy to work the problem around because Media Foundation does not provide access to the data streamed between the primitives internally. A high-level application might not even be aware that the primitives are exchanging synchronization-enabled textures, so it is unclear where the source of the problem is.

Possible solutions to the problem (applicable or not depending on specific case):

  1. do not use synchronization-enabled textures: copy from the properly synchronized texture into a new plain texture before feeding it into the encoder; this might require an additional/special MFT inserted into the pipeline before the encoder
  2. implement a customized Media Session (Sink Writer) alike subsystem with control over the streamed data so that, in particular, one could synchronize (or duplicate) the data before it is fed to the encoder’s IMFTransform::ProcessInput
  3. avoid using vendor-supplied video encoder MFTs as buggy…
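The first option can be sketched roughly as follows: acquire the keyed mutex on the synchronization-enabled texture, copy its content into a plain texture, and hand only the plain texture to the encoder, so the encoder never touches the shared resource directly. This is a minimal sketch, not production code; the function name and the assumption that the shared texture exposes `IDXGIKeyedMutex` are mine, and error handling is trimmed.

```cpp
#include <d3d11.h>
#include <dxgi1_2.h>
#include <atlbase.h>

// Hypothetical helper: copies a keyed-mutex texture into a plain texture of
// the same size and format, ordered after the producer's pending writes
HRESULT CopyOutOfSharedTexture(ID3D11DeviceContext* pContext,
                               ID3D11Texture2D* pSharedTexture,
                               ID3D11Texture2D* pPlainTexture)
{
    CComPtr<IDXGIKeyedMutex> pKeyedMutex;
    HRESULT hr = pSharedTexture->QueryInterface(&pKeyedMutex);
    if(FAILED(hr))
        return hr; // Not a keyed-mutex texture; nothing to synchronize against
    hr = pKeyedMutex->AcquireSync(0, INFINITE); // Wait for the producer to finish
    if(FAILED(hr))
        return hr;
    // The copy now sees the actual frame content, which is exactly what the
    // buggy encoders skip when they read the texture without AcquireSync
    pContext->CopyResource(pPlainTexture, pSharedTexture);
    return pKeyedMutex->ReleaseSync(0);
}
```

The plain texture can then be wrapped into a media buffer (for example with MFCreateDXGISurfaceBuffer) and attached to the sample delivered to the encoder’s ProcessInput.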

MFCreateVideoSampleFromSurface’s IMFTrackedSample offering

The IMFTrackedSample interface is available/allowed in UWP applications. The interface is useful when one implements a pool of samples and needs a notification when a certain instance can be recycled.

Use this interface to determine whether it is safe to delete or re-use the buffer contained in a sample. One object assigns itself as the owner of the video sample by calling SetAllocator. When all objects release their reference counts on the sample, the owner’s callback method is invoked.

The notification is asynchronous, meaning that when a sample becomes available the notification is scheduled for delivery via a standard (for Media Foundation) IMFAsyncCallback::Invoke call. This is quite convenient.

When this method is called, the sample holds an additional reference count on itself. When every other object releases its reference counts on the sample, the sample invokes the pSampleAllocator callback method. To get a pointer to the sample, call IMFAsyncResult::GetObject on the asynchronous result object given to the callback’s IMFAsyncCallback::Invoke method.
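The recycling pattern the documentation describes can be sketched roughly like this. `SamplePool` is a hypothetical class of mine, the IUnknown plumbing and the free-list bookkeeping are omitted, and error handling is trimmed; only the SetAllocator/Invoke/GetObject calls are the documented API surface.

```cpp
#include <mfidl.h>
#include <evr.h> // IMFTrackedSample
#include <atlbase.h>

// Hypothetical pool owner: registers itself as the sample's allocator and is
// called back once every other reference on the sample is released
class SamplePool : public IMFAsyncCallback
{
public:
    // IUnknown implementation omitted for brevity

    // IMFAsyncCallback
    STDMETHODIMP GetParameters(DWORD*, DWORD*) override { return E_NOTIMPL; }
    STDMETHODIMP Invoke(IMFAsyncResult* pAsyncResult) override
    {
        // Retrieve the now-idle sample from the asynchronous result...
        CComPtr<IUnknown> pUnknown;
        if(SUCCEEDED(pAsyncResult->GetObject(&pUnknown)))
        {
            CComQIPtr<IMFSample> const pSample = pUnknown;
            // ...and push pSample back onto the pool's free list here
        }
        return S_OK;
    }

    HRESULT Track(IMFSample* pSample)
    {
        CComQIPtr<IMFTrackedSample> const pTrackedSample = pSample;
        if(!pTrackedSample)
            return E_NOINTERFACE;
        // The pool will be notified via Invoke when the sample becomes free
        return pTrackedSample->SetAllocator(this, nullptr);
    }
};
```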

I would not have mentioned this if it were that simple, would I?

One could start feeling the problem already while looking at the MSDN page:

Requirements

 * Minimum supported client: Windows Vista [desktop apps | UWP apps]
 * Minimum supported server: Windows Server 2008 [desktop apps | UWP apps]
 * Header: Evr.h
 * Library: Strmiids.lib

Oh really, Strmiids.lib?

So the problem is that even though the interface itself is whitelisted for UWP and is a Media Foundation interface in its nature, it is implemented along with EVR and is effectively exposed to the public via the MFCreateVideoSampleFromSurface API. That is, the only API function that provides access to the UWP-friendly interface is a UWP-unfriendly function. Bummer.

It took me less than 300 lines of code to implement a video sample class with an IMFTrackedSample implementation that mimics the standard one (good-bye, stock implementation!), so it is not difficult. However, it would be better if the OS implementation were nicely available in the first place.
