Channel: Uncategorized – Fooling Around

I have a container element, but I will not give it to you…


A few weeks ago I posted about a problem with the AMD AMF SDK: a property that is included in an enumeration yet triggers a failure in the follow-up request for its value. It turned out to be an “internal property”, there to intentionally tease the caller and report an error by design, unlike all the other dozens of SDK properties. And, of course, to raise an error in an unexpected place for those naive enough to put too much trust in third-party packages from reputable vendors.

So I am happy to report that another vendor, NVIDIA, is catching up in the competition.

uint32_t nPresetCount = 0;
NvCheck(NvEncGetEncodePresetCount(Identifier, nPresetCount));
// nPresetCount is 17 now even though it's been 10 for a long time
if(nPresetCount)
{
	[...]
	NvCheck(NvEncGetEncodePresetGUIDs(Identifier, pPresetIdentifiers, nPresetCount, nPresetCount));
	// Success
	for(uint32_t nPresetIndex = 0; nPresetIndex < nPresetCount; nPresetIndex++)
	{
		[...]
		NV_ENC_PRESET_CONFIG Configuration { NV_ENC_PRESET_CONFIG_VER };
		Configuration.presetCfg.version = NV_ENC_CONFIG_VER;
		// Argument for 11th item discovered above
		NvCheck(NvEncGetEncodePresetConfig(Identifier, PresetIdentifier, Configuration));
		// NV_ENC_ERR_UNSUPPORTED_PARAM

If this is behavior by design, it is hardly an innovation anymore; we would need something new.


Microsoft “FaceTracker” Face Detection in form of Telegram bot


Microsoft Windows comes with a built-in face tracking API in the Windows.Media.FaceAnalysis namespace. It has been available since Windows 10 “Threshold 1” build 1507, even though it is probably not the most popular API, for a few reasons; maybe the most important one is that, like some other recent additions, the API targets UWP development.

Nonetheless, Microsoft even published a couple of samples (see the Basic face tracking sample), and this feature is perhaps best known for its use in the Windows Store Camera application: in Photo mode the application detects faces, and the documentation suggests that the application may cooperate with hardware to auto-focus on a detected face. Overall, the face detection is of limited use and feature set, even though it is quite nicely fitted not only to still image face detection, but also to video and “tracking”, where faces are tracked through a sequence of video frames.

UWP video APIs rarely mention the internals of their implementation, or the fact that under the hood this class of functionality is implemented in integration with the Media Foundation API. Even though this makes good sense overall, since Media Foundation is the current media API in Windows, the functionality is not advertised as available directly through Media Foundation: no documentation, no sample code: “this area is restricted to personnel, stay well clear of it”.

So I just made a quick journey into this restricted territory and plugged FaceTracker into a classic desktop Media Foundation pipeline, which in turn is plugged into a Telegram bot template (and in the next turn just runs as a Windows service on some Internet-connected box: this way Telegram is convertible into a cheap cloud service).

The demo highlights detected faces in a user-provided clip, like this:

That is, in a conversation with the bot one supplies his or her own video clip, and the bot runs it through the face detector, overlays frames around the found faces, then sends back the re-encoded video. Since Media Foundation is involved, and given that the face detector is well interfaced to the media pipeline, the processing takes place on the GPU to the extent possible: DXVA, Direct3D 11, hardware encoder ASIC.

All in all, meet @FaceDetectionDemoBot:

I guess next time the bot will also extract face detection information from clips recorded by iPhones. If I get it right, the recent fruity products do face detection automatically and embed the information right into the clip, helping cloud storage processing, since the edge device has already invested its horsepower into the resource-consuming number crunching.


H.265/HEVC and MS Edge “Edgium” Beta


It is embarrassing, my friends…

Irritating Direct3D 11/12 Shader Cache Exception


One annoying thing that has existed for a long time is an internal _com_error exception in the Direct3D shader cache, coming from D3DSCache.dll during the execution of innocent API calls like D3D11CreateDevice. The related storage location for the cache files is %LOCALAPPDATA%\D3DSCache; one can try deleting it completely in case the annoying issue persists.

Many of us can afford to write code that throws random exceptions, but given the impact of the Direct3D API such exceptions are intolerable. The exception is handled internally and is not passed (even as a failure code) through the outer API surface, but this is not a sufficient excuse: exceptions are reserved for situations when something serious happens and requires attention. What effectively takes place is that you start an app expecting the debugger to break on exceptions, and the debugger stupidly breaks in the middle of nowhere for no apparent reason.

In some other cases I am running self-debugging applications. For example, the face detecting Telegram bot runs unmanned as a Windows service on an Intel NUC box I might not even log into for weeks. To keep track of unexpected conditions, however, the service runs a debugging session against itself and records minidumps for all exceptions, so that, if needed, they can be investigated retroactively. Why would I want to mess with shader cache exceptions if they are not real problems and are triggered as early as when I merely create a device?

The error code is sometimes E_FAIL and at other times DXGI_ERROR_CACHE_CORRUPT. The exception object itself is a _com_error and originates from D3DSCache.dll, so I end up filtering this stuff out with otherwise unnecessary code like this:

BOOL ShouldWriteExceptionMinidump(const ParentProcessDebug::ExceptionData& Exception)
{
	[...]
	if(_wcsicmp(PathFindFileNameW(Exception.m_ModulePath.c_str()), L"D3DSCache.dll") == 0)
	{
		if(Exception.m_TypeName.compare(".?AV_com_error@@") == 0 && (Exception.m_Value == E_FAIL || Exception.m_Value == DXGI_ERROR_CACHE_CORRUPT))
			return FALSE; // D3D11On12CreateDevice API related?
	}
	[...]
	return TRUE;
}

Game streaming from Windows host to Tesla

Intraspecific Competition


Microsoft Edge does not like the idea of installing Microsoft Edge Beta.

Windows Imaging Component (WIC) Interfaces


Just found a memo from 2016 about WIC API interfaces.

The Windows Imaging Component (WIC) provides a Component Object Model (COM) based API for use in C and C++. The WIC API exposes a variety of image-related functionality, including […]

Enjoy!

How fast is your AMD H.264 and H.265/HEVC GPU encoder?


Just a small tool here to try a few popular resolutions and measure video encoding latency. The encoder runs in a configuration addressing the needs of real-time encoding, with a speed-over-quality setup.

Note that the performance might be affected by a side load, such as a graphics application (I often use this application for my needs, with parameters that produce higher or lower GPU load). Also, the application itself uses Direct2D to generate the actual input video frames, so this activity has a certain impact too; presumably low enough due to the primitive operations involved, yet still.

The main point here is to measure the latency for a particular piece of hardware in the first place, to see how things possibly improve with driver updates, how the codecs compare to one another, and what the effect of the resolution choice is. Also, the question is whether the encoder is fast enough to process data in real time at all.

The application keeps drawing a simple chart, and the same data is fed into the encoder. The application writes raw elementary streams into .H264 and .H265 files respectively (use Media Player Classic to play them), and also saves the last frame as a .PNG file.

You will need to let the application run for a bit because it attempts to encode 30 seconds of video for every resolution. Finally, there is also a summary printout (and the app shares the summary with a Telegram bot).

Below, for example, are the results with my Radeon RX 570 Series video card in the development machine.

C:\>AmaEncode
DXGI_ADAPTER_DESC1.Description: Radeon RX 570 Series
Video: 1280 x 720, 32-bit RGB, 60.00 frames per second, 30 seconds
File: C:\20200421-151329-1280x720@60.h264
Average Encoding Time: 6.32 ms
Elapsed Time: 30.0 seconds (100.05% to expected)

Video: 1280 x 720, 32-bit RGB, 120.00 frames per second, 30 seconds
File: C:\20200421-151359-1280x720@120.h264
Average Encoding Time: 6.26 ms
Elapsed Time: 30.0 seconds (100.08% to expected)

Video: 1920 x 1080, 32-bit RGB, 60.00 frames per second, 30 seconds
File: C:\20200421-151429-1920x1080@60.h264
Average Encoding Time: 10.64 ms
Elapsed Time: 30.0 seconds (100.16% to expected)

[...]

Video: 3840 x 2160, 32-bit RGB, 30.00 frames per second, 30 seconds
File: C:\20200421-151846-3840x2160@30.h264
Average Encoding Time: 36.38 ms
Elapsed Time: 30.1 seconds (100.34% to expected)

[...]

1280x720@60     6.32
1280x720@120    6.26
1920x1080@60    10.64
1920x1080@72    10.60
1920x1080@90    10.88
1920x1080@120   12.35
1920x1080@144   FAIL
2560x1440@60    17.46
2560x1440@72    215.39
2560x1440@90    FAIL
3840x2160@30    36.38
3840x2160@60    FAIL

Video: 1280 x 720, 32-bit RGB, 60.00 frames per second, 30 seconds
File: C:\20200421-152014-1280x720@60.h265
Average Encoding Time: 6.25 ms
Elapsed Time: 30.0 seconds (100.06% to expected)

[...]

Video: 3840 x 2160, 32-bit RGB, 30.00 frames per second, 30 seconds
File: C:\20200421-152521-3840x2160@30.h265
Average Encoding Time: 31.29 ms
Elapsed Time: 30.1 seconds (100.33% to expected)

[...]

1280x720@60     6.25
1280x720@120    6.15
1920x1080@60    10.01
1920x1080@72    10.01
1920x1080@90    10.14
1920x1080@120   10.08
1920x1080@144   FAIL
2560x1440@60    15.71
2560x1440@72    15.91
2560x1440@90    FAIL
3840x2160@30    31.29
3840x2160@60    FAIL

Finalizing... results submitted

C:\>

AMD’s implementation here is a bit faster and more performant with the HEVC codec than with AVC. The numbers are close, but as can be seen, H.265/HEVC can process 2560×1440@72, while with H.264/AVC the encoder is choking on it; it can still process 2560×1440@60 though.

So, there will be a run of H.264 encoding, and then, if the hardware has a capable encoder of course, an H.265/HEVC run.

And yeah, it’s AMD only (via AMF). NVIDIA goes next week.

On your marks!

Download links

Binaries:

  • 64-bit: AmaEncode.exe (in .ZIP archive)
  • License: This software is free to use

Video encoders of Radeon RX Vega M GH Graphics


A follow-up observation about the encoders of Radeon RX Vega M GH Graphics in the Hades Canyon NUC, using the measurement app from the previous post:

The side by side comparison with desktop RX 570 card shows a few interesting things:

  1. Radeon RX Vega M GH Graphics has the latest driver, but the AMF runtime version is way behind the latest: 1.4.11. That is, the system does not receive timely updates (and overall it is already discontinued)
  2. Even though some performance tuning might come from AMF updates, H.265/HEVC encoder performance suggests that the circuitry is pretty much the same: the HEVC encoder numbers are close
  3. The embedded version of the H.264 encoder is limited to nicely supporting 1920×1080@60 with reasonable headroom, the assumption being that higher resolutions are to be handled by the next-gen codec; the desktop version, however, received an improved H.264 encoder to cover real-time processing of 2560×1440@60 and 3840×2160@30

AMD Radeon RX 570 Series video encoders compared to a couple of NVIDIA pieces of hardware


In continuation of the previous posts on AMD AMF encoder latency at faster settings, here is a side-by-side comparison to NVIDIA encoders.

The numbers show how different they are, even though they are doing something similar. The NVIDIA cards are not high end: the GTX 1650 is literally the cheapest piece of the Turing 16xx series, and the GeForce 700 series was released in 2013 (OK, the GTX 750 was midrange at that time).

The numbers are milliseconds of encoding latency per video frame.

In 2013, an NVIDIA card was already capable of NVENC real-time hardware encoding of 4K video content at 60 frames per second, while the RX 570, released four years later, came with a significantly less powerful encoder.

The encoder of the GTX 1650 is much slower than that of the GTX 1080 Ti (application and data to come later), but it is still powerful enough to cover a wide range of video streams, including quite computation- and bandwidth-intensive ones.

The GTX 750 vs. GTX 1650 comparison also shows that even though the two belong to very different microarchitectures (Kepler and Turing respectively, with Maxwell and Pascal between them), it is not the case that the much newer one is superior to the older in every way. When it comes to real-time performance, the vendors design their hardware to be just good enough.

Hardware video encoding latency with NVIDIA GeForce GTX 1080 Ti


To complete the set of posts [1, 2, 3] on hardware video encoding at the lowest latency settings, I am sharing the juiciest part and the application for NVIDIA NVENC. I did not have a 20 series card at hand to run the measurement, but I hope the table below for the GeForce GTX 1080 Ti is eloquent enough.

It is sort of awkward to put the GTX 1080 Ti numbers (latency in milliseconds for every video frame sent to encoding) side by side with those of the AMD products, at least the ones I had a chance to check out, so here we go with GeForce GTX 1080 Ti vs. GeForce GTX 1650:

Well, that’s fast; and the GeForce 10 series was released back in 2016.

The numbers show that NVIDIA cards are powerful enough for game experience remoting (what you use Rainway for) in a wide range of video modes, including high frame rates of 144 and up.

I also added 640×360@260 just because I have a real camera (an inexpensive one with a USB 2.0 connection) operating in this high frame rate capture mode: the numbers suggest that it is generally possible to remote a high frame rate video signal at blink-of-an-eye speed.

There may be many aspects to compare when choosing between AMD and NVIDIA products, but when it comes to video streaming, low latency video compression, and hardware-assisted video compression in general, the situation is pretty clear: just grab an NVIDIA thing, and do not do what I did when I put an AMD Radeon RX 570 Series video card into my primary development system. I thought that maybe, at the time, AMD had something cool.

So, here goes the app for NVIDIA hardware.

Download links

Binaries:

  • 64-bit: NvcEncode.exe (in .ZIP archive)
  • License: This software is free to use

A readable version of HelloDirectML sample


So it came to my attention that there is a new API in DirectX family: Direct Machine Learning (DirectML).

Direct Machine Learning (DirectML) is a low-level API for machine learning. It has a familiar (native C++, nano-COM) programming interface and workflow in the style of DirectX 12. You can integrate machine learning inferencing workloads into your game, engine, middleware, backend, or other application. DirectML is supported by all DirectX 12-compatible hardware.

You might want to check out this introduction video if you are interested:

I updated the HelloDirectML code and restructured it to be readable and easy to comprehend. In my variant there are two operators, addition and multiplication, following one another with a UAV resource barrier in between. The code does (1.5 * 2) ^ 2 math in tensor space.

Here is my fork with the updated HelloDirectML, with the top surface code, tensor math in less than 150 lines, starting here. If you are a fan of spaghetti style (according to Wikipedia, what I prefer appears to be referred to as “ravioli code”), the original sample is still there.

Heterogeneous Media Foundation pipelines


Just a small test here to feature the use of multiple GPUs within a single Media Foundation pipeline. The initial idea is pretty simple: quite a few systems are equipped with multiple GPUs; some have a “free” onboard Intel GPU idling in the presence of a regular video card, and some others have an integrated “iGPU” and a discrete “dGPU” seamlessly blended by DXGI.

The Media Foundation API does not bring any specific feature set to leverage multiple GPUs at a time, but it is surely possible to take advantage of them.

The application creates 20 second long video clips by combining GPUs: one GPU is used for video rendering and another hosts the hardware H.264 video encoding. No system memory is used for uncompressed video: system memory first comes into play to receive the encoded H.264 bitstream. The Media Foundation pipeline hence is:

  • Media Source generating video frames off its video stream using the first GPU
  • Transform to combine multiple GPUs
  • H.264 video encoder transform specific to second GPU
  • Stock MP4 Media Sink

The pipeline runs in a single media session, pretty much like a normal pipeline. Media Foundation is designed in a way that primitives do not have to be aligned in their GPU usage with the pipeline. Surely they have to share devices and textures so that they can all operate together, but the pipeline itself does not impose many limitations there.

Microsoft Windows [Version 10.0.18363.815]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Temp>HeterogeneousRecordFile.exe
HeterogeneousRecordFile.exe 20200502.1-11-gd3d16d5 (Release)
d3d16d51e7f2a098c5765d445714f14051c7a68d
HEAD -> master, origin/master
2020-05-09 23:51:54 +0300
--
Found 2 DXGI adapters
Trying heterogeneous configurations…
Using NVIDIA GeForce GTX 1650 to render video and Intel(R) HD Graphics 4600 to encode the content
Using filename HeterogeneousRecordFile-20200509-235406.mp4 for recording
Using Intel(R) HD Graphics 4600 to render video and NVIDIA GeForce GTX 1650 to encode the content
Using filename HeterogeneousRecordFile-20200509-235411.mp4 for recording
Trying trivial configuration with loopback data transfer…
Using NVIDIA GeForce GTX 1650 to render video and NVIDIA GeForce GTX 1650 to encode the content
Using filename HeterogeneousRecordFile-20200509-235416.mp4 for recording
Using Intel(R) HD Graphics 4600 to render video and Intel(R) HD Graphics 4600 to encode the content
Using filename HeterogeneousRecordFile-20200509-235419.mp4 for recording

This is just one simple use case; I believe there can be others: GPUs are pretty powerful for certain specific tasks, and they are also equipped with video-specific ASICs.

Download links

Binaries:

Intel Media SDK H.264 encoder buffer and target bitrate management


I might have mentioned that Intel Media SDK has a ridiculously eclectic design and is a software package for the brave; something to stay well clear of for as long as you possibly can.

To be on par on the customer support side, Intel did something that caused my Intel Developer Zone account to be blocked. Over time I made a few attempts to restore the account, and only once, out of the blue, did someone follow up with a surprising response: “You do not have the enterprise login account”. That’s unbelievable: I could register the account, I could post like this, I can still request password reset links and receive them, but the problem is I don’t have an “enterprise account”.

Back to Intel Media SDK, where things are designed to be about as obvious and reliable as their forums. A bit of code from a very basic tutorial:

    //5. Initialize the Media SDK encoder
    sts = mfxENC.Init(&mfxEncParams);
    MSDK_IGNORE_MFX_STS(sts, MFX_WRN_PARTIAL_ACCELERATION);
    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

    // Retrieve video parameters selected by encoder.
    // - BufferSizeInKB parameter is required to set bit stream buffer size
    mfxVideoParam par;
    memset(&par, 0, sizeof(par));
    sts = mfxENC.GetVideoParam(&par);
    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

    //6. Prepare Media SDK bit stream buffer
    mfxBitstream mfxBS;
    memset(&mfxBS, 0, sizeof(mfxBS));
    mfxBS.MaxLength = par.mfx.BufferSizeInKB * 1000;
    mfxBS.Data = new mfxU8[mfxBS.MaxLength];
    MSDK_CHECK_POINTER(mfxBS.Data, MFX_ERR_MEMORY_ALLOC);

Leaving aside the question of which genius designed it to measure buffers in kilobytes, the snippet makes great sense. You ask the SDK for the required buffer size and then you provide the space. I myself am even more generous than that: I grant 1024 bytes for every kilobyte in question.

The thing is that the hardware encoder still hits scenarios where it is unable to fit the data into the space sized the mentioned way. What happens if the encoder has more data on its hands: maybe it emits a warning, a “well, I just screwed things up, be aware”? A buffer overflow error? A buffer reallocation request? Oh no, the SDK does it smarter: it fills the buffer completely, trimming the excess, making the bitstream non-compliant and triggering frame corruption later on the decoder end. Then the encoder continues as if nothing important had happened.

There is an absolute rule in software engineering: if a thing is designed in a way that allows it to break in a certain aspect, once in a while a consumer will be hit by this flaw. Maybe just this once the Intel guys thought it would not be the case.

Something got broken around version 16.6.1 of Visual C++ compiler


An ancient piece of code started giving trouble:

template <typename T>
class ATL_NO_VTABLE CMediaControlT :
    ...
{
    ...
    STDMETHOD(Run)()
    {
        ...
        T* MSVC_FIX_VOLATILE pT = static_cast<T*>(this); // <<--------------------------------
        CRoCriticalSectionLock GraphLock(pT->m_GraphCriticalSection);
        pT->FilterGraphNeeded();
        __D(pT->GetMediaControl(), E_NOINTERFACE);
        pT->PrepareCue();
        pT->DoRun();
        __if_exists(T::Fire_Running)
        {
            pT->Fire_Running();
        }
    ...
}

When MSVC_FIX_VOLATILE is defined to nothing, it appears that the optimizing compiler forgets pT and uses just some variation of an adjusted this, which makes sense overall because the static cast between the two can be resolved at compile time.

However, the problem is that the value of this it ends up with is wrong, and we get an undefined behavior scenario.

If I define MSVC_FIX_VOLATILE as volatile, making the variable pT somewhat “heavier”, the optimizing compiler instead forgets this and uses pT directly, with everything working as expected.

The problem still exists in the current 16.6.2.


Incorrect breaking #import behavior in recent (e.g. 16.6.2) MSVC versions


Yesterday’s bug is not the only “news”. Some time ago I already saw weirdly broken behavior when rebuilding DirectShowSpy.dll with the current version of Visual Studio and MSVC.

Now the problem is getting clearer.

Here is some interface:

[
    object,
    uuid(6CE45967-F228-4F7B-8B93-83DC599618CA),
    //dual,
    //oleautomation,
    nonextensible,
    pointer_default(unique)
]
interface IMuxFilter : IUnknown
{
    HRESULT IsTemporaryIndexFileEnabled();
    HRESULT SetTemporaryIndexFileEnabled([in] BOOL bTemporaryIndexFileEnabled);
    HRESULT GetAlignTrackStartTimeDisabled();
    HRESULT SetAlignTrackStartTimeDisabled([in] BOOL bAlignTrackStartTimeDisabled);
    HRESULT GetMinimalMovieDuration([out] LONGLONG* pnMinimalMovieDuration);
    HRESULT SetMinimalMovieDuration([in] LONGLONG nMinimalMovieDuration);
};

Compiled into a type library it looks okay. The Windows SDK 10.0.18362 COM/OLE Object Viewer shows the correct definition obtained from the type library:

[
    odl,
    uuid(6CE45967-F228-4F7B-8B93-83DC599618CA),
    nonextensible
]
interface IMuxFilter : IUnknown {
    HRESULT _stdcall IsTemporaryIndexFileEnabled();
    HRESULT _stdcall SetTemporaryIndexFileEnabled([in] long bTemporaryIndexFileEnabled);
    HRESULT _stdcall GetAlignTrackStartTimeDisabled();
    HRESULT _stdcall SetAlignTrackStartTimeDisabled([in] long bAlignTrackStartTimeDisabled);
    HRESULT _stdcall GetMinimalMovieDuration([out] int64* pnMinimalMovieDuration);
    HRESULT _stdcall SetMinimalMovieDuration([in] int64 nMinimalMovieDuration);
};

Now look at what happens when MSVC #import takes it into Win32 32-bit code:

struct __declspec(uuid("6ce45967-f228-4f7b-8b93-83dc599618ca"))
IMuxFilter : IUnknown
{
    //
    // Raw methods provided by interface
    //

      virtual HRESULT __stdcall IsTemporaryIndexFileEnabled ( ) = 0;
    virtual HRESULT _VtblGapPlaceholder1( ) { return E_NOTIMPL; }
      virtual HRESULT __stdcall SetTemporaryIndexFileEnabled (
        /*[in]*/ long bTemporaryIndexFileEnabled ) = 0;
    virtual HRESULT _VtblGapPlaceholder2( ) { return E_NOTIMPL; }
      virtual HRESULT __stdcall GetAlignTrackStartTimeDisabled ( ) = 0;
    virtual HRESULT _VtblGapPlaceholder3( ) { return E_NOTIMPL; }
      virtual HRESULT __stdcall SetAlignTrackStartTimeDisabled (
        /*[in]*/ long bAlignTrackStartTimeDisabled ) = 0;
    virtual HRESULT _VtblGapPlaceholder4( ) { return E_NOTIMPL; }
      virtual HRESULT __stdcall GetMinimalMovieDuration (
        /*[out]*/ __int64 * pnMinimalMovieDuration ) = 0;
    virtual HRESULT _VtblGapPlaceholder5( ) { return E_NOTIMPL; }
      virtual HRESULT __stdcall SetMinimalMovieDuration (
        /*[in]*/ __int64 nMinimalMovieDuration ) = 0;
    virtual HRESULT _VtblGapPlaceholder6( ) { return E_NOTIMPL; }
};

WTF _VtblGapPlaceholder1??? That was uncalled for!

It looks like some 32/64-bit nonsense added by MSVC at some point (a cross-compilation issue?) for no good reason. A sort of gentle reminder that one should get rid of #import in C++ code.

Please have it fixed; 32-bit code is still something being used.

An #import of Microsoft’s own quartz.dll, for example, gets the same invalid gap insertion:

struct __declspec(uuid("56a868bc-0ad4-11ce-b03a-0020af0ba770"))
IMediaTypeInfo : IDispatch
{
    //
    // Raw methods provided by interface
    //

      virtual HRESULT __stdcall get_Type (
        /*[out,retval]*/ BSTR * strType ) = 0;
    virtual HRESULT _VtblGapPlaceholder1( ) { return E_NOTIMPL; }
      virtual HRESULT __stdcall get_Subtype (
        /*[out,retval]*/ BSTR * strType ) = 0;
    virtual HRESULT _VtblGapPlaceholder2( ) { return E_NOTIMPL; }
};

GetFileVersionInfoSize and API sets


The GetFileVersionInfoSizeW function’s Requirements section lists:

Minimum supported client: Windows Vista [desktop apps only]
Minimum supported server: Windows Server 2008 [desktop apps only]
Target Platform: Windows
Header: winver.h (include Windows.h)
Library: Version.lib
DLL: Api-ms-win-core-version-l1-1-0.dll

and this is inaccurate. The actual required DLL is api-ms-win-core-version-l1-1-1.dll instead. However, what does that mean exactly? Windows API sets:

API Sets rely on operating system support in the library loader to effectively introduce a namespace redirection component into the library binding process. Subject to various inputs, including the API Set name and the binding (import) context, the library loader performs a runtime redirection of the reference to a target host binary that houses the appropriate implementation of the API Set.

The decoupling between implementation and interface contracts provided by API Sets offers many engineering advantages, but can also potentially reduce the number of DLLs loaded in a process.

The “hyphen one” DLL (api-ms-win-core-version-l1-1-1.dll) is missing in Windows Server 2012 R2, so the documented promise of support starting with Windows Server 2008 is incorrect. Windows Server 2012 R2 only has the “hyphen zero” DLL.

The hyphen zero DLL does expose, however, the GetFileVersionInfoSizeExW entry point, so application developers addressing backward compatibility should switch from GetFileVersionInfoSize to GetFileVersionInfoSizeEx, even though the former is not explicitly documented as deprecated (probably another out-of-date aspect of the documentation).

The same applies to GetFileVersionInfo functions.

Also related: the MSDN documentation page “API Sets available in Windows 8.1 and Windows Server 2012 R2” looks good and has no mention of GetFileVersionInfoSize or GetFileVersionInfo.

DXGI desktop snapshot taking


One more system check tool: it quickly enumerates the available monitors (using DXGI), takes snapshots and saves them to PNG files (using WIC).

Apart from straightforward desktop snapshot taking, the tool offers a few more functions:

  1. Goes through the entire list of available video adapters and connected monitors
  2. Uses three slightly different methods to do the same thing: “pass A” and files starting with “A” use Direct3D 11; “pass B” is basically the same, but with a Direct3D 11 device created without specifying an adapter; “pass C” is the same as pass A, but uses the Direct3D 10.1 API
  3. Prints out the monitors connected to video adapter outputs, including the case of shared/mirrored displays; when two displays show the same signal via mirroring, the output lists them along with their connector types, e.g. the same picture displayed on two physical displays connected with DisplayPort and HDMI cables respectively:
Output: \\.\DISPLAY5
LG Ultra HD, DISPLAYCONFIG_OUTPUT_TECHNOLOGY_DISPLAYPORT_EXTERNAL, \\?\DISPLAY#GSM5B09#5&7f9757e&0&UID260#{e6f07b5f-ee97-4a90-b076-33f57bf4eaa7}
LG Ultra HD, DISPLAYCONFIG_OUTPUT_TECHNOLOGY_HDMI, \\?\DISPLAY#GSM5B08#4&1540260c&0&UID206371#{e6f07b5f-ee97-4a90-b076-33f57bf4eaa7}
D3D_FEATURE_LEVEL_12_0
✔
  4. Detects multi-GPU systems and automatically repeats the snapshot attempts with different GPU preferences (“Power Saving” vs. “High Performance”); the reason for this is that a wrong GPU preference is a notorious cause of the DXGI Desktop Duplication API failing to provide the duplication service. Files named A1 and A2 (as opposed to A0) correspond to the use of the power saving (1) or high performance (2) adapter
  5. Last but not least, the “-DebugLayers” command line option makes the tool include DirectX Debug Layer messages in the output (e.g. to troubleshoot errors in greater detail); the layer has to be installed, of course

Download links

Binaries:

Another API exception which should not have been thrown


From the documentation on StorageFolder.TryGetItemAsync(String) Method:

Tries to get the file or folder with the specified name from the current folder. Returns null instead of raising a FileNotFoundException if the specified file or folder is not found.

using IStorageItem = winrt::Windows::Storage::IStorageItem;
IStorageItem EventStorageItem { m_Configuration.m_ApplicationStorageFolder.TryGetItemAsync(m_FileName).get() };

Exception thrown at 0x00007FFB06BDA799 (KernelBase.dll) in DownloadIssue.exe: WinRT originate error – 0x80070002 : ‘An item cannot be found with the specified name (Issue-1714684927.tsv).’.

It looks like TryGetItemAsync is a wrapper over GetItemAsync that suppresses the exception if the file is not found, while it should be the other way around: if there is no file, there is a nullptr result and no exception in the first place. In conjunction with a missing mode equivalent to OPEN_ALWAYS, this makes it not really convenient to write exception-free code.

An exception-free workaround via a file query:

#if 1
	// Workaround: enumerate the folder contents via a file query and match
	// the name manually; no exception is involved for a missing file
	using namespace winrt::Windows::Storage;
	StorageFile EventStorageFile(nullptr);
	Search::StorageFileQueryResult const FileQueryResult { m_Configuration.m_ApplicationStorageFolder.CreateFileQuery() };
	auto const FileVector { FileQueryResult.GetFilesAsync().get() };
	for(auto FileVectorIterator { FileVector.First() }; FileVectorIterator.HasCurrent(); FileVectorIterator.MoveNext())
	{
		auto const Item { FileVectorIterator.Current() };
		if(Item.IsOfType(StorageItemTypes::File) && _wcsicmp(Item.Name().c_str(), m_FileName.c_str()) == 0)
		{
			EventStorageFile = Item;
			break;
		}
	}
	if(!EventStorageFile)
		return;
#else
	// TODO: Get rid of exception if file is missing
	IStorageItem EventStorageItem { m_Configuration.m_ApplicationStorageFolder.TryGetItemAsync(m_FileName).get() };
	if(!EventStorageItem)
		return;
	StorageFile EventStorageFile { EventStorageItem.as<StorageFile>() };
#endif

C++/WinRT version of SetFileTime


SetFileTime is simple and does not deserve a blog post: you have a file handle, you have date/time, you set it.

StorageFile EventFile { ... };
FILETIME DateCreatedFileTime = ...;
// Open a classic Win32 handle against the storage file's underlying path
winrt::file_handle FileHandle { CreateFile(EventFile.Path().c_str(), GENERIC_WRITE, FILE_SHARE_READ, nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr) };
winrt::check_bool(SetFileTime(FileHandle.get(), &DateCreatedFileTime, &DateCreatedFileTime, &DateCreatedFileTime));

C++/WinRT and UWP make the journey much more exciting.

There is the StorageItemContentProperties Class, which provides access to the content-related properties of an item (like a file or folder). This includes file times, which are exposed as named properties like “System.DateModified”. The StorageItemContentProperties.SavePropertiesAsync Method is there to save properties associated with the item.

SavePropertiesAsync is asynchronous, so you have to deal with that too, but it is nothing to worry about since C++/WinRT does not let you down here, at least.

Another thing is that you need a Windows::Foundation::DateTime value for the time, which you might not have a good understanding of, as opposed to the old-fashioned FILETIME definition of “a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)”.

In C++/WinRT, Windows::Foundation::DateTime is directly mapped to std::chrono::time_point, and there is also winrt::clock with conversion helpers which are easy to use once you are aware of them.

Moving on? With a Windows::Foundation::DateTime value on hand, you need to put it into a key-value collection of properties to save. However, since values, in general, might be of different types, you need to convert the time to a variant-like IInspectable. That is, to “box” it. Luckily, C++/WinRT has it all covered for you already:

C++/WinRT provides the winrt::box_value function, which takes a scalar value and returns the value boxed into an IInspectable. For unboxing an IInspectable back into a scalar value, there are the winrt::unbox_value and winrt::unbox_value_or functions.

Now we reached a hard one. SavePropertiesAsync signature is this:

...SavePropertiesAsync(param::async_iterable<Windows::Foundation::Collections::IKeyValuePair<hstring, Windows::Foundation::IInspectable>> const& propertiesToSave) const
{
  ...
}

So the IInspectable there is the boxed value. It needs to be packed into something convertible to a collection of Windows::Foundation::Collections::IKeyValuePair, which is already a bit scary. But the truth is, the real bastard is param::async_iterable.

I have to admit MSDN documentation for C++/WinRT is awesome. There are so many things already mentioned there, including, for example, this article: Standard C++ data types and C++/WinRT, which is relevant and helpful. C++/WinRT is awesome too.

So you might think you could just have a std::map and it would be converted to the Windows::Foundation::Collections::IKeyValuePair collection automagically, and then it would be picked up by param::async_iterable as an argument? No.

[...]
error C2664: 'winrt::Windows::Foundation::IAsyncAction winrt::impl::consume_Windows_Storage_FileProperties_IStorageItemExtraProperties::SavePropertiesAsync(void) const': cannot convert argument 1 from 'std::map,std::allocator>>' to 'const winrt::param::async_iterable> &'
[...]
Reason: cannot convert from 'std::map,std::allocator>>' to 'const winrt::param::async_iterable>'
[...]
No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called

The thing is, and it is mentioned here, for example, that async_iterable needs to acquire ownership of the collection when it takes it as an argument, in a thin and lightweight way. So you have got to std::move it there before the code even starts building.

What have we got? Things do build and call the UWP API correctly. Everything is important here:

StorageFile File { ... };
FILETIME DateCreatedFileTime = ...;
std::map<winrt::hstring, winrt::Windows::Foundation::IInspectable> PropertyMap;
PropertyMap.emplace(L"System.DateModified", winrt::box_value(winrt::clock::from_file_time(DateCreatedFileTime)));
File.Properties().SavePropertiesAsync(std::move(PropertyMap)).get();

Now the problem is that the runtime behavior is this: E_INVALIDARG 0x80070057 : ‘Cannot write to the specified property (System.DateModified). The property is read-only.’.

That is, you don’t change/save this property from UWP.
