Questions on DXVA 2.0 and Media Foundation
Mikkel Haugstrup





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Hi

1. How do IDirectXVideoAccelerationService, IDirectXVideoDecoderService, and IDirectXVideoProcessorService differ?

2. Will IDirectXVideoProcessorService make it possible to program the video processor on the NVIDIA GeForce 6/7 series, so that ISVs could implement their own algorithms for tasks like noise reduction and deinterlacing?

3. How do the different DXVA2_SURFACETYPE values differ?

4. What are the main differences between the Enhanced Video Renderer and the Video Mixing Renderer 9?

5. When can one expect to see more documentation? ;)

Regards
Mikkel



 
 
Prakash Channagiri - MSFT





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Hi Mikkel,

I have forwarded your questions to our development team here, and you should be hearing from one of us shortly.

Thanks,
Prakash Channagiri


 
 
Prakash Channagiri - MSFT





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Hi Mikkel,

Here are the answers to your questions:

1. How do IDirectXVideoAccelerationService, IDirectXVideoDecoderService, and IDirectXVideoProcessorService differ?
Answer: IDirectXVideoProcessorService and IDirectXVideoDecoderService both inherit from IDirectXVideoAccelerationService. Which interface to use depends on the task you are trying to achieve. If you are trying to accelerate the decoding of a compressed video stream, you would (obviously) use the decoder service interface.
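
For illustration, here is a minimal sketch of how the two derived services are typically obtained; the IID you pass to DXVA2CreateVideoService selects which service comes back. It assumes you already have an IDirect3DDevice9, and error handling is trimmed.

#include <d3d9.h>
#include <dxva2api.h>   // link with dxva2.lib

// Sketch: obtain the decoder and processor services from an existing
// Direct3D 9 device. The IID passed to DXVA2CreateVideoService determines
// which derived service interface is returned.
HRESULT GetDxva2Services(IDirect3DDevice9 *pDevice,
                         IDirectXVideoDecoderService **ppDecoderService,
                         IDirectXVideoProcessorService **ppProcessorService)
{
    HRESULT hr = DXVA2CreateVideoService(pDevice,
                                         IID_IDirectXVideoDecoderService,
                                         (void**)ppDecoderService);
    if (FAILED(hr))
        return hr;

    return DXVA2CreateVideoService(pDevice,
                                   IID_IDirectXVideoProcessorService,
                                   (void**)ppProcessorService);
}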

2. Will IDirectXVideoProcessorService make it possible to program the video processor on the NVIDIA GeForce 6/7 series, so that ISVs could implement their own algorithms for tasks like noise reduction and deinterlacing?
Answer: No. If the IHVs have dedicated video processing hardware, they implement their own de-interlacing, noise filtering, and similar algorithms and expose them via this interface. The IHVs map the interface onto the most appropriate hardware at their disposal.
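
As a sketch of how an application sees what the driver exposes, you describe your video format and ask the processor service which processing devices (the IHV's de-interlacers and filters) are available. The DXVA2_VideoDesc values below are placeholders.

#include <windows.h>
#include <d3d9.h>
#include <dxva2api.h>

// Sketch: enumerate the video processor devices the driver exposes for a
// given (placeholder) video description. Each GUID identifies one device
// implemented by the IHV, e.g. a hardware de-interlacer.
HRESULT ListProcessorDevices(IDirectXVideoProcessorService *pProcService)
{
    DXVA2_VideoDesc desc = {};
    desc.SampleWidth  = 1280;                                    // placeholder size
    desc.SampleHeight = 720;
    desc.Format       = (D3DFORMAT)MAKEFOURCC('N','V','1','2');  // placeholder format
    desc.SampleFormat.SampleFormat = DXVA2_SampleFieldInterleavedEvenFirst;

    UINT  count  = 0;
    GUID *pGuids = NULL;
    HRESULT hr = pProcService->GetVideoProcessorDeviceGuids(&desc, &count, &pGuids);
    if (SUCCEEDED(hr))
    {
        for (UINT i = 0; i < count; i++)
        {
            // Query each device further with GetVideoProcessorCaps to see
            // what de-interlacing / filtering it supports.
        }
        CoTaskMemFree(pGuids);
    }
    return hr;
}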

3. How do the different DXVA2_SURFACETYPE values differ?
Answer: During type negotiation between components, if the connection is hardware-accelerated, the surface type needs to be properly negotiated using IDirectXVideoMemoryConfiguration.

DXVA2_SurfaceType_DecoderRenderTarget indicates that the surfaces should be created using DXVA2_VideoDecoderRenderTarget passed as the DxvaType parameter to IDirectXVideoAccelerationService::CreateSurface.

DXVA2_SurfaceType_ProcessorRenderTarget indicates that DXVA2_VideoProcessorRenderTarget should be passed as the DxvaType parameter to IDirectXVideoAccelerationService::CreateSurface.

DXVA2_SurfaceType_D3DRenderTargetTexture indicates that IDirect3DDevice9::CreateTexture should be called with D3DUSAGE_RENDERTARGET specified as the Usage.
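
Putting the first of these together, here is a minimal sketch of allocating decoder render targets; the size, surface count, and NV12 format are placeholder assumptions.

#include <d3d9.h>
#include <dxva2api.h>

// Sketch: allocate a pool of NV12 decoder render targets. CreateSurface is
// defined on the base IDirectXVideoAccelerationService interface, so it can
// be called through the decoder service as well.
HRESULT AllocateDecoderSurfaces(IDirectXVideoDecoderService *pDecoderService,
                                IDirect3DSurface9 **ppSurfaces,
                                UINT numSurfaces)
{
    return pDecoderService->CreateSurface(
        1280, 720,                       // placeholder width and height
        numSurfaces - 1,                 // back buffers; numSurfaces total are created
        (D3DFORMAT)MAKEFOURCC('N','V','1','2'),
        D3DPOOL_DEFAULT,
        0,                               // usage
        DXVA2_VideoDecoderRenderTarget,  // matches DXVA2_SurfaceType_DecoderRenderTarget
        ppSurfaces,                      // receives numSurfaces surface pointers
        NULL);                           // no shared handle
}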

4. What are the main differences between the Enhanced Video Renderer and the Video Mixing Renderer 9?
Answer:
There are several differences between the two. Some of them are:
1. The EVR is built directly upon the DXVA2 video processor.
2. The EVR makes use of the Multimedia Class Scheduler, deep queuing, and timeline mapping to achieve greater resilience to glitches than the VMR9.
3. The EVR is available in both DirectShow and Media Foundation (see the sketch after this list).
4. The EVR's mixer component is an MFT (Media Foundation Transform) and can be used independently of the EVR.
5. The EVR has a superior programming model; it is easier for developers to use.
6. The EVR is pull based, whereas the VMR9 is push based.
7. The EVR automatically handles output modes (tear-free windowed output, DWM support, full-screen support).
8. The EVR has a much more advanced presenter that is synchronized with the monitor.
9. The EVR has pluggable mixers and presenters.
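
To make points 3 and 9 concrete, here is a minimal sketch of the common service-lookup pattern; the same IMFGetService call is used whether the EVR was created as a DirectShow filter or as the Media Foundation sink. pEvrUnknown is assumed to be the EVR object you already created.

#include <mfidl.h>
#include <evr.h>

// Sketch: the EVR exposes its services (e.g. IMFVideoDisplayControl) through
// IMFGetService in both DirectShow and Media Foundation.
HRESULT GetEvrDisplayControl(IUnknown *pEvrUnknown,
                             IMFVideoDisplayControl **ppDisplay)
{
    IMFGetService *pGetService = NULL;
    HRESULT hr = pEvrUnknown->QueryInterface(IID_IMFGetService,
                                             (void**)&pGetService);
    if (SUCCEEDED(hr))
    {
        hr = pGetService->GetService(MR_VIDEO_RENDER_SERVICE,
                                     IID_IMFVideoDisplayControl,
                                     (void**)ppDisplay);
        pGetService->Release();
    }
    return hr;
}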

5. When can one expect to see more documentation? ;)
Answer: The documentation is being improved and updates should appear soon; in the meantime, please continue to ask questions via this forum.

Thanks,
Prakash Channagiri


 
 
mofo7





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Hi.

What do you mean when you say:

"3. The EVR is available in both DirectShow and Media Foundation"

I only know the VMR9 with DirectShow!

Thanks for answering me.

 
 
Thobias Jones - MSFT





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

In Windows XP, Media Foundation is not supported, and DirectShow still uses the VMR7 and VMR9 as video sinks.

In Vista, we've added Media Foundation, and the default video sink for MF is the EVR. We also support the ability to use the EVR as an optional video sink in DirectShow (it does not go the other way around, though: you cannot use the VMR in Media Foundation).

One advantage of this is that applications which add custom presentation logic to the EVR can, more or less, be agnostic about whether they use Media Foundation or DirectShow as the underlying playback engine.
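
For example, using the EVR from DirectShow is just a matter of creating the filter and adding it to the graph. A minimal sketch, assuming you already have an IGraphBuilder:

#include <dshow.h>
#include <evr.h>   // CLSID_EnhancedVideoRenderer

// Sketch: add the EVR to an existing DirectShow filter graph (Vista and later).
HRESULT AddEvrToGraph(IGraphBuilder *pGraph, IBaseFilter **ppEvr)
{
    HRESULT hr = CoCreateInstance(CLSID_EnhancedVideoRenderer, NULL,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  (void**)ppEvr);
    if (SUCCEEDED(hr))
    {
        // From here the EVR behaves like any other renderer filter; the
        // graph builder connects decoders to it when the graph is rendered.
        hr = pGraph->AddFilter(*ppEvr, L"EVR");
    }
    return hr;
}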

Hope this clears things up!

Thobias Jones



 
 
mofo7





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Thanks.

I've got other questions:

1. "The EVR is built directly upon the DXVA2 video processor."

Can two or more different streams use the DXVA2 video processor at the same time (multithreaded) in the same application?



2. "The EVR makes use of the Multimedia Class Scheduler, deep queuing, and timeline mapping to achieve greater resilience to glitches than the VMR9."

Does the EVR synchronize better with a real-time 3D engine than the VMR9 does (no dropped frames and better control over multithreading issues)?



6. "The EVR is pull based, whereas the VMR9 is push based."

What is the advantage?



7. "The EVR automatically handles output modes (tear-free windowed output, DWM support, full-screen support)."

Good, but are ATI and NVIDIA "DXVA2 and output mode" compliant in all cases?



Any news about the documentation? Will I be able to find samples like "multivmr9" (a "multiEVR"!) for rendering into a 3D world in full-screen exclusive mode?


PS: Excuse my English, I'm French...

 
 
mofo7





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Up.

These questions are very important to me.
I want to know whether it is a good idea to upgrade to Media Foundation, and whether development with it is better than with DirectShow.

 
 
Sumedh Barde - MSFT





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

The choice of which SDK to use depends on what you are doing.

Media Foundation is a better choice if you are doing one of the following:

1. Writing an application that wants to play ASF/WMA/WMV files (see the sketch after this list).

2. Writing an application to play any other file format, where it is important for playback to be resilient against other programs taking up CPU. Note that you will need to write your own code to parse the file format.

3. You are writing an application that involves copy-protected content.
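
As a rough idea of what the Media Foundation side looks like, here is a minimal sketch of opening a WMV file with the source resolver; the file name is a placeholder and error handling is trimmed. The resulting media source would then be placed in a topology and played through an IMFMediaSession.

#include <mfapi.h>
#include <mfidl.h>

// Sketch: create a media source for a (placeholder) WMV file.
HRESULT CreateMediaSourceForWmv(IMFMediaSource **ppSource)
{
    HRESULT hr = MFStartup(MF_VERSION, MFSTARTUP_FULL);
    if (FAILED(hr))
        return hr;

    IMFSourceResolver *pResolver = NULL;
    hr = MFCreateSourceResolver(&pResolver);
    if (SUCCEEDED(hr))
    {
        MF_OBJECT_TYPE type = MF_OBJECT_INVALID;
        IUnknown *pUnk = NULL;
        hr = pResolver->CreateObjectFromURL(L"clip.wmv",              // placeholder URL
                                            MF_RESOLUTION_MEDIASOURCE,
                                            NULL, &type, &pUnk);
        if (SUCCEEDED(hr))
        {
            hr = pUnk->QueryInterface(IID_IMFMediaSource, (void**)ppSource);
            pUnk->Release();
        }
        pResolver->Release();
    }
    return hr;
}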

DirectShow is a better choice if:

1. You are playing an existing file format that already plays on Windows XP, such as MPEG, DVD, or AVI.

2. You want your application to work on older versions of Windows in addition to Vista and need to use the same runtime on both Vista and older Windows versions.

-Sumedh


 
 
mofo7





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

Thank you.


I have to stay with DirectShow because of the first point. Too bad.

 
 
jack zhao





PostPosted: Media Foundation Development, Questions on DXVA 2.0 and Media Foundation

I wrote a DXVA2 MPEG-4 filter, but I am now seeing very obvious frame drops. The root cause seems to be that the EVR does not release the uncompressed buffers, so we cannot get an uncompressed buffer from the allocator's free list. I don't know why the EVR holds the buffers for so long. I am doing the same as the Vista SDK describes. How can we get the EVR to release the uncompressed buffers?

Assuming that the device handle is valid, the decoding process works as follows:

  1. Call IDirectXVideoDecoder::BeginFrame.

  2. Do the following one or more times:

    1. Call IDirectXVideoDecoder::GetBuffer to get a DXVA decoder buffer.

    2. Fill the buffer.

    3. Call IDirectXVideoDecoder::ReleaseBuffer.

  3. Call IDirectXVideoDecoder::Execute to perform the decoding operations on the frame.

Within each pair of BeginFrame/Execute calls, you may call GetBuffer multiple times, but only once for each type of DXVA buffer. If you call it twice with the same buffer type, you will overwrite the data.

After calling Execute, call IMemInputPin::Receive to deliver the frame to the video renderer, as with software decoding. The Receive method is asynchronous; after it returns, the decoder can continue decoding the next frame. The display driver prevents any decoding commands from overwriting the buffer while the buffer is in use.
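
In code terms, the sequence above looks roughly like the sketch below; this is only an illustration of the documented steps, and the buffer contents and DXVA2_DecodeExecuteParams setup are codec-specific and omitted.

#include <d3d9.h>
#include <dxva2api.h>

// Sketch of decoding one frame, following the BeginFrame / GetBuffer /
// ReleaseBuffer / Execute sequence described above.
HRESULT DecodeOneFrame(IDirectXVideoDecoder *pDecoder,
                       IDirect3DSurface9 *pRenderTarget)
{
    HRESULT hr = pDecoder->BeginFrame(pRenderTarget, NULL);
    if (FAILED(hr))
        return hr;

    // One GetBuffer/ReleaseBuffer pair per DXVA buffer type used by the codec
    // (picture parameters, bitstream data, etc.). Only the bitstream buffer is
    // shown here; the constant really is spelled this way in dxva2api.h.
    void *pBuffer    = NULL;
    UINT  bufferSize = 0;
    hr = pDecoder->GetBuffer(DXVA2_BitStreamDateBufferType, &pBuffer, &bufferSize);
    if (SUCCEEDED(hr))
    {
        // ... copy the compressed data for this frame into pBuffer ...
        hr = pDecoder->ReleaseBuffer(DXVA2_BitStreamDateBufferType);
    }

    if (SUCCEEDED(hr))
    {
        DXVA2_DecodeExecuteParams exec = {};
        // exec.NumCompBuffers / exec.pCompressedBuffers must describe the
        // buffers submitted above, per the codec's DXVA specification.
        hr = pDecoder->Execute(&exec);
    }

    // EndFrame signals that no more buffers will be submitted for this frame.
    pDecoder->EndFrame(NULL);
    return hr;
}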

Thanks,

Jack Zhao