CSCore
Defines audio subtypes and provides methods to convert between encoding values and their equivalent GUID values.
These values are used by several codec-related classes.
WAVE_FORMAT_UNKNOWN, Microsoft Corporation
WAVE_FORMAT_PCM Microsoft Corporation
WAVE_FORMAT_ADPCM Microsoft Corporation
WAVE_FORMAT_IEEE_FLOAT Microsoft Corporation
WAVE_FORMAT_VSELP Compaq Computer Corp.
WAVE_FORMAT_IBM_CVSD IBM Corporation
WAVE_FORMAT_ALAW Microsoft Corporation
WAVE_FORMAT_MULAW Microsoft Corporation
WAVE_FORMAT_DTS Microsoft Corporation
WAVE_FORMAT_DRM Microsoft Corporation
WAVE_FORMAT_WMAVOICE9
WAVE_FORMAT_OKI_ADPCM OKI
WAVE_FORMAT_DVI_ADPCM Intel Corporation
WAVE_FORMAT_IMA_ADPCM Intel Corporation
WAVE_FORMAT_MEDIASPACE_ADPCM Videologic
WAVE_FORMAT_SIERRA_ADPCM Sierra Semiconductor Corp
WAVE_FORMAT_G723_ADPCM Antex Electronics Corporation
WAVE_FORMAT_DIGISTD DSP Solutions, Inc.
WAVE_FORMAT_DIGIFIX DSP Solutions, Inc.
WAVE_FORMAT_DIALOGIC_OKI_ADPCM Dialogic Corporation
WAVE_FORMAT_MEDIAVISION_ADPCM Media Vision, Inc.
WAVE_FORMAT_CU_CODEC Hewlett-Packard Company
WAVE_FORMAT_YAMAHA_ADPCM Yamaha Corporation of America
WAVE_FORMAT_SONARC Speech Compression
WAVE_FORMAT_DSPGROUP_TRUESPEECH DSP Group, Inc
WAVE_FORMAT_ECHOSC1 Echo Speech Corporation
WAVE_FORMAT_AUDIOFILE_AF36, Virtual Music, Inc.
WAVE_FORMAT_APTX Audio Processing Technology
WAVE_FORMAT_AUDIOFILE_AF10, Virtual Music, Inc.
WAVE_FORMAT_PROSODY_1612, Aculab plc
WAVE_FORMAT_LRC, Merging Technologies S.A.
WAVE_FORMAT_DOLBY_AC2, Dolby Laboratories
WAVE_FORMAT_GSM610, Microsoft Corporation
WAVE_FORMAT_MSNAUDIO, Microsoft Corporation
WAVE_FORMAT_ANTEX_ADPCME, Antex Electronics Corporation
WAVE_FORMAT_CONTROL_RES_VQLPC, Control Resources Limited
WAVE_FORMAT_DIGIREAL, DSP Solutions, Inc.
WAVE_FORMAT_DIGIADPCM, DSP Solutions, Inc.
WAVE_FORMAT_CONTROL_RES_CR10, Control Resources Limited
WAVE_FORMAT_NMS_VBXADPCM
WAVE_FORMAT_CS_IMAADPCM
WAVE_FORMAT_ECHOSC3
WAVE_FORMAT_ROCKWELL_ADPCM
WAVE_FORMAT_ROCKWELL_DIGITALK
WAVE_FORMAT_XEBEC
WAVE_FORMAT_G721_ADPCM
WAVE_FORMAT_G728_CELP
WAVE_FORMAT_MSG723
WAVE_FORMAT_MPEG, Microsoft Corporation
WAVE_FORMAT_RT24
WAVE_FORMAT_PAC
WAVE_FORMAT_MPEGLAYER3, ISO/MPEG Layer3 Format Tag
WAVE_FORMAT_LUCENT_G723
WAVE_FORMAT_CIRRUS
WAVE_FORMAT_ESPCM
WAVE_FORMAT_VOXWARE
WAVE_FORMAT_CANOPUS_ATRAC
WAVE_FORMAT_G726_ADPCM
WAVE_FORMAT_G722_ADPCM
WAVE_FORMAT_DSAT_DISPLAY
WAVE_FORMAT_VOXWARE_BYTE_ALIGNED
WAVE_FORMAT_VOXWARE_AC8
WAVE_FORMAT_VOXWARE_AC10
WAVE_FORMAT_VOXWARE_AC16
WAVE_FORMAT_VOXWARE_AC20
WAVE_FORMAT_VOXWARE_RT24
WAVE_FORMAT_VOXWARE_RT29
WAVE_FORMAT_VOXWARE_RT29HW
WAVE_FORMAT_VOXWARE_VR12
WAVE_FORMAT_VOXWARE_VR18
WAVE_FORMAT_VOXWARE_TQ40
WAVE_FORMAT_SOFTSOUND
WAVE_FORMAT_VOXWARE_TQ60
WAVE_FORMAT_MSRT24
WAVE_FORMAT_G729A
WAVE_FORMAT_MVI_MVI2
WAVE_FORMAT_DF_G726
WAVE_FORMAT_DF_GSM610
WAVE_FORMAT_ISIAUDIO
WAVE_FORMAT_ONLIVE
WAVE_FORMAT_SBC24
WAVE_FORMAT_DOLBY_AC3_SPDIF
WAVE_FORMAT_MEDIASONIC_G723
WAVE_FORMAT_PROSODY_8KBPS
WAVE_FORMAT_ZYXEL_ADPCM
WAVE_FORMAT_PHILIPS_LPCBB
WAVE_FORMAT_PACKED
WAVE_FORMAT_MALDEN_PHONYTALK
WAVE_FORMAT_GSM
WAVE_FORMAT_G729
WAVE_FORMAT_G723
WAVE_FORMAT_ACELP
WAVE_FORMAT_RAW_AAC1
WAVE_FORMAT_RHETOREX_ADPCM
WAVE_FORMAT_IRAT
WAVE_FORMAT_VIVO_G723
WAVE_FORMAT_VIVO_SIREN
WAVE_FORMAT_DIGITAL_G723
WAVE_FORMAT_SANYO_LD_ADPCM
WAVE_FORMAT_SIPROLAB_ACEPLNET
WAVE_FORMAT_SIPROLAB_ACELP4800
WAVE_FORMAT_SIPROLAB_ACELP8V3
WAVE_FORMAT_SIPROLAB_G729
WAVE_FORMAT_SIPROLAB_G729A
WAVE_FORMAT_SIPROLAB_KELVIN
WAVE_FORMAT_G726ADPCM
WAVE_FORMAT_QUALCOMM_PUREVOICE
WAVE_FORMAT_QUALCOMM_HALFRATE
WAVE_FORMAT_TUBGSM
WAVE_FORMAT_MSAUDIO1
Windows Media Audio, WAVE_FORMAT_WMAUDIO2, Microsoft Corporation
Windows Media Audio Professional WAVE_FORMAT_WMAUDIO3, Microsoft Corporation
Windows Media Audio Lossless, WAVE_FORMAT_WMAUDIO_LOSSLESS
Windows Media Audio Professional over SPDIF WAVE_FORMAT_WMASPDIF (0x0164)
WAVE_FORMAT_UNISYS_NAP_ADPCM
WAVE_FORMAT_UNISYS_NAP_ULAW
WAVE_FORMAT_UNISYS_NAP_ALAW
WAVE_FORMAT_UNISYS_NAP_16K
WAVE_FORMAT_CREATIVE_ADPCM
WAVE_FORMAT_CREATIVE_FASTSPEECH8
WAVE_FORMAT_CREATIVE_FASTSPEECH10
WAVE_FORMAT_UHER_ADPCM
WAVE_FORMAT_QUARTERDECK
WAVE_FORMAT_ILINK_VC
WAVE_FORMAT_RAW_SPORT
WAVE_FORMAT_ESST_AC3
WAVE_FORMAT_IPI_HSX
WAVE_FORMAT_IPI_RPELP
WAVE_FORMAT_CS2
WAVE_FORMAT_SONY_SCX
WAVE_FORMAT_FM_TOWNS_SND
WAVE_FORMAT_BTV_DIGITAL
WAVE_FORMAT_QDESIGN_MUSIC
WAVE_FORMAT_VME_VMPCM
WAVE_FORMAT_TPC
WAVE_FORMAT_OLIGSM
WAVE_FORMAT_OLIADPCM
WAVE_FORMAT_OLICELP
WAVE_FORMAT_OLISBC
WAVE_FORMAT_OLIOPR
WAVE_FORMAT_LH_CODEC
WAVE_FORMAT_NORRIS
WAVE_FORMAT_SOUNDSPACE_MUSICOMPRESS
Advanced Audio Coding (AAC) audio in Audio Data Transport Stream (ADTS) format.
The format block is a WAVEFORMATEX structure with wFormatTag equal to WAVE_FORMAT_MPEG_ADTS_AAC.
The WAVEFORMATEX structure specifies the core AAC-LC sample rate and number of channels,
prior to applying spectral band replication (SBR) or parametric stereo (PS) tools, if present.
No additional data is required after the WAVEFORMATEX structure.
http://msdn.microsoft.com/en-us/library/dd317599%28VS.85%29.aspx
MPEG_RAW_AAC
Source wmCodec.h
MPEG-4 audio transport stream with a synchronization layer (LOAS) and a multiplex layer (LATM).
The format block is a WAVEFORMATEX structure with wFormatTag equal to WAVE_FORMAT_MPEG_LOAS.
The WAVEFORMATEX structure specifies the core AAC-LC sample rate and number of channels,
prior to applying spectral SBR or PS tools, if present.
No additional data is required after the WAVEFORMATEX structure.
NOKIA_MPEG_ADTS_AAC
Source wmCodec.h
NOKIA_MPEG_RAW_AAC
Source wmCodec.h
VODAFONE_MPEG_ADTS_AAC
Source wmCodec.h
VODAFONE_MPEG_RAW_AAC
Source wmCodec.h
High-Efficiency Advanced Audio Coding (HE-AAC) stream.
The format block is an HEAACWAVEFORMAT structure.
WAVE_FORMAT_DVM
WAVE_FORMAT_VORBIS1 "Og" Original stream compatible
WAVE_FORMAT_VORBIS2 "Pg" Have independent header
WAVE_FORMAT_VORBIS3 "Qg" Have no codebook header
WAVE_FORMAT_VORBIS1P "og" Original stream compatible
WAVE_FORMAT_VORBIS2P "pg" Have independent header
WAVE_FORMAT_VORBIS3P "qg" Have no codebook header
Raw AAC1
Windows Media Audio Voice (WMA Voice)
Extensible
WAVE_FORMAT_DEVELOPMENT
FLAC
Converts an encoding value to the equivalent GUID value.
The encoding value to convert to the equivalent GUID value.
The GUID which belongs to the specified encoding.
Converts a GUID value to the equivalent encoding value.
The GUID to convert to the equivalent encoding value.
The encoding value which belongs to the specified GUID.
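The mapping between wave-format tags and media subtype GUIDs follows a well-known Windows convention: the subtype GUID is the base GUID {00000000-0000-0010-8000-00AA00389B71} with the format tag placed in the first field. A minimal Python sketch of that conversion, shown for illustration and independent of CSCore's actual implementation:

```python
import uuid

# Base GUID shared by all wave-format-tag-derived media subtypes;
# the first (32-bit) field carries the WAVE_FORMAT_* tag.
BASE_SUBTYPE = uuid.UUID("00000000-0000-0010-8000-00AA00389B71")

def encoding_to_subtype(format_tag: int) -> uuid.UUID:
    """Build the media subtype GUID for a WAVE_FORMAT_* tag."""
    fields = (format_tag,) + BASE_SUBTYPE.fields[1:]
    return uuid.UUID(fields=fields)

def subtype_to_encoding(subtype: uuid.UUID) -> int:
    """Extract the WAVE_FORMAT_* tag from a media subtype GUID."""
    if subtype.fields[1:] != BASE_SUBTYPE.fields[1:]:
        raise ValueError("not a wave-format-tag-derived subtype")
    return subtype.fields[0]

WAVE_FORMAT_PCM = 0x0001
pcm_subtype = encoding_to_subtype(WAVE_FORMAT_PCM)
print(pcm_subtype)  # 00000001-0000-0010-8000-00aa00389b71
```

Note that this convention only covers subtypes derived from classic wave-format tags; codecs identified by their own standalone GUIDs cannot be converted this way.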
The Major Type for Audio media types.
Channel mask which specifies the speaker configuration. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757714(v=vs.85).aspx
Front left speaker.
Front right speaker.
Front center speaker.
Low frequency speaker.
Back left speaker.
Back right speaker.
Front left of center speaker.
Front right of center speaker.
Back center speaker.
Side left speaker.
Side right speaker.
Top center speaker.
Top front left speaker.
Top front center speaker.
Top front right speaker.
Top back left speaker.
Top back center speaker.
Top back right speaker.
Defines common channel masks.
Mono.
Stereo.
5.1 surround with rear speakers.
5.1 surround with side speakers.
7.1 surround.
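The common masks above are plain bitwise combinations of the individual speaker flags. A short Python sketch using the standard flag values from ksmedia.h (shown for illustration; CSCore exposes the same values as an enum):

```python
# Speaker-position bit flags as defined for WAVEFORMATEXTENSIBLE's
# dwChannelMask (values from ksmedia.h).
SPEAKER_FRONT_LEFT    = 0x1
SPEAKER_FRONT_RIGHT   = 0x2
SPEAKER_FRONT_CENTER  = 0x4
SPEAKER_LOW_FREQUENCY = 0x8
SPEAKER_BACK_LEFT     = 0x10
SPEAKER_BACK_RIGHT    = 0x20
SPEAKER_SIDE_LEFT     = 0x200
SPEAKER_SIDE_RIGHT    = 0x400

# Common configurations are ORed combinations of the flags above.
STEREO = SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT                  # 0x3
FIVE_POINT_ONE_REAR = (STEREO | SPEAKER_FRONT_CENTER |
                       SPEAKER_LOW_FREQUENCY |
                       SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT)     # 0x3F
FIVE_POINT_ONE_SIDE = (STEREO | SPEAKER_FRONT_CENTER |
                       SPEAKER_LOW_FREQUENCY |
                       SPEAKER_SIDE_LEFT | SPEAKER_SIDE_RIGHT)     # 0x60F

def channel_count(mask: int) -> int:
    """The number of channels equals the number of set bits in the mask."""
    return bin(mask).count("1")
```

The channel count of a mask is simply its population count, which is why a channel mask together with a bits-per-sample value fully describes the frame layout.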
Specifies the audio profile and level of an Advanced Audio Coding (AAC) stream.
The Level 2 AAC profile (0x29) is the default setting.
None/Invalid
AACProfile_L2_0x29 - Default value
AACProfile_L4_0x2A
AACProfile_L5_0x2B
HighEfficiencyAACProfile_L2_0x2C
HighEfficiencyAACProfile_L3_0x2D
HighEfficiencyAACProfile_L4_0x2E
HighEfficiencyAACProfile_L5_0x2F
ReservedForIsoUse_0x30
ReservedForIsoUse_0x31
ReservedForIsoUse_0x32
ReservedForIsoUse_0x33
Provides an encoder for encoding raw waveform-audio data to the AAC (Advanced Audio Codec) format.
Initializes a new instance of the class.
Format of the audio data which gets encoded.
Stream which should be used to save the encoded data in.
Initializes a new instance of the class.
Format of the audio data which gets encoded.
Stream which should be used to save the encoded data in.
Bitrate which should be used for encoding. Use 192000 as the default value.
Guid of the container type which should be used.
Gets or sets the audio profile and level of an Advanced Audio Coding (AAC) stream.
This attribute contains the value of the audioProfileLevelIndication field, as defined by ISO/IEC 14496-3.
Mediafoundation AAC decoder.
Gets a value which indicates whether the Mediafoundation AAC decoder is supported on the current platform.
Initializes a new instance of the class.
Url which points to a data source which provides AAC data. This is typically a filename.
Initializes a new instance of the class.
Stream which contains AAC data.
Decodes an aiff-chunk and provides its stored data.
Initializes a new instance of the class.
The binary reader which can be used to decode the chunk.
The chunk identifier.
binaryReader or chunkId is null.
Gets the underlying binary reader.
Take care of endianness; AIFF data is stored big-endian.
Gets the ChunkId of the chunk. The ChunkId is used to determine the type of the
chunk.
Gets the size of the chunk in bytes. The ChunkId and the size field
(4 bytes each) are not included.
Seeks to the end of the chunk.
Can be used to make sure that the underlying stream points to
the next chunk.
Provides all chunks of an aiff stream.
Initializes a new instance of the class.
The binary reader which can be used to decode the chunks.
FORM header not found, or invalid form type.
Gets the form type.
Either 'AIFF' or 'AIFC'.
Gets all found chunks of the aiff stream.
Seeks to the end of the chunk.
Can be used to make sure that the underlying stream points to
the next chunk.
Represents errors that occur when decoding or encoding Aiff-streams/files.
Initializes a new instance of the class.
Initializes a new instance of the class.
The message that describes the error.
Initializes a new instance of the class.
The message that describes the error.
The exception that caused the current exception.
Initializes a new instance of the class.
The SerializationInfo that holds the serialized object
data about the exception being thrown.
The StreamingContext that contains contextual
information about the source or destination.
Decodes an aiff stream/file.
Initializes a new instance of the class for the specified file.
The complete file path to be decoded.
No COMM chunk found, no SSND chunk found, or the format is not supported.
Initializes a new instance of the class for the specified stream.
The stream to be decoded.
stream is null.
The stream is not readable or not seekable.
No COMM chunk found, no SSND chunk found, or the format is not supported.
Gets the found chunks of the aiff stream/file.
Reads a sequence of elements from the stream and advances the position within the stream by
the number of elements read.
An array of elements. When this method returns, the buffer contains the
specified array of elements with the values between offset and (offset
+ count - 1) replaced by the elements read from the current source.
The zero-based offset in the buffer at which to begin storing the data
read from the current stream.
The maximum number of elements to read from the current source.
The total number of elements read into the buffer.
buffer is null.
offset or count is less than zero.
The sum of offset and count is larger than the buffer length.
Unexpected error: the bits per sample value is not supported.
Gets a value indicating whether the stream supports seeking.
Gets the format of the waveform-audio data.
Gets or sets the current position in bytes.
The value is less than zero or greater than the length.
Gets the length of the audio data in bytes.
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged resources; false to release only
unmanaged resources.
Finalizes an instance of the class.
Provides the format of the encoded audio data of an AIFF-file.
Initializes a new instance of the class.
The binary reader which can be used to decode the chunk.
Compression type not supported.
Gets the number of channels.
Gets the total number of sample frames.
To get the total number of samples, multiply the number of sample frames
by the number of channels.
Gets the number of bits per sample.
Gets the sample rate in Hz.
Gets the compression type.
Only the PCM compression type is currently supported.
Gets the wave format.
The wave format.
This method does not take multi-channel formats into account; it won't set up a channel mask.
Seeks to the end of the chunk.
Can be used to make sure that the underlying stream points to
the next chunk.
Provides the format version of the aifc file.
Defines Aiff-Versions.
Version 1.
Initializes a new instance of the class.
The binary reader which can be used to decode the chunk.
Invalid AIFF-C Version.
Gets the version of the aifc file.
Seeks to the end of the chunk.
Can be used to make sure that the underlying stream points to
the next chunk.
Provides the encoded audio data of an aiff stream.
Initializes a new instance of the class.
The binary reader which can be used to decode the chunk.
Gets the offset. The offset determines where the first sample frame in the chunk starts.
Offset in bytes.
Gets the block size. It specifies the size in bytes of the blocks that sound data is aligned to.
Gets the zero based position in the stream, at which the encoded audio data starts.
Seeks to the end of the chunk.
Can be used to make sure that the underlying stream points to
the next chunk.
Represents an entry of the codec factory which provides information about a codec.
Gets the delegate which initializes a codec decoder based on a stream.
Gets all file extensions associated with the codec.
Initializes a new instance of the class.
Delegate which initializes a codec decoder based on a stream.
All file extensions associated with the codec.
Provides data for events which notify the client that a connection got established.
Gets the uri of the connection.
Gets a value indicating whether the connection got established successfully or not. true if the connection got established successfully, otherwise false.
Initializes a new instance of the class.
The uri of the connection.
A value indicating whether the connection got established successfully or not. true if the connection got established successfully, otherwise false.
Mediafoundation DDP decoder.
Gets a value which indicates whether the Mediafoundation DDP decoder is supported on the current platform.
Initializes a new instance of the class.
Url which points to a data source which provides DDP data. This is typically a filename.
Initializes a new instance of the class.
Stream which contains DDP data.
Helps to choose the right decoder for different codecs.
Gets the default singleton instance of the class.
Gets the file filter in English. This filter can be used e.g. in combination with an OpenFileDialog.
Registers a new codec.
The key which gets used internally to store the codec in a
dictionary. This is typically the associated file extension. For example: the mp3 codec
uses the string "mp3" as its key.
The entry which provides information about the codec.
Returns a fully initialized instance which is able to decode the specified file. If the
specified file can not be decoded, this method throws an exception.
Filename of the specified file.
Fully initialized instance which is able to decode the specified file.
The codec of the specified file is not supported.
Returns a fully initialized instance which is able to decode the audio source behind the
specified Uri.
If the specified audio source can not be decoded, this method throws an exception.
Uri which points to an audio source.
Fully initialized instance which is able to decode the specified audio source.
The codec of the specified audio source is not supported.
Returns all the common file extensions of all supported codecs. Note that some of these file extensions belong to
more than one codec.
That means that some files with a given extension may be decodable while other files
with the same extension can't be decoded.
Supported file extensions.
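The registration and lookup mechanism described above can be sketched as a plain dictionary keyed by file extension. The Python below is an illustrative mirror of that design, not CSCore's actual implementation; the class and method names are invented for this sketch:

```python
import os

class CodecRegistry:
    """Hypothetical mirror of a codec factory: codecs are stored under a
    key (typically a file extension) and looked up by filename extension."""

    def __init__(self):
        self._codecs = {}  # key (e.g. "mp3") -> (decoder_factory, extensions)

    def register(self, key, decoder_factory, extensions):
        self._codecs[key.lower()] = (decoder_factory,
                                     [e.lower() for e in extensions])

    def get_codec(self, filename):
        # Strip the extension from the filename and find a codec claiming it.
        ext = os.path.splitext(filename)[1].lstrip(".").lower()
        for factory, extensions in self._codecs.values():
            if ext in extensions:
                return factory
        raise ValueError(f"codec for '.{ext}' files is not supported")

    def supported_extensions(self):
        # Some extensions may be claimed by more than one codec.
        return sorted({e for _, exts in self._codecs.values() for e in exts})
```

Because the lookup is purely extension-based, a mislabeled file can match a codec and still fail to decode, which is exactly the caveat noted above.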
Defines the channel assignments.
Independent assignment.
Left/side stereo. Channel 0 becomes the left channel while channel 1 becomes the side channel.
Right/side stereo. Channel 0 becomes the right channel while channel 1 becomes the side channel.
Mid/side stereo. Channel 0 becomes the mid channel while channel 1 becomes the side channel.
FLAC Exception.
Gets the layer of the flac stream in which the exception got thrown.
Used for debugging purposes.
Initializes a new instance of the class.
A message which describes the error.
The layer of the flac stream in which the exception got thrown.
Initializes a new instance of the class.
The InnerException which caused the error.
The layer of the flac stream in which the exception got thrown.
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or
destination.
When overridden in a derived class, sets the SerializationInfo with information about the exception.
The SerializationInfo that holds the serialized object data about the exception being thrown.
The StreamingContext that contains contextual information about the source or destination.
Represents a frame inside of a Flac-Stream.
Gets the header of the flac frame.
Gets the CRC16-checksum.
Gets a value indicating whether the decoder has encountered an error with this frame.
true if this frame contains an error; otherwise, false.
Creates a new instance of the class based on the specified stream.
The stream which contains the flac frame.
A new instance of the class.
Creates a new instance of the class based on the specified stream and some basic stream information.
The stream which contains the flac frame.
Some basic information about the flac stream.
A new instance of the class.
Tries to read the next flac frame inside of the specified stream and returns a value which indicates whether the next flac frame could be successfully read.
True if the next flac frame could be successfully read; false if not.
Gets the raw pcm data of the flac frame.
The buffer which should be used to store the data in. This value can be null.
The number of read bytes.
Disposes the frame and releases all associated resources.
Finalizes an instance of the class.
Provides a decoder for decoding flac (Free Lossless Audio Codec) data.
Gets a list with all found metadata fields.
Gets the output of the decoder.
Gets a value which indicates whether seeking is supported. True means that seeking is supported; False means
that seeking is not supported.
Initializes a new instance of the class.
Filename of a flac file which should be decoded.
Initializes a new instance of the class.
Stream which contains flac data which should be decoded.
Initializes a new instance of the class.
Stream which contains flac data which should be decoded.
Scan mode which defines how to scan the flac data for frames.
Initializes a new instance of the class.
Stream which contains flac data which should be decoded.
Scan mode which defines how to scan the flac data for frames.
Callback which gets called when the pre-scan process has finished. Should be used if the
scan mode argument is set to asynchronous scanning.
Reads a sequence of bytes from the stream and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the buffer contains the specified
byte array with the values between offset and (offset +
count - 1) replaced by the bytes read from the current source.
The zero-based byte offset in the buffer at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Gets or sets the position of the stream in bytes.
Gets the length of the stream in bytes.
Disposes the instance and disposes the underlying stream.
Disposes the instance and disposes the underlying stream.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor which calls the Dispose method.
Represents the header of a flac frame.
Gets the number of samples the frame contains.
The number of samples the frame contains.
Gets the sample rate in Hz.
The sample rate in Hz.
Gets the number of channels.
The number of channels.
Gets the channel assignment.
The channel assignment.
Gets the bits per sample.
The bits per sample.
Gets a value which indicates whether the frame provides the sample number or the frame number.
A value which indicates whether the frame provides the sample number or the frame number.
Gets the frame's starting sample number.
The frame's starting sample number.
Only available if the blocking strategy is set to variable blocksize.
Gets the frame's number.
The frame's number.
Only available if the blocking strategy is set to fixed blocksize.
Gets the 8-bit crc checksum of the frame header.
The 8-bit crc checksum of the frame header.
Gets a value indicating whether this instance has an error.
true if this instance has error; otherwise, false.
Gets the stream position.
The stream position.
Initializes a new instance of the class.
The underlying stream which contains the frame header.
Initializes a new instance of the class.
The underlying stream which contains the frame header.
The stream-info-metadata-block of the flac stream which provides some basic information about the flac framestream. Can be set to null.
Initializes a new instance of the class.
The underlying stream which contains the frame header.
The stream-info-metadata-block of the flac stream which provides some basic information about the flac framestream. Can be set to null.
A value which indicates whether the crc8 checksum of the frame header should be calculated.
Initializes a new instance of the class.
The raw byte-data which contains the frame header.
The stream-info-metadata-block of the flac stream which provides some basic information about the flac framestream. Can be set to null.
A value which indicates whether the crc8 checksum of the frame header should be calculated.
Indicates whether the format of the current frame header is equal to the format of another frame header.
A frame header which provides the format to compare with the format of the current frame header.
true if the format of the current frame header is equal to the format of the other frame header.
Provides some basic information about a flac frame. This structure is typically used for implementing a seeking algorithm.
Gets the header of the flac frame.
Gets a value which indicates whether the described frame is the first frame of the flac stream.
Gets the offset in bytes at which the frame starts in the flac stream (including the header of the frame).
Gets the number of samples contained in the frames which occur before this frame.
Splits a flac file into a few basic layers and defines them. Mainly used for debugging purposes.
Everything which is not part of a flac frame.
For example the "fLaC" sync code.
Everything metadata related.
Everything which is part of a frame but not part of its subframes.
Everything subframe related.
Defines the blocking strategy of a flac frame.
The blocksize of flac frames is variable.
Each flac frame uses the same blocksize.
Provides data for a FlacPreScan.
Gets the list of frames found by the scan.
Initializes a new instance of the class.
Found frames.
Defines how to scan a flac stream.
Don't scan the flac stream. This will cause the stream to not be seekable.
Scan synchronously.
Scan asynchronously.
Don't use the stream while the scan is running, because the stream position
will change while scanning. Playing back the stream meanwhile will cause an error.
Default value.
Represents a flac metadata block.
Reads and returns a single metadata block from the specified stream.
The stream which contains the metadata block.
Returns the read metadata block.
Reads all metadata blocks from the specified stream.
The stream which contains the metadata blocks.
All read metadata blocks.
Skips all metadata blocks of the specified stream.
The stream which contains the metadata blocks.
Initializes a new instance of the class.
The type of the metadata.
A value which indicates whether this is the last block inside of the stream. true means that this is the last block inside of the stream.
The length of the block inside of the stream in bytes. Does not include the metadata header.
Gets the type of the metadata block.
Gets a value indicating whether this instance is the last block.
Gets the length of the block inside of the stream in bytes.
The length does not include the metadata header.
Represents a flac seektable.
Initializes a new instance of the class.
The stream which contains the seektable.
The length of the seektable inside of the stream in bytes. Does not include the metadata header.
A value which indicates whether this is the last block inside of the stream. true means that this is the last block inside of the stream.
Gets the number of entries the seektable offers.
Gets the seek points.
Gets the seek point at the specified index.
The seek point.
The index.
The seek point at the specified index.
Represents the streaminfo metadata block which provides general information about the flac stream.
Initializes a new instance of the class.
The stream which contains the streaminfo block.
The length of the streaminfo block inside of the stream in bytes. Does not include the metadata header.
A value which indicates whether this is the last block inside of
the stream. true means that this is the last block inside of the stream.
Gets the minimum size of the block in samples.
The minimum size of the block in samples.
Gets the maximum size of the block in samples.
The maximum size of the block in samples.
Gets the maximum size of the frame in bytes.
The maximum size of the frame in bytes.
Gets the minimum size of the frame in bytes.
The minimum size of the frame in bytes.
Gets the sample rate in Hz.
The sample rate.
Gets the number of channels.
The number of channels.
Gets the number of bits per sample.
The number of bits per sample.
Gets the total number of samples inside of the stream.
Gets the MD5 signature of the unencoded audio data.
The MD5 signature of the unencoded audio data.
This method is based on the CUETools.NET BitReader (see http://sourceforge.net/p/cuetoolsnet/code/ci/default/tree/CUETools.Codecs/BitReader.cs)
The author "Grigory Chudov" explicitly gave the permission to use the source as part of the cscore source code which got licensed under the ms-pl.
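The fields listed above map directly onto the streaminfo layout defined by the FLAC specification: a 34-byte block holding two 16-bit block sizes, two 24-bit frame sizes, a packed 64-bit field (20-bit sample rate, 3-bit channels-minus-one, 5-bit bits-per-sample-minus-one, 36-bit total samples), and the 16-byte MD5 signature. A Python sketch of the decoding, independent of CSCore:

```python
import struct

def parse_streaminfo(data: bytes):
    """Decode the 34-byte body of a FLAC streaminfo metadata block."""
    assert len(data) == 34
    min_block, max_block = struct.unpack(">HH", data[0:4])
    min_frame = int.from_bytes(data[4:7], "big")
    max_frame = int.from_bytes(data[7:10], "big")
    # 64 packed bits: sample rate (20) | channels-1 (3) | bps-1 (5) | samples (36)
    packed = int.from_bytes(data[10:18], "big")
    sample_rate = packed >> 44
    channels = ((packed >> 41) & 0x7) + 1
    bits_per_sample = ((packed >> 36) & 0x1F) + 1
    total_samples = packed & 0xFFFFFFFFF  # lower 36 bits
    md5 = data[18:34]
    return (min_block, max_block, min_frame, max_frame,
            sample_rate, channels, bits_per_sample, total_samples, md5)
```

Note the plus-one bias on the channel and bits-per-sample fields; forgetting it is a common source of off-by-one decoding bugs.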
Defines flac metadata types.
Streaminfo metadata.
Padding metadata.
Application metadata.
Seektable metadata.
Vorbis comment metadata.
Cue sheet metadata.
Picture metadata.
Undefined metadata. Used for custom metadata fields.
Represents a single flac seek point.
The sample number for a placeholder point.
Gets the sample number of the first sample in the target frame, or the placeholder value for a placeholder point.
The sample number of the first sample in the target frame.
According to https://xiph.org/flac/format.html#metadata_block_seektable.
Gets the offset (in bytes) from the first byte of the first frame header to the first byte of the target frame's header.
The offset (in bytes) from the first byte of the first frame header to the first byte of the target frame's header.
According to https://xiph.org/flac/format.html#metadata_block_seektable.
Gets the number of samples in the target frame.
The number of samples in the target frame.
According to https://xiph.org/flac/format.html#metadata_block_seektable.
Initializes a new instance of the class.
Initializes a new instance of the class.
The sample number of the target frame.
The offset of the target frame.
The number of samples of the target frame.
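As noted above, seek points are typically used to implement a seeking algorithm: pick the last non-placeholder point whose sample number does not lie past the target sample and resume decoding at its offset. A hypothetical Python sketch of that lookup (not CSCore's implementation):

```python
import bisect

# Per the FLAC spec, a placeholder point carries this sample number.
PLACEHOLDER = 0xFFFFFFFFFFFFFFFF

def find_seek_point(seek_points, target_sample):
    """seek_points: sorted list of (sample_number, stream_offset, frame_samples).

    Returns the best starting point for decoding toward target_sample,
    or None if no usable point precedes the target (decode from the start).
    """
    points = [p for p in seek_points if p[0] != PLACEHOLDER]
    # Index of the last point with sample_number <= target_sample.
    index = bisect.bisect_right([p[0] for p in points], target_sample) - 1
    if index < 0:
        return None
    return points[index]
```

After jumping to the returned offset, a decoder still has to decode forward frame by frame until it reaches the exact target sample.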
Copied from http://stackoverflow.com/questions/8970101/whats-the-quickest-way-to-compute-log2-of-an-integer-in-c 14.01.2015
This method is based on the CUETools.NET BitReader (see http://sourceforge.net/p/cuetoolsnet/code/ci/default/tree/CUETools.Codecs/BitReader.cs)
The author "Grigory Chudov" explicitly gave the permission to use the source as part of the cscore source code which got licensed under the ms-pl.
Delegate which initializes a new decoder for a specific codec based on a stream.
Stream which contains the data that should be decoded by the codec decoder.
Decoder for a specific codec based on a stream.
Mediafoundation MP1 decoder.
Gets a value which indicates whether the Mediafoundation MP1 decoder is supported on the current platform.
Initializes a new instance of the class.
Url which points to a data source which provides MP1 data. This is typically a filename.
Initializes a new instance of the class.
Stream which contains MP1 data.
Mediafoundation MP2 decoder.
Gets a value which indicates whether the Mediafoundation MP2 decoder is supported on the current platform.
Initializes a new instance of the class.
Url which points to a data source which provides MP2 data. This is typically a filename.
Initializes a new instance of the class.
Stream which contains MP2 data.
DirectX Media Object MP3 Decoder wrapper.
Initializes a new instance of the class.
File which contains raw MP3 data.
Initializes a new instance of the class.
Stream which contains raw MP3 data.
Gets or sets the position of the stream in bytes.
Gets the length of the stream in bytes.
Gets a value indicating whether the stream supports seeking.
Reads a sequence of bytes from the stream.
An array of bytes. When this method returns, the buffer contains the read bytes.
The zero-based byte offset in buffer at which to begin storing the data read from the stream.
The maximum number of bytes to be read from the stream.
The actual number of read bytes.
Returns a media object to decode the mp3 data.
Format of the mp3 data to decode.
Output format.
Media object to decode the mp3 data.
Returns the input format.
Input format.
Returns the output format.
Output format.
Gets raw mp3 data to decode.
Byte array which will hold the raw mp3 data to decode.
Number of requested bytes.
Total amount of read bytes.
Disposes the decoder.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Channelmode of MP3 data. For more information see the mp3 specification.
Stereo (left and right).
Joint stereo.
Dual channel.
Mono (only one channel).
The class describes an MPEG Audio Layer-3 (MP3) audio format.
Set this member to the MPEGLAYER3_ID_MPEG constant.
Indicates whether padding is used to adjust the average bitrate to the sampling rate.
Block size in bytes. This value equals the frame length in bytes multiplied by the number of frames per block. For MP3 audio, the frame length is calculated as follows: 144 x (bitrate / sample rate) + padding.
Number of audio frames per block.
Encoder delay in samples. If you do not know this value, set this structure member to zero.
MPEGLAYER3_WFX_EXTRA_BYTES
Initializes a new instance of the class.
Sample rate in Hz.
Number of channels.
Block size in bytes. This value equals the frame length in bytes multiplied by the number of frames per block. For MP3 audio, the frame length is calculated as follows: 144 x (bitrate / sample rate) + padding.
Bitrate.
Updates the dependent format properties.
MP3 Format id.
None
Default value. Equals the MPEGLAYER3_ID_MPEG constant.
Constant frame size.
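The frame-length formula quoted above can be checked with a few lines of Python (MPEG-1 Layer 3 only; integer division models the truncation to whole bytes):

```python
def mp3_frame_length(bitrate: int, sample_rate: int, padding: bool) -> int:
    """Frame length in bytes for an MPEG-1 Layer 3 frame:
    144 * bitrate / sample_rate, truncated, plus one padding byte if set."""
    return 144 * bitrate // sample_rate + (1 if padding else 0)

print(mp3_frame_length(128_000, 44100, False))  # 417
print(mp3_frame_length(128_000, 44100, True))   # 418
```

Because 144 * bitrate rarely divides evenly by the sample rate, encoders toggle the padding bit across frames so the long-run average matches the nominal bitrate, which is exactly the role of the padding flag described above.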
Represents an MP3 Frame.
Maximum length of one single frame in bytes.
Creates a new instance of the class based on a stream.
Stream which provides MP3 data.
A new instance of the class based on the specified stream.
Creates a new instance of the class based on a stream.
Stream which provides MP3 data.
Byte array which receives the content of the frame.
A new instance of the class based on the specified stream.
Reads data from the frame.
Buffer which will receive the read data.
Zero-based index at which to begin storing data within the buffer.
The number of read bytes.
Gets the Mpeg Version.
Gets the Mpeg Layer.
Gets the bit rate.
Gets the sample rate.
Gets the channel mode.
Gets the number of channels.
Gets the number of samples.
Gets the length of the frame.
Gets the channel extension.
Gets a value which indicates whether the copyright flag is set (true means that the copyright flag is set).
Gets a value which indicates whether the original flag is set (true means that the original flag is set).
Gets the emphasis.
Gets the padding.
Gets a value which indicates whether the crc flag is set (true means that the crc flag is set).
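The properties listed above (version, layer, bitrate, sample rate, channel mode, padding, and so on) are all packed into the 4-byte MP3 frame header. A minimal Python parser, restricted to MPEG-1 Layer III for brevity (the bitrate and sample-rate tables below cover only that case):

```python
def parse_mp3_header(header: bytes):
    """Parse a 4-byte MPEG-1 Layer III frame header (illustrative sketch)."""
    b0, b1, b2, b3 = header
    if b0 != 0xFF or (b1 & 0xE0) != 0xE0:      # 11-bit frame sync
        raise ValueError("no frame sync")
    version_bits = (b1 >> 3) & 0x3             # 3 = MPEG-1
    layer_bits = (b1 >> 1) & 0x3               # 1 = Layer III
    if version_bits != 3 or layer_bits != 1:
        raise ValueError("only MPEG-1 Layer III handled in this sketch")
    bitrates = [0, 32, 40, 48, 56, 64, 80, 96, 112,
                128, 160, 192, 224, 256, 320]  # kbit/s, MPEG-1 Layer III
    sample_rates = [44100, 48000, 32000]
    channel_modes = ["stereo", "joint stereo", "dual channel", "mono"]
    return {
        "bitrate": bitrates[b2 >> 4] * 1000,
        "sample_rate": sample_rates[(b2 >> 2) & 0x3],
        "padding": bool((b2 >> 1) & 0x1),
        "channel_mode": channel_modes[b3 >> 6],
    }

print(parse_mp3_header(b"\xFF\xFB\x90\x00"))
```

Scanning a stream for the 11-bit sync pattern and validating the decoded fields against the next header is how frame-based readers resynchronize after junk bytes such as ID3 tags.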
MP3 Mediafoundation Decoder.
Gets a value which indicates whether the Mediafoundation MP3 decoder is supported on the current platform.
Initializes a new instance of the class.
Url which points to a data source which provides MP3 data. This is typically a filename.
Initializes a new instance of the class.
Stream which contains MP3 data.
Indicates whether padding is used to adjust the average bitrate to the sampling rate. Use one of the following values:
Insert padding as needed to achieve the stated average bitrate.
Always insert padding. The average bit rate may be higher than stated.
Never insert padding. The average bit rate may be lower than stated.
An implementation for streaming mp3 data from the web, for example mp3 radio stations.
Initializes a new instance of the class.
The address of the mp3 stream.
Initializes a new instance of the class.
The address of the mp3 stream.
If set to true, the connection will be established asynchronously and the constructor will return immediately.
Doing so requires the use of the event, which will notify the caller when the
is ready for use. If set to false, the constructor will block the current thread for as long as it takes to establish the connection.
Initializes a new instance of the class.
The address of the mp3 stream.
Initializes a new instance of the class.
The address of the mp3 stream.
If set to true, the connection will be established asynchronously and the constructor will return immediately.
Doing so requires the use of the event, which will notify the caller when the
is ready for use. If set to false, the constructor will block the current thread for as long as it takes to establish the connection.
Gets the stream address.
Gets the number of buffered bytes.
Gets the size of the internal buffer in bytes.
Gets a value indicating whether the supports seeking.
This property will always be set to false.
Gets the of the decoded mp3 stream.
If the internal decoder has not been initialized yet, the value of this property is null.
Reads a sequence of elements from the and advances the position within the stream by the number of elements read.
An array of elements. When this method returns, the contains the specified array of elements with the values between and ( + - 1) replaced by the elements read from the current source.
The zero-based offset in the at which to begin storing the data read from the current stream.
The maximum number of elements to read from the current source.
The total number of elements read into the buffer.
Mp3WebStream
Gets or sets the current position. This property is not supported by the class.
The Mp3WebStream class does not support seeking.
Gets the length of the waveform-audio data. The value of this property will always be set to zero.
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
Occurs when connection got established and the async argument of the constructor was set to true.
Initializes the connection.
true if the connection was initialized successfully; otherwise false.
Could not create HttpWebRequest
or
Could not create WebResponse
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Finalizes an instance of the class.
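The asynchronous constructor pattern described above can be sketched as follows. The class name `Mp3WebStream` and the event name `ConnectionEstablished` are assumptions based on the descriptions in this section; check them against your CSCore version, and note that the URL is a placeholder.

```csharp
using System;
using CSCore.Codecs.MP3;

class WebStreamExample
{
    static void Main()
    {
        // async: true -> the constructor returns immediately; the
        // ConnectionEstablished event (name assumed) signals when the
        // stream is ready for use.
        var webStream = new Mp3WebStream("http://example.com/stream.mp3", true);
        webStream.ConnectionEstablished += (s, e) =>
        {
            Console.WriteLine("Connected. Format: {0}", webStream.WaveFormat);
        };
        Console.ReadLine(); // keep the process alive while connecting
    }
}
```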
Defines all known Mpeg-layers.
Reserved by ISO.
MPEG Layer 3
MPEG Layer 2
MPEG Layer 1
Defines all known Mpeg Versions.
Version 2.5
Reserved by ISO
Version 2.0
Version 1.0
Defines a Xing-Header.
Gets the header flags of the .
Gets the of a . If the does not have an , the return value will be null.
which should get checked whether it contains a .
of the specified or null.
Defines the header flags of a xing header.
Frames field is present
Bytes field is present.
TOC field is present.
Quality indicator field is present.
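A hedged sketch of checking a frame for a Xing header, assuming a static factory named `XingHeader.FromFrame` that returns null when no Xing header is present (as the description above suggests; verify the name in your CSCore version):

```csharp
using System;
using System.IO;
using CSCore.Codecs.MP3;

class XingExample
{
    static void Main()
    {
        using (Stream stream = File.OpenRead("test.mp3"))
        {
            Mp3Frame frame = Mp3Frame.FromStream(stream);
            // Returns null if the frame does not carry a Xing header.
            XingHeader xing = XingHeader.FromFrame(frame);
            Console.WriteLine(xing != null
                ? "VBR file with Xing header."
                : "No Xing header found.");
        }
    }
}
```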
Mediafoundation WMA decoder.
Gets a value which indicates whether the Mediafoundation WMA, WMA-Speech and WMA-Professional decoder is supported on the current platform.
Gets a value which indicates whether the Mediafoundation WMA-Speech decoder is supported on the current platform.
Gets a value which indicates whether the Mediafoundation WMA-Professional decoder is supported on the current platform.
Gets a value which indicates whether the Mediafoundation WMA decoder is supported on the current platform.
Initializes a new instance of the class.
Url which points to a data source which provides WMA data. This is typically a filename.
Initializes a new instance of the class.
Stream which contains WMA data.
Implementation of the interface which reads raw data from a based
on a specified .
Initializes a new instance of the class.
which contains raw waveform-audio data.
The format of the waveform-audio data within the .
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Gets a value indicating whether the supports seeking.
Gets the format of the raw data.
Gets or sets the position of the in bytes.
Gets the length of the in bytes.
Disposes the and the underlying .
Disposes the and the underlying .
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor which calls the method.
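Because a raw data stream carries no header, the format has to be supplied explicitly, as in this sketch (the class name `RawDataReader` and its namespace are assumptions based on this section; the file path is a placeholder):

```csharp
using System;
using System.IO;
using CSCore;
using CSCore.Codecs.RAW;

class RawDataExample
{
    static void Main()
    {
        // A headerless PCM file: the format cannot be derived from the
        // data itself, so it has to be specified explicitly.
        var format = new WaveFormat(44100, 16, 2);
        using (Stream raw = File.OpenRead("audio.raw"))
        using (IWaveSource source = new RawDataReader(raw, format))
        {
            var buffer = new byte[format.BytesPerSecond];
            int read = source.Read(buffer, 0, buffer.Length);
            Console.WriteLine("Read {0} bytes of raw audio.", read);
        }
    }
}
```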
Represents the of a wave file.
Chunk ID of the .
Initializes a new instance of the class.
which contains the data chunk.
Initializes a new instance of the class.
which should be used to read the data chunk.
Gets the zero-based position inside of the stream at which the audio data starts.
Represents the of a wave file.
Chunk ID of the .
Initializes a new instance of the class.
which contains the fmt chunk.
Initializes a new instance of the class.
which should be used to read the fmt chunk.
Gets the specified by the .
Represents a wave file chunk. For more information see
.
Initializes a new instance of the class.
which contains the wave file chunk.
Initializes a new instance of the class.
which should be used to read the wave file chunk.
Gets the unique ID of the Chunk. Each type of chunk has its own id.
Gets the data size of the chunk.
Parses the and returns a . Note that the position of the
stream has to point to a wave file chunk.
which points to a wave file chunk.
An instance of the class or one of its derived classes. If the stream does not point to a
wave file chunk, the returned instance will be invalid.
Provides a decoder for reading wave files.
Initializes a new instance of the class.
Filename which points to a wave file.
Initializes a new instance of the class.
Stream which contains wave file data.
Gets a list of all found chunks.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Gets the wave format of the wave file. This property gets specified by the .
Gets or sets the position of the in bytes.
Gets the length of the in bytes.
Gets a value indicating whether the supports seeking.
Disposes the and the underlying stream.
Disposes the and the underlying stream.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor which calls the method.
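A minimal reading sketch, assuming the decoder class is named `WaveFileReader` with a `Chunks` list and chunk members named `ChunkID` and `ChunkDataSize` (names inferred from the descriptions above; verify against your CSCore version):

```csharp
using System;
using CSCore.Codecs.WAV;

class WaveReadExample
{
    static void Main()
    {
        using (var reader = new WaveFileReader("test.wav"))
        {
            Console.WriteLine("Format: {0}", reader.WaveFormat);
            // The reader exposes every chunk it found while parsing the file.
            foreach (WaveFileChunk chunk in reader.Chunks)
                Console.WriteLine("Chunk: 0x{0:X8}, {1} bytes",
                    chunk.ChunkID, chunk.ChunkDataSize);
        }
    }
}
```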
Encoder for wave files.
Signals whether the object has already been disposed.
Signals whether the object is in a disposing state.
Initializes a new instance of the class.
Filename of the destination file. This filename should typically end with the .wav extension.
Format of the waveform-audio data. Note that the won't convert any
data.
Initializes a new instance of the class.
Destination stream which should be used to store the
Format of the waveform-audio data. Note that the won't convert any
data.
Disposes the and writes down the wave header.
Writes down all audio data of the to a file.
The filename.
The source to write down to the file.
If set to true, the file will be overwritten if it already exists.
The maximum number of bytes to write. Use -1 to write an infinite number of bytes.
This method is obsolete. Use the extension instead.
Encodes a single sample.
The sample to encode.
Encodes multiple samples.
Float array which contains the samples to encode.
Zero-based offset in the array.
Number of samples to encode.
Encodes raw data in the form of a byte array.
Byte array which contains the data to encode.
Zero-based offset in the .
Number of bytes to encode.
Writes down a single byte.
Byte to write down.
Writes down a single 16 bit integer value.
Value to write down.
Writes down a single 32 bit integer value.
Value to write down.
Writes down a single 32 bit float value.
Value to write down.
Disposes the and writes down the wave header.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor of the which calls the method.
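Since disposing the encoder writes the final wave header, the typical pattern is a `using` block, sketched here under the assumption that the class is named `WaveWriter` with a `WriteSample(float)` method and that `WaveFormat` accepts an `AudioEncoding` argument (verify against your CSCore version):

```csharp
using System;
using CSCore;
using CSCore.Codecs.WAV;

class WaveWriteExample
{
    static void Main()
    {
        // One second of a 440 Hz sine wave, written as IEEE-float samples.
        var format = new WaveFormat(44100, 32, 1, AudioEncoding.IeeeFloat);
        using (var writer = new WaveWriter("sine.wav", format))
        {
            for (int i = 0; i < format.SampleRate; i++)
                writer.WriteSample(
                    (float)Math.Sin(2 * Math.PI * 440 * i / format.SampleRate));
        }
        // Disposing the writer writes down the final wave header.
    }
}
```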
Enables a client to read input data from a capture endpoint buffer. For more information, see
.
Initializes a new instance of the class.
The native pointer of the IAudioCaptureClient COM object.
Gets the size of the next packet in frames (the size of one frame equals the blockalign value of the waveformat).
Creates a new by calling the method of the
specified .
The which should be used to create the -instance
with.
A new instance of the class.
Retrieves a pointer to the next available packet of data in the capture endpoint buffer.
For more information see
.
A pointer variable into which the method writes the starting address of the next data
packet that is available for the client to read.
Variable into which the method writes the frame count (the number of audio frames
available in the data packet). The client should either read the entire data packet or none of it.
Variable into which the method writes the buffer-status flags.
Variable into which the method writes the device position of the first audio frame in the
data packet. The device position is expressed as the number of audio frames from the start of the stream.
Variable into which the method writes the value of the performance counter at the time that
the audio endpoint device recorded the device position of the first audio frame in the data packet.
HRESULT
Retrieves a pointer to the next available packet of data in the capture endpoint buffer.
For more information see
.
Variable into which the method writes the frame count (the number of audio frames available in
the data packet). The client should either read the entire data packet or none of it.
Variable into which the method writes the buffer-status flags.
Variable into which the method writes the device position of the first audio frame in the
data packet. The device position is expressed as the number of audio frames from the start of the stream.
Variable into which the method writes the value of the performance counter at the time that
the audio endpoint device recorded the device position of the first audio frame in the data packet.
Pointer to a variable which stores the starting address of the next data packet that is available for the
client to read.
Use Marshal.Copy to convert the pointer to the buffer into an array.
Retrieves a pointer to the next available packet of data in the capture endpoint buffer.
For more information see
.
Variable into which the method writes the frame count (the number of audio frames available in
the data packet). The client should either read the entire data packet or none of it.
Variable into which the method writes the buffer-status flags.
Pointer to a variable which stores the starting address of the next data packet that is available for the
client to read.
Use Marshal.Copy to convert the pointer to the buffer into an array.
The ReleaseBuffer method releases the buffer. For more information, see .
The number of audio frames that the client read from the
capture buffer. This parameter must be either equal to the number of frames in the
previously acquired data packet or 0.
HRESULT
The ReleaseBuffer method releases the buffer. For more information, see .
The number of audio frames that the client read from the
capture buffer. This parameter must be either equal to the number of frames in the
previously acquired data packet or 0.
The GetNextPacketSize method retrieves the number of frames in the next data packet in
the capture endpoint buffer.
For more information, see .
Variable into which the method writes the frame count (the number of audio
frames in the next capture packet).
HRESULT
The GetNextPacketSize method retrieves the number of frames in the next data packet in
the capture endpoint buffer.
For more information, see .
The number of the audio frames in the next capture packet.
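The GetNextPacketSize/GetBuffer/ReleaseBuffer cycle described above can be sketched as a read loop. The managed signatures (`FromAudioClient`, `GetBuffer` with `out` parameters returning an `IntPtr`) are assumptions about the CSCore wrappers and may differ in your version:

```csharp
using System;
using System.Runtime.InteropServices;
using CSCore.CoreAudioAPI;

class CaptureExample
{
    // audioClient must be an AudioClient that was initialized for capturing.
    static void ReadPackets(AudioClient audioClient, int blockAlign)
    {
        using (var captureClient = AudioCaptureClient.FromAudioClient(audioClient))
        {
            int framesAvailable;
            AudioClientBufferFlags flags;
            while (captureClient.GetNextPacketSize() > 0)
            {
                // GetBuffer returns a pointer into the endpoint buffer; the
                // data must be copied out before ReleaseBuffer is called.
                IntPtr pData = captureClient.GetBuffer(out framesAvailable, out flags);
                var buffer = new byte[framesAvailable * blockAlign];
                Marshal.Copy(pData, buffer, 0, buffer.Length);
                // Release either the whole packet or nothing.
                captureClient.ReleaseBuffer(framesAvailable);
            }
        }
    }
}
```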
Defines flags that indicate the status of an audio endpoint buffer.
None
The data in the packet is not correlated with the previous packet's device position;
this is possibly due to a stream state transition or timing glitch.
Treat all of the data in the packet as silence and ignore the actual data values.
The time at which the device's stream position was recorded is uncertain. Thus, the
client might be unable to accurately set the time stamp for the current data packet.
The constants indicate characteristics of an audio session associated with
the stream. A client can specify these options during the initialization of the stream through the
StreamFlags parameter of the method.
The session expires when there are no associated streams and owning session control objects holding references.
The volume control is hidden in the volume mixer user interface when the audio session is created. If the session
associated with the stream already exists before opens the stream, the volume
control is displayed in the volume mixer.
The volume control is hidden in the volume mixer user interface after the session expires.
Specifies characteristics that a client can assign to an audio stream during the initialization of the stream.
None
The audio stream will be a member of a cross-process audio session. For more information, see
.
The audio stream will operate in loopback mode. For more information, see
.
Processing of the audio buffer by the client will be event driven. For more information, see
.
The volume and mute settings for an audio session will not persist across system restarts. For more information,
see .
This constant is new in Windows 7. The sample rate of the stream is adjusted to a rate specified by an application.
For more information, see
.
The class enables a client to monitor a stream's data rate and the current position in
the stream.
Initializes a new instance of the class.
The native pointer of the IAudioClock COM Object.
Gets the device frequency. For more information, see
.
Gets the device position.
Creates a new by calling the method of the
specified .
which should be used to create the -instance
with.
A new .
The GetFrequency method gets the device frequency.
The device frequency. For more information, see
.
HRESULT
The GetPosition method gets the current device position.
The device position is the offset from the start of the stream to the current position in the stream. However, the
units in which this offset is expressed are undefined—the device position value has meaning only in relation to the
. For more information, see
.
The value of the performance counter at the time that the audio endpoint device read the device position
() in response to the call. The method converts
the counter value to 100-nanosecond time
units before writing it to .
HRESULT
The GetCharacteristics method is reserved for future use.
Value that indicates the characteristics of the audio clock.
HRESULT
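Since the device position only has meaning in relation to the device frequency, a stream position in seconds can be derived by dividing the two. This sketch assumes CSCore-style wrappers named `FromAudioClient`, `GetFrequency()` and `GetPosition(out …, out …)`; the exact signatures may differ:

```csharp
using System;
using CSCore.CoreAudioAPI;

class ClockExample
{
    // audioClient must be an initialized AudioClient instance.
    static double GetPositionInSeconds(AudioClient audioClient)
    {
        using (var clock = AudioClock.FromAudioClient(audioClient))
        {
            ulong frequency = clock.GetFrequency();
            ulong position, qpcPosition;
            clock.GetPosition(out position, out qpcPosition);
            // position / frequency yields the offset from the stream start
            // in seconds.
            return (double)position / frequency;
        }
    }
}
```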
Used to get the device position.
Initializes a new instance of the class.
The native pointer of the IAudioClock2 COM object.
Initializes a new instance of the class.
An instance which should be used to query the
object.
The argument is null.
The COM object is not supported on the current platform. Only supported on Windows
7/Windows Server 2008 R2 and above.
For more information, see
.
The method gets the current device position, in frames, directly from the
hardware.
Receives the device position, in frames. The received position is an unprocessed value
that the method obtains directly from the hardware. For more information, see
.
Receives the value of the performance counter at the time that the audio endpoint device read
the device position retrieved in the parameter in response to the
call.
converts the counter value to 100-nanosecond time units before writing it to
QPCPosition.
HRESULT
The method gets the current device position, in frames, directly from the
hardware.
Receives the device position, in frames. The received position is an unprocessed value
that the method obtains directly from the hardware. For more information, see
.
Receives the value of the performance counter at the time that the audio endpoint device read
the device position retrieved in the parameter in response to the
call.
converts the counter value to 100-nanosecond time units before writing it to
QPCPosition.
Represents a peak meter on an audio stream to or from an audio endpoint device.
For more information, see
.
Initializes a new instance of class.
The native pointer.
Gets the number of channels in the audio stream that are monitored by peak meters.
Gets the peak sample value for the given .
The peak sample value for the given .
Gets the hardware-supported functions.
Gets the peak sample value for the channels in the audio stream.
Creates a new instance for the given .
The underlying device to create the audio meter instance for.
A new instance for the given .
Gets the peak sample value for the channels in the audio stream.
A variable into which the method writes the peak sample value for the audio stream. The peak value
is a number in the normalized range from 0.0 to 1.0.
HRESULT
Gets the peak sample value for the channels in the audio stream.
The peak sample value for the audio stream. The peak value is a number in the normalized range from 0.0 to
1.0.
Gets the number of channels in the audio stream that
are monitored by peak meters.
A variable into which the method writes the number of channels.
HRESULT
Gets the number of channels in the audio stream that
are monitored by peak meters.
The number of channels.
Gets the peak sample values for all the channels in the
audio stream.
The channel count. This parameter also specifies the number of elements in the
array. If the specified count does not match the number of channels in the stream,
the method returns error code .
An array of peak sample values. The method writes the peak values for the channels into the
array. The array contains one element for each channel in the stream. The peak values are numbers in the normalized
range from 0.0 to 1.0. The array gets allocated by the method.
HRESULT
Gets the peak sample values for all the channels in the
audio stream.
The channel count. This parameter also specifies the number of elements in the returned
array. If the specified count does not match the number of channels in the stream, the method returns error code
.
An array of peak sample values. The array contains one element for each channel in the stream. The peak values
are numbers in the normalized range from 0.0 to 1.0.
Gets the peak sample values for all the channels in the
audio stream.
An array of peak sample values. The array contains one element for each channel in the stream. The peak values
are numbers in the normalized range from 0.0 to 1.0.
Queries the audio endpoint device for its
hardware-supported functions.
A variable into which the method writes a hardware support mask that indicates the
hardware capabilities of the audio endpoint device.
HRESULT
Queries the audio endpoint device for its
hardware-supported functions.
A hardware support mask that indicates the hardware capabilities of the audio endpoint device.
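A hedged sketch of reading the current peak of the default render device, assuming a static factory named `AudioMeterInformation.FromDevice` and a `PeakValue` property as the descriptions above suggest (verify the names in your CSCore version):

```csharp
using System;
using CSCore.CoreAudioAPI;

class MeterExample
{
    static void Main()
    {
        using (var enumerator = new MMDeviceEnumerator())
        using (var device = enumerator.GetDefaultAudioEndpoint(
            DataFlow.Render, Role.Console))
        using (var meter = AudioMeterInformation.FromDevice(device))
        {
            // Peak values are normalized to the range 0.0 .. 1.0.
            Console.WriteLine("Current peak: {0:F3}", meter.PeakValue);
        }
    }
}
```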
Provides data for the event.
Gets the number of audio channels in the session submix.
Gets the volume level for each audio channel. Each volume level is a value in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume.
Gets the index of the audio channel that changed. Use this value as an index into the .
If the session submix contains n channels, the channels are numbered from 0 to n – 1. If more than one channel might have changed, the value of ChangedChannel is (DWORD)(–1).
Gets the volume of the channel specified by the .
The zero-based index of the channel.
Volume level of the specified channelIndex in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume.
Initializes a new instance of the class.
The number of channels.
Volumes of the channels.
Number of channel volumes changed.
User-defined event context.
The class can be used by a client to get information about the audio session.
For more information, see .
Initializes a new instance of the class.
The native pointer to the IAudioSessionControl2 object.
Gets the session identifier.
For more information, see .
Gets the identifier of the audio session instance.
For more information, see .
Gets the process identifier of the audio session.
If the session is not a single-process session (see ), the is the identifier of the process that initially created the session.
Gets a value indicating whether the session spans more than one process; true if it does, otherwise false.
Gets the process of the audio session.
If the session is not a single-process session (see ), the Process is the process that initially created the session.
If that process is no longer available, the value is null.
Gets a value indicating whether the session is a system sounds session; true if it is, otherwise false.
Gets the session identifier.
A variable which retrieves the session identifier.
HRESULT
Gets the identifier of the audio session instance.
A variable which retrieves the identifier of a particular instance of the audio session.
HRESULT
Gets the process identifier of the audio session.
A variable which receives the process id of the audio session.
HRESULT
Indicates whether the session is a system sounds session.
HRESULT; S_OK = true, S_FALSE = false
Enables or disables the default stream attenuation experience (auto-ducking) provided by the system.
A variable that enables or disables system auto-ducking.
HRESULT
Enables or disables the default stream attenuation experience (auto-ducking) provided by the system.
A variable that enables or disables system auto-ducking.
Provides data for the event.
Gets the reason that the audio session was disconnected.
Initializes a new instance of the class.
The reason that the audio session was disconnected.
Specifies reasons that an audio session was disconnected.
For more information about WTS sessions, see the Windows SDK documentation or .
The user removed the audio endpoint device.
The Windows audio service has stopped.
The stream format changed for the device that the audio session is connected to.
The user logged off the Windows Terminal Services (WTS) session that the audio session was running in.
The WTS session that the audio session was running in was disconnected.
The (shared-mode) audio session was disconnected to make the audio endpoint device available for an exclusive-mode connection.
Provides data for the event.
Gets the new display name of the session.
Initializes a new instance of the class.
The new display name of the session.
The event context value.
The object enumerates audio sessions on an audio device.
For more information, see .
Initializes a new instance of the class.
The native pointer of the object.
Gets the total number of audio sessions.
Gets the audio session specified by an index.
The session number. If there are n sessions, the sessions are numbered from 0 to n – 1. To get the number of sessions, call the GetCount method.
Gets the total number of audio sessions that are open on the audio device.
Receives the total number of audio sessions.
HRESULT
Gets the audio session specified by an audio session number.
The session number. If there are n sessions, the sessions are numbered from 0 to n – 1. To get the number of sessions, call the GetCount method.
The of the specified session number.
HRESULT
Gets the audio session specified by an audio session number.
The session number. If there are n sessions, the sessions are numbered from 0 to n – 1. To get the number of sessions, call the GetCount method.
The of the specified session number.
Returns an enumerator that iterates through the audio sessions.
A that can be used to iterate through the audio sessions.
Returns an enumerator that iterates through the audio sessions.
An object that can be used to iterate through the audio sessions.
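Since the enumerator is iterable, listing sessions is a simple loop. This sketch assumes CSCore-style names such as `AudioSessionManager2`, `GetSessionEnumerator()`, `QueryInterface<T>()`, `SessionIdentifier` and `ProcessID`; verify them against your version:

```csharp
using System;
using CSCore.CoreAudioAPI;

class SessionExample
{
    // sessionManager must be an activated AudioSessionManager2 instance.
    static void ListSessions(AudioSessionManager2 sessionManager)
    {
        using (var sessionEnumerator = sessionManager.GetSessionEnumerator())
        {
            foreach (var session in sessionEnumerator)
            {
                // Query the extended session control for identifiers.
                using (var session2 = session.QueryInterface<AudioSessionControl2>())
                {
                    Console.WriteLine("Session: {0} (PID {1})",
                        session2.SessionIdentifier, session2.ProcessID);
                }
            }
        }
    }
}
```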
A base class for all event-args classes which specify an value.
Gets the event context value.
Initializes a new instance of the class.
The event context value.
Provides notifications of session-related events such as changes in the volume level, display name, and session state.
For more information, see .
Occurs when the display name for the session has changed.
Occurs when the display icon for the session has changed.
Occurs when the volume level or muting state of the session has changed.
Occurs when the volume level of an audio channel in the session submix has changed.
Occurs when the grouping parameter for the session has changed.
Occurs when the stream-activity state of the session has changed.
Occurs when the session has been disconnected.
Notifies the client that the display name for the session has changed.
The new display name for the session.
The event context value.
HRESULT
Notifies the client that the display icon for the session has changed.
The path for the new display icon for the session.
The event context value.
HRESULT
Notifies the client that the volume level or muting state of the audio session has changed.
The new volume level for the audio session. This parameter is a value in the range 0.0 to 1.0,
where 0.0 is silence and 1.0 is full volume (no attenuation).
The new muting state. If TRUE, muting is enabled. If FALSE, muting is disabled.
The event context value.
HRESULT
Notifies the client that the volume level of an audio channel in the session submix has changed.
The number of channels in the session submix.
An array of volume levels. Each element is a value of type float that specifies the volume level for a particular channel. Each volume level is a value in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume (no attenuation). The number of elements in the array is specified by the ChannelCount parameter.
The number of the channel whose volume level changed.
The event context value.
Notifies the client that the grouping parameter for the session has changed.
The new grouping parameter for the session. This parameter points to a grouping-parameter GUID.
The event context value.
HRESULT
Notifies the client that the stream-activity state of the session has changed.
The new session state.
HRESULT
Notifies the client that the audio session has been disconnected.
The reason that the audio session was disconnected.
HRESULT
Provides data for the event.
Gets the new grouping parameter for the session.
Initializes a new instance of the class.
The new grouping parameter for the session.
The event context value.
Provides data for the event.
Gets the path for the new display icon for the session.
Initializes a new instance of the class.
The path for the new display icon for the session.
The event context value.
The class enables a client to access the session controls and volume controls for both cross-process and process-specific audio sessions.
Initializes a new instance of the class.
Native pointer to the object.
Retrieves an audio session control.
If the GUID does not identify a session that has been previously opened, the call opens a new but empty session. If the value is Guid.Empty, the method assigns the stream to the default session.
Specifies the status of the flags for the audio stream.
The of the specified .
HRESULT
Retrieves an audio session control.
If the GUID does not identify a session that has been previously opened, the call opens a new but empty session. If the value is Guid.Empty, the method assigns the stream to the default session.
Specifies the status of the flags for the audio stream.
instance.
Retrieves a simple audio volume control.
Specifies whether the request is for a cross-process session. Set to TRUE if the session is cross-process. Set to FALSE if the session is not cross-process.
If the GUID does not identify a session that has been previously opened, the call opens a new but empty session. If the value is Guid.Empty, the method assigns the stream to the default session.
of the audio volume control object.
HRESULT
Retrieves a simple audio volume control.
Specifies whether the request is for a cross-process session. Set to TRUE if the session is cross-process. Set to FALSE if the session is not cross-process.
If the GUID does not identify a session that has been previously opened, the call opens a new but empty session. If the value is Guid.Empty, the method assigns the stream to the default session.
instance.
Enables an application to manage submixes for the audio device.
Occurs when the audio session has been created.
Occurs when a pending system ducking event gets fired.
Occurs when a pending system unducking event gets fired.
Creates a new instance of based on a .
Device to use to activate the .
instance for the specified .
Initializes a new instance of the class.
The native pointer.
Gets a pointer to the audio session enumerator object.
Retrieves a session enumerator object that the client can use to enumerate audio sessions on the audio device.
HRESULT
The client is responsible for releasing the .
Gets a pointer to the audio session enumerator object.
a session enumerator object that the client can use to enumerate audio sessions on the audio device.
The client is responsible for releasing the returned .
Registers the application to receive a notification when a session is created.
The application's implementation of the interface.
HRESULT
Use the class as the default implementation for the parameter.
Note: Make sure to call the from an MTA thread. Also make sure to enumerate all sessions after calling this method.
Registers the application to receive a notification when a session is created.
The application's implementation of the interface.
Use the class as the default implementation for the parameter.
Note: Make sure to call the from an MTA thread. Also make sure to enumerate all sessions after calling this method.
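The MTA-thread and enumerate-after-registering requirements can be sketched as follows. The names `AudioSessionNotification`, `SessionCreated` and `RegisterSessionNotification` are assumptions based on the descriptions in this section:

```csharp
using System;
using System.Threading;
using CSCore.CoreAudioAPI;

class NotificationExample
{
    // sessionManager must be an activated AudioSessionManager2 instance.
    static void Register(AudioSessionManager2 sessionManager)
    {
        var notification = new AudioSessionNotification();
        notification.SessionCreated += (s, e) =>
            Console.WriteLine("New audio session created.");

        // The registration has to happen on an MTA thread.
        var thread = new Thread(() =>
        {
            sessionManager.RegisterSessionNotification(notification);
            // Enumerate all sessions once after registering; otherwise
            // notifications may not be delivered.
            using (var sessions = sessionManager.GetSessionEnumerator())
            {
                foreach (var session in sessions)
                    session.Dispose();
            }
        });
        thread.SetApartmentState(ApartmentState.MTA);
        thread.Start();
        thread.Join();
    }
}
```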
Deletes the registration to receive a notification when a session is created.
The application's implementation of the interface.
Pass the same object that was specified to the session manager in a previous call to register for notification.
HRESULT
Deletes the registration to receive a notification when a session is created.
The application's implementation of the interface.
Pass the same object that was specified to the session manager in a previous call to register for notification.
Registers the application to receive ducking notifications.
A string that contains a session instance identifier. Applications that are playing a media stream and want to provide custom stream attenuation or ducking behavior, pass their own session instance identifier.
Other applications that do not want to alter their streams but want to get all the ducking notifications must pass NULL.
Instance of any object which implements the and which should receive duck notifications.
HRESULT
Registers the application to receive ducking notifications.
A string that contains a session instance identifier. Applications that are playing a media stream and want to provide custom stream attenuation or ducking behavior, pass their own session instance identifier.
Other applications that do not want to alter their streams but want to get all the ducking notifications must pass NULL.
Instance of any object which implements the and which should receive duck notifications.
Deletes the registration to receive ducking notifications.
The interface that is implemented by the application. Pass the same interface pointer that was specified to the session manager in a previous call to the method.
HRESULT
Deletes the registration to receive ducking notifications.
The interface that is implemented by the application. Pass the same interface pointer that was specified to the session manager in a previous call to the method.
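A sketch of consuming ducking notifications without altering the application's own stream. The wrapper type and event names (`AudioVolumeDuckNotification`, `VolumeDuckNotification`, `VolumeUnduckNotification`) are assumptions based on the "default implementation" described later in this documentation.

```csharp
using System;
using CSCore.CoreAudioAPI;

static void ListenForDucking(AudioSessionManager2 sessionManager)
{
    var duckNotification = new AudioVolumeDuckNotification(); // assumed type name
    duckNotification.VolumeDuckNotification += (s, e) =>
        Console.WriteLine("System ducking pending.");
    duckNotification.VolumeUnduckNotification += (s, e) =>
        Console.WriteLine("System unducking pending.");

    // Pass null as the session instance identifier to receive all
    // ducking notifications without providing custom ducking behavior.
    sessionManager.RegisterDuckNotification(null, duckNotification);

    // ... later, pass the same instance to unregister:
    sessionManager.UnregisterDuckNotification(duckNotification);
}
```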
Releases the COM object and unregisters all session notifications and all volume duck notifications.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
The object provides notification when an audio session is created.
For more information, .
Occurs when the audio session has been created.
Notifies the registered processes that the audio session has been created.
Pointer to the object of the audio session that was created.
HRESULT
Provides data for the event.
Gets the new volume level for the audio session.
The value is a value in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume (no attenuation).
Gets the new muting state.
If true, muting is enabled. If false, muting is disabled.
Initializes a new instance of the class.
The new volume level for the audio session. This parameter is a value in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume (no attenuation).
The muting state. If true, muting is enabled. If false, muting is disabled.
The event context value.
Defines constants that indicate the current state of an audio session.
The session has no active audio streams.
The session has active audio streams.
The session is dormant.
Provides data for the event.
Gets the new session state.
Initializes a new instance of the class.
The default implementation of the interface.
Occurs when a pending system ducking event gets fired.
Occurs when a pending system unducking event gets fired.
Sends a notification about a pending system ducking event.
A string containing the session instance identifier of the communications session that raises the auto-ducking event.
The number of active communications sessions. If there are n sessions, the sessions are numbered from 0 to n–1.
HRESULT
Sends a notification about a pending system unducking event.
A string containing the session instance identifier of the terminating communications session that initiated the ducking.
The number of active communications sessions. If there are n sessions, they are numbered from 0 to n-1.
Provides data for the event.
Gets the data-flow direction of the endpoint device.
Gets the device role of the audio endpoint device.
Initializes a new instance of the class.
The device id that identifies the audio endpoint device.
The data-flow direction of the endpoint device.
The device role of the audio endpoint device.
Provides basic data for all device notification events.
Gets the device id that identifies the audio endpoint device.
Initializes a new instance of the class.
The device id that identifies the audio endpoint device.
Tries to get the device associated with the .
The device associated with the . If the return value is false, the will be null.
true if the associated device was successfully retrieved; false otherwise.
Provides data for the event.
Gets the that specifies the changed property.
Initializes a new instance of the class.
The device id that identifies the audio endpoint device.
The that specifies the changed property.
Provides data for the event.
Gets the new state of the endpoint device.
Initializes a new instance of the class.
The device id that identifies the audio endpoint device.
The new state of the endpoint device.
The class enables a client to configure the control parameters for an audio session and to monitor events in the session.
For more information, see .
Occurs when the display name for the session has changed.
Occurs when the display icon for the session has changed.
Occurs when the volume level or muting state of the session has changed.
Occurs when the volume level of an audio channel in the session submix has changed.
Occurs when the grouping parameter for the session has changed.
Occurs when the stream-activity state of the session has changed.
Occurs when the session has been disconnected.
Initializes a new instance of the class.
Native pointer of the object.
Initializes a new instance of the class.
The audio client to create a instance for.
audioClient
Gets the current state of the audio session.
Gets or sets the display name for the audio session.
Gets or sets the path for the display icon for the audio session.
Gets or sets the grouping parameter of the audio session.
Retrieves the current state of the audio session.
A variable into which the method writes the current session state.
HRESULT
Retrieves the display name for the audio session.
A variable into which the method writes the display name of the session.
HRESULT
Assigns a display name to the current session.
The new display name of the audio session.
EventContext which can be accessed in the event handler.
HRESULT
Retrieves the path for the display icon for the audio session.
A variable into which the method writes the path and file name of an .ico, .dll, or .exe file that contains the icon.
HRESULT
Assigns a display icon to the current session.
A string that specifies the path and file name of an .ico, .dll, or .exe file that contains the icon.
EventContext which can be accessed in the event handler.
HRESULT
Retrieves the grouping parameter of the audio session.
A variable into which the method writes the grouping parameter.
HRESULT
For some more information about grouping parameters, see .
Assigns a session to a grouping of sessions.
HRESULT
For some more information about grouping parameters, see .
Registers the client to receive notifications of session events, including changes in the stream state.
An instance of the object which receives the notifications.
HRESULT
Registers the client to receive notifications of session events, including changes in the stream state.
An instance of the object which receives the notifications.
Deletes a previous registration by the client to receive notifications.
The instance of the object which got registered previously by the method.
HRESULT
Deletes a previous registration by the client to receive notifications.
The instance of the object which got registered previously by the method.
Releases the COM object.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
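The session events listed above can be consumed by subscribing to the corresponding .NET events. A hedged sketch: the event and property names (`DisplayNameChanged`, `SimpleVolumeChanged`, `StateChanged`, `SessionDisconnected`, `SessionState`) are inferred from the descriptions above and are assumptions, not confirmed members.

```csharp
using System;
using CSCore.CoreAudioAPI;

static void MonitorSession(AudioSessionControl session)
{
    // Assumed event names, mirroring the notifications described above.
    session.DisplayNameChanged += (s, e) => Console.WriteLine("Display name changed.");
    session.SimpleVolumeChanged += (s, e) => Console.WriteLine("Volume or mute state changed.");
    session.StateChanged += (s, e) => Console.WriteLine("Stream-activity state changed.");
    session.SessionDisconnected += (s, e) => Console.WriteLine("Session disconnected.");

    // Assumed property exposing the current AudioSessionState.
    Console.WriteLine("Current state: " + session.SessionState);
}
```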
The interface provides notifications of session-related events such as changes in the volume level, display name, and session state.
Notifies the client that the display name for the session has changed.
The new display name for the session.
The event context value.
HRESULT
Notifies the client that the display icon for the session has changed.
The path for the new display icon for the session.
The event context value.
HRESULT
Notifies the client that the volume level or muting state of the audio session has changed.
The new volume level for the audio session. This parameter is a value in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume (no attenuation).
The new muting state. If TRUE, muting is enabled. If FALSE, muting is disabled.
The event context value.
HRESULT
Notifies the client that the volume level of an audio channel in the session submix has changed.
The number of channels in the session submix.
An array of volume levels. Each element is a value of type float that specifies the volume level for a particular channel. Each volume level is a value in the range 0.0 to 1.0, where 0.0 is silence and 1.0 is full volume (no attenuation). The number of elements in the array is specified by the ChannelCount parameter.
The number of the channel whose volume level changed.
The event context value.
HRESULT
Notifies the client that the grouping parameter for the session has changed.
The new grouping parameter for the session. This parameter points to a grouping-parameter GUID.
The event context value.
HRESULT
Notifies the client that the stream-activity state of the session has changed.
The new session state.
HRESULT
Notifies the client that the audio session has been disconnected.
The reason that the audio session was disconnected.
HRESULT
The interface provides notification when an audio session is created.
Notifies the registered processes that the audio session has been created.
Pointer to the object of the audio session that was created.
HRESULT
The interface is used by the system to send notifications about stream attenuation changes.
For more information, see .
Sends a notification about a pending system ducking event.
A string containing the session instance identifier of the communications session that raises the auto-ducking event.
The number of active communications sessions. If there are n sessions, the sessions are numbered from 0 to n–1.
HRESULT
Sends a notification about a pending system unducking event.
A string containing the session instance identifier of the terminating communications session that initiated the ducking.
The number of active communications sessions. If there are n sessions, they are numbered from 0 to n-1.
HRESULT
The interface provides notifications when an audio endpoint device is added or removed, when the state or properties of an endpoint device change, or when there is a change in the default role assigned to an endpoint device.
The OnDeviceStateChanged method indicates that the state of an audio endpoint device has
changed.
The device id that identifies the audio endpoint device.
Specifies the new state of the endpoint device.
HRESULT
The OnDeviceAdded method indicates that a new audio endpoint device has been added.
The device id that identifies the audio endpoint device.
HRESULT
The OnDeviceRemoved method indicates that an audio endpoint device has been removed.
The device id that identifies the audio endpoint device.
HRESULT
The OnDefaultDeviceChanged method notifies the client that the default audio endpoint
device for a particular device role has changed.
The data-flow direction of the endpoint device.
The device role of the audio endpoint device.
The device id that identifies the audio endpoint device.
HRESULT
The OnPropertyValueChanged method indicates that the value of a property belonging to an
audio endpoint device has changed.
The device id that identifies the audio endpoint device.
The that specifies the changed property.
HRESULT
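These callbacks are typically consumed through an event-based wrapper rather than by implementing the COM interface directly. A sketch under assumptions: the wrapper name `MMNotificationClient` and its event/event-args member names are inferred from the event-args classes described earlier and are not confirmed.

```csharp
using System;
using CSCore.CoreAudioAPI;

static void WatchDevices()
{
    var notificationClient = new MMNotificationClient(); // assumed wrapper type
    notificationClient.DeviceAdded += (s, e) =>
        Console.WriteLine("Device added: " + e.DeviceId);
    notificationClient.DeviceRemoved += (s, e) =>
        Console.WriteLine("Device removed: " + e.DeviceId);
    notificationClient.DeviceStateChanged += (s, e) =>
        Console.WriteLine("Device " + e.DeviceId + " changed state.");
    notificationClient.DefaultDeviceChanged += (s, e) =>
        Console.WriteLine("New default " + e.DataFlow + "/" + e.Role + " device: " + e.DeviceId);
}
```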
Represents an audio endpoint device
(see also ).
Initializes a new instance of the class.
The native pointer of the COM object.
Obtain an instance of the by using the constructor.
Initializes a new instance of the class based on an
by calling its method.
The used to obtain an instance.
device
Gets the data flow of the associated device.
The data flow of the associated device.
Indicates whether the endpoint is associated with a rendering device or a capture device.
A variable into which the method writes the data-flow direction of the endpoint device.
HRESULT
Use the property instead.
Provides data for the event.
Gets the object of the audio session that was created.
Initializes a new instance of the class.
The object of the audio session that was created.
must not be null.
Provides data for the and the event.
For more information, see .
A string containing the session instance identifier of the communications session that raises the auto-ducking event.
The number of active communications sessions. If there are n sessions, the sessions are numbered from 0 to n–1.
Initializes a new instance of the class.
The session instance identifier of the communications session that raises the auto-ducking event.
The number of active communications sessions.
sessionID is null or empty.
countCommunicationSessions is less than zero.
The interface represents the volume controls on the audio stream to or from an
audio endpoint device.
For more information, see
.
Initializes a new instance of the class.
Native pointer of the object.
Gets all registered .
Gets the number of available channels.
Gets or sets the MasterVolumeLevel in decibels.
Gets or sets the MasterVolumeLevel as a normalized value in the range from 0.0 to 1.0.
Gets or sets the muting state of the audio stream that enters or leaves the
audio endpoint device. True indicates that the audio endpoint device is muted. False indicates that the audio endpoint device is not muted.
Gets all available channels.
Returns a new instance based on a instance.
instance to create the for.
A new instance based on the specified .
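A minimal sketch of obtaining the endpoint volume for the default render device and adjusting the master level. The property names (`MasterVolumeLevelScalar`, `IsMuted`) follow the property descriptions above but are assumptions, not verified members.

```csharp
using System;
using CSCore.CoreAudioAPI;

static void AdjustMasterVolume()
{
    using (var enumerator = new MMDeviceEnumerator())
    using (var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia))
    using (var endpointVolume = AudioEndpointVolume.FromDevice(device))
    {
        // Normalized, audio-tapered value in the range 0.0 to 1.0 (assumed name).
        Console.WriteLine("Volume: {0:P0}", endpointVolume.MasterVolumeLevelScalar);
        Console.WriteLine("Muted:  {0}", endpointVolume.IsMuted); // assumed name

        // Set the master volume to 50% on the normalized scale.
        endpointVolume.MasterVolumeLevelScalar = 0.5f;
    }
}
```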
Registers a client's notification callback
interface.
The callback instance that the client is registering for notification callbacks.
HRESULT
When notifications are no longer needed, the client can call the
method to terminate the
notifications.
Registers a client's notification callback
interface.
The callback instance that the client is registering for notification callbacks.
When notifications are no longer needed, the client can call the
method to terminate the
notifications.
Deletes the registration of a client's
notification callback interface that the client registered in a previous call to the
method.
The callback instance to unregister. The client passed this same object to the endpoint volume
object in the previous call to the method.
HRESULT
Deletes the registration of a client's
notification callback interface that the client registered in a previous call to the
method.
The callback instance to unregister. The client passed this same object to the endpoint volume
object in the previous call to the method.
Gets the number of channels in the audio stream that enters
or leaves the audio endpoint device.
Retrieves the number of channels in the audio stream.
HRESULT
Gets the number of channels in the audio stream that enters
or leaves the audio endpoint device.
The number of channels in the audio stream.
Sets the master volume level, in decibels, of the audio
stream that enters or leaves the audio endpoint device.
The new master volume level in decibels. To obtain the range and
granularity of the volume levels that can be set by this method, call the
method.
EventContext which can be accessed in the event handler.
HRESULT
Sets the master volume level, in decibels, of the audio
stream that enters or leaves the audio endpoint device.
The new master volume level in decibels. To obtain the range and
granularity of the volume levels that can be set by this method, call the
method.
EventContext which can be accessed in the event handler.
Sets the master volume level of the audio stream
that enters or leaves the audio endpoint device. The volume level is expressed as a
normalized, audio-tapered value in the range from 0.0 to 1.0.
The new master volume level. The level is expressed as a normalized
value in the range from 0.0 to 1.0.
EventContext which can be accessed in the event handler.
HRESULT
Sets the master volume level of the audio stream
that enters or leaves the audio endpoint device. The volume level is expressed as a
normalized, audio-tapered value in the range from 0.0 to 1.0.
The new master volume level. The level is expressed as a normalized
value in the range from 0.0 to 1.0.
EventContext which can be accessed in the event handler.
Gets the master volume level, in decibels, of the audio
stream that enters or leaves the audio endpoint device.
A
float variable into which the method writes the volume level in decibels. To get the
range of volume levels obtained from this method, call the
method.
HRESULT
Gets the master volume level, in decibels, of the audio
stream that enters or leaves the audio endpoint device.
Volume level in decibels. To get the range of volume levels obtained from this
method, call the method.
Gets the master volume level of the audio stream
that enters or leaves the audio endpoint device. The volume level is expressed as a
normalized, audio-tapered value in the range from 0.0 to 1.0.
A float
variable into which the method writes the volume level. The level is expressed as a
normalized value in the range from 0.0 to 1.0.
HRESULT
Gets the master volume level of the audio stream
that enters or leaves the audio endpoint device. The volume level is expressed as a
normalized, audio-tapered value in the range from 0.0 to 1.0.
Volume level. The level is expressed as a normalized value in the range from
0.0 to 1.0.
Sets the volume level, in decibels, of the specified
channel of the audio stream that enters or leaves the audio endpoint device.
The new volume level in decibels. To obtain the range and
granularity of the volume levels that can be set by this method, call the
method.
EventContext which can be accessed in the event handler.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
HRESULT
Sets the volume level, in decibels, of the specified
channel of the audio stream that enters or leaves the audio endpoint device.
The new volume level in decibels. To obtain the range and
granularity of the volume levels that can be set by this method, call the
method.
EventContext which can be accessed in the event handler.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
Sets the normalized, audio-tapered volume level
of the specified channel in the audio stream that enters or leaves the audio endpoint
device.
The volume level. The volume level is expressed as a normalized
value in the range from 0.0 to 1.0.
EventContext which can be accessed in the event handler.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
HRESULT
Sets the normalized, audio-tapered volume level
of the specified channel in the audio stream that enters or leaves the audio endpoint
device.
The volume level. The volume level is expressed as a normalized
value in the range from 0.0 to 1.0.
EventContext which can be accessed in the event handler.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
Gets the volume level, in decibels, of the specified
channel in the audio stream that enters or leaves the audio endpoint device.
A float variable into which the method writes the
volume level in decibels. To get the range of volume levels obtained from this method,
call the method.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
HRESULT
Gets the volume level, in decibels, of the specified
channel in the audio stream that enters or leaves the audio endpoint device.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
Volume level in decibels. To get the range of volume levels obtained from this
method, call the method.
Gets the normalized, audio-tapered volume level
of the specified channel of the audio stream that enters or leaves the audio endpoint
device.
A float variable into which the method writes the volume
level. The level is expressed as a normalized value in the range from 0.0 to
1.0.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
HRESULT
Gets the normalized, audio-tapered volume level
of the specified channel of the audio stream that enters or leaves the audio endpoint
device.
The channel number. If the audio stream contains n channels, the channels are numbered from 0 to
n–1.
Volume level of a specific channel. The level is expressed as a normalized
value in the range from 0.0 to 1.0.
Sets the muting state of the audio stream that enters or leaves the
audio endpoint device.
True mutes the stream. False turns off muting.
EventContext which can be accessed in the event handler.
HRESULT
Sets the muting state of the audio stream that enters or leaves the
audio endpoint device.
EventContext which can be accessed in the event handler.
True mutes the stream. False turns off muting.
Gets the muting state of the audio stream that enters or leaves the
audio endpoint device.
A variable into which the method writes the muting state.
If is true, the stream is muted. If false, the stream is not muted.
HRESULT
Gets the muting state of the audio stream that enters or leaves the
audio endpoint device.
If the method returns true, the stream is muted. If false, the stream is not muted.
Gets information about the current step in the volume
range.
A variable into which the method writes the current step index. This index is a value in the
range from 0 to – 1, where 0 represents the minimum volume level and
– 1 represents the maximum level.
A variable into which the method writes the number of steps in the volume range. This number
remains constant for the lifetime of the object instance.
HRESULT
Gets information about the current step in the volume
range.
A variable into which the method writes the current step index. This index is a value in the
range from 0 to – 1, where 0 represents the minimum volume level and
– 1 represents the maximum level.
A variable into which the method writes the number of steps in the volume range. This number
remains constant for the lifetime of the object instance.
Increments, by one step, the volume level of the audio stream
that enters or leaves the audio endpoint device.
EventContext which can be accessed in the event handler.
HRESULT
Increments, by one step, the volume level of the audio stream
that enters or leaves the audio endpoint device.
EventContext which can be accessed in the event handler.
Decrements, by one step, the volume level of the audio stream
that enters or leaves the audio endpoint device.
EventContext which can be accessed in the event handler.
HRESULT
Decrements, by one step, the volume level of the audio stream
that enters or leaves the audio endpoint device.
EventContext which can be accessed in the event handler.
Queries the audio endpoint device for its
hardware-supported functions.
A variable into which the method writes a hardware support mask that indicates the
hardware capabilities of the audio endpoint device.
HRESULT
Queries the audio endpoint device for its
hardware-supported functions.
A hardware support mask that indicates the hardware capabilities of the audio endpoint device.
Gets the volume range, in decibels, of the audio stream that
enters or leaves the audio endpoint device.
Minimum volume level in decibels. This value remains constant
for the lifetime of the object instance.
Maximum volume level in decibels. This value remains constant
for the lifetime of the object instance.
Volume increment in decibels. This increment remains
constant for the lifetime of the object instance.
HRESULT
Gets the volume range, in decibels, of the audio stream that
enters or leaves the audio endpoint device.
Minimum volume level in decibels. This value remains constant
for the lifetime of the object instance.
Maximum volume level in decibels. This value remains constant
for the lifetime of the object instance.
Volume increment in decibels. This increment remains
constant for the lifetime of the object instance.
Provides an implementation of the interface.
Occurs when the volume level or the muting state of the audio endpoint device has changed.
The method notifies the client that the volume level or muting state of the audio endpoint device has changed.
Pointer to the volume-notification data.
HRESULT; If the method succeeds, it returns . If it fails, it returns an error code.
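Combined with the registration methods described earlier, the callback can be used as sketched below. The event name `NotificationReceived` is a placeholder assumption; only the register/unregister pairing and the "pass the same instance" rule come from the documentation above.

```csharp
using System;
using CSCore.CoreAudioAPI;

static void WatchEndpointVolume(AudioEndpointVolume endpointVolume)
{
    var callback = new AudioEndpointVolumeCallback();
    // Assumed event name; raised when volume level or muting state changes.
    callback.NotificationReceived += (s, e) =>
        Console.WriteLine("Master volume: {0:P0}, muted: {1}", e.MasterVolume, e.IsMuted);

    endpointVolume.RegisterControlChangeNotify(callback);

    // ... later, pass the same instance that was registered:
    endpointVolume.UnregisterControlChangeNotify(callback);
}
```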
Provides data for the event.
Initializes a new instance of the class.
The data which describes a change in the volume level or muting state of an audio endpoint device.
The native pointer to the .
Gets the event context value.
The event context value.
Context value for the method. This member is the value of the
event-context GUID that was provided as an input parameter to the method call
that changed the endpoint volume level or muting state. For more information, see
.
Gets a value indicating whether the audio stream is currently muted.
true if the audio stream is currently muted; otherwise, false.
Gets the current master volume level of the audio stream. The volume level is
normalized to the range from 0.0 to 1.0, where 0.0 is the minimum volume level and 1.0
is the maximum level. Within this range, the relationship of the normalized volume level
to the attenuation of signal amplitude is described by a nonlinear, audio-tapered curve.
Gets the number of channels.
The number of channels.
Gets the volume levels of all channels. The volume level for each channel is normalized to the range from 0.0 to 1.0, where 0.0
is the minimum volume level and 1.0 is the maximum level. Within this range, the
relationship of the normalized volume level to the attenuation of signal amplitude is
described by a nonlinear, audio-tapered curve.
Represents a single audio endpoint volume channel.
Gets the parent instance.
The parent instance.
Gets the index of the audio endpoint channel.
The index of the audio endpoint channel.
Initializes a new instance of the class.
The underlying which provides access to the audio endpoint volume.
The zero-based index of the channel.
Gets or sets the volume in decibels.
The volume in decibels.
Gets or sets the volume as a normalized value in the range from 0.0 to 1.0.
The volume as a normalized value in the range from 0.0 to 1.0.
The class enables a client to write output data to a rendering endpoint buffer.
For more information, see
.
Initializes a new instance of the class.
Pointer to the instance.
Returns a new instance of the class. This is done by calling the
method of the class.
The instance which should be used to create the new
instance.
A new instance of the class.
Retrieves a pointer to the next available space in the rendering endpoint buffer into
which the caller can write a data packet.
The number of audio frames in the data packet that the caller plans to write to the requested space in the buffer.
If the call succeeds, the size of the buffer area pointed to by the return value matches the size specified in
.
A pointer variable into which the method writes the starting address of the buffer area into which the caller
will write the data packet.
Retrieves a pointer to the next available space in the rendering endpoint buffer into
which the caller can write a data packet.
The number of audio frames in the data packet that the caller plans to write to the requested space in the buffer.
If the call succeeds, the size of the buffer area pointed to by matches the size
specified in .
Pointer variable into which the method writes the starting address of the buffer area into which
the caller will write the data packet.
HRESULT
Releases the buffer space acquired in the previous call to the
method.
The number of audio frames written by the client to the data packet.
The value of this parameter must be less than or equal to the size of the data packet, as specified in the
numFramesRequested parameter passed to the method.
The buffer-configuration flags.
HRESULT
Releases the buffer space acquired in the previous call to the
method.
The number of audio frames written by the client to the data packet.
The value of this parameter must be less than or equal to the size of the data packet, as specified in the
numFramesRequested parameter passed to the method.
The buffer-configuration flags.
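The GetBuffer/ReleaseBuffer pattern described above can be sketched as a single render pass. This assumes an already initialized shared-mode client; the member names (`BufferSize`, `GetCurrentPadding`, `AudioClientBufferFlags.Silent`) are assumptions inferred from the property descriptions elsewhere in this documentation.

```csharp
using System;
using CSCore.CoreAudioAPI;

static void RenderOneSilentPacket(AudioClient audioClient)
{
    using (var renderClient = AudioRenderClient.FromAudioClient(audioClient))
    {
        int bufferSize = audioClient.BufferSize;          // endpoint buffer capacity, in frames (assumed)
        int padding = audioClient.GetCurrentPadding();    // frames still queued for playback (assumed)
        int framesToWrite = bufferSize - padding;         // free space available right now

        // Acquire the next available space in the rendering endpoint buffer.
        IntPtr buffer = renderClient.GetBuffer(framesToWrite);

        // Writing audio data into `buffer` would go here. Instead, release the
        // packet flagged as silent so the engine treats it as zeros (assumed flag).
        renderClient.ReleaseBuffer(framesToWrite, AudioClientBufferFlags.Silent);
    }
}
```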
The structure describes a change in the volume level or muting state of an audio endpoint device.
For more information, see .
The event context value.
Context value for the method. This member is the value of the
event-context GUID that was provided as an input parameter to the method call
that changed the endpoint volume level or muting state. For more information, see
.
A value indicating whether the audio stream is currently muted. true if the audio stream is currently muted;
otherwise, false.
Specifies the current master volume level of the audio stream. The volume level is
normalized to the range from 0.0 to 1.0, where 0.0 is the minimum volume level and 1.0
is the maximum level. Within this range, the relationship of the normalized volume level
to the attenuation of signal amplitude is described by a nonlinear, audio-tapered curve.
The number of channels.
The first element of an array which specifies the volume level of each channel. Use the
method to get all channel volumes.
Gets all channel volumes.
The volume level for each channel is normalized to the range from 0.0 to 1.0, where 0.0
is the minimum volume level and 1.0 is the maximum level. Within this range, the
relationship of the normalized volume level to the attenuation of signal amplitude is
described by a nonlinear, audio-tapered curve.
The are hardware support flags for an audio endpoint device.
For more information, see .
None
The audio endpoint device supports a hardware volume control.
The audio endpoint device supports a hardware mute control.
The audio endpoint device supports a hardware peak meter.
The interface provides notifications of changes in the volume level and muting state of an audio endpoint device.
Notifies the client that the volume level or muting state of the audio endpoint device has changed.
Pointer to the volume-notification data.
HRESULT; If the method succeeds, it returns . If it fails, it returns an error code.
Enables a client to create and initialize an audio stream between an audio application and the audio engine (for a
shared-mode stream) or the hardware buffer of an audio endpoint device (for an exclusive-mode stream). For more
information, see
.
IID of the IAudioClient-interface.
Initializes a new instance of the class.
Native pointer.
Use the method to create a new instance.
Gets the default interval between periodic processing passes by the audio engine. The time is expressed in
100-nanosecond units.
Gets the minimum interval between periodic processing passes by the audio endpoint device. The time is expressed in
100-nanosecond units.
Gets the maximum capacity of the endpoint buffer.
Gets the number of frames of padding in the endpoint buffer.
Gets the stream format that the audio engine uses for its
internal processing of shared-mode streams.
Gets the maximum latency for the current stream and can
be called any time after the stream has been initialized.
Returns a new instance of the class.
Device which should be used to create the instance.
instance.
Initializes the audio stream.
The sharing mode for the connection. Through this parameter, the client tells the audio engine
whether it wants to share the audio endpoint device with other clients.
Flags to control creation of the stream.
The buffer capacity as a time value (expressed in 100-nanosecond units). This parameter
contains the buffer size that the caller requests for the buffer that the audio application will share with the
audio engine (in shared mode) or with the endpoint device (in exclusive mode). If the call succeeds, the method
allocates a buffer that is at least this large.
The device period. This parameter can be nonzero only in exclusive mode. In shared mode,
always set this parameter to 0. In exclusive mode, this parameter specifies the requested scheduling period for
successive buffer accesses by the audio endpoint device. If the requested device period lies outside the range that
is set by the device's minimum period and the system's maximum period, then the method clamps the period to that
range. If this parameter is 0, the method sets the device period to its default value. To obtain the default device
period, call the method. If the
stream flag is set and
is set as the , then
must be nonzero and equal to .
The format descriptor. For more information, see
.
A value that identifies the audio session that the stream belongs to. If the
identifies a session that has been previously opened, the method adds the stream to that
session. If the GUID does not identify an existing session, the method opens a new session and adds the stream to
that session. The stream remains a member of the same session for its lifetime. Use to
use the default session.
HRESULT
For more information, see
.
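A hedged sketch of the shared-mode initialization described above: the device period is 0 in shared mode, the buffer duration is expressed in 100-nanosecond units, and `Guid.Empty` selects the default session. The factory and property names (`FromMMDevice`, `MixFormat`) are assumptions based on descriptions elsewhere in this documentation.

```csharp
using System;
using CSCore;
using CSCore.CoreAudioAPI;

static AudioClient CreateSharedModeClient(MMDevice device)
{
    var audioClient = AudioClient.FromMMDevice(device); // assumed factory
    WaveFormat format = audioClient.MixFormat;          // engine's shared-mode format (assumed name)

    audioClient.Initialize(
        AudioClientShareMode.Shared,
        AudioClientStreamFlags.None,
        1000000,        // 100 ms buffer, expressed in 100-nanosecond units
        0,              // device period: always 0 in shared mode
        format,
        Guid.Empty);    // use the default audio session

    return audioClient;
}
```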
Initializes the audio stream.
The sharing mode for the connection. Through this parameter, the client tells the audio engine
whether it wants to share the audio endpoint device with other clients.
Flags to control creation of the stream.
The buffer capacity as a time value (expressed in 100-nanosecond units). This parameter
contains the buffer size that the caller requests for the buffer that the audio application will share with the
audio engine (in shared mode) or with the endpoint device (in exclusive mode). If the call succeeds, the method
allocates a buffer that is at least this large.
The device period. This parameter can be nonzero only in exclusive mode. In shared mode,
always set this parameter to 0. In exclusive mode, this parameter specifies the requested scheduling period for
successive buffer accesses by the audio endpoint device. If the requested device period lies outside the range that
is set by the device's minimum period and the system's maximum period, then the method clamps the period to that
range. If this parameter is 0, the method sets the device period to its default value. To obtain the default device
period, call the method. If the
stream flag is set and
is set as the , then
must be nonzero and equal to .
Pointer to the format descriptor. For more information, see
.
A value that identifies the audio session that the stream belongs to. If the
identifies a session that has been previously opened, the method adds the stream to that
session. If the GUID does not identify an existing session, the method opens a new session and adds the stream to
that session. The stream remains a member of the same session for its lifetime. Use to
use the default session.
HRESULT
For more information, see
.
Initializes the audio stream.
The sharing mode for the connection. Through this parameter, the client tells the audio engine
whether it wants to share the audio endpoint device with other clients.
Flags to control creation of the stream.
The buffer capacity as a time value (expressed in 100-nanosecond units). This parameter
contains the buffer size that the caller requests for the buffer that the audio application will share with the
audio engine (in shared mode) or with the endpoint device (in exclusive mode). If the call succeeds, the method
allocates a buffer that is at least this large.
The device period. This parameter can be nonzero only in exclusive mode. In shared mode,
always set this parameter to 0. In exclusive mode, this parameter specifies the requested scheduling period for
successive buffer accesses by the audio endpoint device. If the requested device period lies outside the range that
is set by the device's minimum period and the system's maximum period, then the method clamps the period to that
range. If this parameter is 0, the method sets the device period to its default value. To obtain the default device
period, call the method. If the
stream flag is set and
is set as the , then
must be nonzero and equal to .
The format descriptor. For more information, see
.
A value that identifies the audio session that the stream belongs to. If the
identifies a session that has been previously opened, the method adds the stream to that
session. If the GUID does not identify an existing session, the method opens a new session and adds the stream to
that session. The stream remains a member of the same session for its lifetime. Use to
use the default session.
For more information, see
.
Retrieves the size (maximum capacity) of the endpoint buffer.
Retrieves the number of audio frames that the buffer can hold.
The size of one frame = (number of bits per sample)/8 * (number of channels)
HRESULT
Returns the size (maximum capacity) of the endpoint buffer.
The number of audio frames that the buffer can hold.
The size of one frame = (number of bits per sample)/8 * (number of channels)
HRESULT
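The frame-size formula above can be applied directly: for 16-bit stereo PCM one frame is 4 bytes, so a buffer holding 48,000 frames at a 48 kHz sample rate stores exactly one second of audio. A minimal sketch:

```csharp
using System;

// One audio frame holds one sample per channel:
// frame size = (number of bits per sample) / 8 * (number of channels)
static int FrameSize(int bitsPerSample, int channels) =>
    (bitsPerSample / 8) * channels;

int frames = 48_000;              // e.g. the value reported by GetBufferSize
int frameSize = FrameSize(16, 2); // 16-bit stereo PCM -> 4 bytes per frame
int bufferBytes = frames * frameSize;
double seconds = (double)frames / 48_000; // at a 48 kHz sample rate

Console.WriteLine($"{frameSize} {bufferBytes} {seconds}"); // prints "4 192000 1"
```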
Retrieves the maximum latency for the current stream and can
be called any time after the stream has been initialized.
Retrieves a value representing the latency. The time is expressed in 100-nanosecond units.
Rendering clients can use this latency value to compute the minimum amount of data that
they can write during any single processing pass. To write less than this minimum is to
risk introducing glitches into the audio stream. For more information, see
.
HRESULT
Retrieves the maximum latency for the current stream and can
be called any time after the stream has been initialized.
Rendering clients can use this latency value to compute the minimum amount of data that
they can write during any single processing pass. To write less than this minimum is to
risk introducing glitches into the audio stream. For more information, see
.
A value representing the latency. The time is expressed in 100-nanosecond units.
Retrieves the number of frames of padding in the endpoint buffer.
Retrieves the frame count (the number of audio frames of padding in the buffer).
HRESULT
The size of one frame = (number of bits per sample)/8 * (number of channels)
Retrieves the number of frames of padding in the endpoint
buffer.
The frame count (the number of audio frames of padding in the buffer).
The size of one frame = (number of bits per sample)/8 * (number of channels)
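A render client typically combines this padding value with the total buffer capacity to determine how many frames it may write in the current pass. A sketch of that arithmetic:

```csharp
using System;

// Frames the client may safely write = total capacity - frames still queued.
static int WritableFrames(int bufferFrames, int paddingFrames) =>
    bufferFrames - paddingFrames;

int bufferFrames = 4_800; // e.g. from GetBufferSize
int padding = 1_234;      // e.g. from GetCurrentPadding
int writable = WritableFrames(bufferFrames, padding);

int frameSize = (16 / 8) * 2; // 16-bit stereo -> 4 bytes per frame
Console.WriteLine(writable * frameSize); // prints 14264
```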
Indicates whether the audio endpoint device
supports a particular stream format.
The sharing mode for the stream format. Through this parameter, the client indicates whether it
wants to use the specified format in exclusive mode or shared mode.
The stream format to test whether it is supported by the or not.
Retrieves the supported format that is closest to the format that the client specified
through the parameter. If is
, the will always be null.
HRESULT code. If the method returns 0 (= ), the endpoint device supports the specified
. If the method returns
1 (= ), the method succeeded with a to the specified
. If the method returns
0x88890008 (= ), the method succeeded but the specified format
is not supported in exclusive mode. If the method returns anything else, the method failed.
For more information, see
.
Indicates whether the audio endpoint device
supports a particular stream format.
The sharing mode for the stream format. Through this parameter, the client indicates whether it
wants to use the specified format in exclusive mode or shared mode.
The stream format to test whether it is supported by the or not.
Retrieves the supported format that is closest to the format that the client specified
through the parameter. If is
, the will always be null.
True if the is supported. False if the
is not supported.
For more information, see
.
Indicates whether the audio endpoint device
supports a particular stream format.
The sharing mode for the stream format. Through this parameter, the client indicates whether it
wants to use the specified format in exclusive mode or shared mode.
The stream format to test whether it is supported by the or not.
True if the is supported. False if the
is not supported.
For more information, see
.
Retrieves the stream format that the audio engine uses for its
internal processing of shared-mode streams.
Retrieves the mix format that the audio engine uses for its internal processing of
shared-mode streams.
For more information, see
.
HRESULT
Retrieves the stream format that the audio engine uses for its
internal processing of shared-mode streams.
For more information, see
.
The mix format that the audio engine uses for its internal processing of shared-mode streams.
Retrieves the length of the periodic interval separating
successive processing passes by the audio engine on the data in the endpoint buffer.
Retrieves a time value specifying the default interval between periodic processing
passes by the audio engine. The time is expressed in 100-nanosecond units.
Retrieves a time value specifying the minimum interval between periodic processing
passes by the audio endpoint device. The time is expressed in 100-nanosecond units.
Use the and the properties instead of
the method.
For more information, see
.
HRESULT
Starts the audio stream.
HRESULT
For more information, see
.
Starts the audio stream.
For more information, see
.
Stops the audio stream.
HRESULT
For more information, see
.
Stops the audio stream.
For more information, see
.
Resets the audio stream.
HRESULT
For more information, see
.
Resets the audio stream.
For more information, see
.
Sets the event handle that the system signals when an audio
buffer is ready to be processed by the client.
The event handle.
HRESULT
For more information, see
.
Sets the event handle that the system signals when an audio
buffer is ready to be processed by the client.
The event handle.
For more information, see
.
Sets the event handle that the system signals when an audio
buffer is ready to be processed by the client.
The event handle.
For more information, see
.
Accesses additional services from the audio client object.
The interface ID for the requested service. For a list of all available values, see
.
A pointer variable into which the method writes the address of an instance of the
requested interface. Through this method, the caller obtains a counted reference to the interface. The caller is
responsible for releasing the interface, when it is no longer needed, by calling the interface's Release method. If
the GetService call fails, *ppv is .
HRESULT
For more information, see
.
Accesses additional services from the audio client object.
The interface ID for the requested service. For a list of all available values, see
.
A pointer into which the method writes the address of an instance of the requested interface.
Through this method, the caller obtains a counted reference to the interface. The caller is responsible for
releasing the interface, when it is no longer needed, by calling the interface's Release method.
For more information, see
.
AudioClient share mode
The device will be opened in shared mode and use the WAS format.
The device will be opened in exclusive mode and use the application specified format.
Represents a collection of multimedia device resources.
Initializes a new instance of the class.
The native pointer.
Use the method to create an instance of the class.
Gets the number of devices in the device collection.
Gets the element at the specified index.
The method retrieves a count of the devices in the device collection.
The number of devices in the device collection.
The method retrieves a count of the devices in the device collection.
Variable into which the method writes the number of devices in the device collection.
HRESULT
The method retrieves a pointer to the specified item in the device collection.
The device number. If the collection contains n devices, the devices are numbered 0 to n – 1.
The object of the specified item in the device collection.
The method retrieves a pointer to the specified item in the device collection.
The device number. If the collection contains n devices, the devices are numbered 0 to n – 1.
A pointer variable into which the method writes the address of the object of the specified item in the device collection.
HRESULT
Returns an enumerator that iterates through the .
Enumerator for the .
Returns an enumerator that iterates through the .
Enumerator for the .
The object provides notifications when an audio endpoint device is added or removed, when the state or properties of an endpoint device change, or when there is a change in the default role assigned to an endpoint device.
Occurs when the state of an audio endpoint device has changed.
Occurs when a new audio endpoint device has been added.
Occurs when an audio endpoint device has been removed.
Occurs when the default audio endpoint device for a particular device role has changed.
Occurs when the value of a property belonging to an audio endpoint device has changed.
Initializes a new instance of the class.
Initializes a new instance of the class based on an existing .
The OnDeviceStateChanged method indicates that the state of an audio endpoint device has
changed.
The device id that identifies the audio endpoint device.
Specifies the new state of the endpoint device.
HRESULT
The OnDeviceAdded method indicates that a new audio endpoint device has been added.
The device id that identifies the audio endpoint device.
HRESULT
The OnDeviceRemoved method indicates that an audio endpoint device has been removed.
The device id that identifies the audio endpoint device.
HRESULT
The OnDefaultDeviceChanged method notifies the client that the default audio endpoint
device for a particular device role has changed.
The data-flow direction of the endpoint device.
The device role of the audio endpoint device.
The device id that identifies the audio endpoint device.
HRESULT
The OnPropertyValueChanged method indicates that the value of a property belonging to an
audio endpoint device has changed.
The device id that identifies the audio endpoint device.
The that specifies the changed property.
HRESULT
Disposes and unregisters the .
In order to unregister the , this method calls the method.
Finalizes an instance of the class.
The object enables a client to control the master volume level of an audio session.
For more information, see .
Creates a new instance by calling the method of the
specified .
The which should be used to create the instance.
A new instance of the class.
Initializes a new instance of the class.
The native pointer of the COM object.
Gets or sets the master volume level for the audio session. Valid volume levels are in the range 0.0 (=0%) to 1.0 (=100%).
Gets or sets the muting state for the audio session. True indicates that muting is enabled. False indicates that it is disabled.
Sets the master volume level for the audio session.
The new master volume level. Valid volume levels are in the range 0.0 to 1.0.
EventContext which can be accessed in the event handler.
HRESULT
Retrieves the client volume level for the audio session.
A variable into which the method writes the client volume level. The volume level is a value in the range 0.0 to 1.0.
HRESULT
Sets the muting state for the audio session.
The new muting state. TRUE enables muting. FALSE disables muting.
EventContext which can be accessed in the event handler.
HRESULT
The GetMute method retrieves the current muting state for the audio session.
A variable into which the method writes the muting state. TRUE indicates that muting is enabled. FALSE indicates that it is disabled.
HRESULT
Encapsulates the generic features of a multimedia device resource.
Initializes a new instance of the class.
Native pointer.
Use the class to create a new instance.
Gets the property store associated with the .
Note: This property store is read-only. Use the OpenPropertyStore method to obtain a
writable property store.
Gets the device id. For information, see .
Gets the friendly name of the device.
This value is stored in the .
Gets the AudioEndpointPath of the device.
This value is stored in the .
Use this value as the device id for XAudio2.8 device selection.
Gets the device state of the device.
Gets the data flow of the device.
The data flow of the device.
Gets the device format.
Specifies the device format, which is the format that the user has selected for the stream that flows between the audio engine and the audio endpoint device when the device operates in shared mode.
Creates a COM object with the specified interface.
The interface identifier. This parameter is a reference to a GUID that identifies the interface that the caller requests be activated. The caller will use this interface to communicate with the COM object.
The execution context in which the code that manages the newly created object will run.
Use as the default value. See http://msdn.microsoft.com/en-us/library/windows/desktop/dd371405%28v=vs.85%29.aspx for more details.
A pointer variable into which the method writes the address of the interface specified by parameter .
HRESULT
Creates a COM object with the specified interface.
The interface identifier. This parameter is a reference to a GUID that identifies the interface that the caller requests be activated. The caller will use this interface to communicate with the COM object.
The execution context in which the code that manages the newly created object will run.
Use as the default value. See http://msdn.microsoft.com/en-us/library/windows/desktop/dd371405%28v=vs.85%29.aspx for more details.
A pointer variable into which the method writes the address of the interface specified by parameter .
Retrieves an interface to the device's property store.
The storage-access mode. This parameter specifies whether to open the property store in read mode, write mode, or read/write mode.
for the .
Retrieves an interface to the device's property store.
The storage-access mode. This parameter specifies whether to open the property store in read mode, write mode, or read/write mode.
A pointer variable into which the method writes the address of the IPropertyStore interface of the device's property store.
HRESULT
Retrieves an endpoint ID string that identifies the audio endpoint device.
The variable which will receive the id of the device.
HRESULT
Retrieves the current device state.
The variable which will receive the of the device.
HRESULT
Disposes the and its default property store (see property).
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Returns the of the .
The .
CoreAudioAPI COM Exception
Throws an if the represents an error.
The error code.
Name of the interface which contains the COM-function which returned the specified .
Name of the COM-function which returned the specified .
Initializes a new instance of the class.
The error code.
Name of the interface which contains the COM-function which returned the specified .
Name of the COM-function which returned the specified .
Defines constants that indicate the direction in which audio data flows between an audio endpoint device and an application.
Audio rendering stream. Audio data flows from the application to the audio endpoint device, which renders the stream.
Audio capture stream. Audio data flows from the audio endpoint device that captures the stream, to the application.
Audio rendering or capture stream. Audio data can flow either from the application to the audio endpoint device, or from the audio endpoint device to the application.
Indicates the current state of an audio endpoint device.
The audio endpoint device is active. That is, the audio adapter that connects to the endpoint device is present and enabled. In addition, if the endpoint device plugs into a jack on the adapter, then the endpoint device is plugged in.
The audio endpoint device is disabled. The user has disabled the device in the Windows multimedia control panel, Mmsys.cpl. For more information, see Remarks.
The audio endpoint device is not present because the audio adapter that connects to the endpoint device has been removed from the system, or the user has disabled the adapter device in Device Manager.
The audio endpoint device is unplugged. The audio adapter that contains the jack for the endpoint device is present and enabled, but the endpoint device is not plugged into the jack. Only a device with jack-presence detection can be in this state.
Includes audio endpoint devices in all states—active, disabled, not present, and unplugged.
Encapsulates the generic features of a multimedia device resource.
Creates a COM object with the specified interface.
The interface identifier. This parameter is a reference to a GUID that identifies the interface that the caller requests be activated. The caller will use this interface to communicate with the COM object.
The execution context in which the code that manages the newly created object will run.
Use as the default value. See http://msdn.microsoft.com/en-us/library/windows/desktop/dd371405%28v=vs.85%29.aspx for more details.
Pointer to a pointer variable into which the method writes the address of the interface specified by parameter .
HRESULT
Retrieves an interface to the device's property store.
The storage-access mode. This parameter specifies whether to open the property store in read mode, write mode, or read/write mode.
Pointer to a pointer variable into which the method writes the address of the IPropertyStore interface of the device's property store.
HRESULT
Retrieves an endpoint ID string that identifies the audio endpoint device.
The variable which will receive the id of the device.
HRESULT
Retrieves the current device state.
The variable which will receive the of the device.
HRESULT
Represents a collection of multimedia device resources.
The method retrieves a count of the devices in the device collection.
Variable into which the method writes the number of devices in the device collection.
HRESULT
The method retrieves a pointer to the specified item in the device collection.
The device number. If the collection contains n devices, the devices are numbered 0 to n – 1.
The object of the specified item in the device collection.
HRESULT
Provides methods for enumerating multimedia device resources.
Generates a collection of audio endpoint devices that meet the specified criteria.
The data-flow direction for the endpoint device.
The state or states of the endpoints that are to be included in the collection.
Pointer to a pointer variable into which the method writes the address of the COM object of the device-collection object.
HRESULT
The method retrieves the default audio endpoint for the specified data-flow direction and role.
The data-flow direction for the endpoint device.
The role of the endpoint device.
Pointer to a pointer variable into which the method writes the address of the COM object of the endpoint object for the default audio endpoint device.
HRESULT
Retrieves an audio endpoint device that is identified by an endpoint ID string.
Endpoint ID. The caller typically obtains this string from the property or any method of the .
Pointer to a pointer variable into which the method writes the address of the IMMDevice interface for the specified device. Through this method, the caller obtains a counted reference to the interface.
HRESULT
Registers a client's notification callback interface.
Implementation of the which should receive the notifications.
HRESULT
Deletes the registration of a notification interface that the client registered in a previous call to the method.
Implementation of the which should be unregistered from any notifications.
HRESULT
Defines constants that indicate the role that the system has assigned to an audio endpoint device.
Games, system notification sounds, and voice commands.
Music, movies, narration, and live music recording.
Voice communications (talking to another person).
Specifies how to open a property store.
Readable only.
Writeable but not readable.
Read- and writeable.
Provides methods for enumerating multimedia device resources.
Returns the default audio endpoint for the specified data-flow direction and role.
The data-flow direction for the endpoint device.
The role of the endpoint device.
instance of the endpoint object for the default audio endpoint device.
Returns the default audio endpoint for the specified data-flow direction and role. If no device is available, the method returns null.
The data-flow direction for the endpoint device.
The role of the endpoint device.
instance of the endpoint object for the default audio endpoint device. If no device is available, the method returns null.
Generates a collection of all active audio endpoint devices that meet the specified criteria.
The data-flow direction for the endpoint device.
which contains the enumerated devices.
Generates a collection of audio endpoint devices that meet the specified criteria.
The data-flow direction for the endpoint device.
The state or states of the endpoints that are to be included in the collection.
which contains the enumerated devices.
Initializes a new instance of the class.
Gets the with the specified device id.
The .
The device identifier.
Returns the default audio endpoint for the specified data-flow direction and role.
The data-flow direction for the endpoint device.
The role of the endpoint device.
instance of the endpoint object for the default audio endpoint device.
The method retrieves the default audio endpoint for the specified data-flow direction and role.
The data-flow direction for the endpoint device.
The role of the endpoint device.
A pointer variable into which the method writes the address of the COM object of the endpoint object for the default audio endpoint device.
HRESULT
Generates a collection of audio endpoint devices that meet the specified criteria.
The data-flow direction for the endpoint device.
The state or states of the endpoints that are to be included in the collection.
which contains the enumerated devices.
Generates a collection of audio endpoint devices that meet the specified criteria.
The data-flow direction for the endpoint device.
The state or states of the endpoints that are to be included in the collection.
A pointer variable into which the method writes the address of the COM object of the device-collection object.
HRESULT
Retrieves an audio endpoint device that is identified by an endpoint ID string.
Endpoint ID. The caller typically obtains this string from the property or any method of the .
instance for specified device.
Retrieves an audio endpoint device that is identified by an endpoint ID string.
Endpoint ID. The caller typically obtains this string from the property or any method of the .
A pointer variable into which the method writes the address of the IMMDevice interface for the specified device. Through this method, the caller obtains a counted reference to the interface.
HRESULT
Registers a client's notification callback interface.
Implementation of the which should receive the notifications.
Registers a client's notification callback interface.
Implementation of the which should receive the notifications.
HRESULT
Deletes the registration of a notification interface that the client registered in a previous call to the method.
Implementation of the which should be unregistered from any notifications.
Deletes the registration of a notification interface that the client registered in a previous call to the method.
Implementation of the which should be unregistered from any notifications.
HRESULT
Is used to create buffer objects, manage devices, and set up the environment. This object supersedes and adds new methods.
Obtain a instance by calling the method.
Initializes a new instance of the class.
The native pointer of the COM object.
Ascertains whether the device driver is certified for DirectX.
Receives a value which indicates whether the device driver is certified for DirectX.
DSResult
Ascertains whether the device driver is certified for DirectX.
A value which indicates whether the device driver is certified for DirectX. On emulated devices, the method returns .
Used to create buffer objects, manage devices, and set up the environment.
Returns a new instance of the class.
The device to use for the initialization.
The new instance of the class.
Returns a new instance of the class.
The device to use for the initialization.
The new instance of the class.
Gets the capabilities.
Initializes a new instance of the class.
The native pointer of the DirectSound COM object.
Checks whether the specified is supported.
The wave format.
A value indicating whether the specified is supported. If true, the is supported; otherwise, false.
Sets the cooperative level of the application for this sound device.
Handle to the application window.
The requested level.
Sets the cooperative level of the application for this sound device.
Handle to the application window.
The requested level.
DSResult
Creates a sound buffer object to manage audio samples.
A structure that describes the sound buffer to create.
Must be .
A variable that receives the IDirectSoundBuffer interface of the new buffer object.
For more information, see .
Creates a sound buffer object to manage audio samples.
A structure that describes the sound buffer to create.
Must be .
A variable that receives the IDirectSoundBuffer interface of the new buffer object.
DSResult
For more information, see .
Retrieves the capabilities of the hardware device that is represented by the device object.
Receives the capabilities of this sound device.
DSResult
Use the property instead.
Creates a new secondary buffer that shares the original buffer's memory.
Type of the buffer to duplicate.
The buffer to duplicate.
The duplicated buffer.
For more information, see .
Creates a new secondary buffer that shares the original buffer's memory.
Address of the IDirectSoundBuffer or IDirectSoundBuffer8 interface of the buffer to duplicate.
Address of a variable that receives the IDirectSoundBuffer interface pointer for the new buffer.
DSResult
For more information, see .
Has no effect. See remarks.
This method was formerly used for compacting the on-board memory of ISA sound cards.
DSResult
Has no effect. See remarks.
This method was formerly used for compacting the on-board memory of ISA sound cards.
Retrieves the speaker configuration.
Retrieves the speaker configuration.
DSResult
Retrieves the speaker configuration.
The speaker configuration.
Specifies the speaker configuration of the device.
The speaker configuration.
DSResult
In Windows Vista and later versions of Windows, is a NOP. For Windows Vista and later versions, the speaker configuration is a system setting that should not be modified by an application. End users can set the speaker configuration through control panels.
For more information, see .
Specifies the speaker configuration of the device.
The speaker configuration.
In Windows Vista and later versions of Windows, is a NOP. For Windows Vista and later versions, the speaker configuration is a system setting that should not be modified by an application. End users can set the speaker configuration through control panels.
For more information, see .
Initializes a device object that was created by using the CoCreateInstance function.
The globally unique identifier (GUID) specifying the sound driver to which this device object binds. Pass null to select the primary sound driver.
DSResult
Initializes a device object that was created by using the CoCreateInstance function.
The globally unique identifier (GUID) specifying the sound driver to which this device object binds. Pass null to select the primary sound driver.
Combines a value with a value.
Must be .
The value to combine with the .
Combination of the and the value.
Thrown when the speaker configuration is not stereo (parameter: speakerConfiguration).
Used to manage sound buffers.
Left only.
50% left, 50% right.
Right only.
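DirectSound expresses these pan constants in hundredths of a decibel: -10,000 for left only, 0 for center, and 10,000 for right only. A simple linear mapping from a -1..1 balance value to this range is a common approximation (the mapping itself is an assumption here, not part of the library):

```csharp
using System;

// DSBPAN_LEFT = -10000, DSBPAN_CENTER = 0, DSBPAN_RIGHT = 10000.
// A linear mapping from a -1..1 balance value is only an approximation,
// since the underlying unit is hundredths of a decibel of attenuation.
static int BalanceToPan(double balance) =>
    (int)Math.Round(Math.Clamp(balance, -1.0, 1.0) * 10_000);

Console.WriteLine(BalanceToPan(-1.0)); // prints -10000 (left only)
Console.WriteLine(BalanceToPan(0.0));  // prints 0 (centered)
Console.WriteLine(BalanceToPan(0.5));  // prints 5000
```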
The default frequency. For more information, see .
Initializes a new instance of the class.
The native pointer of the COM object.
Gets the capabilities of the buffer object.
Gets the status of the sound buffer.
Retrieves the capabilities of the buffer object.
Receives the capabilities of this sound buffer.
DSResult
Retrieves the capabilities of the buffer object.
The capabilities of this sound buffer.
Causes the sound buffer to play, starting at the play cursor.
Flags specifying how to play the buffer.
Causes the sound buffer to play, starting at the play cursor.
Flags specifying how to play the buffer.
Priority for the sound, used by the voice manager when assigning hardware mixing resources. The lowest priority is 0, and the highest priority is 0xFFFFFFFF. If the buffer was not created with the flag, this value must be 0.
Causes the sound buffer to play, starting at the play cursor.
Flags specifying how to play the buffer.
Priority for the sound, used by the voice manager when assigning hardware mixing resources. The lowest priority is 0, and the highest priority is 0xFFFFFFFF. If the buffer was not created with the flag, this value must be 0.
DSResult
Causes the sound buffer to stop playing.
For more information, see .
Causes the sound buffer to stop playing.
DSResult
For more information, see .
Restores the memory allocation for a lost sound buffer.
For more information, see .
Restores the memory allocation for a lost sound buffer.
DSResult
For more information, see .
Readies all or part of the buffer for a data write and returns pointers to which data can be written.
Offset, in bytes, from the start of the buffer to the point where the lock begins. This parameter is ignored if is specified in the parameter.
Size, in bytes, of the portion of the buffer to lock. The buffer is conceptually circular, so this number can exceed the number of bytes between and the end of the buffer.
Receives a pointer to the first locked part of the buffer.
Receives the number of bytes in the block at . If this value is less than , the lock has wrapped and points to a second block of data at the beginning of the buffer.
Receives a pointer to the second locked part of the capture buffer. If is returned, the parameter points to the entire locked portion of the capture buffer.
Receives the number of bytes in the block at . If is , this value is zero.
Flags modifying the lock event.
DSResult
Readies all or part of the buffer for a data write and returns pointers to which data can be written.
Offset, in bytes, from the start of the buffer to the point where the lock begins. This parameter is ignored if is specified in the parameter.
Size, in bytes, of the portion of the buffer to lock. The buffer is conceptually circular, so this number can exceed the number of bytes between and the end of the buffer.
Receives a pointer to the first locked part of the buffer.
Receives the number of bytes in the block at . If this value is less than , the lock has wrapped and points to a second block of data at the beginning of the buffer.
Receives a pointer to the second locked part of the capture buffer. If is returned, the parameter points to the entire locked portion of the capture buffer.
Receives the number of bytes in the block at . If is , this value is zero.
Flags modifying the lock event.
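Because the buffer is conceptually circular, a lock that runs past the end wraps around and is returned as two regions (the audioPtr1/audioPtr2 pair described above). The split can be illustrated with a small C++ sketch; the names are hypothetical and this is not the DirectSound implementation:

```cpp
#include <cstddef>

// Illustrative sketch: a lock of `bytes` bytes starting at `offset` in a
// circular buffer of `bufferSize` bytes splits into at most two regions,
// mirroring the audioBytes1/audioBytes2 output parameters of Lock.
struct LockRegions {
    std::size_t bytes1; // length of the first region (audioBytes1)
    std::size_t bytes2; // length of the wrapped region (audioBytes2), 0 if no wrap
};

LockRegions SplitLock(std::size_t offset, std::size_t bytes, std::size_t bufferSize) {
    offset %= bufferSize;
    std::size_t untilEnd = bufferSize - offset;
    if (bytes <= untilEnd)
        return {bytes, 0};               // single contiguous region
    return {untilEnd, bytes - untilEnd}; // lock wraps to the buffer start
}
```

When `bytes2` is zero, the single region corresponds to the case in which the second pointer is not used and the first region covers the entire locked portion.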
Releases a locked sound buffer.
Address of the value retrieved in the audioPtr1 parameter of the method.
Number of bytes written to the portion of the buffer at audioPtr1.
Address of the value retrieved in the audioPtr2 parameter of the method.
Number of bytes written to the portion of the buffer at audioPtr2.
DSResult
Releases a locked sound buffer.
Address of the value retrieved in the audioPtr1 parameter of the method.
Number of bytes written to the portion of the buffer at audioPtr1.
Address of the value retrieved in the audioPtr2 parameter of the method.
Number of bytes written to the portion of the buffer at audioPtr2.
Retrieves the position of the play and write cursors in the sound buffer.
Receives the offset, in bytes, of the play cursor.
Receives the offset, in bytes, of the write cursor.
DSResult
Retrieves the position of the play and write cursors in the sound buffer.
Receives the offset, in bytes, of the play cursor.
Receives the offset, in bytes, of the write cursor.
Sets the position of the play cursor, which is the point at which the next byte of data is read from the buffer.
Offset of the play cursor, in bytes, from the beginning of the buffer.
Sets the position of the play cursor, which is the point at which the next byte of data is read from the buffer.
Offset of the play cursor, in bytes, from the beginning of the buffer.
DSResult
Initializes a sound buffer object if it has not yet been initialized.
The device object associated with this buffer.
A structure that contains the values used to initialize this sound buffer.
DSResult
Initializes a sound buffer object if it has not yet been initialized.
The device object associated with this buffer.
A structure that contains the values used to initialize this sound buffer.
Retrieves the status of the sound buffer.
Receives the status of the sound buffer.
DSResult
Use the property instead.
Sets the frequency at which the audio samples are played.
Frequency, in hertz (Hz), at which to play the audio samples. A value of resets the frequency to the default value of the buffer format.
DSResult
Before setting the frequency, you should ascertain whether the frequency is supported by checking the and members of the structure for the device. Some operating systems do not support frequencies greater than 100,000 Hz.
Sets the frequency at which the audio samples are played.
Frequency, in hertz (Hz), at which to play the audio samples. A value of resets the frequency to the default value of the buffer format.
Before setting the frequency, you should ascertain whether the frequency is supported by checking the and members of the structure for the device. Some operating systems do not support frequencies greater than 100,000 Hz.
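The recommended check can be sketched as follows: validate the requested frequency against the device's minimum and maximum secondary sample rates before applying it. Parameter names are illustrative; a value of 0 is treated as "reset to the buffer format's default frequency":

```cpp
// Hedged sketch: validate a requested playback frequency against the
// device capabilities before calling the frequency setter.
bool IsFrequencySupported(unsigned int frequency,
                          unsigned int minSecondarySampleRate,
                          unsigned int maxSecondarySampleRate) {
    if (frequency == 0)
        return true; // resets to the default frequency of the buffer format
    return frequency >= minSecondarySampleRate &&
           frequency <= maxSecondarySampleRate;
}
```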
Retrieves the frequency, in samples per second, at which the buffer is playing.
A variable that receives the frequency at which the audio buffer is being played, in hertz.
DSResult
Gets the frequency, in samples per second, at which the buffer is playing.
The frequency at which the audio buffer is being played, in hertz.
Sets the relative volume of the left and right channels.
Relative volume between the left and right channels. Must be between and .
DSResult
For more information, see .
Sets the relative volume of the left and right channels.
Relative volume between the left and right channels. Must be between and .
For more information, see .
Sets the relative volume of the left and right channels as a scalar value.
Relative volume between the left and right channels. Must be between -1.0 and 1.0.
A value of -1.0 will set the volume of the left channel to 100% and the volume of the right channel to 0%.
A value of 1.0 will set the volume of the left channel to 0% and the volume of the right channel to 100%.
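DirectSound itself expresses pan in hundredths of a decibel, from -10000 (full left) to +10000 (full right). One plausible conversion from the scalar range [-1.0, 1.0] is a simple linear mapping, sketched below; the library's actual conversion may differ:

```cpp
#include <algorithm>

// Assumed linear mapping from a scalar pan in [-1.0, 1.0] to the
// DirectSound pan range DSBPAN_LEFT (-10000) .. DSBPAN_RIGHT (+10000).
// A perceptually uniform mapping would instead work in decibels.
constexpr int DSBPAN_LEFT  = -10000;
constexpr int DSBPAN_RIGHT =  10000;

int PanScalarToHundredthsDb(double scalar) {
    scalar = std::clamp(scalar, -1.0, 1.0);
    return static_cast<int>(scalar * DSBPAN_RIGHT);
}
```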
Retrieves the relative volume of the left and right audio channels.
A variable that receives the relative volume, in hundredths of a decibel.
DSResult
Retrieves the relative volume of the left and right audio channels.
The relative volume, in hundredths of a decibel.
Gets the relative volume of the left and right channels as a scalar value.
The relative volume between the left and right channels. A value of -1.0 indicates that the volume of the left channel is set to 100% and the volume of the right channel to 0%.
A value of 1.0 indicates that the volume of the left channel is set to 0% and the volume of the right channel is set to 100%.
Sets the attenuation of the sound.
Attenuation, in hundredths of a decibel (dB).
DSResult
Sets the attenuation of the sound.
Attenuation, in hundredths of a decibel (dB).
Sets the attenuation of the sound.
The attenuation of the sound. The attenuation is expressed as a normalized value in the range from 0.0 to 1.0.
Retrieves the attenuation of the sound.
A variable that receives the attenuation, in hundredths of a decibel.
DSResult
Returns the attenuation of the sound.
The attenuation, in hundredths of a decibel.
Returns the attenuation of the sound.
The attenuation of the sound. The attenuation is expressed as a normalized value in the range from 0.0 to 1.0.
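DirectSound stores volume as an attenuation in hundredths of a decibel, from DSBVOLUME_MAX (0, full volume) down to DSBVOLUME_MIN (-10000, effective silence). One common conversion from the normalized 0.0–1.0 range treats the value as a linear amplitude and applies the 20·log10 amplitude-to-decibel relation; this is a sketch and the library's exact mapping may differ:

```cpp
#include <cmath>
#include <algorithm>

// Assumed conversion: normalized amplitude -> hundredths of a decibel
// via 20 * log10(amplitude) * 100, clamped to the DirectSound range.
constexpr int DSBVOLUME_MIN = -10000; // silence
constexpr int DSBVOLUME_MAX = 0;      // full volume

int VolumeScalarToHundredthsDb(double scalar) {
    if (scalar <= 0.0)
        return DSBVOLUME_MIN;
    int attenuation = static_cast<int>(2000.0 * std::log10(scalar));
    return std::clamp(attenuation, DSBVOLUME_MIN, DSBVOLUME_MAX);
}
```

Half amplitude (0.5) maps to roughly -602, i.e. about -6 dB, which matches the usual rule of thumb for halving a signal's amplitude.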
Retrieves a description of the format of the sound data in the buffer, or the buffer size needed to retrieve the format description.
Address of a or instance that receives a description of the sound data in the buffer. To retrieve the buffer size needed to contain the format description, specify . In this case the variable at receives the size of the structure needed to receive the data.
Size, in bytes, of the structure at . If is not , this value must be equal to or greater than the size of the expected data.
A variable that receives the number of bytes written to the structure at .
DSResult
Returns a description of the format of the sound data in the buffer.
A description of the format of the sound data in the buffer. The returned description is either of the type or of the type .
Sets the format of the primary buffer. Whenever this application has the input focus, DirectSound will set the primary buffer to the specified format.
A waveformat that describes the new format for the primary sound buffer.
DSResult
Sets the format of the primary buffer. Whenever this application has the input focus, DirectSound will set the primary buffer to the specified format.
A waveformat that describes the new format for the primary sound buffer.
Enables effects on a buffer. For this method to succeed, CoInitialize must have been called. Additionally, the buffer must not be playing or locked.
Number of elements in the effectDescriptions and resultCodes arrays. If this value is 0, effectDescriptions and resultCodes must both be . Set to 0 to remove all effects from the buffer.
Address of an array of DSEFFECTDESC structures, of size effectsCount, that specifies the effects wanted on the buffer. Must be if effectsCount is 0.
Address of an array of DWORD elements, of size effectsCount.
DSResult
Allocates resources for a buffer that was created with the DSBCAPS_LOCDEFER flag in the DSBUFFERDESC structure.
Flags specifying how resources are to be allocated for a buffer created with the DSBCAPS_LOCDEFER flag.
Number of elements in the resultCodes array, or 0 if resultCodes is .
Address of an array of DWORD variables that receives information about the effects associated with the buffer. This array must contain one element for each effect that was assigned to the buffer by .
DSResult
Retrieves an interface for an effect object associated with the buffer.
Unique class identifier of the object being searched for, such as GUID_DSFX_STANDARD_ECHO. Set this parameter to GUID_All_Objects to search for objects of any class.
Index of the object within objects of that class in the path.
Unique identifier of the desired interface.
Address of a variable that receives the desired interface pointer.
DSResult
For more information, see .
Gets a value indicating whether the buffer is lost. True means that the buffer is lost; otherwise, false.
Writes data to the buffer by locking the buffer, copying data to the buffer and finally unlocking it.
The data to write to the buffer.
The zero-based offset in the at which to start copying data.
The number of bytes to write.
Returns true if writing the data was successful; otherwise, false.
Writes data to the buffer by locking the buffer, copying data to the buffer and finally unlocking it.
The data to write to the buffer.
The zero-based offset in the at which to start copying data.
The number of shorts to write.
Returns true if writing the data was successful; otherwise, false.
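The lock/copy/unlock pattern the Write methods describe can be sketched against a plain in-memory circular buffer. This is illustrative only: a real DirectSound buffer returns the (up to) two writable regions from Lock rather than exposing raw memory directly:

```cpp
#include <cstring>
#include <cstddef>
#include <vector>

// Illustrative sketch of writing `count` bytes at `writeCursor` into a
// circular buffer, copying into two regions when the write wraps past
// the end (mirroring the audioPtr1/audioPtr2 regions returned by Lock).
bool WriteCircular(std::vector<unsigned char>& buffer, std::size_t writeCursor,
                   const unsigned char* data, std::size_t count) {
    if (count > buffer.size() || writeCursor >= buffer.size())
        return false;                    // write would not fit / bad cursor
    std::size_t untilEnd = buffer.size() - writeCursor;
    std::size_t first = count < untilEnd ? count : untilEnd;
    std::memcpy(buffer.data() + writeCursor, data, first);   // region 1
    std::memcpy(buffer.data(), data + first, count - first); // wrapped region 2
    return true;
}
```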
Describes the capabilities of a device.
Size of the structure, in bytes. This member must be initialized before the structure is used.
Flags describing device capabilities.
Minimum sample rate specification that is supported by this device's hardware secondary sound buffers.
Maximum sample rate specification that is supported by this device's hardware secondary sound buffers.
Number of primary buffers supported. This value will always be 1.
Number of buffers that can be mixed in hardware. This member can be less than the sum of and . Resource tradeoffs frequently occur.
Maximum number of static buffers.
Maximum number of streaming sound buffers.
Number of unallocated buffers. On WDM drivers, this includes .
Number of unallocated static buffers.
Number of unallocated streaming buffers.
Maximum number of 3D buffers.
Maximum number of static 3D buffers.
Maximum number of streaming 3D buffers.
Number of unallocated 3D buffers.
Number of unallocated static 3D buffers.
Number of unallocated streaming 3D buffers.
Size, in bytes, of the amount of memory on the sound card that stores static sound buffers.
Size, in bytes, of the free memory on the sound card.
Size, in bytes, of the largest contiguous block of free memory on the sound card.
The rate, in kilobytes per second, at which data can be transferred to hardware static sound buffers. This and the number of bytes transferred determines the duration of a call to the method.
The processing overhead, as a percentage of main processor cycles, needed to mix software buffers. This varies according to the bus type, the processor type, and the clock speed.
Represents a DirectSound device.
The guid of the default playback device.
Gets the default playback device.
Enumerates all DirectSound devices. Use the method instead.
A list containing all enumerated DirectSound devices.
Gets the textual description of the DirectSound device.
Gets the module name of the DirectSound driver corresponding to this device.
The that identifies the device being enumerated.
Initializes a new instance of the class.
The description.
The module.
The unique identifier.
Performs an explicit conversion from to .
The device.
The of the .
Returns a that represents this instance.
A that represents this instance.
Provides the functionality to enumerate DirectSound devices installed on the system.
Enumerates the DirectSound devices installed on the system.
A read-only collection containing all enumerated devices.
Exception class which represents all DirectSound-related exceptions.
Initializes a new instance of the class.
The error code.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
Initializes a new instance of the class.
The error code.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or
destination.
Gets the which got associated with the specified .
Throws an if the is not
.
The error code.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
Sets up notification events for a playback or capture buffer.
Returns a new instance of the class for the specified .
The to create a instance for.
A new instance of the class for the specified
is null.
Initializes a new instance of the class based on the native pointer.
The native pointer of the COM object.
Sets the notification positions. During capture or playback, whenever the read or play cursor reaches one of the specified offsets, the associated event is signaled.
An array of structures.
Sets the notification positions. During capture or playback, whenever the read or play cursor reaches one of the specified offsets, the associated event is signaled.
An array of structures.
DSResult
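A typical use of notification positions is to place evenly spaced markers in a buffer so that playback or capture progress can be tracked segment by segment. The offset computation can be sketched as follows; event handles are omitted, and in a real call each offset is paired with a Win32 event handle in the notification structure (count is assumed to be greater than zero):

```cpp
#include <vector>
#include <cstddef>

// Sketch: compute `count` evenly spaced notification offsets for a buffer
// of `bufferSize` bytes, signaling at the last byte of each segment.
std::vector<std::size_t> NotificationOffsets(std::size_t bufferSize,
                                             std::size_t count) {
    std::vector<std::size_t> offsets;
    std::size_t step = bufferSize / count;
    for (std::size_t i = 1; i <= count; ++i)
        offsets.push_back(i * step - 1); // end of the i-th segment
    return offsets;
}
```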
Represents a primary DirectSound buffer.
Initializes a new instance of the class.
A instance which provides the method.
Initializes a new instance of the class.
A instance which provides the method.
The buffer description which describes the buffer to create.
The is invalid.
Initializes a new instance of the class.
The native pointer of the COM object.
Represents a secondary DirectSound buffer.
Initializes a new instance of the class.
A instance which provides the method.
The of the sound buffer.
The buffer size. Internally, the will be set to * 2.
or
must be a value between 4 and 0x0FFFFFFF.
Initializes a new instance of the class.
A instance which provides the method.
The buffer description which describes the buffer to create.
The is invalid.
Initializes a new instance of the class.
The native pointer of the COM object.
Defines possible flags for the method.
The default value.
Start the lock at the write cursor. The offset parameter is ignored.
Lock the entire buffer. The bytes parameter is ignored.
Flags specifying how to play a .
For more information, see .
None
After the end of the audio buffer is reached, play restarts at the beginning of the buffer. Play continues until explicitly stopped. This flag must be set when playing a primary buffer.
Play this voice in a hardware buffer only. If the hardware has no available voices and no voice management flags are set, the call to fails. This flag cannot be combined with .
Play this voice in a software buffer only. This flag cannot be combined with or any voice management flag.
If the hardware has no available voices, a currently playing nonlooping buffer will be stopped to make room for the new buffer. The buffer prematurely terminated is the one with the least time left to play.
If the hardware has no available voices, a currently playing buffer will be stopped to make room for the new buffer. The buffer prematurely terminated will be selected from buffers that have the buffer's flag set and are beyond their maximum distance. If there are no such buffers, the method fails.
If the hardware has no available voices, a currently playing buffer will be stopped to make room for the new buffer. The buffer prematurely terminated will be the one with the lowest priority as set by the priority parameter passed to for the buffer.
The structure describes a notification position. It is used by .
Zero offset.
Causes the event to be signaled when playback or capture stops, either because the end of the buffer has been reached (and playback or capture is not looping) or because the application called the or IDirectSoundCaptureBuffer8::Stop method.
Offset from the beginning of the buffer where the notify event is to be triggered, or .
Handle to the event to be signaled when the offset has been reached.
Initializes a new instance of the struct.
The offset from the beginning of the buffer where the notify event is to be triggered.
Handle to the event to be signaled when the offset has been reached.
Defines flags that describe the status of a .
The buffer is playing. If this value is not set, the buffer is stopped.
The buffer is lost and must be restored before it can be played or locked.
The buffer is being looped. If this value is not set, the buffer will stop when it reaches the end of the sound data. This value is returned only in combination with .
The buffer is playing in hardware. Set only for buffers created with the flag.
The buffer is playing in software. Set only for buffers created with the flag.
The buffer was prematurely terminated by the voice manager and is not playing. Set only for buffers created with the flag.
Describes the capabilities of a DirectSound buffer object. It is used by the property.
For more information, see .
Size of the structure, in bytes. This member must be initialized before the structure is used.
Use the method to determine the size.
Flags that specify buffer-object capabilities.
Size of this buffer, in bytes.
The rate, in kilobytes per second, at which data is transferred to the buffer memory when is called. High-performance applications can use this value to determine the time required for to execute. For software buffers located in system memory, the rate will be very high because no processing is required. For hardware buffers, the rate might be slower because the buffer might have to be downloaded to the sound card, which might have a limited transfer rate.
The processing overhead as a percentage of main processor cycles needed to mix this sound buffer. For hardware buffers, this member will be zero because the mixing is performed by the sound device. For software buffers, this member depends on the buffer format and the speed of the system processor.
Flags that specify buffer-object capabilities.
None
The buffer is a primary buffer.
The buffer is in on-board hardware memory.
The buffer uses hardware mixing.
The buffer is in software memory and uses software mixing.
The buffer has 3D control capability.
The buffer has frequency control capability.
The buffer has pan control capability.
The buffer has volume control capability.
The buffer has position notification capability.
The buffer supports effects processing.
The buffer has sticky focus. If the user switches to another application not using DirectSound, the buffer is still audible. However, if the user switches to another DirectSound application, the buffer is muted.
The buffer is a global sound buffer. With this flag set, an application using DirectSound can continue to play its buffers if the user switches focus to another application, even if the new application uses DirectSound.
For more information, see .
The buffer uses the new behavior of the play cursor when is called. For more information, see .
The sound is reduced to silence at the maximum distance. The buffer will stop playing when the maximum distance is exceeded, so that processor time is not wasted. Applies only to software buffers.
The buffer can be assigned to a hardware or software resource at play time, or when is called.
Force to return the buffer's true play position. This flag is only valid in Windows Vista.
Describes the characteristics of a new buffer object.
Size of the structure, in bytes. This member must be initialized before the structure is used.
Use the or the method to determine the size.
Flags specifying the capabilities of the buffer.
Size of the new buffer, in bytes. For more information, see .
Must be a value between 4 and 0x0FFFFFFF.
Address of a or class specifying the waveform format for the buffer. This value must be for primary buffers.
Unique identifier of the two-speaker virtualization algorithm to be used by DirectSound3D hardware emulation. If is not set in , this member must be .
For more information, see .
Flags describing device capabilities.
The driver has been tested and certified by Microsoft. This flag is always set for WDM drivers. To test for certification, use .
The device supports all sample rates between the and member values. Typically, this means that the actual output rate will be within +/- 10 hertz (Hz) of the requested frequency.
The device does not have a DirectSound driver installed, so it is being emulated through the waveform-audio functions. Performance degradation should be expected.
None
The device supports a primary buffer with 16-bit samples.
The device supports primary buffers with 8-bit samples.
The device supports monophonic primary buffers.
The device supports stereo primary buffers.
The device supports hardware-mixed secondary sound buffers with 16-bit samples.
The device supports hardware-mixed secondary buffers with 8-bit samples.
The device supports hardware-mixed monophonic secondary buffers.
The device supports hardware-mixed stereo secondary buffers.
Defines possible return values for the method.
For more information, see or .
Driver is certified for DirectSound.
Driver is not certified for DirectSound.
Not supported.
The method returned DSERR_UNSUPPORTED.
Defines cooperative levels which can be set by calling the
method.
For more information, see .
Sets the normal level. This level has the smoothest multitasking and resource-sharing behavior, but because it does
not allow the primary buffer format to change, output is restricted to the default 8-bit format.
Sets the priority level. Applications with this cooperative level can call the SetFormat and Compact methods.
For DirectX 8.0 and later, has the same effect as . For previous versions, sets the
application to the exclusive level. This means that when it has the input focus, the application will be the only
one audible; sounds from applications with the GlobalFocus flag set will be muted. With this level, it also
has all the privileges of the DSSCL_PRIORITY level. DirectSound will restore the hardware format, as specified by
the most recent call to the SetFormat method, after the application gains the input focus.
Sets the write-primary level. The application has write access to the primary buffer. No secondary buffers can be
played. This level cannot be set if the DirectSound driver is being emulated for the device; that is, if the
GetCaps method returns the DSCAPS_EMULDRIVER flag in the DSCAPS structure.
Defines possible DirectSound return values.
For more information, see .
The method succeeded.
The DirectSound subsystem could not allocate sufficient memory to complete the caller's request.
The requested COM interface is not available.
The buffer was created, but another 3D algorithm was substituted.
The method succeeded, but not all the optional effects were obtained.
The function called is not supported at this time.
An undetermined error occurred inside the DirectSound subsystem.
The request failed because access was denied.
An invalid parameter was passed to the returning function.
The request failed because resources, such as a priority level, were already in use by another caller.
The buffer control (volume, pan, and so on) requested by the caller is not available. Controls must be specified when the buffer is created, using the member of .
This function is not valid for the current state of this object.
A cooperative level of or higher is required.
The specified wave format is not supported.
No sound driver is available for use, or the given GUID is not a valid DirectSound device ID.
The object is already initialized.
The buffer memory has been lost and must be restored.
Another application has a higher priority level, preventing this call from succeeding.
The method has not been called or has not been called successfully before other methods were called.
The buffer size is not great enough to enable effects processing.
A DirectSound object of class CLSID_DirectSound8 or later is required for the requested functionality.
A circular loop of send effects was detected.
The GUID specified in an audiopath file does not match a valid mix-in buffer.
The effects requested could not be found on the system, or they are in the wrong order or in the wrong location; for example, an effect expected in hardware was found in software.
The requested object was not found.
Defines possible speaker configurations.
The audio is passed through directly, without being configured for speakers.
The audio is played through headphones.
The audio is played through a single speaker.
The audio is played through quadraphonic speakers.
The audio is played through stereo speakers (default value).
The audio is played through surround speakers.
The audio is played through a home theater speaker arrangement of five surround speakers with a subwoofer.
Obsolete 5.1 setting. Use instead.
The audio is played through a home theater speaker arrangement of seven surround speakers with a subwoofer.
Obsolete 7.1 setting. Use instead.
The audio is played through a home theater speaker arrangement of seven surround speakers with a subwoofer. This value applies to Windows XP SP2 or later.
The audio is played through a home theater speaker arrangement of five surround speakers with a subwoofer. This value applies to Windows Vista or later.
The audio is played through a wide speaker arrangement of seven surround speakers with a subwoofer. ( is still defined, but is obsolete as of Windows XP SP 2. Use instead.)
The audio is played through a speaker arrangement of five surround speakers with a subwoofer. ( is still defined, but is obsolete as of Windows Vista. Use instead.)
Defines values that can be combined with the value.
To combine a value with the stereo value, use the method.
The speakers are directed over an arc of 5 degrees.
The speakers are directed over an arc of 10 degrees.
The speakers are directed over an arc of 20 degrees.
The speakers are directed over an arc of 180 degrees.
implementation for DMO-based streams.
Creates a new instance of the class.
Base source of the .
Gets or sets the position of the stream in bytes.
Gets the length of the stream in bytes.
Gets a value indicating whether the supports seeking.
Gets the of the .
Gets inputData to feed the DMO MediaObject with.
InputDataBuffer which receives the inputData.
If this parameter is null or its length is less than the amount of inputData, a new byte array will be allocated.
The requested number of bytes.
The number of bytes read. The number of actually read bytes does not have to be the number of requested bytes.
Gets the input format to use.
The input format.
Typically this is the of the .
Defines DMO categories for enumerating DMOs.
All DMOs.
AudioEffects
AudioCaptureEffects
Category which includes audio decoders.
Category which includes audio encoders.
Defines flags that specify search criteria when enumerating Microsoft DirectX Media Objects.
For more information, see .
A software key enables the developer of a DMO to control who uses the DMO. If a DMO has a software key,
applications must unlock the DMO to use it. The method for unlocking the DMO depends on the implementation. Consult
the documentation for the particular DMO.
None
The enumeration should include DMOs whose use is restricted by a software key. If this flag is absent, keyed DMOs
are omitted from the enumeration.
Encapsulates the properties of an enumerated DMO.
Gets or sets the CLSID of the DMO.
Gets or sets the friendly name of the DMO.
Error codes that are specific to Microsoft DirectX Media Objects.
Invalid stream index.
Invalid media type.
Media type was not set. One or more streams require a media type before this operation can be performed.
Data cannot be accepted on this stream. You might need to process more output data; see MediaObject::ProcessInput (http://msdn.microsoft.com/en-us/library/windows/desktop/dd406959(v=vs.85).aspx).
Media type was not accepted.
Media-type index is out of range.
Encapsulates the values retrieved by the method.
Initializes a new instance of the class.
The minimum size of an input buffer for the stream, in bytes.
The required buffer alignment, in bytes. If the stream has no alignment requirement, the value is 1.
The maximum amount of data that the DMO will hold for a lookahead, in bytes. If the DMO does not perform a lookahead on the stream, the value is zero.
Gets the maximum amount of data that the DMO will hold for a lookahead, in bytes. If the DMO does not perform a
lookahead on the stream, the value is zero.
Defines flags that describe an input stream.
None.
The stream requires whole samples. Samples must not span multiple buffers, and buffers must not contain partial
samples.
Each buffer must contain exactly one sample.
All the samples in this stream must be the same size.
The DMO performs lookahead on the incoming data, and may hold multiple input buffers for this stream.
Represents a DMO output data buffer. For more details see .
Pointer to the interface of a buffer allocated by the application.
Status flags. After processing output, the DMO sets this member to a bitwise combination of one or more flags.
Time stamp that specifies the start time of the data in the buffer. If the buffer has a
valid time stamp, the DMO sets this member and also sets the
flag in the dwStatus member. Otherwise, ignore this member.
Reference time specifying the length of the data in the buffer. If the DMO sets this
member to a valid value, it also sets the flag in the
dwStatus member. Otherwise, ignore this member.
Initializes a new instance of the struct.
The maximum length (in bytes) of the internally used .
Gets the length of the .
Reads a sequence of bytes from the .
Array of bytes to store the read bytes in.
Zero-based byte offset in the specified buffer at which to begin storing the data read from the
buffer.
The number of read bytes.
Reads a sequence of bytes from the .
Array of bytes to store the read bytes in.
Zero-based byte offset in the specified buffer at which to begin storing the data read from the
buffer.
The maximum number of bytes to read from the buffer.
The number of read bytes.
Reads a sequence of bytes from the .
Array of bytes to store the read bytes in.
Zero-based byte offset in the specified buffer at which to begin storing the data read from the
buffer.
The maximum number of bytes to read from the buffer.
Zero-based offset inside of the source buffer at which to begin copying data.
The number of read bytes.
Resets the Buffer. Sets the length of the to zero and sets the
to .
Disposes the internally used .
The enumeration defines flags that describe an output stream.
None
The stream contains whole samples. Samples do not span multiple buffers, and buffers do not contain partial
samples.
Each buffer contains exactly one sample.
All the samples in this stream are the same size.
The stream is discardable. Within calls to IMediaObject::ProcessOutput, the DMO can discard data for this stream
without copying it to an output buffer.
The stream is optional. An optional stream is discardable. Also, the application can ignore this stream entirely;
it does not have to set the media type for the stream. Optional streams generally contain additional information,
or data not needed by all applications.
Describes a media type used by a Microsoft DirectX Media Object.
For more information, see .
Major type GUID. Use to match any major type.
Subtype GUID. Use to match any subtype.
Encapsulates the values retrieved by the - and the - method.
Initializes a new instance of the class.
The minimum size of an input buffer for the stream, in bytes.
The required buffer alignment, in bytes. If the stream has no alignment requirement, the value is 1.
Gets the minimum size of an input buffer for this stream, in bytes.
Gets the required buffer alignment, in bytes. If the input stream has no alignment requirement, the value is 1.
Base class for all DMO-based streams.
The default inputStreamIndex to use.
The default outputStreamIndex to use.
Gets the input format of the .
Reads a sequence of bytes from the stream.
An array of bytes. When this method returns, the buffer contains the read bytes.
The zero-based byte offset in buffer at which to begin storing the data read from the stream.
The maximum number of bytes to be read from the stream
The actual number of read bytes.
Gets or sets the position of the stream.
Gets the length of the stream.
Gets a value indicating whether the supports seeking.
Disposes the .
Gets the output format of the .
Gets inputData to feed the Dmo MediaObject with.
InputDataBuffer which receives the inputData.
If this parameter is null or its length is less than the amount of input data, a new byte array will be allocated.
The requested number of bytes.
The number of bytes read. The number of actually read bytes does not have to be the number of requested bytes.
Creates and returns a new instance to use for processing audio data. This can be a decoder, effect, ...
The input format of the to create.
The output format of the to create.
The created to use for processing audio data.
Gets the input format to use.
The input format.
Gets the output format to use.
The output format.
Initializes the DmoStream. Important: This has to be called before using the DmoStream.
Converts a position of the input stream to the corresponding position in the output stream.
Any position/offset of the input stream, in bytes.
Position in the output stream, in bytes.
Translates a position of the output stream to the corresponding position in the input stream.
Any position/offset of the output stream, in bytes.
Position in the input stream, in bytes.
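The input/output position translation described above amounts to scaling a byte offset by the ratio of the two streams' byte rates and snapping the result to a whole block. A minimal sketch of that math (the function name and parameters are illustrative, not CSCore's actual implementation):

```python
def input_to_output_position(input_pos, in_bytes_per_sec, out_bytes_per_sec,
                             out_block_align):
    # Scale the byte offset by the ratio of the two byte rates. Integer math
    # avoids floating-point rounding on large offsets.
    pos = input_pos * out_bytes_per_sec // in_bytes_per_sec
    # Snap down to a whole block so the position stays frame-aligned.
    return pos - (pos % out_block_align)
```

The reverse translation works the same way with the two byte rates swapped.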
Resets the overflow buffer.
Releases the .
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Finalizes an instance of the class.
Internal parameter structure for the effect.
The wet dry mix.
The depth.
The feedback.
The frequency.
The waveform.
The delay.
The phase.
Internal parameter structure for the effect.
The gain.
The attack.
The release.
The threshold.
The ratio.
The predelay.
Base class for any DirectSoundEffect.
Parameters type.
Default constructor for a ComObject.
Pointer of a DirectSoundEffect interface.
Gets or sets the Parameters of the Effect.
Gets the name of the COM interface. Used for generating error messages.
Sets the effects parameters.
Object that contains the new parameters of the effect.
HRESULT
Use the property instead.
Retrieves the effects parameters.
A variable which retrieves the set parameters of the effect.
HRESULT
Use the property instead.
The IDirectSoundFXChorus interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXChorus wrapper based on a pointer to a IDirectSoundFXChorus COM interface.
Pointer of a DirectSoundFXChorus interface.
Interface name used for generating DmoExceptions.
The DirectSoundFXCompressor interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXCompressor wrapper based on a pointer to a IDirectSoundFXCompressor COM interface.
Pointer of a DirectSoundFXCompressor interface.
Interface name used for generating DmoExceptions.
The DirectSoundFXDistortion interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXDistortion wrapper based on a pointer to a IDirectSoundFXDistortion COM interface.
Pointer of a DirectSoundFXDistortion interface.
Interface name used for generating DmoExceptions.
The IDirectSoundFXEcho interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXEcho wrapper based on a pointer to a IDirectSoundFXEcho COM interface.
Pointer of a DirectSoundFXEcho interface.
Interface name used for generating DmoExceptions.
The DirectSoundFXFlanger interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXFlanger wrapper based on a pointer to a IDirectSoundFXFlanger COM interface.
Pointer of a DirectSoundFXFlanger interface.
Interface name used for generating DmoExceptions.
The IDirectSoundFXGargle interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXGargle wrapper based on a pointer to a IDirectSoundFXGargle COM interface.
Pointer of a DirectSoundFXGargle interface.
Interface name used for generating DmoExceptions.
The DirectSoundFXWavesReverb interface is used to set and retrieve effect parameters.
Creates a DirectSoundFXWavesReverb wrapper based on a pointer to a IDirectSoundFXWavesReverb COM interface.
Pointer of a DirectSoundFXWavesReverb interface.
Interface name used for generating DmoExceptions.
Internal parameter structure for the effect.
The gain.
The edge.
The post eq center frequency.
The post eq bandwidth.
The pre lowpass cutoff.
Internal parameter structure for the effect.
The wet dry mix.
The feedback.
The left delay.
The right delay.
The pan delay.
Internal parameter structure for the effect.
The wet dry mix.
The depth.
The feedback.
The frequency.
The waveform.
The delay.
The phase.
Internal parameter structure for the effect.
The rate hz.
The wave shape.
Internal parameter structure for the effect.
The in gain.
The reverb mix.
The reverb time.
The high freq rt ratio.
Provides methods for enumerating Microsoft DirectX Media Objects.
Initializes a new instance of the class.
The native pointer of the COM object.
Enumerates DMOs listed in the registry. The caller can search by category, media type, or both.
GUID that specifies which category of DMO to search. Use Guid.Empty to search every category.
See for a list of category guids.
Flags that specify search criteria.
Array of input media types.
Array of output media types.
EnumDMO
Enumerates DMOs listed in the registry.
GUID that specifies which category of DMO to search. Use Guid.Empty to search every category.
See for a list of category guids.
Flags that specify search criteria.
An that can be used to iterate through the enumerated DMOs.
Retrieves a specified number of items in the enumeration sequence.
Number of items to retrieve.
Array that is filled with the CLSIDs of the enumerated DMOs.
Array that is filled with the friendly names of the enumerated DMOs.
Actual number of items retrieved.
HRESULT
Retrieves a specified number of items in the enumeration sequence.
Number of items to retrieve.
Array of enumerated DMOs.
Skips over a specified number of items in the enumeration sequence.
Number of items to skip.
HRESULT
Skips over a specified number of items in the enumeration sequence.
Number of items to skip.
Resets the enumeration sequence to the beginning.
HRESULT
Resets the enumeration sequence to the beginning.
This method is not implemented.
Reserved
This method is not implemented.
This method is not implemented and will throw an with the error code .
The interface provides methods for manipulating a data buffer.
For more information, .
The SetLength method specifies the length of the data currently in the buffer.
Size of the data, in bytes. The value must not exceed the buffer's maximum size. Call the method to obtain the maximum size.
HRESULT
The method retrieves the maximum number of bytes this buffer can hold.
A variable that receives the buffer's maximum size, in bytes.
HRESULT
The method retrieves the buffer and the size of the valid data in the buffer.
Address of a pointer that receives the buffer array. Can be if is not .
Pointer to a variable that receives the size of the valid data, in bytes. Can be if is not .
HRESULT
Defines flags that describe an input buffer.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd375501(v=vs.85).aspx
None
The beginning of the data is a synchronization point.
The buffer's time stamp is valid. The buffer's indicated time length is valid.
The buffer's indicated time length is valid.
InputStatusFlags.
See also: http://msdn.microsoft.com/en-us/library/windows/desktop/dd406950(v=vs.85).aspx
None
The stream accepts data.
Defines flags that describe an input stream.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd375502(v=vs.85).aspx.
None
The stream contains whole samples. Samples do not span multiple buffers, and buffers do
not contain partial samples.
Each buffer contains exactly one sample.
The stream is discardable. Within calls to IMediaObject::ProcessOutput, the DMO can
discard data for this stream without copying it to an output buffer.
The DMO performs lookahead on the incoming data, and may hold multiple input buffers for
this stream.
Defines flags that describe an output buffer.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd375508(v=vs.85).aspx.
None
The beginning of the data is a synchronization point. A synchronization point is a
random access point. For encoded video, this is a sample that can be used as a decoding
start point (key frame). For uncompressed audio or video, every sample is a
synchronization point.
The buffer's time stamp is valid. The buffer's indicated time length is valid.
The buffer's indicated time length is valid.
There is still input data available for processing, but the output buffer is full.
Flags that describe an output stream.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd375509(v=vs.85).aspx.
None
The stream contains whole samples. Samples do not span multiple buffers, and buffers do
not contain partial samples.
Each buffer contains exactly one sample.
All the samples in this stream are the same size.
The stream is discardable. Within calls to IMediaObject::ProcessOutput, the DMO can
discard data for this stream without copying it to an output buffer.
The stream is optional. An optional stream is discardable. Also, the application can
ignore this stream entirely; it does not have to set the media type for the stream.
Optional streams generally contain additional information, or data not needed by all
applications.
Defines flags that specify output processing requests.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd375511(v=vs.85).aspx
None
Discard the output when the pointer to the output buffer is NULL.
Defines flags for setting the media type on a stream.
See http://msdn.microsoft.com/en-us/library/windows/desktop/dd375514(v=vs.85).aspx.
None
Test the media type but do not set it.
Clear the media type that was set for the stream.
Default-Implementation of the IMediaBuffer interface.
For more information, see .
Creates a MediaBuffer and allocates the specified number of bytes in the memory.
The number of bytes which has to be allocated in the memory.
Gets the maximum number of bytes this buffer can hold.
Gets the length of the data currently in the buffer.
Frees the allocated memory of the internally used buffer.
The SetLength method specifies the length of the data currently in the buffer.
Size of the data, in bytes. The value must not exceed the buffer's maximum size. Call the method to obtain the maximum size.
HRESULT
The method retrieves the maximum number of bytes this buffer can hold.
A variable that receives the buffer's maximum size, in bytes.
HRESULT
The method retrieves the buffer and the size of the valid data in the buffer.
Address of a pointer that receives the buffer array. Can be if is not .
Pointer to a variable that receives the size of the valid data, in bytes. Can be if is not .
HRESULT
Writes a sequence of bytes to the internally used buffer.
Array of bytes. The Write method copies data from the specified array of bytes to the internally
used buffer.
Zero-based bytes offset in the specified buffer at which to begin copying bytes to the internally
used buffer.
The number of bytes to be copied.
Reads a sequence of bytes from the internally used buffer.
Array of bytes to store the read bytes in.
Zero-based byte offset in the specified buffer at which to begin storing the data read from the
buffer.
Reads a sequence of bytes from the buffer.
Array of bytes to store the read bytes in.
Zero-based byte offset in the specified buffer at which to begin storing the data read from the
buffer.
The maximum number of bytes to read from the buffer.
Reads a sequence of bytes from the buffer.
Array of bytes to store the read bytes in.
Zero-based byte offset in the specified buffer at which to begin storing the data read from the
buffer.
The maximum number of bytes to read from the buffer.
Zero-based offset inside of the source buffer at which to begin copying data.
Frees the allocated memory of the internally used buffer.
Frees the allocated memory of the internally used buffer.
Represents a DMO MediaObject.
Initializes a new instance of the class.
The native pointer of the COM object.
Gets the number of input streams.
Gets the number of output streams.
Creates a MediaObject from any ComObject.
Internally the IUnknown::QueryInterface method of the specified COM Object gets called.
The COM Object to cast to a .
The .
Retrieves the number of input and output streams.
A variable that receives the number of input streams.
A variable that receives the number of output streams.
HRESULT
Retrieves the number of input and output streams.
A variable that receives the number of input streams.
A variable that receives the number of output streams.
Retrieves information about a specified input stream.
Zero-based index of an input stream on the DMO.
Bitwise combination of zero or more flags.
HRESULT
Retrieves information about a specified input stream.
Zero-based index of an input stream on the DMO.
The retrieved information about the specified input stream.
Retrieves information about a specified output stream.
Zero-based index of an output stream on the DMO.
Bitwise combination of zero or more flags.
HRESULT
Retrieves information about a specified output stream.
Zero-based index of an output stream on the DMO.
The information about the specified output stream.
Retrieves a preferred media type for a specified input stream.
Zero-based index on the set of acceptable media types.
Can be null to check whether the typeIndex argument is in range. If not, the errorcode will be
(0x80040206).
Zero-based index of an input stream on the DMO.
HRESULT
Retrieves a preferred media type for a specified input stream.
Zero-based index on the set of acceptable media types.
Zero-based index of an input stream on the DMO.
The preferred media type for the specified input stream.
Retrieves a preferred media type for a specified output stream.
Zero-based index on the set of acceptable media types.
Can be null to check whether the typeIndex argument is in range. If not, the errorcode will be
(0x80040206).
Zero-based index of an output stream on the DMO.
HRESULT
Retrieves a preferred media type for a specified output stream.
Zero-based index on the set of acceptable media types.
Zero-based index of an output stream on the DMO.
The preferred media type for the specified output stream.
Sets the media type on an input stream, or tests whether a media type is acceptable.
Zero-based index of an input stream on the DMO.
The new media type.
Bitwise combination of zero or more flags from the enumeration.
HRESULT
Clears the input type for a specific input stream.
Zero-based index of an input stream on the DMO.
Sets the media type on an input stream.
Zero-based index of an input stream on the DMO.
The new media type.
Bitwise combination of zero or more flags from the enumeration.
Sets the media type on an input stream.
Zero-based index of an input stream on the DMO.
The format to set as the new for the specified input stream.
Tests whether the given is supported.
Zero-based index of an input stream on the DMO.
The to test whether it is supported.
True = supported, False = not supported
Tests whether the given is supported.
Zero-based index of an input stream on the DMO.
The to test whether it is supported.
True = supported, False = not supported
Sets the on an output stream, or tests whether a is acceptable.
Zero-based index of an output stream on the DMO.
The new .
Bitwise combination of zero or more flags from the enumeration.
HRESULT
Clears the output type for a specific output stream.
Zero-based index of an output stream on the DMO.
Sets the on an output stream, or tests whether a is acceptable.
Zero-based index of an output stream on the DMO.
The new .
Bitwise combination of zero or more flags from the enumeration.
Sets the on an output stream, or tests whether a is acceptable.
Zero-based index of an output stream on the DMO.
The format to set as the new for the specified output stream.
Tests whether the given is supported as OutputFormat.
Zero-based index of an output stream on the DMO.
WaveFormat
True = supported, False = not supported
Tests whether the given is supported.
Zero-based index of an output stream on the DMO.
The to test whether it is supported.
True = supported, False = not supported
Retrieves the media type that was set for an input stream, if any.
Zero-based index of an input stream on the DMO.
A variable that receives the retrieved media type of the specified input stream.
HRESULT
Retrieves the media type that was set for an input stream, if any.
Zero-based index of an input stream on the DMO.
The retrieved media type of the specified input stream.
Retrieves the media type that was set for an output stream, if any.
Zero-based index of an output stream on the DMO.
A variable that receives the retrieved media type of the specified output stream.
HRESULT
Retrieves the media type that was set for an output stream, if any.
Zero-based index of an output stream on the DMO.
The media type that was set for the specified output stream.
Retrieves the buffer requirements for a specified input stream.
Zero-based index of an input stream on the DMO.
Minimum size of an input buffer for this stream, in bytes.
The maximum amount of data that the DMO will hold for a lookahead, in bytes. If the DMO does
not perform a lookahead on the stream, the value is zero.
The required buffer alignment, in bytes. If the input stream has no alignment requirement, the
value is 1.
HRESULT
This method retrieves the buffer requirements for a specified input stream.
Zero-based index of an input stream on the DMO.
The buffer requirements for the specified input stream.
This method retrieves the buffer requirements for a specified output stream.
Zero-based index of an output stream on the DMO.
Minimum size of an output buffer for this stream, in bytes.
The required buffer alignment, in bytes. If the output stream has no alignment requirement, the
value is 1.
HRESULT
This method retrieves the buffer requirements for a specified output stream.
Zero-based index of an output stream on the DMO.
The buffer requirements for the specified output stream.
Retrieves the maximum latency on a specified input stream.
Zero-based index of an input stream on the DMO.
Receives the maximum latency in reference type units. Unit = REFERENCE_TIME = 100 nanoseconds
HRESULT
Retrieves the maximum latency on a specified input stream.
Zero-based index of an input stream on the DMO.
The maximum latency in reference type units. Unit = REFERENCE_TIME = 100 nanoseconds
Sets the maximum latency on a specified input stream.
Zero-based index of an input stream on the DMO.
Maximum latency in reference time units. Unit = REFERENCE_TIME = 100 nanoseconds
HRESULT
For the definition of maximum latency, see .
Sets the maximum latency on a specified input stream.
Zero-based index of an input stream on the DMO.
Maximum latency in reference time units. Unit = REFERENCE_TIME = 100 nanoseconds
HRESULT
For the definition of maximum latency, see .
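The latency values above are expressed in REFERENCE_TIME units, where one unit equals 100 nanoseconds. A small helper (hypothetical, not part of CSCore) makes the conversion from milliseconds explicit:

```python
# 1 ms = 10^6 ns = 10,000 units of 100 ns each.
REFTIMES_PER_MILLISECOND = 10_000

def milliseconds_to_reference_time(ms):
    """Convert a duration in milliseconds to REFERENCE_TIME (100-ns ticks)."""
    return int(ms * REFTIMES_PER_MILLISECOND)
```

For example, a 20 ms maximum latency corresponds to 200,000 REFERENCE_TIME units.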
This method flushes all internally buffered data.
HRESULT
This method flushes all internally buffered data.
Signals a discontinuity on the specified input stream.
Zero-based index of an input stream on the DMO.
HRESULT
A discontinuity represents a break in the input. A discontinuity might occur because no more data is expected, the format is changing, or there is a gap in the data.
After a discontinuity, the DMO does not accept further input on that stream until all pending data has been processed.
The application should call the method until none of the streams returns the (see ) flag.
This method might fail if it is called before the client sets the input and output types on the DMO.
Signals a discontinuity on the specified input stream.
Zero-based index of an input stream on the DMO.
A discontinuity represents a break in the input. A discontinuity might occur because no more data is expected, the format is changing, or there is a gap in the data.
After a discontinuity, the DMO does not accept further input on that stream until all pending data has been processed.
The application should call the method until none of the streams returns the (see ) flag.
This method might fail if it is called before the client sets the input and output types on the DMO.
Allocates any resources needed by the DMO. Calling this method is always
optional.
HRESULT
For more information, see .
Allocates any resources needed by the DMO. Calling this method is always
optional.
For more information, see .
Frees resources allocated by the DMO. Calling this method is always optional.
HRESULT
For more information, see .
Frees resources allocated by the DMO. Calling this method is always optional.
For more information, see .
Queries whether an input stream can accept more input data.
Zero-based index of an input stream on the DMO.
The queried input status.
For more information, see .
Queries whether an input stream can accept more input data.
Zero-based index of an input stream on the DMO.
A variable that receives either or .
HRESULT
For more information, see .
Queries whether an input stream can accept more input data.
Zero-based index of an input stream on the DMO.
If the return value is True, the input stream can accept more input data. Otherwise false.
Delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
The to process.
Delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
The to process.
Bitwise combination of zero or more flags from the enumeration.
Delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
The to process.
Bitwise combination of zero or more flags from the enumeration.
Time stamp that specifies the start time of the data in the buffer. If the buffer has a valid
time stamp, set the Time flag in the flags parameter.
Reference time specifying the duration of the data in the buffer. If the buffer has a valid
time stamp, set the TimeLength flag in the flags parameter.
Delivers a buffer to the specified input stream.
Zero-based index of an input stream on the DMO.
The to process.
Bitwise combination of zero or more flags from the enumeration.
Time stamp that specifies the start time of the data in the buffer. If the buffer has a valid
time stamp, set the Time flag in the flags parameter.
Reference time specifying the duration of the data in the buffer. If the buffer has a valid
time stamp, set the TimeLength flag in the flags parameter.
HRESULT
Generates output from the current input data.
Bitwise combination of zero or more flags from the enumeration.
An array of output buffers to process.
Generates output from the current input data.
Bitwise combination of zero or more flags from the enumeration.
An array of output buffers to process.
Number of output buffers.
Generates output from the current input data.
Bitwise combination of zero or more flags from the enumeration.
An array of output buffers to process.
Number of output buffers.
Receives a reserved value (zero). The application should ignore this value.
HRESULT
Acquires or releases a lock on the DMO. Call this method to keep the DMO serialized when performing multiple
operations.
Value that specifies whether to acquire or release the lock. If the value is non-zero, a lock is
acquired. If the value is zero, the lock is released.
HRESULT
Acquires or releases a lock on the DMO. Call this method to keep the DMO serialized when performing multiple
operations.
Value that specifies whether to acquire or release the lock. If the value is non-zero, a lock is
acquired. If the value is zero, the lock is released.
Acquires or releases a lock on the DMO. Call this method to keep the DMO serialized when performing multiple
operations.
A disposable object which can be used to unlock the by calling its method.
This example shows how to use the method:
partial class TestClass
{
    public void DoStuff(MediaObject mediaObject)
    {
        // "lock" is a reserved word in C#, so use a different variable name.
        using (var mediaObjectLock = mediaObject.Lock())
        {
            // do some stuff
        }
        // the mediaObject gets unlocked automatically by the using statement
        // after "doing your stuff"
    }
}
Used to unlock a after locking it by calling the method.
Unlocks the locked .
The structure describes the format of the data used by a stream in a Microsoft DirectX Media Object (DMO).
For more information, .
Creates a MediaType based on a given WaveFormat. Don't forget to call Free() on the returned MediaType.
WaveFormat to create a MediaType from.
Dmo MediaType
A GUID identifying the stream's major media type. This must be one of the DMO Media
Types(see ).
Subtype GUID of the stream.
If TRUE, samples are of a fixed size. This field is informational only. For audio, it is
generally set to TRUE. For video, it is usually TRUE for uncompressed video and FALSE
for compressed video.
If TRUE, samples are compressed using temporal (interframe) compression. A value of TRUE
indicates that not all frames are key frames. This field is informational only.
Size of the sample, in bytes. For compressed data, the value can be zero.
GUID specifying the format type. The pbFormat member points to the corresponding format
structure. (see )
Size of the format block of the media type.
Pointer to the format structure. The structure type is specified by the formattype
member. The format structure must be present, unless formattype is GUID_NULL or
FORMAT_None.
Frees the allocated members of a media type structure by calling the MoFreeMediaType function.
Sets properties on the audio resampler DSP.
Initializes a new instance of the class.
The native pointer of the COM object.
Specifies the quality of the output.
Specifies the quality of the output. The valid range is 1 to 60,
inclusive.
Specifies the channel matrix.
An array of floating-point values that represents a channel conversion matrix.
Use the class to build the channel conversion matrix and its
method to convert the channel conversion matrix into a
compatible array which can be passed as the value for the parameter.
For more information,
.
Specifies the quality of the output.
Specifies the quality of the output. The valid range is 1 to 60,
inclusive.
HRESULT
Specifies the channel matrix.
An array of floating-point values that represents a channel conversion matrix.
HRESULT
Use the class to build the channel conversion matrix and its
method to convert the channel conversion matrix into a
compatible array which can be passed as the value for the parameter.
For more information,
.
DirectX Media Object COM Exception
Initializes a new instance of the class.
Errorcode.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or
destination.
Throws an if the is not .
Errorcode.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
Specifies the quality of the output.
Specifies the quality of the output. The valid range is 1 to 60,
inclusive.
HRESULT
Used to apply a bandpass-filter to a signal.
Initializes a new instance of the class.
The sample rate.
The filter's corner frequency.
Calculates all coefficients.
Represents a biquad-filter.
The a0 value.
The a1 value.
The a2 value.
The b1 value.
The b2 value.
The q value.
The gain value in dB.
The z1 value.
The z2 value.
Gets or sets the frequency.
value;The sample rate has to be bigger than 2 * frequency.
Gets the sample rate.
The q value.
Gets or sets the gain value in dB.
Initializes a new instance of the class.
The sample rate.
The frequency.
sampleRate
or
frequency
or
q
Initializes a new instance of the class.
The sample rate.
The frequency.
The q.
sampleRate
or
frequency
or
q
Processes a single sample and returns the result.
The input sample to process.
The result of the processed sample.
Processes multiple samples.
The input samples to process.
The result of the calculation gets stored within the array.
Calculates all coefficients.
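The coefficient names above (a0, a1, a2, b1, b2) together with the two state values z1 and z2 suggest the common transposed direct form II biquad structure. A language-agnostic sketch of the per-sample processing, under that assumption (this is not CSCore's actual source):

```python
class BiQuad:
    def __init__(self, a0, a1, a2, b1, b2):
        # Feed-forward coefficients (a0..a2), feed-back coefficients (b1, b2);
        # z1 and z2 hold the filter state between samples.
        self.a0, self.a1, self.a2 = a0, a1, a2
        self.b1, self.b2 = b1, b2
        self.z1 = self.z2 = 0.0

    def process(self, sample):
        # Transposed direct form II: two delays, five multiplies per sample.
        out = sample * self.a0 + self.z1
        self.z1 = sample * self.a1 + self.z2 - self.b1 * out
        self.z2 = sample * self.a2 - self.b2 * out
        return out
```

The derived filter classes (lowpass, highpass, peak, ...) differ only in how "Calculates all coefficients" fills a0..b2 from sample rate, frequency, Q and gain.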
Represents an element inside of a .
Gets the assigned input channel of the .
Gets the assigned output channel of the .
Gets or sets the coefficient in the range from 0.0f to 1.0f.
Initializes a new instance of the class.
The input channel.
The output channel.
Provides a Fast Fourier Transform implementation, including a few utility methods which are commonly used in combination with FFTs (e.g. the Hamming window function).
Obsolete. Use the property instead.
The intensity of the complex value .
sqrt(r² + i²)
Implementation of the Hamming Window using double-precision floating-point numbers.
Current index of the input signal.
Window width.
Hamming window multiplier.
Hamming window implementation using single-precision floating-point numbers.
Current index of the input signal.
Window width.
Hamming Window multiplier.
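Both Hamming window overloads evaluate the same formula, 0.54 - 0.46 * cos(2 * pi * n / (N - 1)), where n is the current index and N the window width. A quick sketch (the function name is illustrative):

```python
import math

def hamming_window(n, width):
    """Hamming window multiplier for index n of a window of the given width."""
    return 0.54 - 0.46 * math.cos((2 * math.pi * n) / (width - 1))
```

The multiplier is 0.08 at both edges of the window and rises to 1.0 at its center.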
Computes a Fast Fourier Transform.
Array of complex numbers. This array provides the input data and is used to store the result of the FFT.
The exponent n.
The to use. Use as the default value.
Computes a Fast Fourier Transform.
Array of complex numbers. This array provides the input data and is used to store the result of the FFT.
The exponent n.
The to use. Use as the default value.
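An FFT of the kind described here (an array of complex numbers transformed in place, with a forward or backward mode and a power-of-two length 2^n) can be sketched with the classic radix-2 Cooley-Tukey recursion. This is an illustrative reference implementation, not CSCore's code:

```python
import cmath

def fft(samples, inverse=False):
    """Radix-2 Cooley-Tukey FFT. len(samples) must be a power of two."""
    n = len(samples)
    if n == 1:
        return list(samples)
    sign = 1 if inverse else -1  # the backward mode flips the twiddle sign
    even = fft(samples[0::2], inverse)
    odd = fft(samples[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

A constant (DC) input of length 4 transforms to [4, 0, 0, 0]; the intensity sqrt(r² + i²) mentioned above is then simply abs() of each output bin.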
Provides FFT calculations.
Usage: Use the -method to input samples to the . Use the method to
calculate the Fast Fourier Transform.
Gets the specified fft size.
Gets a value which indicates whether new data is available.
Initializes a new instance of the class.
Number of channels of the input data.
The number of bands to use.
is less than zero.
Adds a and a sample to the . The and the sample will be merged together.
The sample of the left channel.
The sample of the right channel.
Adds multiple samples to the .
Float Array which contains samples.
Number of samples to add to the .
Calculates the Fast Fourier Transform and stores the result in the .
The output buffer.
Returns a value which indicates whether the Fast Fourier Transform got calculated. If there have not been added any new samples since the last transform, the FFT won't be calculated. True means that the Fast Fourier Transform got calculated.
Calculates the Fast Fourier Transform and stores the result in the .
The output buffer.
Returns a value which indicates whether the Fast Fourier Transform got calculated. If there have not been added any new samples since the last transform, the FFT won't be calculated. True means that the Fast Fourier Transform got calculated.
Fft mode.
Forward
Backward
Defines FFT data size constants that can be used for FFT calculations.
Note that only half of the specified size can be used for visualizations.
64 bands.
128 bands.
256 bands.
512 bands.
1024 bands.
2048 bands.
4096 bands.
8192 bands.
16384 bands.
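The note above reflects a basic FFT property: for real-valued input, the upper half of the bins mirrors the lower half, so only half of the chosen size yields distinct bands. A tiny sketch:

```python
def usable_bands(fft_size):
    """Number of distinct spectral bands available for visualization:
    half of the FFT size, since the upper bins mirror the lower ones
    for real-valued input."""
    return fft_size // 2
```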
Used to apply a highpass-filter to a signal.
Initializes a new instance of the class.
The sample rate.
The filter's corner frequency.
Calculates all coefficients.
Used to apply a highshelf-filter to a signal.
Initializes a new instance of the class.
The sample rate.
The filter's corner frequency.
Gain value in dB.
Calculates all coefficients.
Used to apply a lowpass-filter to a signal.
Initializes a new instance of the class.
The sample rate.
The filter's corner frequency.
Calculates all coefficients.
Used to apply a lowshelf-filter to a signal.
Initializes a new instance of the class.
The sample rate.
The filter's corner frequency.
Gain value in dB.
Calculates all coefficients.
Used to apply a notch-filter to a signal.
Initializes a new instance of the class.
The sample rate.
The filter's corner frequency.
Calculates all coefficients.
Used to apply a peak-filter to a signal.
Gets or sets the bandwidth.
Initializes a new instance of the class.
The sample rate of the audio data to process.
The center frequency to adjust.
The bandwidth.
The gain value in dB.
Calculates all coefficients.
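The filters above all derive biquad coefficients from the sample rate, frequency, bandwidth, and gain. A sketch of the peak-filter case following Robert Bristow-Johnson's widely used Audio EQ Cookbook formulas; it is an assumption that the library's implementation matches these exactly.

```python
import math

def peak_filter_coefficients(sample_rate, frequency, band_width, gain_db):
    """Peaking-EQ biquad coefficients per the RBJ Audio EQ Cookbook
    (assumed formulas), returned normalized by a0 as
    [b0, b1, b2, a1, a2]."""
    amp = 10 ** (gain_db / 40)
    omega = 2 * math.pi * frequency / sample_rate
    alpha = math.sin(omega) * math.sinh(
        math.log(2) / 2 * band_width * omega / math.sin(omega))
    b0 = 1 + alpha * amp
    b1 = -2 * math.cos(omega)
    b2 = 1 - alpha * amp
    a0 = 1 + alpha / amp
    a1 = -2 * math.cos(omega)
    a2 = 1 - alpha / amp
    return [b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0]
```

With a gain of 0 dB the numerator and denominator coincide, so the filter passes the signal through unchanged, which is a useful self-check.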
Represents a channel conversion matrix. For more information, see
.
Defines a stereo to 5.1 surround (with rear) channel conversion matrix.
Defines a 5.1 surround (with rear) to stereo channel conversion matrix.
Defines a stereo to 5.1 surround (with side) channel conversion matrix.
Defines a 5.1 surround (with side) to stereo channel conversion matrix.
Defines a stereo to 7.1 surround channel conversion matrix.
Defines a 7.1 surround to stereo channel conversion matrix.
Defines a mono to 5.1 surround (with rear) channel conversion matrix.
Defines a 5.1 surround (with rear) to mono channel conversion matrix.
Defines a mono to 5.1 surround (with side) channel conversion matrix.
Defines a 5.1 surround (with side) to mono channel conversion matrix.
Defines a mono to 7.1 surround channel conversion matrix.
Defines a 7.1 surround channel to mono conversion matrix.
Defines a stereo to mono conversion matrix.
Defines a mono to stereo conversion matrix.
Defines a 5.1 surround (with rear) to 7.1 surround channel conversion matrix.
Defines a 7.1 surround to 5.1 surround (with rear) channel conversion matrix.
Defines a 5.1 surround (with side) to 7.1 surround channel conversion matrix.
Defines a 7.1 surround to 5.1 surround (with side) channel conversion matrix.
Gets a to convert between the two specified s.
The of the input stream.
The desired of the output stream.
A to convert between the two specified s.
equals
No accurate was found.
Gets a to convert between the two specified formats.
The input waveformat.
The output waveformat.
A to convert between the two specified formats.
If no channel mask could be found, the return value is null.
The channel mask of the input format equals the channel mask of the output format.
No accurate was found.
Initializes a new instance of the class.
The of the input signal.
The of the output signal.
Invalid /.
Gets the of the input signal.
Gets the of the output signal.
Gets the number of rows of the channel conversion matrix.
Gets the number of columns of the channel conversion matrix.
Gets the input signal's number of channels.
The property always returns the same value as the
property.
Gets the output signal's number of channels.
The property always returns the same value as the
property.
Gets or sets a of the .
The zero-based index of the input channel.
The zero-based index of the output channel.
The of the at the specified position.
Sets the channel conversion matrix.
The x-axis of the specifies the output channels. The y-axis
of the specifies the input channels.
Channel conversion matrix to use.
Returns a one dimensional array which contains the channel conversion matrix coefficients.
A one dimensional array which contains the channel conversion matrix coefficients
This method is primarily used in combination with the
method.
Flips the axis of the matrix and returns the new matrix with the flipped axis.
A matrix with flipped axis.
This could be typically used in the following scenario: There is a
5.1 to stereo matrix. By using the method the 5.1 to stereo matrix can be
converted into a stereo to 5.1 matrix.
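The matrix convention described above (input channels on one axis, output channels on the other) and the Flip operation can be sketched as follows; the names are illustrative, not the library's API.

```python
def apply_channel_matrix(frame, matrix):
    """Maps one frame of input samples to output samples. Rows index
    input channels, columns index output channels, matching the
    convention described above."""
    outputs = len(matrix[0])
    return [sum(frame[i] * matrix[i][o] for i in range(len(frame)))
            for o in range(outputs)]

def flip(matrix):
    """Swaps the axes of the matrix, e.g. turning a stereo-to-mono
    matrix into a mono-to-stereo matrix."""
    return [list(col) for col in zip(*matrix)]
```

For example, a stereo-to-mono matrix `[[0.5], [0.5]]` averages both channels; flipping it yields a mono-to-stereo matrix that copies the mono signal (scaled) to both output channels.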
Resampler based on the which can change the number of channels based on a
. Supported since Windows XP.
Initializes a new instance of the class.
Underlying source which has to get resampled.
which defines how to map each channel.
Initializes a new instance of the class.
Underlying source which has to get resampled.
which defines how to map each channel.
Waveformat which specifies the new format. Note that not all formats are supported.
source
or
channelMatrix
or
outputFormat
The number of channels of the source has to be equal to the number of input channels specified by the channelMatrix.
Initializes a new instance of the class.
Underlying source which has to get resampled.
which defines how to map each channel.
The destination sample rate.
Gets the channel matrix.
If any changes to the channel matrix are made, use the method to commit them.
Commits all channel matrix changes.
Resampler based on the DmoResampler. Supported since Windows XP.
Initializes a new instance of the class.
which has to get resampled.
The new output sample rate specified in Hz.
Initializes a new instance of the class.
which has to get resampled.
Waveformat which specifies the new format. Note that not all formats are supported.
Initializes a new instance of the class.
which has to get resampled.
Waveformat which specifies the new format. Note that not all formats are supported.
True to ignore the position of the for more accurate seeking. The default value is True.
For more details see remarks.
Since the resampler transforms the audio data of the to a different sample rate,
the position might differ from the actual amount of read data. In order to avoid that behavior, set
to True. This will cause the property to return the number of bytes actually read.
Note that seeking the won't have any effect on the of the .
Gets the new output format.
Gets or sets the position of the source.
Gets the length of the source.
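The position and length remarks above come down to rescaling a byte offset between the data rates of the source and the resampled output. A simplified sketch; real code would also align the result to whole sample frames (block alignment), which is omitted here.

```python
def convert_position(position, source_bytes_per_second, output_bytes_per_second):
    """Rescales a byte position from the source format to the resampled
    output format. Illustrates why Position can differ from the number
    of bytes actually read after resampling."""
    return position * output_bytes_per_second // source_bytes_per_second
```

For example, one second of 44.1 kHz 16-bit stereo audio (176,400 bytes) maps to 192,000 bytes after resampling to 48 kHz.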
Specifies the quality of the output. The valid range is from 1 to 60.
Specifies the quality of the resampled output. The valid range is 1 &lt;= value &lt;= 60.
Reads a resampled sequence of bytes from the and advances the position within the
stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Disposes the allocated resources of the resampler but does not dispose the underlying source.
Disposes the .
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Provides a basic fluent API for creating a source chain.
Appends a source to an already existing source.
Input
Output
Already existing source.
Function which appends the new source to the already existing source.
The return value of the delegate.
Appends a source to an already existing source.
Input
Output
Already existing source.
Function which appends the new source to the already existing source.
Receives the return value.
The return value of the delegate.
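The AppendSource pattern above is a small fluent-composition helper: it applies a user-supplied function to the existing source and returns the result, so transformations can be chained. A language-neutral sketch with illustrative names:

```python
def append_source(source, func):
    """Fluent helper: applies `func` to the already existing source and
    returns the function's result, allowing further chained calls.
    Mirrors the AppendSource pattern described above."""
    return func(source)
```

Usage mirrors a typical chain, where each step wraps the previous source:

```python
chain = append_source(
    append_source("source", lambda s: s + "->resample"),
    lambda s: s + "->stereo")
```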
Changes the SampleRate of an already existing wave source.
Already existing wave source whose sample rate has to be changed.
Destination sample rate.
Wave source with the specified .
Changes the SampleRate of an already existing sample source. Note: This extension has to convert the
to a and back to a .
Already existing sample source whose sample rate has to be changed.
Destination sample rate.
Sample source with the specified .
Converts the specified wave source with n channels to a wave source with two channels.
Note: If the has only one channel, the
extension has to convert the to a and back to a
.
Already existing wave source.
instance with two channels.
Converts the specified sample source with n channels to a wave source with two channels.
Note: If the has more than two channels, the
extension has to convert the to a
and back to a .
Already existing sample source.
instance with two channels.
Converts the specified wave source with n channels to a wave source with one channel.
Note: If the has two channels, the extension
has to convert the to a and back to a
.
Already existing wave source.
instance with one channel.
Converts the specified sample source with n channels to a wave source with one channel.
Note: If the has only one channel, the
extension has to convert the to a and back to a
.
Already existing sample source.
instance with one channel.
Appends a new instance of the class to the audio chain.
The underlying which should be looped.
The new instance.
Converts a SampleSource to either a Pcm (8, 16, or 24 bit) or IeeeFloat (32 bit) WaveSource.
Sample source to convert to a wave source.
Bits per sample.
Wave source
Converts a to IeeeFloat (32bit) .
The to convert to a .
The wrapped around the specified .
Converts a to a .
The to convert to a .
The wrapped around the specified .
Returns a thread-safe (synchronized) wrapper around the specified object.
The object to synchronize.
Type of the argument.
The type of the data read by the Read method of the object.
A thread-safe wrapper around the specified object.
The is null.
Defines a generic base for all readable audio streams.
The type of the provided audio data.
Reads a sequence of elements from the and advances the position within the
stream by the
number of elements read.
An array of elements. When this method returns, the contains the specified
array of elements with the values between and ( +
- 1) replaced by the elements read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of elements to read from the current source.
The total number of elements read into the buffer.
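The Read contract described above (fill a caller-allocated buffer, return the element count, signal end-of-stream with 0) implies the standard drain loop. A sketch with an illustrative in-memory source; the class and method names are assumptions, not the library's API.

```python
class ListSource:
    """Minimal readable source over a list, following the Read contract
    described above: copies up to `count` elements into `buffer` at
    `offset` and returns the number of elements copied (0 at the end)."""
    def __init__(self, data):
        self.data = data
        self.pos = 0

    def read(self, buffer, offset, count):
        n = min(count, len(self.data) - self.pos)
        buffer[offset:offset + n] = self.data[self.pos:self.pos + n]
        self.pos += n
        return n

def read_all(source, buffer_size=4096):
    """Drains a source by calling read until it returns 0."""
    chunks = []
    buffer = [0] * buffer_size
    while True:
        read = source.read(buffer, 0, buffer_size)
        if read == 0:
            break
        chunks.extend(buffer[:read])
    return chunks
```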
Provides the method.
Used to write down raw byte data.
Byte array which contains the data to write down.
Zero-based offset in the .
Number of bytes to write.
Defines the CLSID values for several common Media Foundation audio decoders.
CLSID_CMSDDPlusDecMFT
CLSID_CMSMPEGAudDecMFT
CMSAACDecMFT
CWMADecMediaObject
CALawDecMediaObject
ACM Wrapper
CWMAudioSpdTxDMO
CWMSPDecMediaObject
Wrapper
IMA ADPCM ACM Wrapper
CMP3DecMediaObject
ADPCM ACM Wrapper
see http://msdn.microsoft.com/en-us/library/windows/desktop/ms696989%28v=vs.85%29.aspx
The is a generic decoder for all installed Media Foundation codecs.
Initializes a new instance of the class.
Uri which points to an audio source which can be decoded.
Initializes a new instance of the class.
Stream which provides the audio data to decode.
Initializes a new instance of the class.
Stream which provides the audio data to decode.
Reads a sequence of bytes from the and advances the position within the
stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Disposes the .
Gets the format of the decoded audio data provided by the method.
Gets or sets the position of the output stream, in bytes.
Gets the total length of the decoded audio, in bytes.
Gets a value which indicates whether the seeking is supported. True means that seeking is supported. False means
that seeking is not supported.
Disposes the and its internal resources.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Finalizes an instance of the class.
A generic encoder for all installed Media Foundation encoders.
Creates a new instance of the class.
Mediatype of the source to encode.
Stream which will be used to store the encoded data.
The format of the encoded data.
The container type. For a list of all available container types, see .
Gets the total duration of all encoded data.
Gets the underlying stream which operates as encoding target.
Gets the media type of the encoded data.
Gets the which is used to write to the .
Gets the destination stream which is used to store the encoded audio data.
Releases all resources used by the encoder and finalizes encoding.
Encodes raw audio data.
A byte-array which contains raw data to encode.
The zero-based byte offset in buffer at which to begin encoding bytes to the underlying stream.
The number of bytes to encode.
Sets and initializes the target stream for the encoding process.
Stream which should be used as the target stream.
Mediatype of the raw input data to encode.
Mediatype of the encoded data.
Container type which should be used.
Disposes the .
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Finalizes an instance of the class.
Encodes the whole with the specified . The encoding process
stops as soon as the method of the specified
returns 0.
The encoder which should be used to encode the audio data.
The which provides the raw audio data to encode.
Returns a new instance of the class, configured as mp3 encoder.
The input format, of the data to encode.
The bitrate to use. The final bitrate can differ from the specified value.
The file to write to.
For more information about supported input and output formats, see .
A new instance of the class, configured as mp3 encoder.
Returns a new instance of the class, configured as mp3 encoder.
The input format, of the data to encode.
The bitrate to use. The final bitrate can differ from the specified value.
The stream to write to.
For more information about supported input and output formats, see .
A new instance of the class, configured as mp3 encoder.
Returns a new instance of the class, configured as wma encoder.
The input format, of the data to encode.
The bitrate to use. The final bitrate can differ from the specified value.
The file to write to.
For more information about supported input and output formats, see .
A new instance of the class, configured as wma encoder.
Returns a new instance of the class, configured as wma encoder.
The input format, of the data to encode.
The bitrate to use. The final bitrate can differ from the specified value.
The stream to write to.
For more information about supported input and output formats, see .
A new instance of the class, configured as wma encoder.
Returns a new instance of the class, configured as aac encoder.
The input format, of the data to encode.
The bitrate to use. The final bitrate can differ from the specified value.
The file to write to.
For more information about supported input and output formats, see .
A new instance of the class, configured as aac encoder.
Returns a new instance of the class, configured as aac encoder.
The input format, of the data to encode.
The bitrate to use. The final bitrate can differ from the specified value.
The stream to write to.
For more information about supported input and output formats, see .
A new instance of the class, configured as aac encoder.
Tries to find the which best fits the requested format specified by the parameters:
, , and
.
The audio subtype. For more information, see the class.
The requested sample rate.
The requested number of channels.
The requested bit rate.
A which best fits the requested format. If no media type could be found, the
method returns null.
Returns all s available for encoding the specified .
The audio subtype to search available s for.
Available s for the specified . If the method returns an empty array, no encoder for the specified was found.
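The "fits best" selection above can be thought of as a nearest-match search over the available formats. A sketch of that idea only; the actual Media Foundation matching logic is more involved, and the weighting here is an illustrative assumption.

```python
def find_best_format(available, sample_rate, channels, bit_rate):
    """Returns the available format closest to the requested sample rate,
    channel count, and bit rate, or None if the list is empty. Channel
    mismatches are weighted heavily (an assumed heuristic) so a format
    with the right channel count is preferred."""
    def distance(fmt):
        return (abs(fmt["sample_rate"] - sample_rate)
                + abs(fmt["channels"] - channels) * 100000
                + abs(fmt["bit_rate"] - bit_rate))
    return min(available, key=distance) if available else None
```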
Enables the application to defer the creation of an object. This interface is exposed by activation objects.
For more information, see .
Initializes a new instance of the class.
The underlying native pointer.
Creates the object associated with this activation object.
Interface identifier (IID) of the requested interface.
Receives a pointer to the requested interface. The caller must release the interface.
HRESULT
Creates the object associated with this activation object.
The type of the com object to create.
Interface identifier (IID) of the requested interface.
An instance of the requested interface.
Creates the object associated with this activation object.
Interface identifier (IID) of the requested interface.
A pointer to the requested interface. The caller must release the interface.
Shuts down the created object.
HRESULT
Shuts down the created object.
Detaches the created object from the activation object.
HRESULT
Detaches the created object from the activation object.
Gets the name of the MFT.
Gets the available input types.
Gets the available output types.
Contains media type information for registering a Media Foundation transform (MFT).
The major media type.
The media subtype.
Represents a byte stream from some data source, which might be a local file, a network file, or some other source. The interface supports the typical stream operations, such as reading, writing, and seeking.
Gets the characteristics of the .
Gets or sets the length of the stream in bytes.
Initializes a new instance of the class.
The native pointer of the COM object.
Initializes a new instance of the class which acts as a wrapper for the specified to use it in a Media Foundation context.
The stream to wrap for media foundation usage.
A value indicating whether the should be closed when the
method is being called.
Retrieves the characteristics of the byte stream.
Receives a bitwise OR of zero or more flags.
HRESULT
Use the property for easier usage with automated error handling.
Retrieves the length of the stream.
Receives the length of the stream, in bytes. If the length is unknown, this value is -1.
HRESULT
Use the property for easier usage with automated error handling.
Sets the length of the stream.
The length of the stream in bytes.
HRESULT
Use the property for easier usage with automated error handling.
Retrieves the current read or write position in the stream.
The current position, in bytes.
HRESULT
Sets the current read or write position.
New position in the stream, as a byte offset from the start of the stream.
HRESULT
Gets or sets the current read/write position in bytes.
Gets a value indicating whether the has reached the end of the stream.
Queries whether the current position has reached the end of the stream.
Receives the value if the end of the stream has been reached, or otherwise.
HRESULT
Reads data from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Receives the number of bytes that are copied into the buffer.
HRESULT
Reads data from the stream.
The buffer that receives the data.
The number of bytes to read.
HRESULT
buffer is null.
count is greater than the length of the buffer.
Begins an asynchronous read operation from the stream.
Pointer to a buffer that receives the data. The caller must allocate the buffer.
Size of the buffer in bytes.
Pointer to the IMFAsyncCallback interface of a callback object. The caller must implement this interface.
Pointer to the IUnknown interface of a state object, defined by the caller. Can be Zero.
HRESULT
Completes an asynchronous read operation.
Pointer to the IMFAsyncResult interface. Pass in the same pointer that your callback object received in the IMFAsyncCallback::Invoke method.
Receives the number of bytes that were read.
HRESULT
Writes data to the stream.
Pointer to a buffer that contains the data to write.
Size of the buffer in bytes.
Receives the number of bytes that are written.
HRESULT
Writes data to the stream.
Buffer that contains the data to write.
The number of bytes to write.
The number of bytes that were written.
buffer is null.
count is greater than the length of the buffer.
Begins an asynchronous write operation to the stream.
Pointer to a buffer containing the data to write.
Size of the buffer in bytes.
Pointer to the IMFAsyncCallback interface of a callback object. The caller must implement this interface.
Pointer to the IUnknown interface of a state object, defined by the caller. Can be Zero.
HRESULT
Completes an asynchronous write operation.
Pointer to the IMFAsyncResult interface. Pass in the same pointer that your callback object received in the IMFAsyncCallback::Invoke method.
Receives the number of bytes that were written.
HRESULT
Moves the current position in the stream by a specified offset.
Specifies the origin of the seek as a member of the enumeration. The offset is calculated relative to this position.
Specifies the new position, as a byte offset from the seek origin.
Specifies whether all pending I/O requests are canceled after the seek request completes successfully.
Receives the new position after the seek.
The new position after the seek.
Moves the current position in the stream by a specified offset.
Specifies the origin of the seek as a member of the enumeration. The offset is calculated relative to this position.
Specifies the new position, as a byte offset from the seek origin.
Specifies whether all pending I/O requests are canceled after the seek request completes successfully.
The new position after the seek.
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
HRESULT
Clears any internal buffers used by the stream. If you are writing to the stream, the buffered data is written to the underlying file or device.
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
HRESULT
Closes the stream and releases any resources associated with the stream, such as sockets or file handles. This method also cancels any pending asynchronous I/O requests.
Releases the COM object.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Defines the characteristics of a .
None
The byte stream can be read.
The byte stream can be written to.
The byte stream can be seeked.
The byte stream is from a remote source, such as a network.
The byte stream represents a file directory.
Seeking within this stream might be slow. For example, the byte stream might download from a network.
The byte stream is currently downloading data to a local cache. Read operations on the byte stream might take longer until the data is completely downloaded.This flag is cleared after all of the data has been downloaded.
If the flag is also set, it means the byte stream must download the entire file sequentially. Otherwise, the byte stream can respond to seek requests by restarting the download from a new point in the stream.
Another thread or process can open this byte stream for writing. If this flag is present, the length of the byte stream could change while it is being read.
Requires Windows 7 or later.
The byte stream is not currently using the network to receive the content. Networking hardware may enter a power saving state when this bit is set.
Requires Windows 8 or later.
Specifies the origin for a seek request.
The seek position is specified relative to the start of the stream.
The seek position is specified relative to the current read/write position in the stream.
Provides the functionality to enumerate Media Foundation transforms.
Enumerates Media Foundation transforms that match the specified search criteria.
A that specifies the category of MFTs to enumerate.
For a list of MFT categories, see .
The bitwise OR of zero or more flags from the enumeration.
Specifies an input media type to match. This parameter can be null. If null, all input types are matched.
Specifies an output media type to match. This parameter can be null. If null, all output types are matched.
A that can be used to iterate through the MFTs.
Enumerates Media Foundation transforms (MFTs) in the registry.
A that specifies the category of MFTs to enumerate.
For a list of MFT categories, see .
Specifies an input media type to match. This parameter can be null. If null, all input types are matched.
Specifies an output media type to match. This parameter can be null. If null, all output types are matched.
An array of CLSIDs. For more information, see .
On Windows 7/Windows Server 2008 R2, use the method instead.
Defines the characteristics of a media source.
For more information, see .
This flag indicates a data source that runs constantly, such as a live presentation. If the source is stopped and then restarted, there will be a gap in the content.
The media source supports seeking.
The source can pause.
The media source downloads content. It might take a long time to seek to parts of the content that have not been downloaded.
The media source delivers a playlist, which might contain more than one entry.
Requires Windows 7 or later.
The media source can skip forward in the playlist. Applies only if the flag is present.
Requires Windows 7 or later.
The media source can skip backward in the playlist. Applies only if the flag is present.
Requires Windows 7 or later.
The media source is not currently using the network to receive the content. Networking hardware may enter a power saving state when this bit is set.
Requires Windows 8 or later.
Defines common audio subtypes.
Advanced Audio Coding (AAC).
Not used
Dolby AC-3 audio over Sony/Philips Digital Interface (S/PDIF).
Encrypted audio data used with secure audio path.
Digital Theater Systems (DTS) audio.
Uncompressed IEEE floating-point audio.
MPEG Audio Layer-3 (MP3).
MPEG-1 audio payload.
Windows Media Audio 9 Voice codec.
Uncompressed PCM audio.
Windows Media Audio 9 Professional codec over S/PDIF.
Windows Media Audio 9 Lossless codec or Windows Media Audio 9.1 codec.
Windows Media Audio 8 codec, Windows Media Audio 9 codec, or Windows Media Audio 9.1 codec.
Windows Media Audio 9 Professional codec or Windows Media Audio 9.1 Professional codec.
Dolby Digital (AC-3).
MPEG-4 and AAC Audio Types
Dolby Audio Types
Dolby Audio Types
μ-law coding
Adaptive delta pulse code modulation (ADPCM)
Dolby Digital Plus formatted for HDMI output.
MSAudio1 - unknown meaning
Reference: wmcodecdsp.h
IMA ADPCM ACM Wrapper
WMSP2 - unknown meaning
Reference: wmsdkidl.h
Currently no flags are defined.
None
Implemented by the Microsoft Media Foundation sink writer object.
Stream index used to select all streams.
MF_SINK_WRITER_MEDIASINK constant.
Initializes a new instance of the class.
The native pointer of the COM object.
Initializes a new instance of the class with an underlying .
The underlying to use.
Attributes to configure the . For more information, see . Use null/nothing as the default value.
Adds a stream to the sink writer.
The target mediatype which specifies the format of the samples that will be written to the file. It does not need to match the input format. To set the input format, call .
Receives the zero-based index of the new stream.
HRESULT
Adds a stream to the sink writer.
The target mediatype which specifies the format of the samples that will be written to the file. It does not need to match the input format. To set the input format, call .
The zero-based index of the new stream.
Sets the input format for a stream on the sink writer.
The zero-based index of the stream. The index is returned by the method.
The input media type that specifies the input format.
An attribute store. Use the attribute store to configure the encoder. This parameter can be NULL.
HRESULT
Sets the input format for a stream on the sink writer.
The zero-based index of the stream. The index is returned by the method.
The input media type that specifies the input format.
An attribute store. Use the attribute store to configure the encoder. This parameter can be NULL.
Initializes the sink writer for writing.
HRESULT
Initializes the sink writer for writing.
Delivers a sample to the sink writer.
The zero-based index of the stream for this sample.
The sample to write.
HRESULT
You must call before calling this method.
Delivers a sample to the sink writer.
The zero-based index of the stream for this sample.
The sample to write.
You must call before calling this method.
Indicates a gap in an input stream.
The zero-based index of the stream.
The position in the stream where the gap in the data occurs. The value is given in 100-nanosecond units, relative to the start of the stream.
HRESULT
Indicates a gap in an input stream.
The zero-based index of the stream.
The position in the stream where the gap in the data occurs. The value is given in 100-nanosecond units, relative to the start of the stream.
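The 100-nanosecond unit used above for stream positions and gaps is the standard Media Foundation time base (10,000,000 ticks per second). Converting between seconds and that unit is a frequent small task when driving the sink writer:

```python
def to_mf_time(seconds):
    """Converts seconds to Media Foundation time units
    (100-nanosecond ticks)."""
    return int(seconds * 10_000_000)

def from_mf_time(ticks):
    """Converts Media Foundation time units back to seconds."""
    return ticks / 10_000_000
```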
Places a marker in the specified stream.
The zero-based index of the stream.
Pointer to an application-defined value. The value of this parameter is returned to the caller in the pvContext parameter of the caller's IMFSinkWriterCallback::OnMarker callback method. The application is responsible for any memory allocation associated with this data. This parameter can be NULL.
HRESULT
Places a marker in the specified stream.
The zero-based index of the stream.
Pointer to an application-defined value. The value of this parameter is returned to the caller in the pvContext parameter of the caller's IMFSinkWriterCallback::OnMarker callback method. The application is responsible for any memory allocation associated with this data. This parameter can be NULL.
Notifies the media sink that a stream has reached the end of a segment.
The zero-based index of a stream, or to signal that all streams have reached the end of a segment.
HRESULT
Notifies the media sink that a stream has reached the end of a segment.
The zero-based index of a stream, or to signal that all streams have reached the end of a segment.
Flushes one or more streams.
The zero-based index of the stream to flush, or to flush all of the streams.
HRESULT
Flushes one or more streams.
The zero-based index of the stream to flush, or to flush all of the streams.
Completes all writing operations on the sink writer.
HRESULT
Completes all writing operations on the sink writer.
Renamed from 'Finalize' to 'FinalizeWriting' to suppress warning CS0465.
Queries the underlying media sink or encoder for an interface.
The zero-based index of a stream to query, or to query the media sink itself.
A service identifier GUID, or . If the value is , the method calls QueryInterface to get the requested interface. Otherwise, the method calls IMFGetService::GetService.
For a list of service identifiers, see .
The interface identifier (IID) of the interface being requested.
Receives a pointer to the requested interface. The caller must release the interface.
HRESULT
Queries the underlying media sink or encoder for an interface.
The zero-based index of a stream to query, or to query the media sink itself.
A service identifier GUID, or . If the value is , the method calls QueryInterface to get the requested interface. Otherwise, the method calls IMFGetService::GetService.
For a list of service identifiers, see .
The interface identifier (IID) of the interface being requested.
A pointer to the requested interface. The caller must release the interface.
Gets statistics about the performance of the sink writer.
The zero-based index of a stream to query, or to query the media sink itself.
Receives statistics about the performance of the sink writer.
HRESULT
Gets statistics about the performance of the sink writer.
The zero-based index of a stream to query, or to query the media sink itself.
Statistics about the performance of the sink writer.
Contains statistics about the performance of the sink writer.
The size of the structure, in bytes.
The time stamp of the most recent sample given to the sink writer. The sink writer updates this value each time the application calls .
The time stamp of the most recent sample to be encoded. The sink writer updates this value whenever it calls IMFTransform::ProcessOutput on the encoder.
The time stamp of the most recent sample given to the media sink. The sink writer updates this value whenever it calls IMFStreamSink::ProcessSample on the media sink.
The time stamp of the most recent stream tick. The sink writer updates this value whenever the application calls .
The system time of the most recent sample request from the media sink. The sink writer updates this value whenever it receives an MEStreamSinkRequestSample event from the media sink. The value is the current system time.
The number of samples received.
The number of samples encoded.
The number of samples given to the media sink.
The number of stream ticks received.
The amount of data, in bytes, currently waiting to be processed.
The total amount of data, in bytes, that has been sent to the media sink.
The number of pending sample requests.
The average rate, in media samples per 100-nanoseconds, at which the application sent samples to the sink writer.
The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the encoder.
The average rate, in media samples per 100-nanoseconds, at which the sink writer sent samples to the media sink.
Defines flags that indicate the status of the method.
None
An error occurred. If you receive this flag, do not make any further calls to
methods.
The source reader reached the end of the stream.
One or more new streams were created.
The native format has changed for one or more streams. The native format is the format delivered by the media
source before any decoders are inserted.
The current media type has changed for one or more streams. To get the current media type, call the
method.
There is a gap in the stream. This flag corresponds to an MEStreamTick event from the media source.
All transforms inserted by the application have been removed for a particular stream. This could be due to a
dynamic format change from a source or decoder that prevents custom transforms from being used because they cannot
handle the new media type.
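The flags above are combined bitwise, so after each read call the application checks the individual bits. The following sketch illustrates the pattern; the enum name SourceReaderFlags and its members are assumptions chosen for illustration, not verified CSCore identifiers.

```csharp
using System;

// Hypothetical flag handling after a source-reader read call.
// "SourceReaderFlags" and its member names are assumed for illustration.
void HandleReadStatus(SourceReaderFlags flags)
{
    if ((flags & SourceReaderFlags.Error) != 0)
        throw new InvalidOperationException("Reader error; make no further calls.");

    if ((flags & SourceReaderFlags.EndOfStream) != 0)
    {
        // The stream has ended; stop requesting samples for it.
    }

    if ((flags & SourceReaderFlags.CurrentMediaTypeChanged) != 0)
    {
        // Re-query the current media type before processing more samples.
    }
}
```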
Defines categories for Media Foundation transforms (MFTs). These categories are used to register and enumerate MFTs.
For more information, see .
Audio decoders.
Audio encoders.
Audio effects.
Video encoders.
Video decoders.
Video effects.
Video processors.
Demultiplexers.
Multiplexers.
Miscellaneous MFTs.
Defines flags for registering and enumerating Media Foundation transforms (MFTs).
None
The MFT performs synchronous data processing in software.
This flag does not apply to hardware transforms.
The MFT performs asynchronous data processing in software. See .
This flag does not apply to hardware transforms.
The MFT performs hardware-based data processing, using either the AVStream driver or a GPU-based proxy MFT. MFTs in this category always process data asynchronously.
See .
Must be unlocked by the app before use. For more information, see .
For enumeration, include MFTs that were registered in the caller's process.
The MFT is optimized for transcoding rather than playback.
For enumeration, sort and filter the results. For more information, see .
Bitwise OR of all the flags, excluding .
Contains media type information for registering a Media Foundation transform (MFT).
The major media type.
The media subtype.
Defines flags for the method.
None
Retrieve any pending samples, but do not request any more samples from the media source. To get all of the pending samples, call with this flag until the method returns a NULL media sample pointer.
Defines the GUIDs for different types of container formats.
MPEG2
ADTS
AC3
3GP
MP3
MPEG4
ASF
FMPEG4
AMR
WAVE
AVI
MediaFoundation COM exception.
Initializes a new instance of the class.
The error code.
Name of the interface which contains the COM function that returned the specified
.
Name of the COM function that returned the specified .
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or
destination.
Throws an if the is not
.
The error code.
Name of the interface which contains the COM function that returned the specified
.
Name of the COM function that returned the specified .
Indicates the degree of similarity between the two media types.
None
The major types are the same.
The subtypes are the same, or neither media type has a subtype.
The attributes in one of the media types are a subset of the attributes in the other, and the values of these
attributes match, excluding the value of the MF_MT_USER_DATA, MF_MT_FRAME_RATE_RANGE_MIN, and
MF_MT_FRAME_RATE_RANGE_MAX attributes.
The user data is identical, or neither media type contains user data. User data is specified by the MF_MT_USER_DATA
attribute.
Represents a MediaFoundation attribute.
The type of the value of the
Gets the key of the attribute.
Gets the value of the attribute.
Initializes a new instance of the class.
The key.
The value.
Specifies how to compare the attributes on two objects.
Check whether all the attributes in pThis exist in pTheirs and have the same data, where pThis is the object whose
method is being called and pTheirs is the object given in the pTheirs
parameter.
Check whether all the attributes in pTheirs exist in pThis and have the same data, where pThis is the object whose
method is being called and pTheirs is the object given in the pTheirs
parameter.
Check whether both objects have identical attributes with the same data.
Check whether the attributes that exist in both objects have the same data.
Find the object with the fewest number of attributes, and check if those attributes exist in the other object and
have the same data.
Provides a generic way to store key/value pairs on an object.
Initializes a new instance of the class.
The native pointer of the COM object.
Initializes a new instance of the class.
Initializes a new instance of the class with an initial size.
The initial size in bytes.
Gets or sets an item specified by its index.
The index of the item.
Gets or sets an item specified by its key.
The key of the item.
Gets the number of attributes that are set on this object.
Retrieves the value associated with a key.
A that identifies which value to retrieve.
A pointer to a that receives the value.
HRESULT
For more information, see
.
Retrieves the value associated with a key.
A that identifies which value to retrieve.
A that receives the value.
For more information, see
.
Retrieves the data type of the value associated with a key.
that identifies which value to query.
The type of the item, associated with the specified .
HRESULT
For more information, see
.
Retrieves the data type of the value associated with a key.
that identifies which value to query.
The type of the item, associated with the specified .
For more information, see
.
Queries whether a stored attribute value equals a specified .
that identifies which value to query.
that contains the value to compare.
Receives a boolean value indicating whether the attribute matches the value given in
.
HRESULT
For more information, see
.
Queries whether a stored attribute value equals a specified .
that identifies which value to query.
that contains the value to compare.
A boolean value indicating whether the attribute matches the value given in .
For more information, see
.
Compares the attributes on this object with the attributes on another object.
The interface of the object to compare with this object.
A value, specifying the type of comparison to make.
Receives a Boolean value. The value is if the two sets of
attributes match in the way specified by the parameter. Otherwise, the value is
.
HRESULT
Compares the attributes on this object with the attributes on another object.
The interface of the object to compare with this object.
A value, specifying the type of comparison to make.
Returns true if the two sets of attributes match in the way specified by the
parameter; otherwise, false.
Retrieves a UINT32 value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
Receives a UINT32 value. If the key is found and the data type is , the method
copies the
value into this parameter.
HRESULT
Retrieves a UINT32 value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is , the method returns the
associated value.
Retrieves a UINT64 value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
Receives a UINT64 value. If the key is found and the data type is , the method
copies the
value into this parameter.
HRESULT
Retrieves a UINT64 value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is , the method returns the
associated value.
Retrieves a Double value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
Receives a Double value. If the key is found and the data type is , the method
copies the
value into this parameter.
HRESULT
Retrieves a Double value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is , the method returns the
associated value.
Retrieves a Guid value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
Receives a Guid value. If the key is found and the data type is , the method
copies the
value into this parameter.
HRESULT
Retrieves a Guid value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is , the method returns the
associated value.
Retrieves the length of a string value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is ,
this parameter receives the number of characters in the string, not including the terminating NULL character.
HRESULT
Retrieves the length of a string value associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is ,
this method returns the number of characters in the string, not including the terminating NULL character.
Retrieves a wide-character string associated with a key.
that identifies which value to retrieve. The attribute type must be
.
Pointer to a wide-character array allocated by the caller.
The array must be large enough to hold the string, including the terminating NULL character.
If the key is found and the value is a string type, the method copies the string into this buffer.
To find the length of the string, call .
The size of the pwszValue array, in characters. This value includes the terminating NULL character.
Receives the number of characters in the string, excluding the terminating NULL character. This parameter can be NULL.
HRESULT
Retrieves a wide-character string associated with a key.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the data type is , the method returns the
associated value.
Retrieves a wide-character string associated with a key. This method allocates the
memory for the string.
that identifies which value to retrieve. The attribute type must be
.
If the key is found and the value is a string type, this parameter receives a copy of the string. The caller must free the memory for the string by calling .
Receives the number of characters in the string, excluding the terminating NULL character.
HRESULT
Don't use the method. Use the method instead.
Retrieves the length of a byte array associated with a key.
that identifies which value to retrieve. The attribute type must be .
If the key is found and the value is a byte array, this parameter receives the size of the array, in bytes.
HRESULT
Retrieves the length of a byte array associated with a key.
that identifies which value to retrieve. The attribute type must be .
If the key is found and the value is a byte array, this method returns the size of the array, in bytes.
Retrieves a byte array associated with a key.
that identifies which value to retrieve. The attribute type must be .
Pointer to a buffer allocated by the caller. If the key is found and the value is a byte array, the method copies the array into this buffer. To find the required size of the buffer, call .
The size of the buffer, in bytes.
Receives the size of the byte array. This parameter can be .
HRESULT
Retrieves a byte array associated with a key.
that identifies which value to retrieve. The attribute type must be .
The byte array associated with the .
Retrieves an object associated with a key.
that identifies which value to retrieve. The attribute type must be .
The type of the object; the returned object will be of this type (see the return value).
The object associated with the .
Type is null.
Internally this method retrieves a byte array which gets converted to an instance of the specified .
Retrieves a byte array associated with a key.
that identifies which value to retrieve. The attribute type must be .
If the key is found and the value is a byte array, this parameter receives a copy of the array.
Receives the size of the array, in bytes.
HRESULT
Obsolete; use the method instead.
Retrieves an interface pointer associated with a key.
that identifies which value to retrieve. The attribute type must be .
Interface identifier (IID) of the interface to retrieve.
Receives a pointer to the requested interface. The caller must release the interface.
HRESULT
Associates an attribute value with a key.
A that identifies the value to set. If this key already exists, the method overwrites the old value.
A that contains the attribute value. The method copies the value. The type must be one of the types listed in the enumeration.
HRESULT
Associates an attribute value with a key.
A that identifies the value to set. If this key already exists, the method overwrites the old value.
A that contains the attribute value. The method copies the value. The type must be one of the types listed in the enumeration.
Removes a key/value pair from the object's attribute list.
that identifies the value to delete.
HRESULT
Removes a key/value pair from the object's attribute list.
that identifies the value to delete.
Removes all key/value pairs from the object's attribute list.
HRESULT
Removes all key/value pairs from the object's attribute list.
Associates a UINT32 value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
HRESULT
Associates a UINT32 value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
Associates a UINT64 value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
HRESULT
Associates a UINT64 value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
Associates a Double value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
HRESULT
Associates a Double value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
Associates a value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
HRESULT
Associates a value with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
Associates a wide-character string with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
HRESULT
Internally this method stores a copy of the string specified by the parameter.
Associates a wide-character string with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
New value for this key.
Internally this method stores a copy of the string specified by the parameter.
Associates a byte array with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
Pointer to a byte array to associate with this key. The method stores a copy of the array.
Size of the array, in bytes.
HRESULT
Associates a byte array with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
The byte array to associate with the
Associates an IUnknown pointer with a key.
that identifies the value to set. If this key already exists, the method overwrites the old value.
IUnknown pointer to be associated with this key.
HRESULT
Locks the attribute store so that no other thread can access it. If the attribute store is already locked by another thread, this method blocks until the other thread unlocks the object. After calling this method, call to unlock the object.
HRESULT
Locks the attribute store so that no other thread can access it. If the attribute store is already locked by another thread, this method blocks until the other thread unlocks the object. After calling this method, call to unlock the object.
Unlocks the attribute store after a call to the method. While the object is unlocked, multiple threads can access the object's attributes.
HRESULT
Unlocks the attribute store after a call to the method. While the object is unlocked, multiple threads can access the object's attributes.
Retrieves the number of attributes that are set on this object.
Receives the number of attributes.
HRESULT
Retrieves the number of attributes that are set on this object.
Returns the number of attributes.
Retrieves an attribute at the specified index.
Index of the attribute to retrieve. To get the number of attributes, call .
Receives the that identifies this attribute.
Pointer to a that receives the value. This parameter can be . If it is not , the method fills the with a copy of the attribute value. Call to free the memory allocated by this method.
HRESULT
Retrieves an attribute at the specified index.
Index of the attribute to retrieve. To get the number of attributes, call .
Receives the that identifies this attribute.
Returns the value of the attribute specified by the .
Copies all of the attributes from this object into another attribute store.
The attribute store that receives the copy.
HRESULT
Copies all of the attributes from this object into another attribute store.
The attribute store that receives the copy.
Determines whether the attribute store contains an attribute with the specified .
The key of the attribute.
True if the attribute exists; otherwise, false.
An unexpected error occurred.
Gets the item associated with the specified .
The key of the item.
The item associated with the specified .
The value type of the associated item is not supported.
Gets the item associated with the specified .
The key of the item.
Type of the returned item.
The item associated with the specified .
The specified is not supported.
Sets the value of a property specified by its .
The key of the property.
The value to set.
The type of the property.
The specified is not supported.
Sets the value of a property specified by the key of the object.
The type of the property.
Specifies the key of the property and the new value to set.
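The indexer, generic getter, and count described above can be combined as in the following sketch. The member shapes follow the summaries, but the exact signatures (the indexer type, `Get<T>`, `Count`) are assumptions, not verified CSCore API.

```csharp
using System;

// Sketch of key/value access on an attribute store; exact member
// signatures are assumptions based on the summaries above.
using (var attributes = new MFAttributes())
{
    // Real code would use a well-known attribute GUID such as
    // MF_MT_AUDIO_NUM_CHANNELS; a random GUID stands in here.
    Guid key = Guid.NewGuid();

    attributes[key] = 2;                     // store a UINT32 value
    int channels = attributes.Get<int>(key); // read it back as Int32
    int count = attributes.Count;            // number of attributes set on the object
}
```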
Defines data types for key/value pairs.
Unsigned 32-bit integer.
Unsigned 64-bit integer.
Floating-point number.
value.
Wide-character string.
Byte array.
IUnknown pointer.
Represents a block of memory that contains media data. Use this interface to access the data in the buffer.
Gets or sets the length of the valid data, in bytes. If the buffer does not contain any valid data, the value is zero.
Gets the allocated size of the buffer, in bytes.
Initializes a new instance of the class.
The native pointer of the COM object.
Initializes a new instance of the class with the specified maximum .
The size of the in bytes. The specified will be the of the constructed .
The caller needs to release the allocated memory by disposing the .
Gives the caller access to the memory in the buffer, for reading or writing.
Receives a pointer to the start of the buffer.
Receives the maximum amount of data that can be written to the buffer. The same value is returned by the method.
Receives the length of the valid data in the buffer, in bytes. The same value is returned by the method.
HRESULT
When you are done accessing the buffer, call to unlock the buffer. You must call once for each call to .
Gives the caller access to the memory in the buffer, for reading or writing.
Receives the maximum amount of data that can be written to the buffer. The same value is returned by the method.
Receives the length of the valid data in the buffer, in bytes. The same value is returned by the method.
A pointer to the start of the buffer.
When you are done accessing the buffer, call to unlock the buffer. You must call once for each call to .
Gives the caller access to the memory in the buffer, for reading or writing.
A disposable object which provides the information returned by the method. Call its method to unlock the .
This example shows how to use the method:
partial class TestClass
{
    public void DoStuff(MFMediaBuffer mediaBuffer)
    {
        // "lock" is a reserved C# keyword, so a different variable name is required.
        using (var bufferLock = mediaBuffer.Lock())
        {
            //do some stuff
        }
        //the mediaBuffer gets automatically unlocked by the using statement after "doing your stuff"
    }
}
Unlocks a buffer that was previously locked. Call this method once for every call to .
HRESULT
Unlocks a buffer that was previously locked. Call this method once for every call to .
Retrieves the length of the valid data in the buffer.
Receives the length of the valid data, in bytes. If the buffer does not contain any valid data, the value is zero.
HRESULT
Retrieves the length of the valid data in the buffer.
The length of the valid data, in bytes. If the buffer does not contain any valid data, the value is zero.
Sets the length of the valid data in the buffer.
Length of the valid data, in bytes. This value cannot be greater than the allocated size of the buffer, which is returned by the method.
HRESULT
Sets the length of the valid data in the buffer.
Length of the valid data, in bytes. This value cannot be greater than the allocated size of the buffer, which is returned by the method.
Retrieves the allocated size of the buffer.
Receives the allocated size of the buffer, in bytes.
HRESULT
Retrieves the allocated size of the buffer.
The allocated size of the buffer, in bytes.
Used to unlock a after locking it by calling the method.
Gets a pointer to the start of the buffer.
Gets the maximum amount of data that can be written to the buffer.
Gets the length of the valid data in the buffer, in bytes.
Unlocks the locked .
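Copying data into a buffer follows the lock/unlock pattern above. This sketch assumes the disposable returned by Lock() exposes the documented pointer, maximum length, and current length as properties named Buffer, MaxLength, and CurrentLength; those names are assumptions.

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: copy raw bytes into a media buffer, then mark the valid length.
// Property names (Buffer, MaxLength, CurrentLength) are assumed from the
// summaries above, not verified CSCore signatures.
static void FillBuffer(MFMediaBuffer mediaBuffer, byte[] data)
{
    using (var bufferLock = mediaBuffer.Lock())
    {
        if (data.Length > bufferLock.MaxLength)
            throw new ArgumentException("Data exceeds the allocated buffer size.");

        Marshal.Copy(data, 0, bufferLock.Buffer, data.Length);
    } // disposing the lock unlocks the buffer

    mediaBuffer.CurrentLength = data.Length; // length of the valid data
}
```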
Represents a description of a media format.
Creates an empty .
Returns an empty .
Creates a new based on a specified .
which should be "converted" to a .
Returns a new .
Initializes a new instance of the class.
The native pointer of the COM object.
Gets or sets the number of channels.
Gets or sets the number of bits per sample.
Gets or sets the number of samples per second (for one channel each).
Gets or sets the channel mask.
Gets or sets the average number of bytes per second.
Gets or sets the audio subtype.
Gets or sets the major type.
Gets a value, indicating whether the media type is a temporally compressed format.
Temporal compression uses information from previously decoded samples when
decompressing the current sample.
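The factory described above ("Creates a new based on a specified") allows a CSCore WaveFormat to be turned into a media type whose properties can then be inspected. The factory name `MFMediaType.FromWaveFormat` and the property names below are assumptions based on the summaries, not verified CSCore signatures.

```csharp
// Sketch: build a media type from a WaveFormat and read back its properties.
// "FromWaveFormat", "Channels" and "BitsPerSample" are assumed names.
var waveFormat = new WaveFormat(44100, 16, 2); // 44.1 kHz, 16 bit, stereo
using (var mediaType = MFMediaType.FromWaveFormat(waveFormat))
{
    int channels = mediaType.Channels;
    int bitsPerSample = mediaType.BitsPerSample;
    // The media type can now be passed to a sink writer or source reader.
}
```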
Gets the major type of the format.
Receives the major type .
The major type describes the broad category of the format, such as audio or video. For a list of possible values, see .
HRESULT
Gets the major type of the format.
The major type . The major type describes the broad category of the format, such as audio or video. For a list of possible values, see .
Queries whether the media type is a temporally compressed format. Temporal compression
uses information from previously decoded samples when decompressing the current sample.
Receives a Boolean value. The value is TRUE if the format uses temporal compression, or FALSE if the format does not use temporal compression.
HRESULT
Queries whether the media type is a temporally compressed format. Temporal compression
uses information from previously decoded samples when decompressing the current sample.
if the format uses temporal compression. if the format does not use temporal compression.
Compares two media types and determines whether they are identical. If they are not
identical, the method indicates how the two formats differ.
The to compare.
Receives a bitwise OR of zero or more flags, indicating the degree of similarity between the two media types.
HRESULT
Compares two media types and determines whether they are identical. If they are not
identical, the method indicates how the two formats differ.
The to compare.
A bitwise OR of zero or more flags, indicating the degree of similarity between the two media types.
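Because the result is a bitwise OR of similarity flags, callers typically test individual bits rather than compare against a single value. A sketch, where the `MediaTypeEqualFlags` member names are assumptions derived from the similarity enumeration described earlier (major types, subtypes, attribute data, user data):

```csharp
// Sketch: compare two media types and inspect how similar they are.
// Enum member names are assumptions, not verified CSCore identifiers.
MediaTypeEqualFlags similarity = mediaType1.IsEqual(mediaType2);

bool sameMajorType = (similarity & MediaTypeEqualFlags.MajorTypes) != 0;
bool sameSubtype   = (similarity & MediaTypeEqualFlags.Subtypes) != 0;
```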
Retrieves an alternative representation of the media type. Currently only the DirectShow
AM_MEDIA_TYPE structure is supported.
that specifies the representation to retrieve. The following values are defined.
Receives a pointer to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling .
HRESULT
For more information, see .
Retrieves an alternative representation of the media type. Currently only the DirectShow
AM_MEDIA_TYPE structure is supported.
that specifies the representation to retrieve. The following values are defined.
A pointer to a structure that contains the representation. The method allocates the memory for the structure. The caller must release the memory by calling .
For more information, see .
Frees memory that was allocated by the method.
that was passed to the method.
Pointer to the buffer that was returned by the method.
HRESULT
For more information, see .
Frees memory that was allocated by the method.
that was passed to the method.
Pointer to the buffer that was returned by the method.
For more information, see .
Converts the to a .
Contains a flag from the enumeration.
The which was created based on the .
Represents a media sample, which is a container object for media data. For video, a sample typically contains one video frame. For audio data, a sample typically contains multiple audio samples, rather than a single sample of audio.
Initializes a new instance of the class.
Calls the MFCreateSample function.
Initializes a new instance of the class.
The native pointer of the COM object.
Currently no flags are defined. Instead, metadata for samples is defined using
attributes. To get attributes from a sample, use the object, which
inherits.
Receives the value .
HRESULT
Currently no flags are defined. Instead, metadata for samples is defined using
attributes. To get attributes from a sample, use the object, which
inherits.
Returns the .
Currently no flags are defined. Instead, metadata for samples is defined using
attributes. To set attributes on a sample, use the object, which
IMFSample inherits.
Must be .
HRESULT
Currently no flags are defined. Instead, metadata for samples is defined using
attributes. To set attributes on a sample, use the object, which
IMFSample inherits.
Must be .
Retrieves the presentation time of the sample.
Presentation time, in 100-nanosecond units.
HRESULT
Retrieves the presentation time of the sample.
Presentation time, in 100-nanosecond units.
Sets the presentation time of the sample.
The presentation time, in 100-nanosecond units.
HRESULT
Sets the presentation time of the sample.
The presentation time, in 100-nanosecond units.
Retrieves the presentation time of the sample.
Receives the presentation time, in 100-nanosecond units.
HRESULT
Retrieves the presentation time of the sample.
The presentation time, in 100-nanosecond units.
Sets the duration of the sample.
Duration of the sample, in 100-nanosecond units.
HRESULT
Sets the duration of the sample.
Duration of the sample, in 100-nanosecond units.
Retrieves the number of buffers in the sample.
Receives the number of buffers in the sample. A sample might contain zero buffers.
HRESULT
Retrieves the number of buffers in the sample.
The number of buffers in the sample. A sample might contain zero buffers.
Gets a buffer from the sample, by index.
Index of the buffer. To find the number of buffers in the sample, call . Buffers are indexed from zero.
Receives the instance. The caller must release the object.
HRESULT
Note: In most cases, it is safer to use the method.
If the sample contains more than one buffer, the method replaces them with a single buffer, copies the original data into that buffer, and returns the new buffer to the caller.
The copy operation occurs at most once. On subsequent calls, no data is copied.
Gets a buffer from the sample, by index.
Index of the buffer. To find the number of buffers in the sample, call . Buffers are indexed from zero.
The instance. The caller must release the object.
Note: In most cases, it is safer to use the method.
If the sample contains more than one buffer, the method replaces them with a single buffer, copies the original data into that buffer, and returns the new buffer to the caller.
The copy operation occurs at most once. On subsequent calls, no data is copied.
Converts a sample with multiple buffers into a sample with a single buffer.
Receives a instance. The caller must release the instance.
HRESULT
Converts a sample with multiple buffers into a sample with a single buffer.
A instance. The caller must release the instance.
Adds a buffer to the end of the list of buffers in the sample.
The to add.
HRESULT
Adds a buffer to the end of the list of buffers in the sample.
The to add.
Removes a buffer at a specified index from the sample.
Index of the buffer. To find the number of buffers in the sample, call . Buffers are indexed from zero.
HRESULT
Removes a buffer at a specified index from the sample.
Index of the buffer. To find the number of buffers in the sample, call . Buffers are indexed from zero.
Removes all of the buffers from the sample.
HRESULT
Removes all of the buffers from the sample.
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the method.
Receives the total length of the valid data, in bytes.
HRESULT
Retrieves the total length of the valid data in all of the buffers in the sample. The length is calculated as the sum of the values retrieved by the method.
The total length of the valid data, in bytes.
Copies the sample data to a buffer. This method concatenates the valid data from all of the buffers of the sample, in order.
The object of the destination buffer.
The buffer must be large enough to hold the valid data in the sample.
To get the size of the data in the sample, call .
HRESULT
Copies the sample data to a buffer. This method concatenates the valid data from all of the buffers of the sample, in order.
The object of the destination buffer.
The buffer must be large enough to hold the valid data in the sample.
To get the size of the data in the sample, call .
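The sample/buffer relationship described in this section can be sketched as follows. Member names (AddBuffer, SampleTime, SampleDuration, TotalLength) follow the summaries above but are assumptions, not verified CSCore signatures.

```csharp
// Sketch: create a sample, attach a buffer, and set its timing.
// All member names are assumed from the documentation above.
using (var sample = new MFSample())          // wraps MFCreateSample
using (var buffer = new MFMediaBuffer(4096)) // allocates a 4096-byte buffer
{
    sample.AddBuffer(buffer);                // the sample now holds one buffer
    sample.SampleTime = 0;                   // presentation time, in 100-ns units
    sample.SampleDuration = 10 * 10000;      // 10 ms expressed in 100-ns units

    long totalLength = sample.TotalLength;   // valid bytes across all buffers
}
```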
Implemented by the Microsoft Media Foundation source reader object.
Initializes a new instance of the class.
The native pointer of the COM object.
Initializes a new instance of the class based on a given .
The URL.
Gets a value indicating whether this instance can seek.
true if this instance can seek; otherwise, false.
Gets the media source characteristics.
The media source characteristics.
Queries whether a stream is selected.
The stream to query. For more information, see .
Receives if the stream is selected and will generate data. Receives if the stream is not selected and will not generate data.
HRESULT
Queries whether a stream is selected.
The stream to query. For more information, see .
if the stream is selected and will generate data; if the stream is not selected and will not generate data.
Selects or deselects one or more streams.
The stream to set. For more information, see .
Specify to select streams or to deselect streams. If a stream is deselected, it will not generate data.
HRESULT
Selects or deselects one or more streams.
The stream to set. For more information, see .
Specify to select streams or to deselect streams. If a stream is deselected, it will not generate data.
Gets a format that is supported natively by the media source.
Specifies which stream to query. For more information, see .
The zero-based index of the media type to retrieve.
Receives the . The caller must dispose the object.
HRESULT
Gets a format that is supported natively by the media source.
Specifies which stream to query. For more information, see .
The zero-based index of the media type to retrieve.
The . The caller must dispose the .
Gets the current media type for a stream.
Specifies which stream to query. For more information, see .
Receives the . The caller must dispose the .
HRESULT
Gets the current media type for a stream.
Specifies which stream to query. For more information, see .
The . The caller must dispose the .
Sets the media type for a stream.
This media type defines the format that the produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
The stream to configure. For more information, see .
Reserved. Set to .
The media type to set.
HRESULT
Sets the media type for a stream.
This media type defines the format that the produces as output. It can differ from the native format provided by the media source. See Remarks for more information.
The stream to configure. For more information, see .
The media type to set.
Seeks to a new position in the media source.
A GUID that specifies the time format. The time format defines the units for the varPosition parameter. Pass for "100-nanosecond units". Some media sources might support additional values.
The position from which playback will be started. The units are specified by the parameter. If the parameter is , set the variant type to .
HRESULT
Seeks to a new position in the media source.
A GUID that specifies the time format. The time format defines the units for the varPosition parameter. Pass for "100-nanosecond units". Some media sources might support additional values.
The position from which playback will be started. The units are specified by the parameter. If the parameter is , set the variant type to .
Reads the next sample from the media source.
The stream to pull data from. For more information, see .
A bitwise OR of zero or more flags from the enumeration.
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the enumeration.
Receives the time stamp of the sample, or the time of the stream event indicated in . The time is given in 100-nanosecond units.
Receives the instance or null. If this parameter receives a non-null value, the caller must release the received .
HRESULT
Reads the next sample from the media source.
The stream to pull data from. For more information, see .
A bitwise OR of zero or more flags from the enumeration.
Receives the zero-based index of the stream.
Receives a bitwise OR of zero or more flags from the enumeration.
Receives the time stamp of the sample, or the time of the stream event indicated in . The time is given in 100-nanosecond units.
The instance or null. If this parameter receives a non-null value, the caller must release the received .
Flushes one or more streams.
The stream to flush. For more information, see .
HRESULT
Flushes one or more streams.
The stream to flush. For more information, see .
Queries the underlying media source or decoder for an interface.
The stream or object to query. For more information, see .
A service identifier . If the value is , the method calls QueryInterface to get the requested interface. Otherwise, the method calls the IMFGetService::GetService method. For a list of service identifiers, see .
The interface identifier (IID) of the interface being requested.
Receives a pointer to the requested interface. The caller must release the interface.
HRESULT
Queries the underlying media source or decoder for an interface.
The stream or object to query. For more information, see .
A service identifier . If the value is , the method calls QueryInterface to get the requested interface. Otherwise, the method calls the IMFGetService::GetService method. For a list of service identifiers, see .
The interface identifier (IID) of the interface being requested.
A pointer to the requested interface. The caller must release the interface.
Gets an attribute from the underlying media source.
The stream or object to query. For more information, see .
A that identifies the attribute to retrieve. For more information, see .
Receives a that receives the value of the attribute. Call the method to free the .
HRESULT
Gets an attribute from the underlying media source.
The stream or object to query. For more information, see .
A that identifies the attribute to retrieve. For more information, see .
A that receives the value of the attribute. Call the method to free the .
Defines flags that specify how to convert an audio media type.
Convert the media type to a class if possible, or a class otherwise.
Convert the media type to a class.
Defines multi media error codes.
No error.
Unspecified error.
Invalid device id.
Driver failed enable.
Device already allocated.
Device handle is invalid.
No device driver present.
Memory allocation error.
Function isn't supported.
Error value out of range.
Invalid flag passed.
Invalid parameter passed.
Handle being used simultaneously on another thread (e.g. a callback).
Specified alias not found.
Bad registry database.
Registry key not found.
Registry read error.
Registry write error.
Registry delete error.
Registry value not found.
Driver does not call DriverCallback.
More data to be returned.
Unsupported wave format.
Still something playing.
Header not prepared.
Device is synchronous.
Defines the states of a .
The is currently recording.
The is currently stopped.
Provides data for the event.
Initializes a new instance of the class.
Initializes a new instance of the class.
The associated exception. Can be null.
Captures audio data from an audio device (through the Wasapi APIs). To capture audio from an output device, use the class.
Minimum supported OS: Windows Vista (see property).
Gets a value indicating whether the class is supported on the current platform.
If true, it is supported; otherwise false.
Reference time units per millisecond.
Reference time units per second.
Occurs when new data has been captured and is available.
Occurs when stopped capturing audio.
Initializes a new instance of the class.
CaptureThreadPriority = AboveNormal.
DefaultFormat = null.
Latency = 100ms.
EventSync = true.
SharedMode = Shared.
Initializes a new instance of the class.
CaptureThreadPriority = AboveNormal.
DefaultFormat = null.
Latency = 100ms.
True to use event synchronization instead of a simple loop-and-sleep behavior. Do not use this in combination with exclusive mode.
Specifies how to open the audio device. Note that if exclusive mode is used, the device can only be used once on the whole system. Don't use exclusive mode in combination with eventSync.
Initializes a new instance of the class.
CaptureThreadPriority = AboveNormal.
DefaultFormat = null.
True to use event synchronization instead of a simple loop-and-sleep behavior. Do not use this in combination with exclusive mode.
Specifies how to open the audio device. Note that if exclusive mode is used, the device can only be used once on the whole system. Don't use exclusive mode in combination with eventSync.
Latency of the capture specified in milliseconds.
Initializes a new instance of the class.
CaptureThreadPriority = AboveNormal.
True to use event synchronization instead of a simple loop-and-sleep behavior. Do not use this in combination with exclusive mode.
Specifies how to open the audio device. Note that if exclusive mode is used, the device can only be used once on the whole system. Don't use exclusive mode in combination with eventSync.
Latency of the capture specified in milliseconds.
The default WaveFormat to use for the capture. If this parameter is set to null, the best available format will be chosen automatically.
Initializes a new instance of the class. SynchronizationContext = null.
True to use event synchronization instead of a simple loop-and-sleep behavior. Do not use this in combination with exclusive mode.
Specifies how to open the audio device. Note that if exclusive mode is used, the device can only be used once on the whole system. Don't use exclusive mode in combination with eventSync.
Latency of the capture specified in milliseconds.
ThreadPriority of the capture thread which runs in the background and performs the audio capture itself.
The default WaveFormat to use for the capture. If this parameter is set to null, the best available format will be chosen automatically.
Initializes a new instance of the class.
True to use event synchronization instead of a simple loop-and-sleep behavior. Do not use this in combination with exclusive mode.
Specifies how to open the audio device. Note that if exclusive mode is used, the device can only be used once on the whole system. Don't use exclusive mode in combination with eventSync.
Latency of the capture specified in milliseconds.
ThreadPriority of the capture thread which runs in the background and performs the audio capture itself.
The default WaveFormat to use for the capture. If this parameter is set to null, the best available format will be chosen automatically.
The to use to fire events on.
The current platform does not support Wasapi. For more details see: .
The parameter is set to true while the is set to .
Initializes WasapiCapture and prepares all resources for recording.
Note that properties like Device, etc. won't affect WasapiCapture after calling Initialize.
Start recording.
Stop recording.
Gets the RecordingState.
Gets or sets the capture device to use.
Set this property before calling Initialize.
Gets the OutputFormat.
Random ID based on the internal audio client's memory address, for debugging purposes.
Returns the default device.
The default device.
Returns the stream flags to use for the audioclient initialization.
The stream flags to use for the audioclient initialization.
Stops the capture and frees all resources.
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Finalizes an instance of the class.
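The capture workflow documented above (create the instance, call Initialize, subscribe to DataAvailable, then Start and Stop) can be sketched as follows. This is an illustrative sketch, not authoritative sample code: the CSCore.SoundIn and CSCore.Codecs.WAV namespaces, the WaveWriter type, and the WaveFormat property name are assumptions based on the public CSCore API.

```csharp
using System;
using CSCore.SoundIn;       // WasapiCapture, DataAvailableEventArgs (assumed namespace)
using CSCore.Codecs.WAV;    // WaveWriter (assumed type from the public CSCore API)

class WasapiCaptureSketch
{
    static void Main()
    {
        // Default constructor: shared mode, event sync, 100 ms latency,
        // best available format, default capture device.
        using (var capture = new WasapiCapture())
        {
            // Initialize must be called before Start; afterwards, changes
            // to properties such as Device no longer take effect.
            capture.Initialize();

            using (var writer = new WaveWriter("capture.wav", capture.WaveFormat))
            {
                // DataAvailable is raised whenever a new block of audio
                // has been captured.
                capture.DataAvailable += (s, e) =>
                    writer.Write(e.Data, e.Offset, e.ByteCount);

                capture.Start();
                Console.ReadKey();  // record until a key is pressed
                capture.Stop();
            }
        }
    }
}
```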
Provides audio loopback capture through Wasapi. This enables a client to capture the audio stream that is being played by a rendering endpoint device (e.g. speakers, headset, etc.).
Minimum supported OS: Windows Vista (see property).
Read more about loopback recording here: http://msdn.microsoft.com/en-us/library/windows/desktop/dd316551(v=vs.85).aspx.
Initializes a new instance of the class.
Initializes a new instance of the class with the specified in milliseconds.
The latency specified in milliseconds. The default value is 100ms.
Initializes a new instance of the class with the specified in milliseconds
and the to use.
The latency specified in milliseconds. The default value is 100ms.
The default to use.
Note: The is just a suggestion. If the driver does not support this format,
any other format will be picked. After calling , the
property will return the actually picked .
Initializes a new instance of the class with the specified in milliseconds,
the to use and the of the internal capture thread.
The latency specified in milliseconds. The default value is 100ms.
The default to use.
Note: The is just a suggestion. If the driver does not support this format,
any other format will be picked. After calling , the
property will return the actually picked .
The , the internal capture thread will run on.
Returns the default rendering device.
Default rendering device.
Returns the stream flags to use for the audioclient initialization.
The stream flags to use for the audioclient initialization.
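Usage mirrors WasapiCapture; only the class differs, and the captured data is whatever the rendering device is currently playing. A minimal sketch (namespace assumed as in the CSCore public API):

```csharp
using System.Threading;
using CSCore.SoundIn;   // WasapiLoopbackCapture (assumed namespace)

// Records whatever the default rendering device (e.g. the speakers)
// is currently playing; otherwise behaves like WasapiCapture.
using (var loopback = new WasapiLoopbackCapture())
{
    loopback.Initialize();
    loopback.DataAvailable += (s, e) =>
    {
        // e.Data holds e.ByteCount bytes of rendered audio starting
        // at e.Offset, described by e.Format.
    };
    loopback.Start();
    Thread.Sleep(5000);  // capture five seconds
    loopback.Stop();
}
```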
Captures audio from an audio device (through the WaveIn APIs).
Gets or sets the which should be used for capturing audio.
The property has to be set before initializing. The system's default recording device is used as the default value of the property.
The value must not be null.
Gets or sets the latency of the wavein specified in milliseconds.
The property has to be set before initializing.
Initializes a new instance of the class using a default format (44.1 kHz, 16 bit, 2 channels, PCM).
Initializes a new instance of the class.
The default format to use. The final format may differ from the specified .
waveFormat
Occurs when new data has been captured and is available.
Occurs when the recording stopped.
Gets the format of the captured audio data.
Initializes the instance.
has to be . Call to stop.
Starts recording.
Stops recording.
Gets the current .
Creates and returns the WaveIn handle.
The waveformat to use.
A valid WaveIn handle.
Disposes and stops the instance.
Disposes and stops the instance.
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Finalizes an instance of the class.
Represents a -device.
Enumerates the WaveIn devices installed on the system.
An iterator to iterate through all enumerated WaveIn devices.
Gets the default WaveIn device.
Initializes a new instance of the class.
The device identifier.
Gets the device identifier.
Gets the name of the device.
Gets the version of the driver.
Gets the standard formats that are supported.
Gets the supported formats.
Provides data for the event.
Gets the available data.
Gets the number of available bytes.
Gets the zero-based offset inside of the array at which the available data starts.
Gets the format of the available .
Initializes a new instance of the class.
A byte array which contains the data.
The offset inside of the array at which the available data starts.
The number of available bytes.
The format of the .
data
or
format
offset must not be less than zero.
byteCount must not be less than or equal to zero.
Defines an interface for capturing audio.
Occurs when new data has been captured and is available.
Occurs when the recording stopped.
Gets the format of the captured audio data.
Initializes the instance.
Starts capturing.
Stops capturing.
Gets the current .
Provides audio playback through DirectSound.
Initializes a new instance of the class.
Latency = 100.
EventSyncContext = SynchronizationContext.Current.
PlaybackThreadPriority = AboveNormal.
Initializes a new instance of the class.
EventSyncContext = SynchronizationContext.Current.
PlaybackThreadPriority = AboveNormal.
Latency of the playback specified in milliseconds.
Initializes a new instance of the class.
EventSyncContext = SynchronizationContext.Current.
Latency of the playback specified in milliseconds.
ThreadPriority of the playback thread which runs in the background and feeds the device with data.
Initializes a new instance of the class.
Latency of the playback specified in milliseconds.
ThreadPriority of the playback thread which runs in the background and feeds the device with data.
The synchronization context which is used to raise events such as the "Stopped" event. If the passed value is not null, the events will be raised asynchronously through the SynchronizationContext.Post() method.
Random ID based on the internal DirectSound object's memory address, for debugging purposes.
Latency of the playback specified in milliseconds.
Gets or sets the device to use for playing the waveform-audio data. Note that the method has to be called
Occurs when the playback gets stopped.
Initializes and prepares all resources for playback.
Note that all properties like , ,... won't affect
after calling .
The source to prepare for playback.
Starts the playback.
Note: has to be called before calling Play.
If PlaybackState is Paused, Resume() will be called automatically.
Stops the playback and frees all allocated resources.
After calling , the caller has to call again before another playback can be started.
Resumes the paused playback.
Pauses the playback.
Gets the current of the playback.
The volume of the playback. Valid values are from 0.0 (0%) to 1.0 (100%).
Note that if you, for example, set a volume of 33% (0.33), the actual volume may be something like 0.33039999.
The currently initialized source.
To change the WaveSource property, call .
Disposes the instance and stops the playback.
Disposes and stops the instance.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor which calls the method.
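A typical DirectSoundOut session, as described above (Initialize, Play, Stop), might look like the following sketch. CodecFactory is an assumption taken from the public CSCore.Codecs API; it is not part of this section:

```csharp
using System;
using CSCore;
using CSCore.Codecs;     // CodecFactory (assumed from the public CSCore API)
using CSCore.SoundOut;

class DirectSoundOutSketch
{
    static void Main()
    {
        using (IWaveSource source = CodecFactory.Instance.GetCodec("test.mp3"))
        using (var soundOut = new DirectSoundOut(100))  // 100 ms latency
        {
            soundOut.Initialize(source); // has to be called before Play
            soundOut.Volume = 0.33f;     // ~33%; the applied value may differ slightly
            soundOut.Play();

            Console.ReadKey();
            soundOut.Stop();             // frees the resources; call Initialize
                                         // again before starting another playback
        }
    }
}
```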
http://msdn.microsoft.com/en-us/library/dd743869%28VS.85%29.aspx
WaveHeaderFlags: http://msdn.microsoft.com/en-us/library/aa909814.aspx#1
pointer to locked data buffer (lpData)
length of data buffer (dwBufferLength)
used for input only (dwBytesRecorded)
for client's use (dwUser)
assorted flags (dwFlags)
loop control counter (dwLoops)
PWaveHdr, reserved for driver (lpNext)
reserved for driver
uMsg
http://msdn.microsoft.com/en-us/library/dd743869%28VS.85%29.aspx
Defines standard formats for MmDevices.
11.025 kHz, Mono, 8-bit
11.025 kHz, Stereo, 8-bit
11.025 kHz, Mono, 16-bit
11.025 kHz, Stereo, 16-bit
22.05 kHz, Mono, 8-bit
22.05 kHz, Stereo, 8-bit
22.05 kHz, Mono, 16-bit
22.05 kHz, Stereo, 16-bit
44.1 kHz, Mono, 8-bit
44.1 kHz, Stereo, 8-bit
44.1 kHz, Mono, 16-bit
44.1 kHz, Stereo, 16-bit
44.1 kHz, Mono, 8-bit
44.1 kHz, Stereo, 8-bit
44.1 kHz, Mono, 16-bit
44.1 kHz, Stereo, 16-bit
48 kHz, Mono, 8-bit
48 kHz, Stereo, 8-bit
48 kHz, Mono, 16-bit
48 kHz, Stereo, 16-bit
96 kHz, Mono, 8-bit
96 kHz, Stereo, 8-bit
96 kHz, Mono, 16-bit
96 kHz, Stereo, 16-bit
Defines functionalities supported by a device.
None
Supports pitch control.
Supports playback rate control.
Supports volume control.
Supports separate left and right volume control.
The driver is synchronous and will block while playing a buffer.
Returns sample-accurate position information.
DirectSound
Not documented on MSDN.
Provides data for the event.
Initializes a new instance of the class.
Initializes a new instance of the class.
The associated exception. Can be null.
Provides audio playback through Wasapi.
Minimum supported OS: Windows Vista (see property).
Initializes a new instance of the class.
EventSyncContext = SynchronizationContext.Current.
PlaybackThreadPriority = AboveNormal.
Latency = 100ms.
EventSync = False.
ShareMode = Shared.
Initializes a new instance of the class.
EventSyncContext = SynchronizationContext.Current.
PlaybackThreadPriority = AboveNormal.
True to use event synchronization instead of a simple loop-and-sleep behavior.
Specifies how to open the audio device. Note that if exclusive mode is used, only one single
playback for the specified device is possible at once.
Latency of the playback specified in milliseconds.
Initializes a new instance of the class.
EventSyncContext = SynchronizationContext.Current.
True to use event synchronization instead of a simple loop-and-sleep behavior.
Specifies how to open the audio device. Note that if exclusive mode is used, only one single
playback for the specified device is possible at once.
Latency of the playback specified in milliseconds.
ThreadPriority of the playback thread which runs in the background and feeds the device with data.
Initializes a new instance of the class.
True to use event synchronization instead of a simple loop-and-sleep behavior.
Specifies how to open the audio device. Note that if exclusive mode is used, only one single
playback for the specified device is possible at once.
Latency of the playback specified in milliseconds.
of the playback thread which runs in the background and feeds the device with data.
The which is used to raise events such as the -event. If the passed value is not null, the events will be raised asynchronously through the method.
Gets a value which indicates whether Wasapi is supported on the current platform. True means that the current
platform supports ; False means that the current platform does not support
.
Sets a value indicating whether the Desktop Window Manager (DWM) has to opt in to or out of Multimedia Class Schedule Service (MMCSS)
scheduling while the current process is alive.
True to instruct the Desktop Window Manager to participate in MMCSS scheduling; False to opt out or end participation in MMCSS scheduling.
DWM will be scheduled by the MMCSS as long as any process that called DwmEnableMMCSS to enable MMCSS is active and has not previously called DwmEnableMMCSS to disable MMCSS.
Gets or sets the stream routing options.
The stream routing options.
The flag can only be used
if the is the default device.
That behavior can be changed by overriding the method.
Gets or sets the which should be used for playback.
The property has to be set before initializing. The system's default playback device is used as the default value of the property.
Make sure to set only activated render devices.
Gets a random ID based on the internal audio client's memory address, for debugging purposes.
Gets or sets the latency of the playback specified in milliseconds.
The property has to be set before initializing.
Occurs when the playback stops.
Initializes WasapiOut instance and prepares all resources for playback.
Note that properties like , ,... won't affect WasapiOut after calling
.
The source to prepare for playback.
Starts the playback.
Note: has to be called before calling Play.
If is , will be
called automatically.
Stops the playback and frees most of the allocated resources.
Resumes the paused playback.
Pauses the playback.
Gets the current of the playback.
Gets or sets the volume of the playback.
Valid values are in the range from 0.0 (0%) to 1.0 (100%).
The currently initialized source.
To change the WaveSource property, call .
The value of the WaveSource might not be the value which was passed to the method, because WasapiOut (depending on the wave format of the source) may have to use a DmoResampler.
Gets or sets a value indicating whether should try to use all available channels.
Stops the playback (if playing) and cleans up all used resources.
Updates the stream routing options.
If the current is not the default device,
the flag will be removed.
Disposes and stops the instance.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Finalizes an instance of the class.
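Putting the constructor parameters and the Stopped event together, a WasapiOut session might be sketched like this. The AudioClientShareMode enumeration (CSCore.CoreAudioAPI), the CodecFactory type, and the HasError/Exception member names of the stopped-event arguments are assumptions based on the public CSCore API:

```csharp
using System;
using System.Threading;
using CSCore;
using CSCore.Codecs;          // CodecFactory (assumed)
using CSCore.CoreAudioAPI;    // AudioClientShareMode (assumed)
using CSCore.SoundOut;

class WasapiOutSketch
{
    static void Main()
    {
        IWaveSource source = CodecFactory.Instance.GetCodec("test.wav");

        // eventSync = true must not be combined with exclusive mode.
        using (var soundOut = new WasapiOut(true, AudioClientShareMode.Shared, 100))
        {
            soundOut.Stopped += (s, e) => Console.WriteLine(
                e.HasError ? "Stopped due to: " + e.Exception : "Playback finished.");

            soundOut.Initialize(source);
            soundOut.Play();

            // Wait until the source has been played to its end.
            while (soundOut.PlaybackState == PlaybackState.Playing)
                Thread.Sleep(100);
        }
        source.Dispose();
    }
}
```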
Defines options for Wasapi stream routing.
Disable stream routing.
Use stream routing when the device format has changed.
Use stream routing when the default device has changed.
Use stream routing when the current device was disconnected.
Combination of , and .
Provides audio playback through the WaveOut API.
Initializes a new instance of the class with a latency of 100 ms.
Initializes a new instance of the class.
Latency of the playback specified in milliseconds.
latency must not be less than or equal to zero.
Gets or sets the which should be used for playback.
The property has to be set before initializing. The system's default playback device is used as the default value of the property.
The value must not be null.
Gets or sets the latency of the playback specified in milliseconds.
The property has to be set before initializing.
Starts the playback.
Note: has to be called before calling Play.
If is , will be
called automatically.
Pauses the playback.
Resumes the paused playback.
Stops the playback and frees most of the allocated resources.
Initializes WaveOut instance and prepares all resources for playback.
Note that properties like , ,... won't affect WaveOut after calling
.
The source to prepare for playback.
Gets or sets the volume of the playback.
Valid values are in the range from 0.0 (0%) to 1.0 (100%).
The currently initialized source.
To change the WaveSource property, call .
The value of the WaveSource might not be the value which was passed to the method, because WaveOut uses the class to control the volume of the playback.
Gets the current of the playback.
Occurs when the playback stops.
Stops the playback (if playing) and cleans up all used resources.
Gets or sets a value indicating whether should try to use all available channels.
Creates and returns the WaveOut handle.
The waveformat to use.
A valid WaveOut handle.
Disposes and stops the instance.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Finalizes an instance of the class.
Represents a -device.
Enumerates the WaveOut devices installed on the system.
An iterator to iterate through all enumerated WaveOut devices.
Gets the default WaveOut device.
Initializes a new instance of the class.
The device identifier.
Gets the device identifier.
Gets the name of the device.
Gets the supported functionalities of the device.
Gets the version of the driver.
Gets the standard formats that are supported.
Gets the supported formats.
Defines an interface for audio playback.
Starts the audio playback.
Pauses the audio playback.
Resumes the audio playback.
Stops the audio playback.
Initializes the for playing a .
which provides waveform-audio data to play.
Gets or sets the volume of the playback. The value of this property must be within the range from 0.0 to 1.0 where 0.0 equals 0% (muted) and 1.0 equals 100%.
Gets the which provides the waveform-audio data and was used to initialize the .
Gets the of the . The playback state indicates whether the playback is currently playing, paused or stopped.
Occurs when the playback stops.
Defines playback states.
Playback is stopped.
Playback is playing.
Playback is paused.
Provides data for any stopped operations.
Initializes a new instance of the class.
Initializes a new instance of the class.
The associated exception. Can be null.
Gets a value which indicates whether the operation stopped due to an error. True means that the operation stopped due to an error; False means that the operation did not stop due to an error.
Gets the associated which caused the operation to stop.
Can be null.
Cached wave source.
Initializes a new instance of the class.
Source which will be copied to a cache.
Creates a stream to buffer data in.
An empty stream to use as buffer.
Reads a sequence of bytes from the cache and advances the position within the cache by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the cache.
The zero-based byte offset in the at which to begin storing the data
read from the cache.
The maximum number of bytes to read from the cache.
The total number of bytes read into the .
Gets the Waveformat of the data stored in the cache.
Gets or sets the position.
Gets a value indicating whether the supports seeking.
Gets the amount of bytes stored in the cache.
Disposes the cache.
Disposes the internal used cache.
Finalizes an instance of the class.
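A short sketch of how the cache described above might be used; the CSCore.Streams namespace and the CodecFactory type are assumptions based on the public CSCore API:

```csharp
using CSCore;
using CSCore.Codecs;    // CodecFactory (assumed)
using CSCore.Streams;   // CachedSoundSource (assumed namespace)

// Copy the decoded audio into an in-memory cache so that it can be
// replayed or seeked without touching the underlying source again.
IWaveSource fileSource = CodecFactory.Instance.GetCodec("effect.wav");
IWaveSource cached = new CachedSoundSource(fileSource);

// The cache supports seeking, so the sound can be rewound cheaply,
// e.g. to play it several times:
cached.Position = 0;
```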
NOT RELEASED YET! Provides conversion between a set of input and output channels using a .
Initializes a new instance of the class.
The which provides input data.
The which defines the mapping of the input channels to the output channels.
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Gets the output format.
Gets or sets the position in samples.
Gets the length in samples.
Defines possible values for the property.
Default value is .
180° Phase.
90° Phase.
Default value for .
0° Phase.
-90° Phase.
-180° Phase.
Defines possible values for the property.
Default value is WaveformSin (used for ).
Sine
Default value for .
Triangle
Represents the DMO chorus effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the number of milliseconds the input is delayed before it is played back, in the range from 0 to 20. The default value is 16 ms.
Gets or sets the percentage by which the delay time is modulated by the low-frequency oscillator, in hundredths of a percentage point. Must be in the range from 0 through 100. The default value is 10.
Gets or sets the percentage of output signal to feed back into the effect's input, in the range from -99 to 99. The default value is 25.
Gets or sets the frequency of the LFO, in the range from 0 to 10. The default value is 1.1.
Gets or sets the waveform shape of the LFO. By default, the waveform is a sine.
Gets or sets the phase differential between left and right LFOs. The default value is Phase90.
Gets or sets the ratio of wet (processed) signal to dry (unprocessed) signal. Must be in the range from 0 through 100 (all wet). The default value is 50.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
180° Phase
90° Phase
Default value for the property.
Maximum value for the property.
Minimum value for the property.
-180° Phase
-90° Phase
0° Phase
Default value for the property.
Sine waveform
Triangle waveform
Default value for the property.
Maximum value for the property.
Minimum value for the property.
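The chorus parameters above map to properties on the effect wrapper; the sketch below wires the effect between a source and an output. The property names (WetDryMix, Delay, Depth, Frequency) mirror the native DSFXChorus parameters and are assumptions about the wrapper, as are the CodecFactory type and the CSCore.Streams.Effects namespace:

```csharp
using CSCore;
using CSCore.Codecs;            // CodecFactory (assumed)
using CSCore.SoundOut;
using CSCore.Streams.Effects;   // DmoChorusEffect (assumed namespace)

IWaveSource source = CodecFactory.Instance.GetCodec("test.mp3");

// The effect wraps the base source, which feeds it with data.
var chorus = new DmoChorusEffect(source)
{
    WetDryMix = 50,    // 0..100 (all wet); default 50
    Delay     = 16,    // 0..20 ms; default 16
    Depth     = 10,    // 0..100; default 10
    Frequency = 1.1f   // LFO frequency, 0..10 Hz; default 1.1
};

using (var soundOut = new WasapiOut())
{
    soundOut.Initialize(chorus);  // play the processed signal
    soundOut.Play();
    System.Console.ReadKey();
    soundOut.Stop();
}
```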
Represents the DMO compressor effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the time before compression reaches its full value, in the range from 0.01 ms to 500 ms. The default value is 10 ms.
Gets or sets the output gain of signal after compression, in the range from -60 dB to 60 dB. The default value is 0 dB.
Gets or sets the time after is reached before attack phase is started, in milliseconds, in the range from 0 ms to 4 ms. The default value is 4 ms.
Gets or sets the compression ratio, in the range from 1 to 100. The default value is 3, which means 3:1 compression.
Gets or sets the speed at which compression is stopped after input drops below fThreshold, in the range from 50 ms to 3000 ms. The default value is 200 ms.
Gets or sets the point at which compression begins, in decibels, in the range from -60 dB to 0 dB. The default value is -20 dB.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Represents the DMO distortion effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the amount of signal change after distortion, in the range from -60 dB through 0 dB. The default value is -18 dB.
Gets or sets the percentage of distortion intensity, in the range from 0 % through 100 %. The default value is 15 percent.
Gets or sets the center frequency of harmonic content addition, in the range from 100 Hz through 8000 Hz. The default value is 2400 Hz.
Gets or sets the width of frequency band that determines range of harmonic content addition, in the range from 100 Hz through 8000 Hz. The default value is 2400 Hz.
Gets or sets the filter cutoff for high-frequency harmonics attenuation, in the range from 100 Hz through 8000 Hz. The default value is 8000 Hz.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Represents the DMO echo effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the ratio of wet (processed) signal to dry (unprocessed) signal. Must be in the range from
0 through 100 (all wet). The default value is 50.
Gets or sets the percentage of output fed back into input, in the range from 0
through 100. The default value is 50.
Gets or sets the delay for left channel, in milliseconds, in the range from 1
through 2000. The default value is 500 ms.
Gets or sets the delay for right channel, in milliseconds, in the range from 1
through 2000. The default value is 500 ms.
Gets or sets the value that specifies whether to swap left and right delays with each successive echo.
The default value is false, meaning no swap.
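The wet/dry mix, feedback, and delay parameters combine roughly as in this simplified single-channel sketch (illustrative Python with gains normalized to 0..1; the actual effect is a native DMO with independent left/right delays and optional delay swapping):

```python
def echo(samples, delay, feedback=0.5, wet_dry=0.5):
    """Feedback echo: each output sample mixes the dry input with a
    delayed copy; a fraction of the line's output is fed back in."""
    out = []
    buf = [0.0] * delay                 # delay line, initially silent
    for i, x in enumerate(samples):
        delayed = buf[i % delay]        # sample from `delay` steps ago
        out.append((1.0 - wet_dry) * x + wet_dry * delayed)
        buf[i % delay] = x + feedback * delayed
    return out
```

A single impulse produces echoes spaced `delay` samples apart, each attenuated by the feedback factor.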
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Base class for all DMO effects.
The DMO effect itself.
Parameter struct of the DMO effect.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Creates a MediaObject from the effect DMO.
The input format of the to create.
The output format of the to create.
The created to use for processing audio data.
Gets the output format of the effect.
The output format of the effect.
Gets the underlying effect.
Gets or sets whether the effect is enabled.
Sets the value for one of the effects parameter and updates the effect.
Type of the .
Name of the field to set the value for.
Value to set.
Reads a sequence of bytes from the stream and applies the Dmo effect to them (only if the property is set to true).
An array of bytes. When this method returns, the buffer contains the read bytes.
The zero-based byte offset in buffer at which to begin storing the data read from the stream.
The maximum number of bytes to be read from the stream.
The actual number of read bytes.
Represents the DMO flanger effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the ratio of wet (processed) signal to dry (unprocessed) signal. Must be in the range from 0 through 100 (all wet). The default value is 50.
Gets or sets the percentage by which the delay time is modulated by the low-frequency oscillator (LFO), in hundredths of a percentage point. Must be in the range from 0 through 100. The default value is 100.
Gets or sets the percentage of output signal to feed back into the effect's input, in the range from -99 to 99. The default value is -50.
Gets or sets the frequency of the LFO, in the range from 0 to 10. The default value is 0.25.
Gets or sets the waveform shape of the LFO. By default, the waveform is a sine.
Gets or sets the number of milliseconds the input is delayed before it is played back, in the range from 0 ms to 4 ms. The default value is 2 ms.
Gets or sets the phase differential between left and right LFOs. The default value is .
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Represents the DMO gargle effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the rate of modulation, in Hertz. Must be in the range from 20 Hz through 1000 Hz. The default value is 20 Hz.
Gets or sets the shape of the modulation waveform.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Use the enumeration instead.
Square Waveform.
Use the enumeration instead.
Triangle Waveform
Use the enumeration instead.
Represents the DMO Waves reverb effect in the form of an implementation.
Creates a new instance of the class.
The base source, which feeds the effect with data.
Creates and returns a new instance of the native COM object.
A new instance of the native COM object.
Gets or sets the input gain of signal, in decibels (dB), in the range from -96 dB through 0 dB. The default value is 0 dB.
Gets or sets the reverb mix, in dB, in the range from -96 dB through 0 dB. The default value is 0 dB.
Gets or sets the reverb time, in milliseconds, in the range from 0.001 through 3000. The default value is 1000.
Gets or sets the high-frequency reverb time ratio, in the range from 0.001 through 0.999. The default value is 0.001.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Default value for the property.
Maximum value for the property.
Minimum value for the property.
Represents an equalizer which can be dynamically modified by adding, removing or modifying
.
Initializes a new instance of the class based on an underlying wave stream.
The underlying wave stream.
Gets a list which contains all used by the equalizer.
None of the
Returns a new instance of the class with 10 preset .
The underlying sample source which provides the data for the equalizer.
A new instance of the class with 10 preset .
Returns a new instance of the class with 10 preset .
The underlying sample source which provides the data for the equalizer.
The bandwidth to use for the 10 . The default value is 18.
The default gain to use for the 10 . The default value is zero,
which means that the data passed through the equalizer won't be affected by the .
A new instance of the class with 10 preset .
Reads a sequence of samples from the underlying , applies the equalizer
effect and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Represents an EqualizerFilter for a single channel.
Initializes a new instance of the class.
The sampleRate of the audio data to process.
The center frequency to adjust.
The bandWidth.
The gain value in dB.
Gets or sets the gain value in dB.
Gets or sets the bandwidth.
Gets the frequency.
Gets the samplerate.
Returns a copy of the .
A copy of the
Processes an array of input samples.
The input samples to process.
The zero-based offset in the buffer to start at.
The number of samples to process.
Specifies the channel to process as a zero-based index.
The total number of channels.
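A peaking-EQ band of this kind is typically realized as a biquad filter. The sketch below computes standard peaking-EQ coefficients following the well-known RBJ audio-EQ cookbook formulas (hypothetical Python; CSCore's channel filter derives its coefficients internally from the bandwidth parameter, and the exact formulation may differ):

```python
import math

def peaking_coefficients(sample_rate, frequency, q, gain_db):
    """Biquad peaking-EQ coefficients (RBJ audio-EQ cookbook),
    normalized so that a0 == 1."""
    a = 10.0 ** (gain_db / 40.0)              # amplitude from dB gain
    w0 = 2.0 * math.pi * frequency / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a
    a0 = 1.0 + alpha / a
    a1 = b1
    a2 = 1.0 - alpha / a
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]
```

With a gain of 0 dB the numerator and denominator coincide, so the band leaves the signal untouched, which matches the documented behavior of a zero default gain.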
Represents an EqualizerFilter which holds an for each channel.
Initializes a new instance of the class.
Initializes a new instance of the class.
The number of channels to use.
The channel filter which should be used for all channels.
Gets all underlying as a dictionary where the key represents the channel
index and the value to itself.
Gets the average frequency of all .
Gets or sets the average gain value of all .
When using the setter of the property, the new gain value will be applied to all
.
Returns an enumerator that iterates through the .
A that can be used to iterate through the .
Returns an enumerator that iterates through a .
An object that can be used to iterate through the .
Returns a new instance of the class.
The number of channels to use.
The samplerate of the data to process.
The frequency of the filter.
The bandwidth.
The gain value.
A new instance of the class.
Defines possible values for the property.
The default value is .
180° Phase.
90° Phase.
Default value for .
0° Phase.
-90° Phase.
-180° Phase.
Defines possible values for the property.
The default value is .
Triangle.
Sine. Default value.
Defines possible values for the property.
The default value is .
Triangle - Default value.
Square
A pitch shifting effect.
The internal pitch shifting code is based on the implementation of
Stephan M. Bernsee smb@dspdimension.com (see http://www.dspdimension.com) and
Michael Knight madmik3@gmail.com (http://sites.google.com/site/mikescoderama/) who
translated Stephan's code to C#.
Both gave the explicit permission to republish the code as part of CSCore under the MS-PL.
Big thanks!
Gets or sets the pitch shift factor.
A pitch shift factor value between 0.5
(one octave down) and 2.0 (one octave up). A value of exactly 1 does not change
the pitch.
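The relation between musical intervals and the pitch shift factor is exponential: each semitone multiplies the factor by 2^(1/12), so the documented range of 0.5 to 2.0 corresponds to -12 through +12 semitones. A small helper (hypothetical, not part of the CSCore API):

```python
def pitch_factor(semitones):
    """Convert a shift in semitones to a pitch-shift factor.
    -12 semitones -> 0.5 (octave down), +12 -> 2.0 (octave up)."""
    return 2.0 ** (semitones / 12.0)
```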
Initializes a new instance of the class.
Underlying base source which provides audio data.
Reads a sequence of samples from the , applies the pitch shifting to them
and advances the position within the stream by the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source including the applied pitch shift.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Provides the ability to use an implementation of the interface to fade waveform-audio data.
Initializes a new instance of the class.
The underlying source to use.
Gets or sets the fade strategy to use.
Reads a sequence of samples from the class and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Compared to the , the property of the accepts any value.
Initializes a new instance of the class.
The underlying base source.
Gets or sets the volume. A value of 1.0 will set the volume to 100%. A value of 0.0 will set the volume to 0%.
Since there is no validation of the value, this property can be used to set the gain value to any value.
Gets or sets a value indicating whether the method should clip overflows. The default value is true.
true if the method should clip overflows; otherwise, false.
Clipping the overflows means that all samples which are not in the range from -1 to 1 will be clipped to that range.
For example, if a sample has a value of 1.3, it will be clipped to a value of 1.0.
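The gain-and-clip behavior described above amounts to the following sketch (illustrative Python, not the actual C# implementation):

```python
def apply_gain(samples, volume, clip_overflows=True):
    """Scale float samples by a gain factor and optionally clip the
    result to the valid [-1.0, 1.0] range."""
    out = [s * volume for s in samples]
    if clip_overflows:
        out = [max(-1.0, min(1.0, s)) for s in out]
    return out
```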
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read. After reading the samples, the specified gain value will get applied and the overflows will be clipped (optionally).
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Provides a mechanism for fading in/out audio.
The - and the -property must be set before the
can be used.
Gets a value which indicates whether the current volume equals the target volume. If not, the
property returns false.
Gets or sets the sample rate to use.
Gets or sets the number of channels.
Occurs when the fading process has reached its target volume.
Applies the fading algorithm to the waveform-audio data.
Float-array which contains IEEE-Float samples.
Zero-based offset of the .
The number of samples, the fading algorithm has to be applied on.
Starts fading from a specified volume to another volume.
The start volume in the range from 0.0 to 1.0. If no value gets specified, the default volume will be used.
The default volume is typically 100% or the current volume.
The target volume in the range from 0.0 to 1.0.
The duration.
Starts fading from a specified volume to another volume.
The start volume in the range from 0.0 to 1.0. If no value gets specified, the default volume will be used.
The default volume is typically 100% or the current volume.
The target volume in the range from 0.0 to 1.0.
The duration in milliseconds.
Stops the fading.
Provides a linear fading algorithm.
Gets the current volume.
Gets the target volume.
Occurs when the fading process has reached its target volume.
Gets a value which indicates whether the class is fading.
True means that the class is fading audio data.
False means that the equals the .
Gets or sets the sample rate to use.
Gets or sets the number of channels.
Starts fading from a specified volume to another volume.
The start volume in the range from 0.0 to 1.0. If no value gets specified, the default volume will be used.
The default volume is typically 100% or the current volume.
The target volume in the range from 0.0 to 1.0.
The duration.
Starts fading from a specified volume to another volume.
The start volume in the range from 0.0 to 1.0. If no value gets specified, the default volume will be used.
The default volume is typically 100% or the current volume.
The target volume in the range from 0.0 to 1.0.
The duration in milliseconds.
Stops the fading.
Applies the fading algorithm to the .
Float-array which contains IEEE-Float samples.
Zero-based offset of the .
The number of samples, the fading algorithm has to be applied on.
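A linear fading algorithm simply interpolates the volume between the start and target values across the processed samples. A minimal single-block sketch (hypothetical Python; the real implementation tracks the ramp across successive calls using the sample rate, channel count, and fade duration):

```python
def linear_fade(samples, start_volume, target_volume):
    """Apply a linear volume ramp from start_volume to target_volume
    across one block of float samples."""
    n = len(samples)
    out = []
    for i, s in enumerate(samples):
        t = i / (n - 1) if n > 1 else 1.0   # ramp position in [0, 1]
        out.append(s * (start_volume + (target_volume - start_volume) * t))
    return out
```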
Provides data for the event.
Gets the individual peak value for each channel.
Gets the master peak value.
Initializes a new instance of the class.
The channel peak values.
The master peak value.
is null or empty.
Represents a peak meter.
Gets the average value of all .
Gets the peak values for all channels.
Obsolete
Gets or sets the interval at which to raise the event.
The interval is specified in milliseconds.
Event which gets raised when a new peak value is available.
Initializes a new instance of the class.
Underlying base source which provides audio data.
source
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Sets all ChannelPeakValues to zero and resets the amount of processed blocks.
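Per-channel peak values of interleaved audio can be computed as in this sketch (illustrative Python; assumes interleaved float samples, with the master peak taken as the maximum over all channels):

```python
def peaks(samples, channels):
    """Per-channel peak (maximum absolute sample) of interleaved
    audio, plus the master peak as the maximum of all channels."""
    channel_peaks = [0.0] * channels
    for i, s in enumerate(samples):
        ch = i % channels                       # deinterleave by index
        channel_peaks[ch] = max(channel_peaks[ch], abs(s))
    return channel_peaks, max(channel_peaks)
```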
Converts a 32-bit PCM to a .
Initializes a new instance of the class.
The underlying 32-bit PCM instance which has to get converted to a .
is null.
The format of the is not 32-bit PCM.
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Converts a to a 32-bit PCM .
Initializes a new instance of the class.
The underlying which has to get converted to a 32-bit PCM .
is null.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Converts a to a 32-bit IeeeFloat .
Initializes a new instance of the class.
The underlying which has to get converted to a 32-bit IeeeFloat .
is null.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Converts a to a 24-bit PCM .
Initializes a new instance of the class.
The underlying which has to get converted to a 24-bit PCM .
is null.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Converts a to an 8-bit PCM .
Initializes a new instance of the class.
The underlying which has to get converted to an 8-bit PCM .
is null.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Converts a 32-bit IeeeFloat to a .
Initializes a new instance of the class.
The underlying 32-bit IeeeFloat instance which has to get converted to a .
is null.
The format of the is not 32-bit IeeeFloat.
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Converts a to a 16-bit PCM .
Initializes a new instance of the class.
The underlying which has to get converted to a 16-bit PCM .
is null.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Converts a to a .
The underlying source which provides samples.
The buffer to use for reading from the .
Initializes a new instance of the class.
The underlying which has to get converted to a .
The of the Output-.
The of the Output-.
The is null.
Invalid number of bits per sample specified by the argument.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Gets the of the output waveform-audio data.
Gets or sets the current position.
Gets the length of the waveform-audio data.
Gets a value indicating whether the supports seeking.
Disposes the instance.
Disposes the underlying .
Not used.
Calls .
Converts a 16-bit PCM to a .
Initializes a new instance of the class.
The underlying 16-bit PCM instance which has to get converted to a .
is null.
The format of the is not 16-bit PCM.
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
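The underlying 16-bit conversion is the classic normalization of each signed 16-bit sample by 32768, yielding floats in roughly [-1.0, 1.0). A purely illustrative Python sketch of the idea:

```python
import struct

def pcm16_to_float(raw):
    """Decode little-endian signed 16-bit PCM bytes to floats by
    dividing each sample by 32768."""
    count = len(raw) // 2
    ints = struct.unpack("<%dh" % count, raw[:count * 2])
    return [v / 32768.0 for v in ints]
```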
Converts a 24-bit PCM to a .
Initializes a new instance of the class.
The underlying 24-bit PCM instance which has to get converted to a .
is null.
The format of the is not 24-bit PCM.
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
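24-bit PCM stores each sample in three little-endian bytes, so the conversion must assemble and sign-extend the value before normalizing by 2^23. An illustrative sketch (not the actual C# code):

```python
def pcm24_to_float(raw):
    """Decode little-endian signed 24-bit PCM (3 bytes per sample)
    to floats by sign-extending and dividing by 2**23."""
    out = []
    for i in range(0, len(raw) - 2, 3):
        v = raw[i] | (raw[i + 1] << 8) | (raw[i + 2] << 16)
        if v & 0x800000:              # sign bit set: extend to negative
            v -= 0x1000000
        out.append(v / 8388608.0)     # normalize by 2**23
    return out
```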
Converts a 8-bit PCM to a .
Initializes a new instance of the class.
The underlying 8-bit PCM instance which has to get converted to a .
is null.
The format of the is not 8-bit PCM.
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Converts a to a .
The underlying source which provides the raw data.
The buffer to use for reading from the .
Initializes a new instance of the class.
The underlying instance which has to get converted to a .
The argument is null.
Reads a sequence of samples from the and advances the position within the stream by the
number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Gets the of the waveform-audio data.
Gets or sets the current position in samples.
Gets the length of the waveform-audio data in samples.
Gets a value indicating whether the supports seeking.
Disposes the .
Disposes the .
Not used.
Finalizes an instance of the class.
Returns an implementation of the interface which converts the specified to a .
The instance to convert.
Returns an implementation of the interface which converts the specified to a .
is null.
The of the is not supported.
A thread-safe (synchronized) wrapper around the specified .
The type of the underlying .
The type of the data read by the method.
Initializes a new instance of the class.
The underlying source to synchronize.
Gets the output of the .
Gets or sets the position of the .
Gets the length of the .
Gets a value indicating whether the supports seeking.
Gets or sets the .
Reads a sequence of elements from the and advances its position by the
number of elements read.
An array of elements. When this method returns, the contains the specified
array of elements with the values between and ( +
- 1) replaced by the elements read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of elements to read from the current source.
The total number of elements read into the buffer.
Defines an explicit conversion of a to its
.
Instance of the .
The of the .
Disposes the and releases all allocated resources.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Disposes the and releases all allocated resources.
Finalizes an instance of the class.
Buffered WaveSource which overwrites the allocated memory once the internal buffer is full.
Gets or sets a value which specifies whether the method should clear the specified buffer with zeros before reading any data.
Gets the maximum size of the buffer in bytes.
The maximum size of the buffer in bytes.
Initializes a new instance of the class with a default buffersize of 5 seconds.
The WaveFormat of the source.
Initializes a new instance of the class.
The WaveFormat of the source.
Buffersize in bytes.
Adds new data to the internal buffer.
A byte-array which contains the data.
Zero-based offset in the (specified in bytes).
Number of bytes to add to the internal buffer.
Number of added bytes.
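The overwrite-when-full behavior corresponds to a classic ring buffer: once the fixed-size buffer is full, new writes drop the oldest data. A minimal byte-oriented sketch (hypothetical Python, much simplified compared to the actual class):

```python
class RingBuffer:
    """Fixed-size byte buffer that overwrites the oldest data once
    full, in the spirit of a buffering wave source."""

    def __init__(self, size):
        self._buf = bytearray(size)
        self._read = 0      # index of the oldest stored byte
        self._count = 0     # number of stored bytes

    def write(self, data):
        size = len(self._buf)
        for b in data:
            pos = (self._read + self._count) % size
            self._buf[pos] = b
            if self._count < size:
                self._count += 1
            else:
                self._read = (self._read + 1) % size  # drop oldest byte
        return len(data)

    def read(self, count):
        count = min(count, self._count)
        out = bytearray()
        for _ in range(count):
            out.append(self._buf[self._read])
            self._read = (self._read + 1) % len(self._buf)
            self._count -= 1
        return bytes(out)
```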
Reads a sequence of bytes from the internal buffer of the and advances the position within the internal buffer by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the internal buffer.
The zero-based byte offset in the at which to begin storing the data
read from the internal buffer.
The maximum number of bytes to read from the internal buffer.
The total number of bytes read into the .
Gets the of the waveform-audio data.
Not supported.
Gets the number of stored bytes inside of the internal buffer.
Gets a value indicating whether the supports seeking.
Disposes the and its internal buffer.
Disposes the and its internal buffer.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Default destructor which calls .
Reads data from the and stores the read data in a buffer.
Initializes a new instance of the class.
The to buffer.
Size of the buffer.
is out of range.
Reads a sequence of bytes from internal buffer and advances the position by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the .
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the .
The total number of bytes read into the buffer.
BufferSource
Resets/Clears the buffer.
BufferSource
Gets or sets the position of the source.
BufferSource
Gets the length of the source.
Disposes the and releases all allocated resources.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Fires the event after every block read.
Occurs when the method reads a block.
If the method reads n blocks during a single call, the event will get fired n times.
Initializes a new instance of the class.
Underlying base source which provides audio data.
source
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read. Fires the event for each block it reads (one block = (number of channels) samples).
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Provides data for the event.
Gets the sample of the left channel.
Gets the sample of the right channel.
Gets the samples of all channels if the number of is greater than or equal to three.
If the number of is less than three, the value of the property is null.
Gets the number of channels.
Initializes a new instance of the class.
The samples.
The index inside of the -array.
The number of channels.
Provides data for the event.
Type of the array.
Gets the number of read elements.
Gets the array which contains the read data.
Initializes a new instance of the class.
The read data.
The number of read elements.
Converts a mono source to a stereo source.
Initializes a new instance of the class.
The underlying mono source.
The has more or less than one channel.
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Gets or sets the position in samples.
Gets the length in samples.
Gets the of the waveform-audio data.
Disposes the and the underlying .
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
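Converting mono to stereo amounts to duplicating each sample into a left/right pair of the interleaved output, which is also why the converted source reports twice the mono length. A sketch of the idea (illustrative Python):

```python
def mono_to_stereo(samples):
    """Duplicate every mono sample into a left/right pair of an
    interleaved stereo stream."""
    out = []
    for s in samples:
        out.extend((s, s))   # same value on left and right channel
    return out
```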
Notifies the client when a certain amount of data has been read.
Can be used as a kind of timer for playback.
Initializes a new instance of the class.
Underlying base source which provides audio data.
source is null.
Gets or sets the interval in blocks. One block equals one sample for each channel.
Gets or sets the interval in milliseconds.
Occurs when a specified amount of data has been read.
The - or the -property specifies how many samples must be
read to trigger the event.
Reads a sequence of samples from the and advances the position within the
stream by the number of samples read. When [(total number of samples read) / (number of channels)] %
 = 0, the event is raised.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
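The interval-triggered notification described above can be sketched as follows. This is a simplified Python illustration of the general technique, not CSCore's actual implementation; the class name `NotifyingReader` and its parameters are made up for this sketch:

```python
class NotifyingReader:
    """Wrap a sample source and invoke a callback every `interval` blocks.

    One block equals one sample for each channel."""

    def __init__(self, read_samples, channels, interval, on_interval):
        self._read_samples = read_samples  # callable: count -> list of floats
        self._channels = channels
        self._interval = interval          # interval in blocks
        self._on_interval = on_interval    # called with the total block count
        self._blocks_read = 0

    def read(self, count):
        samples = self._read_samples(count)
        for _ in range(len(samples) // self._channels):
            self._blocks_read += 1
            # fire whenever (total blocks read) % interval == 0
            if self._blocks_read % self._interval == 0:
                self._on_interval(self._blocks_read)
        return samples
```

For example, with two channels and an interval of two blocks, reading eight samples (four blocks) triggers the callback twice.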
Provides control over the balance between the left and the right channel of an audio source.
Gets or sets the balance. The valid range is from -1 to 1. A value of -1 mutes the right channel; 1 mutes the left channel.
The value is not within the specified range.
Initializes a new instance of the class.
Underlying base source which provides audio data.
Source has to be stereo.
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
The number of read samples has to be a multiple of two.
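A rough illustration of such a balance control on interleaved stereo samples. This is a hedged Python sketch, not the library's C# implementation; the function name `apply_balance` and the linear gain law are assumptions for this example:

```python
def apply_balance(samples, balance):
    """Scale interleaved stereo samples [L, R, L, R, ...] by a balance in [-1, 1].

    balance = -1 mutes the right channel, +1 mutes the left channel,
    0 leaves both channels untouched."""
    if not -1.0 <= balance <= 1.0:
        raise ValueError("balance must be within [-1, 1]")
    if len(samples) % 2 != 0:
        raise ValueError("number of samples must be a multiple of two")
    left_gain = min(1.0, 1.0 - balance)   # attenuate left as balance goes to +1
    right_gain = min(1.0, 1.0 + balance)  # attenuate right as balance goes to -1
    out = []
    for i in range(0, len(samples), 2):
        out.append(samples[i] * left_gain)
        out.append(samples[i + 1] * right_gain)
    return out
```

The multiple-of-two requirement falls out naturally: each balance step consumes one left and one right sample.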
Notifies the client when a specific number of samples has been read and when the method is called.
Compared to the , neither of the two events provides the read data.
Initializes a new instance of the class.
Underlying base source which provides audio data.
source is null.
Gets or sets the interval (in which to fire the event) in blocks. One block equals one
sample for each channel.
Gets or sets the interval (in which to fire the event) in milliseconds.
Occurs when the method is called.
Occurs when a specified amount of data has been read.
The - or the -property specifies how many samples must be
read to trigger the event.
Reads a sequence of samples from the and advances the position within the
stream by the number of samples read. Triggers the event, and when [(total number of samples read) /
(number of channels)] % = 0, the event is raised.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Provides the ability to adjust the volume of an audio stream.
The epsilon which is used to compare for almost-equality of the volume in .
Gets or sets the volume specified by a value in the range from 0.0 to 1.0.
Initializes a new instance of the class.
The underlying base source.
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read. After reading, the volume of the read samples is adjusted.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
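Adjusting the volume of read samples comes down to scaling every sample by the gain. A minimal sketch in Python (the function name `apply_volume` is an assumption for this example, and real implementations may use a different gain curve):

```python
def apply_volume(samples, volume):
    """Scale every sample by a volume in the range [0.0, 1.0]."""
    if not 0.0 <= volume <= 1.0:
        raise ValueError("volume must be within [0, 1]")
    # linear gain: each sample is simply multiplied by the volume
    return [s * volume for s in samples]
```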
Represents an implementation of the interface which provides the data recorded by a specified object.
Occurs when new data is available.
Gets the underlying instance.
Initializes a new instance of the class with a default bufferSize of 5 seconds.
The soundIn which provides recorded data.
Note that soundIn has to be already initialized.
Note that old data (as determined by the bufferSize) gets overwritten.
For example, with a bufferSize of 5 seconds, data recorded 6 seconds ago is no longer available.
Initializes a new instance of the class.
The soundIn which provides recorded data.
Size of the internal buffer in bytes.
Note that soundIn has to be already initialized.
Note that old data (as determined by the bufferSize) gets overwritten.
For example, with a bufferSize of 5 seconds, data recorded 6 seconds ago is no longer available.
Reads a sequence of bytes from the internal stream which holds recorded data and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Gets the of the recorded data.
Gets or sets the current position in bytes. This property is currently not supported.
Gets the length in bytes. This property is currently not supported.
Gets a value indicating whether the supports seeking.
Gets or sets a value which indicates whether the method should always provide the requested amount of data.
If the internal buffer cannot offer the requested amount of data, the rest of the requested bytes will be filled up with zeros.
Disposes the .
Disposes the .
Not used.
Destructor of the class which calls the method.
Converts a stereo source to a mono source.
Initializes a new instance of the class.
The underlying stereo source.
The has more or fewer than two channels.
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Gets or sets the position in samples.
Gets the length in samples.
Gets the of the waveform-audio data.
Disposes the and the underlying .
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
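A common way to fold stereo down to mono, as described above, is to average each interleaved left/right pair. A hedged Python sketch of that technique (the function name `stereo_to_mono` and the averaging strategy are assumptions; other downmix weightings exist):

```python
def stereo_to_mono(samples):
    """Average each interleaved [L, R] pair into a single mono sample."""
    if len(samples) % 2 != 0:
        raise ValueError("number of samples must be a multiple of two")
    # (L + R) / 2 keeps the downmix from clipping when both channels are full scale
    return [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples), 2)]
```

The output is half as long as the input, mirroring how the converted source reports half the sample length of the underlying stereo source.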
A Stream which can be used for endless looping.
Initializes a new instance of the class.
The underlying .
Gets or sets whether looping is enabled.
Occurs when the underlying reaches its end.
If the property is set to true, the Position of the
will be reset to zero.
Reads from the underlying . If the
does not provide any more data, its position is reset to zero.
Buffer which receives the read data.
Zero-based offset in the at which to begin storing data.
The maximum number of bytes to read.
The actual number of bytes read.
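The looping behavior described above can be sketched with any seekable stream: when a read returns no data, seek back to the start and keep reading. A simplified Python illustration (the function name `read_looping` is made up for this sketch; the real class raises an event at the wrap-around point as well):

```python
import io

def read_looping(stream, count):
    """Read `count` bytes; when the stream is exhausted, seek to 0 and continue."""
    data = bytearray()
    while len(data) < count:
        chunk = stream.read(count - len(data))
        if not chunk:           # end of the underlying stream reached
            stream.seek(0)      # reset the position to zero and keep looping
            chunk = stream.read(count - len(data))
            if not chunk:       # empty underlying stream: give up
                break
        data.extend(chunk)
    return bytes(data)
```

For example, looping a 3-byte stream while requesting 7 bytes wraps around twice.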
Generates a sine wave.
Gets or sets the frequency of the sine wave.
Gets or sets the amplitude of the sine wave.
Gets or sets the phase of the sine wave.
1000Hz, 0.5 amplitude, 0.0 phase
Initializes a new instance of the class.
Specifies the frequency of the sine wave in Hz.
Specifies the amplitude of the sine wave. Use a value between 0 and 1.
Specifies the initial phase. Use a value between 0 and 1.
Reads a sequence of samples from the .
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
Gets the of the waveform-audio data.
Not supported.
Not supported.
Gets a value indicating whether the supports seeking.
Not used.
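A sine generator of this kind produces samples of the form amplitude * sin(2π * (frequency * t + phase)), with the phase given as a fraction of a full period as described above. A hedged Python sketch (the function name `generate_sine` and the explicit `sample_rate` parameter are assumptions for this example):

```python
import math

def generate_sine(frequency, amplitude, phase, sample_rate, count):
    """Generate `count` samples of a sine wave.

    frequency is in Hz; amplitude and the initial phase are values
    between 0 and 1 (the phase is a fraction of a full period)."""
    samples = []
    for n in range(count):
        t = n / sample_rate  # time of sample n in seconds
        samples.append(amplitude * math.sin(2 * math.pi * (frequency * t + phase)))
    return samples
```

A continuously running generator would carry the accumulated phase across read calls instead of restarting at n = 0.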
Defines all known encoding types. Primarily used in the class. See
.
WAVE_FORMAT_UNKNOWN, Microsoft Corporation
WAVE_FORMAT_PCM Microsoft Corporation
WAVE_FORMAT_ADPCM Microsoft Corporation
WAVE_FORMAT_IEEE_FLOAT Microsoft Corporation
WAVE_FORMAT_VSELP Compaq Computer Corp.
WAVE_FORMAT_IBM_CVSD IBM Corporation
WAVE_FORMAT_ALAW Microsoft Corporation
WAVE_FORMAT_MULAW Microsoft Corporation
WAVE_FORMAT_DTS Microsoft Corporation
WAVE_FORMAT_DRM Microsoft Corporation
WAVE_FORMAT_WMAVOICE9
WAVE_FORMAT_OKI_ADPCM OKI
WAVE_FORMAT_DVI_ADPCM Intel Corporation
WAVE_FORMAT_IMA_ADPCM Intel Corporation
WAVE_FORMAT_MEDIASPACE_ADPCM Videologic
WAVE_FORMAT_SIERRA_ADPCM Sierra Semiconductor Corp
WAVE_FORMAT_G723_ADPCM Antex Electronics Corporation
WAVE_FORMAT_DIGISTD DSP Solutions, Inc.
WAVE_FORMAT_DIGIFIX DSP Solutions, Inc.
WAVE_FORMAT_DIALOGIC_OKI_ADPCM Dialogic Corporation
WAVE_FORMAT_MEDIAVISION_ADPCM Media Vision, Inc.
WAVE_FORMAT_CU_CODEC Hewlett-Packard Company
WAVE_FORMAT_YAMAHA_ADPCM Yamaha Corporation of America
WAVE_FORMAT_SONARC Speech Compression
WAVE_FORMAT_DSPGROUP_TRUESPEECH DSP Group, Inc
WAVE_FORMAT_ECHOSC1 Echo Speech Corporation
WAVE_FORMAT_AUDIOFILE_AF36, Virtual Music, Inc.
WAVE_FORMAT_APTX Audio Processing Technology
WAVE_FORMAT_AUDIOFILE_AF10, Virtual Music, Inc.
WAVE_FORMAT_PROSODY_1612, Aculab plc
WAVE_FORMAT_LRC, Merging Technologies S.A.
WAVE_FORMAT_DOLBY_AC2, Dolby Laboratories
WAVE_FORMAT_GSM610, Microsoft Corporation
WAVE_FORMAT_MSNAUDIO, Microsoft Corporation
WAVE_FORMAT_ANTEX_ADPCME, Antex Electronics Corporation
WAVE_FORMAT_CONTROL_RES_VQLPC, Control Resources Limited
WAVE_FORMAT_DIGIREAL, DSP Solutions, Inc.
WAVE_FORMAT_DIGIADPCM, DSP Solutions, Inc.
WAVE_FORMAT_CONTROL_RES_CR10, Control Resources Limited
WAVE_FORMAT_NMS_VBXADPCM
WAVE_FORMAT_CS_IMAADPCM
WAVE_FORMAT_ECHOSC3
WAVE_FORMAT_ROCKWELL_ADPCM
WAVE_FORMAT_ROCKWELL_DIGITALK
WAVE_FORMAT_XEBEC
WAVE_FORMAT_G721_ADPCM
WAVE_FORMAT_G728_CELP
WAVE_FORMAT_MSG723
WAVE_FORMAT_MPEG, Microsoft Corporation
WAVE_FORMAT_RT24
WAVE_FORMAT_PAC
WAVE_FORMAT_MPEGLAYER3, ISO/MPEG Layer3 Format Tag
WAVE_FORMAT_LUCENT_G723
WAVE_FORMAT_CIRRUS
WAVE_FORMAT_ESPCM
WAVE_FORMAT_VOXWARE
WAVE_FORMAT_CANOPUS_ATRAC
WAVE_FORMAT_G726_ADPCM
WAVE_FORMAT_G722_ADPCM
WAVE_FORMAT_DSAT_DISPLAY
WAVE_FORMAT_VOXWARE_BYTE_ALIGNED
WAVE_FORMAT_VOXWARE_AC8
WAVE_FORMAT_VOXWARE_AC10
WAVE_FORMAT_VOXWARE_AC16
WAVE_FORMAT_VOXWARE_AC20
WAVE_FORMAT_VOXWARE_RT24
WAVE_FORMAT_VOXWARE_RT29
WAVE_FORMAT_VOXWARE_RT29HW
WAVE_FORMAT_VOXWARE_VR12
WAVE_FORMAT_VOXWARE_VR18
WAVE_FORMAT_VOXWARE_TQ40
WAVE_FORMAT_SOFTSOUND
WAVE_FORMAT_VOXWARE_TQ60
WAVE_FORMAT_MSRT24
WAVE_FORMAT_G729A
WAVE_FORMAT_MVI_MVI2
WAVE_FORMAT_DF_G726
WAVE_FORMAT_DF_GSM610
WAVE_FORMAT_ISIAUDIO
WAVE_FORMAT_ONLIVE
WAVE_FORMAT_SBC24
WAVE_FORMAT_DOLBY_AC3_SPDIF
WAVE_FORMAT_MEDIASONIC_G723
WAVE_FORMAT_PROSODY_8KBPS
WAVE_FORMAT_ZYXEL_ADPCM
WAVE_FORMAT_PHILIPS_LPCBB
WAVE_FORMAT_PACKED
WAVE_FORMAT_MALDEN_PHONYTALK
WAVE_FORMAT_GSM
WAVE_FORMAT_G729
WAVE_FORMAT_G723
WAVE_FORMAT_ACELP
WAVE_FORMAT_RAW_AAC1
WAVE_FORMAT_RHETOREX_ADPCM
WAVE_FORMAT_IRAT
WAVE_FORMAT_VIVO_G723
WAVE_FORMAT_VIVO_SIREN
WAVE_FORMAT_DIGITAL_G723
WAVE_FORMAT_SANYO_LD_ADPCM
WAVE_FORMAT_SIPROLAB_ACEPLNET
WAVE_FORMAT_SIPROLAB_ACELP4800
WAVE_FORMAT_SIPROLAB_ACELP8V3
WAVE_FORMAT_SIPROLAB_G729
WAVE_FORMAT_SIPROLAB_G729A
WAVE_FORMAT_SIPROLAB_KELVIN
WAVE_FORMAT_G726ADPCM
WAVE_FORMAT_QUALCOMM_PUREVOICE
WAVE_FORMAT_QUALCOMM_HALFRATE
WAVE_FORMAT_TUBGSM
WAVE_FORMAT_MSAUDIO1
Windows Media Audio, WAVE_FORMAT_WMAUDIO2, Microsoft Corporation
Windows Media Audio Professional WAVE_FORMAT_WMAUDIO3, Microsoft Corporation
Windows Media Audio Lossless, WAVE_FORMAT_WMAUDIO_LOSSLESS
Windows Media Audio Professional over SPDIF WAVE_FORMAT_WMASPDIF (0x0164)
WAVE_FORMAT_UNISYS_NAP_ADPCM
WAVE_FORMAT_UNISYS_NAP_ULAW
WAVE_FORMAT_UNISYS_NAP_ALAW
WAVE_FORMAT_UNISYS_NAP_16K
WAVE_FORMAT_CREATIVE_ADPCM
WAVE_FORMAT_CREATIVE_FASTSPEECH8
WAVE_FORMAT_CREATIVE_FASTSPEECH10
WAVE_FORMAT_UHER_ADPCM
WAVE_FORMAT_QUARTERDECK
WAVE_FORMAT_ILINK_VC
WAVE_FORMAT_RAW_SPORT
WAVE_FORMAT_ESST_AC3
WAVE_FORMAT_IPI_HSX
WAVE_FORMAT_IPI_RPELP
WAVE_FORMAT_CS2
WAVE_FORMAT_SONY_SCX
WAVE_FORMAT_FM_TOWNS_SND
WAVE_FORMAT_BTV_DIGITAL
WAVE_FORMAT_QDESIGN_MUSIC
WAVE_FORMAT_VME_VMPCM
WAVE_FORMAT_TPC
WAVE_FORMAT_OLIGSM
WAVE_FORMAT_OLIADPCM
WAVE_FORMAT_OLICELP
WAVE_FORMAT_OLISBC
WAVE_FORMAT_OLIOPR
WAVE_FORMAT_LH_CODEC
WAVE_FORMAT_NORRIS
WAVE_FORMAT_SOUNDSPACE_MUSICOMPRESS
Advanced Audio Coding (AAC) audio in Audio Data Transport Stream (ADTS) format.
The format block is a WAVEFORMATEX structure with wFormatTag equal to WAVE_FORMAT_MPEG_ADTS_AAC.
The WAVEFORMATEX structure specifies the core AAC-LC sample rate and number of channels,
prior to applying spectral band replication (SBR) or parametric stereo (PS) tools, if present.
No additional data is required after the WAVEFORMATEX structure.
http://msdn.microsoft.com/en-us/library/dd317599%28VS.85%29.aspx
MPEG_RAW_AAC
Source wmCodec.h
MPEG-4 audio transport stream with a synchronization layer (LOAS) and a multiplex layer (LATM).
The format block is a WAVEFORMATEX structure with wFormatTag equal to WAVE_FORMAT_MPEG_LOAS.
See .
The WAVEFORMATEX structure specifies the core AAC-LC sample rate and number of channels,
prior to applying spectral SBR or PS tools, if present.
No additional data is required after the WAVEFORMATEX structure.
NOKIA_MPEG_ADTS_AAC
Source wmCodec.h
NOKIA_MPEG_RAW_AAC
Source wmCodec.h
VODAFONE_MPEG_ADTS_AAC
Source wmCodec.h
VODAFONE_MPEG_RAW_AAC
Source wmCodec.h
High-Efficiency Advanced Audio Coding (HE-AAC) stream.
The format block is an HEAACWAVEFORMAT structure. See .
WAVE_FORMAT_DVM
WAVE_FORMAT_VORBIS1 "Og" Original stream compatible
WAVE_FORMAT_VORBIS2 "Pg" Have independent header
WAVE_FORMAT_VORBIS3 "Qg" Have no codebook header
WAVE_FORMAT_VORBIS1P "og" Original stream compatible
WAVE_FORMAT_VORBIS2P "pg" Have independent header
WAVE_FORMAT_VORBIS3P "qg" Have no codebook header
Raw AAC1
Windows Media Audio Voice (WMA Voice)
Extensible
WAVE_FORMAT_DEVELOPMENT
FLAC
WARNING: If MimeType equals "-->", the picture will be downloaded from the web.
Use GetURL() to get the URL of the picture. Otherwise, the data contained in the frame will
be used.
Range from 1 (worst) to 255 (best). Zero -> rating disabled.
-1 -> omit the counter. The default length is 4 bytes. If 4 bytes are not enough to hold the
number, a byte will be added (up to 8 bytes total).
Gets the format string of the timestamp.
The length of the string which has to be parsed.
Exception class for all ID3-Tag related Exceptions.
encoded with ISO-8859-1 [ISO-8859-1] or UTF-8 [UTF-8]
Defines a base class for all time converters. A time converter can be used to convert raw positions (depending on the implementation, i.e. bytes or samples) to a human-readable
.
A for objects.
A for objects.
Converts a back to raw elements, a source works with. The unit of these raw elements depends on the implementation. For more information, see .
The of the source which gets used to convert the .
The to convert to raw elements.
The converted in raw elements.
Converts raw elements to a value. The unit of these raw elements depends on the implementation. For more information, see .
The of the source which gets used to convert the .
The raw elements to convert to a .
The .
Specifies which to use.
Gets the type of the to use.
Gets or sets the arguments to pass to the constructor of the . For more information, see .
Gets or sets a value indicating whether a new instance of the specified should be created each time the queries the .
The default value is false.
Initializes a new instance of the class based on the type of the to use.
Type of the to use.
timeConverterType
Specified type is no time converter.;timeConverterType
Provides s for converting raw time values (e.g. bytes, samples,...) to a and back.
Gets the default instance of the factory.
Registers a new for a specific source type.
The to register.
The source type.
timeConverter is null.
There is already a registered for the specified .
The class uses the source type to choose the best for an . For more information, see .
Unregisters a previously registered .
The source type, that got passed to the method previously.
The specified source type could not be found.
Gets the for the specified .
The object to get the for.
The type of the .
The best for the specified .
The specified is null.
Specified type is no AudioSource.;type
or
No registered time converter for the specified source type was found.
or
Multiple possible time converters, for the specified source type, were found. Specify which time converter to use, through the .
The chooses the best for the specified .
If there is no applied to the object (the ), it looks up the inheritance hierarchy (interfaces included) of the object
and searches for all registered source types. If there is a match, it returns the associated . If there are more or fewer than one match but no
, it throws an exception.
Gets the for the specified source type.
The type of the source.
The best for the specified source type.
Specified type is no AudioSource.;type
or
No registered time converter for the specified source type was found.
or
Multiple possible time converters, for the specified source type, were found. Specify which time converter to use, through the .
The chooses the best for the specified source type.
If there is no applied to the object, it looks up the inheritance hierarchy (interfaces included) of the object
and searches for all registered source types. If there is a match, it returns the associated . If there are more or fewer than one match but no
, it throws an exception.
Gets the for the specified .
The to get the associated for.
The best for the specified .
Specified type is no AudioSource.;type
or
No registered time converter for the specified source type was found.
or
Multiple possible time converters, for the specified source type, were found. Specify which time converter to use, through the .
The chooses the best for the specified .
If there is no applied to the object (the ), it looks up the inheritance hierarchy (interfaces included) of the object
and searches for all registered source types. If there is a match, it returns the associated . If there are more or fewer than one match but no
, it throws an exception.
Clears the internal cache.
Defines a 3D vector.
Retrieves or sets the x component of the 3D vector.
Retrieves or sets the y component of the 3D vector.
Retrieves or sets the z component of the 3D vector.
Initializes a new instance of the structure.
The value to use for the x, y and z component of the 3D vector.
Initializes a new instance of the structure.
The x component of the 3D vector.
The y component of the 3D vector.
The z component of the 3D vector.
Returns a string that represents the 3D vector.
A string that represents the 3D vector.
This class is based on the CUETools.NET BitReader (see http://sourceforge.net/p/cuetoolsnet/code/ci/default/tree/CUETools.Codecs/BitReader.cs)
The author "Grigory Chudov" explicitly gave the permission to use the source as part of the cscore source code which got licensed under the ms-pl.
Represents a read- and writeable buffer which can hold a specified number of elements.
Specifies the type of the elements to store.
Initializes a new instance of the class.
Size of the buffer.
Adds new data to the internal buffer.
Array which contains the data.
Zero-based offset in the (specified in "elements").
Number of elements to add to the internal buffer.
Number of added elements.
Reads a sequence of elements from the internal buffer of the .
An array of elements. When this method returns, the contains the specified
array with the values between and ( +
- 1) replaced by the elements read from the internal buffer.
The zero-based offset in the at which to begin storing the data
read from the internal buffer.
The maximum number of elements to read from the internal buffer.
The total number of elements read into the .
Gets the size of the internal buffer.
Gets the number of buffered elements.
Clears the internal buffer.
Disposes the and releases the internally used buffer.
Disposes the and releases the internally used buffer.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Default destructor which calls the method.
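A fixed-capacity buffer of this kind is typically implemented as a ring (circular) buffer, where a read pointer and an element count wrap around the backing array. A hedged Python sketch of the write/read behavior described above (the class name `FixedSizeBuffer` mirrors the docs, but the internals here are an assumption, not CSCore's code):

```python
class FixedSizeBuffer:
    """A FIFO ring buffer holding at most `size` elements."""

    def __init__(self, size):
        self._data = [None] * size
        self._size = size
        self._read = 0    # index of the oldest buffered element
        self._count = 0   # number of currently buffered elements

    def write(self, array, offset, count):
        """Copy up to `count` elements from array[offset:]; returns the number stored."""
        stored = 0
        while stored < count and self._count < self._size:
            write_pos = (self._read + self._count) % self._size
            self._data[write_pos] = array[offset + stored]
            self._count += 1
            stored += 1
        return stored

    def read(self, array, offset, count):
        """Copy up to `count` buffered elements into array[offset:]; returns the number read."""
        read = 0
        while read < count and self._count > 0:
            array[offset + read] = self._data[self._read]
            self._read = (self._read + 1) % self._size
            self._count -= 1
            read += 1
        return read
```

Note that this sketch rejects excess writes (returning fewer added elements), matching the documented return values; the recording buffer described earlier instead overwrites its oldest data.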
Represents a complex number.
A complex number with a total length of zero.
Imaginary component of the complex number.
Real component of the complex number.
Initializes a new instance of the structure.
The real component of the complex number.
The imaginary component of the complex number will be set to zero.
Initializes a new instance of the structure.
The real component of the complex number.
The imaginary component of the complex number.
Gets the absolute value of the complex number.
Defines an implicit conversion of a complex number to a single-precision floating-point number.
Complex number.
The absolute value of the .
Defines an implicit conversion of a complex number to a double-precision floating-point number.
Complex number.
The absolute value of the .
Implements the operator ==.
The complex1.
The complex2.
The result of the operator.
Implements the operator !=.
The complex1.
The complex2.
The result of the operator.
Indicates whether the current complex value is equal to another complex value.
A complex value to compare with this complex value.
true if the current complex value is equal to the complex value; otherwise, false.
Determines whether the specified , is equal to this instance.
The to compare with this instance.
true if the specified is equal to this instance; otherwise, false.
Returns a hash code for this instance.
A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table.
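The absolute value and equality semantics above can be summarized in a few lines: the absolute value is sqrt(real² + imaginary²), and two complex values are equal when both components match. A minimal Python sketch (the class name `Complex` and the `value` property are chosen for this example):

```python
import math

class Complex:
    """Minimal complex value mirroring the absolute-value and equality semantics above."""

    def __init__(self, real, imaginary=0.0):
        self.real = real
        self.imaginary = imaginary  # defaults to zero, like the one-argument constructor

    @property
    def value(self):
        # absolute value = sqrt(real^2 + imaginary^2)
        return math.hypot(self.real, self.imaginary)

    def __eq__(self, other):
        return (isinstance(other, Complex)
                and self.real == other.real
                and self.imaginary == other.imaginary)

    def __hash__(self):
        # equal values must hash equally for use in hash tables
        return hash((self.real, self.imaginary))
```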
This class is based on the CUETools.NET project (see http://sourceforge.net/p/cuetoolsnet/)
The author "Grigory Chudov" explicitly gave the permission to use the source as part of the cscore source code which got licensed under the ms-pl.
This class is based on the CUETools.NET project (see http://sourceforge.net/p/cuetoolsnet/)
The author "Grigory Chudov" explicitly gave the permission to use the source as part of the cscore source code which got licensed under the ms-pl.
This class is based on the CUETools.NET project (see http://sourceforge.net/p/cuetoolsnet/)
The author "Grigory Chudov" explicitly gave the permission to use the source as part of the cscore source code which got licensed under the ms-pl.
Values that are used in activation calls to indicate the execution contexts in which an object is to be run.
The code that creates and manages objects of this class is a DLL that runs in the same process as the caller of the function specifying the class context.
Indicates a handler DLL, which runs in the same process as the caller.
Indicates a server executable, which runs on the same machine but in a different process than the caller.
Obsolete.
Indicates a server executable, which runs on a different machine than the caller.
Obsolete.
Reserved.
Reserved.
Reserved.
Reserved.
Indicates that code should not be allowed to be downloaded from the Directory Service (if any) or the Internet.
Reserved.
Specify this flag if you want the activation to fail when custom marshalling is used.
Enables the downloading of code from the directory service or the Internet.
Indicates that no log messages about activation failure should be written to the Event Log.
Indicates that activate-as-activator capability is disabled for this activation only.
Indicates that activate-as-activator capability is enabled for this activation only.
Indicates that activation should begin from the default context of the current apartment.
Activate or connect to a 32-bit version of the server; fail if one is not registered.
Activate or connect to a 64-bit version of the server; fail if one is not registered.
When this flag is specified, COM uses the impersonation token of the thread, if one is present, for the activation request made by the thread. When this flag is not specified or if the thread does not have an impersonation token, COM uses the process token of the thread's process for the activation request made by the thread.
Indicates activation is for an app container. Reserved for internal use.
Specify this flag for Interactive User activation behavior for As-Activator servers.
Used for loading Proxy/Stub DLLs.
Bitwise combination of the and the constants.
Bitwise combination of the , the and the constants.
Bitwise combination of the and the constants.
Managed implementation of the interface. See .
Initializes a new instance of the class.
Underlying .
Initializes a new instance of the class.
Underlying .
Indicates whether the underlying stream should be disposed on .
Creates a new stream object with its own seek pointer that references the same bytes as the original stream.
When this method returns, contains the new stream object. This parameter is passed uninitialized.
HRESULT
Ensures that any changes made to a stream object that is open in transacted mode are reflected in the parent storage.
A value that controls how the changes for the stream object are committed.
HRESULT
Copies a specified number of bytes from the current seek pointer in the stream to the current seek pointer in another stream.
A reference to the destination stream.
The number of bytes to copy from the source stream.
On successful return, contains the actual number of bytes read from the source.
On successful return, contains the actual number of bytes written to the destination.
HRESULT
Restricts access to a specified range of bytes in the stream.
The byte offset for the beginning of the range.
The length of the range, in bytes, to restrict.
The requested restrictions on accessing the range.
HRESULT
Reads a specified number of bytes from the stream object into memory starting at the current seek pointer.
When this method returns, contains the data read from the stream. This parameter is passed uninitialized.
The number of bytes to read from the stream object.
A pointer to a ULONG variable that receives the actual number of bytes read from the stream object.
HRESULT
Discards all changes that have been made to a transacted stream since the last Commit call.
HRESULT
Changes the seek pointer to a new location relative to the beginning of the stream, to the end of the stream, or to the current seek pointer.
The displacement to add to dwOrigin.
The origin of the seek. The origin can be the beginning of the file, the current seek pointer, or the end of the file.
On successful return, contains the offset of the seek pointer from the beginning of the stream.
HRESULT
Changes the size of the stream object.
The new size of the stream as a number of bytes.
HRESULT
Retrieves the STATSTG structure for this stream.
When this method returns, contains a STATSTG structure that describes this stream object. This parameter is passed uninitialized.
Members in the STATSTG structure that this method does not return, thus saving some memory allocation operations.
HRESULT
Removes the access restriction on a range of bytes previously restricted with the LockRegion method.
The byte offset for the beginning of the range.
The length, in bytes, of the range to restrict.
The access restrictions previously placed on the range.
HRESULT
Writes a specified number of bytes into the stream object starting at the current seek pointer.
The buffer to write this stream to.
The number of bytes to write to the stream.
On successful return, contains the actual number of bytes written to the stream object. If the caller sets this pointer to Zero, this method does not provide the actual number of bytes written.
HRESULT
Gets a value indicating whether the current stream supports reading.
Gets a value indicating whether the current stream supports seeking.
Gets a value indicating whether the current stream supports writing.
Clears all buffers for this stream and causes any buffered data to be written to the underlying device.
Gets the length in bytes of the stream.
Gets or sets the position within the current stream.
Reads a sequence of bytes from the current stream and advances the position within the stream by the number of bytes read.
An array of bytes. When this method returns, the buffer contains the specified byte array with the values between offset and (offset + count - 1) replaced by the bytes read from the current source.
The zero-based byte offset in buffer at which to begin storing the data read from the current stream.
The maximum number of bytes to be read from the current stream.
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
Sets the position within the current stream.
A byte offset relative to the origin parameter.
A value of type indicating the reference point used to obtain the new position.
The new position within the current stream.
Sets the length of the current stream.
The desired length of the current stream in bytes.
Writes a sequence of bytes to the current stream and advances the current position within this stream by the number of bytes written.
An array of bytes. This method copies count bytes from buffer to the current stream.
The zero-based byte offset in buffer at which to begin copying bytes to the current stream.
The number of bytes to be written to the current stream.
Releases the unmanaged resources used by the Stream and optionally releases the managed resources.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Closes the current stream and releases any resources (such as sockets and file handles) associated with the current stream.
Provides the managed definition of the IStream interface.
Reads a specified number of bytes from the stream object into memory starting at the current seek pointer.
When this method returns, contains the data read from the stream. This parameter is passed uninitialized.
The number of bytes to read from the stream object.
A pointer to a ULONG variable that receives the actual number of bytes read from the stream object.
HRESULT
Writes a specified number of bytes into the stream object starting at the current seek pointer.
The buffer to write this stream to.
The number of bytes to write to the stream.
On successful return, contains the actual number of bytes written to the stream object. If the caller sets this pointer to Zero, this method does not provide the actual number of bytes written.
HRESULT
Changes the seek pointer to a new location relative to the beginning of the stream, to the end of the stream, or to the current seek pointer.
The displacement to add to dwOrigin.
The origin of the seek. The origin can be the beginning of the file, the current seek pointer, or the end of the file.
On successful return, contains the offset of the seek pointer from the beginning of the stream.
HRESULT
Changes the size of the stream object.
The new size of the stream as a number of bytes.
HRESULT
Copies a specified number of bytes from the current seek pointer in the stream to the current seek pointer in another stream.
A reference to the destination stream.
The number of bytes to copy from the source stream.
On successful return, contains the actual number of bytes read from the source.
On successful return, contains the actual number of bytes written to the destination.
HRESULT
Ensures that any changes made to a stream object that is open in transacted mode are reflected in the parent storage.
A value that controls how the changes for the stream object are committed.
HRESULT
Discards all changes that have been made to a transacted stream since the last Commit call.
HRESULT
Restricts access to a specified range of bytes in the stream.
The byte offset for the beginning of the range.
The length of the range, in bytes, to restrict.
The requested restrictions on accessing the range.
HRESULT
Removes the access restriction on a range of bytes previously restricted with the LockRegion method.
The byte offset for the beginning of the range.
The length, in bytes, of the range to unlock.
The access restrictions previously placed on the range.
HRESULT
Retrieves the STATSTG structure for this stream.
When this method returns, contains a STATSTG structure that describes this stream object. This parameter is passed uninitialized.
Specifies which members in the STATSTG structure this method does not return, thus saving some memory allocation operations.
HRESULT
Creates a new stream object with its own seek pointer that references the same bytes as the original stream.
When this method returns, contains the new stream object. This parameter is passed uninitialized.
HRESULT
Exception class for COM errors.
Throws an if the is not .
Error code.
Name of the interface which contains the COM function that returned the specified .
Name of the COM function that returned the specified .
Name of the COM interface which caused the error.
Name of the member of the COM interface which caused the error.
Initializes a new instance of the class.
Error code.
Name of the interface which contains the COM function that returned the specified .
Name of the COM function that returned the specified .
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or destination.
Populates a with the data needed to serialize the target object.
The to populate with data.
The destination (see StreamingContext) for this serialization.
Exposes methods for enumerating, getting, and setting property values.
For more information,
.
Device description - key
Device interface enabled - key
Device interface CLSID - key
Device friendly name - key
Audio Endpoint Path - key
Audio Engine Device Format
Initializes a new instance of the class.
The native pointer of the COM object.
Gets the number of properties available.
Gets or sets the at the specified index.
The .
The index.
The at the specified index.
Gets or sets the for the specified .
The key.
Returns an enumerator that iterates through the .
A that can be used to iterate through the .
Returns an enumerator that iterates through the .
An object that can be used to iterate through the .
Gets data for a specific property.
The zero-based index of the property.
The data of the specified property.
The is greater than or equal to .
Gets data for a specific property.
The of the property. The key can be obtained by calling the method.
The data of the specified property.
Gets a property key from an item's array of properties.
The zero-based index of the property key in the array of structures.
The .
Sets a new property value, or replaces or removes an existing value.
The index of the property.
The new property data.
The is greater than or equal to .
Sets a new property value, or replaces or removes an existing value.
The of the property. The key can be obtained by calling the method.
The new property data.
Saves a property change.
For more information see
.
Represents a native 4 byte boolean value.
Represents the boolean value true as a .
Represents the boolean value false as a .
Initializes a new instance of the structure based on a boolean value.
The boolean value.
Returns a value indicating whether this instance is equal to a object.
A value to compare to this instance.
true if obj has the same value as this instance; otherwise, false.
Returns a value indicating whether this instance is equal to a specified object.
An object to compare to this instance.
true if obj is a and has the same value as this instance; otherwise, false.
Returns the hash code for this instance.
A hash code for the current .
Implements the operator ==.
The left.
The right.
The result of the operator.
Implements the operator !=.
The left.
The right.
The result of the operator.
Performs an implicit conversion from to .
The value.
The result of the conversion.
Performs an implicit conversion from to .
The boolean value to convert.
The result of the conversion.
Converts the value of this instance to its equivalent string representation (either "True" or "False").
The string representation of this instance.
Defines common HRESULT error codes.
S_OK
S_FALSE
E_ABORT
E_ACCESSDENIED
E_NOINTERFACE
E_FAIL
E_INVALIDARG
E_POINTER
E_NOTIMPL
E_NOTFOUND
MF_E_ATTRIBUTENOTFOUND
MF_E_SHUTDOWN
AUDCLNT_E_UNSUPPORTED_FORMAT
AUDCLNT_E_DEVICE_INVALIDATED
AUDCLNT_S_BUFFER_EMPTY
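Whether an HRESULT denotes success or failure is encoded in its severity bit (bit 31): codes with the high bit clear (such as S_OK and S_FALSE) are successes, codes with it set are failures. A minimal sketch of that check; the numeric constants below are the standard Windows definitions, shown for illustration:

```python
# Standard Windows HRESULT values (severity bit is bit 31).
S_OK          = 0x00000000
S_FALSE       = 0x00000001
E_NOINTERFACE = 0x80004002
E_FAIL        = 0x80004005

def succeeded(hresult: int) -> bool:
    """Mirrors the Win32 SUCCEEDED() macro: severity bit clear."""
    return (hresult & 0x80000000) == 0

assert succeeded(S_OK)
assert succeeded(S_FALSE)      # S_FALSE is still a success code
assert not succeeded(E_FAIL)
```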
Blob
Number of bytes stored in the blob.
Pointer to a byte array which stores the data.
Returns the data stored in the .
The data stored in the
Converts the data stored in the to a string using the specified and returns that string.
Encoding used to convert the data to a string.
String of the stored data.
Returns a that represents the data stored in the as a hex string.
A that represents the data stored in the as a hex string.
Specifies the FMTID/PID identifier that programmatically identifies a property.
For more information, see
.
A unique GUID for the property.
A property identifier (PID).
Initializes a new instance of the struct.
The unique GUID for the property.
The property identifier (PID).
Returns a that represents this instance.
A that represents this instance.
The structure is used to store data.
For more information, see .
Value type tag.
Reserved for future use.
Reserved for future use.
Reserved for future use.
VT_I1, Version 1
VT_UI1
VT_I2
VT_UI2
VT_I4
VT_UI4
VT_INT, Version 1
VT_UINT, Version 1
VT_I8
VT_UI8
VT_R4
VT_R8
VT_BOOL
VT_ERROR
VT_DATE
VT_FILETIME
VT_BLOB
VT_PTR
Gets or sets the datatype of the .
Returns the associated value of the . The type of the returned value is defined through the property.
The associated value of the . If the datatype is not supported, the method will return null.
Not all datatypes are supported.
Releases the associated memory by calling the PropVariantClear function.
Returns a that represents the value of this instance.
A that represents the value of this instance.
Represents a native COM object.
Unsafe native pointer to the COM object.
Gets a value which indicates whether the has already been disposed.
Native pointer to the COM object.
Initializes a new instance of the class.
Initializes a new instance of the class.
The native pointer of the COM object.
Queries supported interfaces/objects on a .
The being requested.
The queried com interface/object.
Retrieves a pointer to the supported interface on an object.
Type of the requested .
A pointer to the requested interface.
Retrieves pointers to the supported interfaces on an object.
The identifier of the interface being requested.
The address of a pointer variable that receives the interface pointer requested in the parameter.
This method returns S_OK if the interface is supported, and E_NOINTERFACE otherwise. If ppvObject is NULL, this method returns E_POINTER.
Retrieves pointers to the supported interfaces on an object.
The identifier of the interface being requested.
The address of a pointer variable that receives the interface pointer requested in the parameter.
This method returns S_OK if the interface is supported, and E_NOINTERFACE otherwise. If ppvObject is NULL, this method returns E_POINTER.
Increments the reference count for an interface on an object. This method should be called for every new copy of a pointer to an interface on an object.
The method returns the new reference count. This value is intended to be used only for test purposes.
Increments the reference count for an interface on an object. This method should be called for every new copy of a pointer to an interface on an object.
The method returns the new reference count. This value is intended to be used only for test purposes.
Decrements the reference count for an interface on an object.
The method returns the new reference count. This value is intended to be used only for test purposes.
Decrements the reference count for an interface on an object.
The method returns the new reference count. This value is intended to be used only for test purposes.
Releases the COM object.
Releases the COM object.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Finalizes an instance of the class.
Enables clients to get pointers to other interfaces on a given object through the method, and manage the existence of the object through the and methods.
Retrieves pointers to the supported interfaces on an object.
The identifier of the interface being requested.
The address of a pointer variable that receives the interface pointer requested in the parameter.
This method returns S_OK if the interface is supported, and E_NOINTERFACE otherwise. If ppvObject is NULL, this method returns E_POINTER.
Increments the reference count for an interface on an object. This method should be called for every new copy of a pointer to an interface on an object.
The method returns the new reference count. This value is intended to be used only for test purposes.
Decrements the reference count for an interface on an object.
The method returns the new reference count. This value is intended to be used only for test purposes.
Specifies the category of an audio stream.
Other audio stream.
Media that will only stream when the app is in the foreground.
Media that can be streamed when the app is in the background.
Real-time communications, such as VoIP or chat.
Alert sounds.
Sound effects.
Game sound effects.
Background audio for games.
Contains the new global debug configuration for XAudio2. Used with the
function.
Bitmask of enabled debug message types. For a list of possible values, take a look at:
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.xaudio2.xaudio2_debug_configuration(v=vs.85).aspx.
Message types that will cause an immediate break. For a list of possible values, take a look at:
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.xaudio2.xaudio2_debug_configuration(v=vs.85).aspx.
Indicates whether to log the thread ID with each message.
Indicates whether to log source files and line numbers.
Indicates whether to log function names.
Indicates whether to log message timestamps.
Provides information about an audio device.
Gets the of the Device.
Gets the of the Device.
Gets the of the Device.
Gets the of the Device.
Defines an effect chain.
Number of effects in the effect chain for the voice.
Pointer to an array of structures containing pointers to XAPO instances.
Contains information about an XAPO for use in an effect chain.
Pointer to the IUnknown interface of the XAPO object.
TRUE if the effect should begin in the enabled state. Otherwise, FALSE.
Number of output channels the effect should produce.
Defines filter parameters for a source voice.
The .
Filter radian frequency calculated as (2 * sin(pi * (desired filter cutoff frequency) / sampleRate)).
The frequency must be greater than or equal to 0 and less than or equal to 1.0f.
The maximum frequency allowable is equal to the source sound's sample rate divided by
six, which corresponds to the maximum filter radian frequency of 1.
For example, if a sound's sample rate is 48000 and the desired cutoff frequency is the maximum
allowable value for that sample rate, 8000, the value for Frequency will be 1.
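The radian-frequency mapping described above can be checked numerically; the worked example in the text (a 48000 Hz sample rate with the maximum allowable 8000 Hz cutoff) falls out directly. A minimal sketch of the formula, not CSCore API code:

```python
import math

def filter_radian_frequency(cutoff_hz: float, sample_rate: float) -> float:
    # Frequency = 2 * sin(pi * cutoff / sampleRate); valid results lie in [0, 1].
    return 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)

# The maximum allowable cutoff is sampleRate / 6, which maps to exactly 1.0:
value = filter_radian_frequency(8000.0, 48000.0)
assert abs(value - 1.0) < 1e-9
```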
Reciprocal of Q factor. Controls how quickly frequencies beyond Frequency are dampened. Larger values
result in quicker dampening while smaller values cause dampening to occur more gradually.
Must be greater than 0 and less than or equal to 1.5f.
Indicates the filter type.
Note that the DirectX SDK versions of XAudio2 do not support the LowPassOnePoleFilter or the
HighPassOnePoleFilter.
Attenuates (reduces) frequencies above the cutoff frequency.
Attenuates frequencies outside a given range.
Attenuates frequencies below the cutoff frequency.
Attenuates frequencies inside a given range.
XAudio2.8 only: Attenuates frequencies above the cutoff frequency. This is a one-pole filter, and
has no effect.
XAudio2.8 only: Attenuates frequencies below the cutoff frequency. This is a one-pole filter, and
has no effect.
Flags controlling which voice state data should be returned.
Calculate all values.
Calculate all values except .
Internally used IXAudio2EngineCallback wrapper. The default implementation of this interface is
.
OnProcessingPassStart
OnProcessingPassEnd
OnCriticalError
Error code
The IXAudio2VoiceCallback interface contains methods that notify the client when certain events happen in a given
.
Called during each processing pass for each voice, just before XAudio2 reads data from the voice's buffer queue.
The number of bytes that must be submitted immediately to avoid starvation. This allows the implementation of
just-in-time streaming scenarios; the client can keep the absolute minimum data queued on the voice at all times,
and pass it fresh data just before the data is required. This model provides the lowest possible latency attainable
with XAudio2. For xWMA and XMA data BytesRequired will always be zero, since the concept of a frame of xWMA or XMA
data is meaningless.
Note: In a situation where there is always plenty of data available on the source voice, BytesRequired should
always report zero, because it doesn't need any samples immediately to avoid glitching.
Called just after the processing pass for the voice ends.
Called when the voice has just finished playing a contiguous audio stream.
Called when the voice is about to start processing a new audio buffer.
Context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
Called when the voice finishes processing a buffer.
Context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
Called when the voice reaches the end position of a loop.
Context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
Called when a critical error occurs during voice processing.
Context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
The HRESULT code of the error encountered.
Specifies values for the and
.
Log nothing.
Log error messages.
Log warning messages. Note: Enabling also enables .
Log informational messages.
Log detailed informational messages. Note: Enabling also enables .
Log public API function entries and exits.
Log internal function entries and exits. Note: Enabling also enables
.
Log delays detected and other timing data.
Log usage of critical sections and mutexes.
Log memory heap usage information.
Log audio streaming information.
All
Contains performance information. Used by .
CPU cycles spent on audio processing since the last call to the or
function.
Total CPU cycles elapsed since the last call. Note: This only counts cycles on the CPU on which XAudio2 is running.
Fewest CPU cycles spent on processing any single audio quantum since the last call.
Most CPU cycles spent on processing any single audio quantum since the last call.
Total memory currently in use.
Minimum delay that occurs between the time a sample is read from a source buffer and the time it reaches the
speakers.
Total audio dropouts since the engine started.
Number of source voices currently playing.
Total number of source voices currently in existence.
Number of submix voices currently playing.
Number of resampler xAPOs currently active.
Number of matrix mix xAPOs currently active.
Xbox 360 only; not supported on Windows. Number of source voices decoding XMA data.
Not supported on Windows. A voice can use more than one XMA stream.
Flags that specify how a is stopped.
None
Continue emitting effect output after the voice is stopped.
Extends the to enable real-time audio streaming.
Initializes a new instance of the class with a default buffer size of 100ms.
Instance of the class, used to create the .
The instance which provides audio data to play.
Initializes a new instance of the class.
Instance of the class, used to create the .
The instance which provides audio data to play.
Buffer size of the internal buffers, in milliseconds. Values in the range from 70 ms to
200 ms are recommended.
Initializes a new instance of the class.
Pointer to a object.
instance which receives notifications from the
which got passed as a pointer (see the argument).
which provides the audio data to stream.
Buffer size of the internally used buffers, in milliseconds. Values in the range from 70 ms to
200 ms are recommended.
It is recommended to use the method instead of this constructor.
Creates an instance of the class.
Instance of the class.
which provides the audio data to stream.
Buffer size of the internally used buffers, in milliseconds. Values in the range from 70 ms to
200 ms are recommended.
Configured instance.
Occurs when the playback stops and no more data is available.
This event occurs whenever the event occurs.
Notifies the class that new data has been requested. If any buffers
are currently not queued and the underlying holds more data, that data refills the
internally used buffers and provides audio data to play.
Stops and disposes the , closes the internally used wait handle, and frees the
memory allocated for all used buffers.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Provides a mechanism for playing instances.
Maximum number of instances a can
contain.
Limited by the method.
Gets the default singleton instance.
Gets the number of items that have been added to the .
Disposes the .
Adds a to the .
The instance to add to the
.
Removes a from the .
The instance to remove from the
.
Disposes the and stops the internal playback thread.
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor which calls the method.
Default implementation of the interface.
Called during each processing pass for each voice, just before XAudio2 reads data from the voice's buffer queue.
The only argument passed to the event handler is the number of required bytes:
The number of bytes that must be submitted immediately to avoid starvation. This allows the implementation of
just-in-time streaming scenarios; the client can keep the absolute minimum data queued on the voice at all times,
and pass it fresh data just before the data is required. This model provides the lowest possible latency attainable
with XAudio2. For xWMA and XMA data BytesRequired will always be zero, since the concept of a frame of xWMA or XMA
data is meaningless.
Note: In a situation where there is always plenty of data available on the source voice, BytesRequired should
always report zero, because it doesn't need any samples immediately to avoid glitching.
Called just after the processing pass for the voice ends.
Called when the voice has just finished playing a contiguous audio stream.
Called when the voice is about to start processing a new audio buffer.
The only argument passed to the event handler is a context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
Called when the voice finishes processing a buffer.
The only argument passed to the event handler is a context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
Called when the voice reaches the end position of a loop.
The only argument passed to the event handler is a context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
Called when a critical error occurs during voice processing.
The first argument passed to the event handler is a context pointer that was assigned to the pContext member of the
structure when the buffer was submitted.
The second argument passed to the event handler is the HRESULT error code of the critical error.
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
Contains information about the creation flags, input channels, and sample rate of a voice.
Flags used to create the voice; see the individual voice interfaces for more information.
Flags that are currently set on the voice.
The number of input channels the voice expects.
The input sample rate the voice expects.
VoiceFlags
None
No pitch control is available on the voice.
No sample rate conversion is available on the voice. The voice's outputs must have the same sample rate.
The filter effect should be available on this voice.
XAudio2.8 only: Not supported on Windows.
XAudio2.7 only: Indicates that no samples were played.
Defines a destination voice that is the target of a send from another voice and specifies whether a filter should
be used.
Either or .
The destination voice.
Creates a new instance of the structure.
The . Must be either or .
The destination voice. Must not be null.
Creates a new instance of the structure.
The . Must be either or .
Pointer to the destination voice. Must not be .
VoiceSendFlags
None.
Indicates a filter should be used on a voice send.
Defines a set of voices to receive data from a single output voice.
Number of voices to receive the output of the voice. An OutputCount value of 0 indicates the voice should not send
output to any voices.
Array of s.
Returns the voice's current state and cursor position data.
Pointer to a buffer context provided in the that is currently being processed, or,
if the voice is currently stopped, to the next buffer due to be processed.
is NULL if there are no buffers in the queue.
Number of audio buffers currently queued on the voice, including the one currently being processed.
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as
marked with the flag).
This total includes samples played multiple times due to looping.
Theoretically, if all audio emitted by the voice up to this time is captured, this parameter would be the length of
the audio stream in samples.
If you specify when you call
,
this member won't be calculated, and its value is unspecified on return from
.
takes about one-third as much time to
complete when you specify .
Flags which define calculate flags for calculating the 3D audio parameters.
Enables matrix coefficient table calculation.
Enables delay time array calculation (stereo only).
Enables low pass filter (LPF) direct-path coefficient calculation.
Enables LPF reverb-path coefficient calculation.
Enables reverb send level calculation.
Enables Doppler shift factor calculation.
Enables emitter-to-listener interior angle calculation.
Fills the center channel with silence. This flag allows you to keep a 6-channel matrix so you do not have to remap the channels, but the center channel will be silent. This flag is only valid if you also set .
Applies an equal mix of all source channels to a low frequency effect (LFE) destination channel. It only applies to matrix calculations with a source that does not have an LFE channel and a destination that does have an LFE channel. This flag is only valid if you also set .
Specifies directionality for a single-channel non-Low-Frequency-Effect emitter by scaling DSP behavior with respect to the emitter's orientation.
For a detailed explanation of sound cones see .
X3DAUDIO_2PI
Inner cone angle in radians. This value must be within 0.0f to .
Outer cone angle in radians. This value must be within InnerAngle to .
Volume scaler on/within inner cone. This value must be within 0.0f to 2.0f.
Volume scaler on/beyond outer cone. This value must be within 0.0f to 2.0f.
LPF direct-path or reverb-path coefficient scaler on/within inner cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
LPF direct-path or reverb-path coefficient scaler on or beyond outer cone. This value is only used for LPF calculations and must be within 0.0f to 1.0f.
Reverb send level scaler on or within inner cone. This must be within 0.0f to 2.0f.
Reverb send level scaler on/beyond outer cone. This must be within 0.0f to 2.0f.
Defines a DSP setting at a given normalized distance.
Normalized distance. This must be within 0.0f to 1.0f.
DSP control setting.
Receives the results from a call to .
See http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.x3daudio.x3daudio_dsp_settings%28v=vs.85%29.aspx for more details.
Caller provided array that will be initialized with the volume level of each source channel present in each
destination channel. The array must have at least ( × )
elements. The array is arranged with the source channels as the column index of the array and the destination
channels as the row index of the array.
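The layout described above (source channels as the column index, destination channels as the row index) means the coefficient for source channel s into destination channel d lives at index d × SrcChannelCount + s of the flat caller-provided array. A sketch of that indexing with a hypothetical coefficient array, not CSCore API code:

```python
def matrix_index(src_channel: int, dst_channel: int, src_channel_count: int) -> int:
    # Row = destination channel, column = source channel (row-major flat array).
    return dst_channel * src_channel_count + src_channel

# A stereo emitter (2 source channels) mixed into 6 destination channels:
src_count, dst_count = 2, 6
coefficients = [0.0] * (src_count * dst_count)  # required minimum size
coefficients[matrix_index(1, 3, src_count)] = 0.5  # right channel -> channel 3
assert len(coefficients) == 12
```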
Caller provided delay time array, which receives delays for each destination channel in milliseconds. This array
must have at least elements. X3DAudio doesn't actually perform the delay. It simply
returns the
coefficients that may be used to adjust a delay DSP effect placed in the effect chain. The
member can
be NULL if the flag is not specified when calling
.
Note: This member is only returned when X3DAudio is initialized for stereo output. For typical Xbox 360 usage, it
will not return any data at all.
Number of source channels. This must be initialized to the number of emitter channels before calling
.
Number of destination channels. This must be initialized to the number of channels in the final mix before calling
.
LPF direct-path coefficient. Only calculated if the flag is specified when
calling .
When using X3DAudio with XAudio2 the value returned in the LPFDirectCoefficient member would be applied to a low
pass filter on a source voice with .
LPF reverb-path coefficient. Only calculated if the flag is specified when
calling .
Reverb send level. Only calculated if the flag is specified when calling
.
Doppler shift factor. Scales the resampler ratio for Doppler shift effect, where:
effective_frequency = DopplerFactor × original_frequency.
Only calculated if the flag is specified when calling
.
When using X3DAudio with XAudio2 the value returned in the DopplerFactor would be applied to a source voice with
.
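The Doppler relationship quoted above is a plain multiplication; when using X3DAudio with XAudio2, the calculated factor is applied through the source voice's frequency ratio. A minimal sketch of the arithmetic only:

```python
def effective_frequency(doppler_factor: float, original_frequency: float) -> float:
    # effective_frequency = DopplerFactor * original_frequency
    return doppler_factor * original_frequency

# An emitter moving toward the listener yields a factor > 1 (upward pitch shift):
assert effective_frequency(1.25, 440.0) == 550.0
```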
Emitter-to-listener interior angle, expressed in radians with respect to the emitter's front orientation.
Only calculated if the flag is specified when calling
.
Distance in user-defined world units from the listener to the emitter base position.
Component of emitter velocity vector projected onto emitter-to-listener vector in user-defined world units per
second.
Only calculated if the flag is specified when calling
.
Component of listener velocity vector projected onto the emitter-to-listener vector in user-defined world units per
second. Only calculated if the flag is specified when calling
.
Gets the caller provided array that will be initialized with the volume level of each source channel present in each
destination channel. The array must have at least ( × )
elements. The array is arranged with the source channels as the column index of the array and the destination
channels as the row index of the array.
Gets the caller provided delay time array, which receives delays for each destination channel in milliseconds. This array
must have at least elements. X3DAudio doesn't actually perform the delay. It simply
returns the
coefficients that may be used to adjust a delay DSP effect placed in the effect chain. This won't be calculated if the flag is not specified when calling
.
Gets the number of source channels.
Gets the number of destination channels.
Gets the LPF direct-path coefficient. Only calculated if the flag is specified when
calling .
When using X3DAudio with XAudio2 the value returned in the LPFDirectCoefficient member would be applied to a low
pass filter on a source voice with .
Gets the LPF reverb-path coefficient. Only calculated if the flag is specified when
calling .
Gets the reverb send level. Only calculated if the flag is specified when calling
.
Gets the Doppler shift factor. Scales the resampler ratio for Doppler shift effect, where:
effective_frequency = DopplerFactor × original_frequency.
Only calculated if the flag is specified when calling
.
When using X3DAudio with XAudio2 the value returned in the DopplerFactor would be applied to a source voice with
.
Gets the emitter-to-listener interior angle, expressed in radians with respect to the emitter's front orientation.
Only calculated if the flag is specified when calling
.
Gets the distance in user-defined world units from the listener to the emitter base position.
Gets the component of emitter velocity vector projected onto emitter-to-listener vector in user-defined world units per
second.
Only calculated if the flag is specified when calling
.
Gets the component of listener velocity vector projected onto the emitter-to-listener vector in user-defined world units per
second. Only calculated if the flag is specified when calling
.
Initializes a new instance of the class.
The number of source channels.
The number of destination channels.
Defines a single-point or multiple-point 3D audio source that is used with an arbitrary number of sound channels.
Gets or sets the sound cone. Used only with single-channel emitters for matrix, LPF (both direct and reverb paths), and reverb calculations. NULL specifies the emitter is omnidirectional.
Gets or sets the orientation of the front direction. This value must be orthonormal with . must be normalized when used. For single-channel emitters without cones is only used for emitter angle calculations. For multi channel emitters or single-channel with cones is used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Gets or sets the orientation of the top direction. This value must be orthonormal with . is only used with multi-channel emitters for matrix calculations.
Gets or sets the position in user-defined world units. This value does not affect .
Gets or sets the velocity vector in user-defined world units/second. This value is used only for doppler calculations. It does not affect .
Gets or sets the value to be used for the inner radius calculations. If is 0, then no inner radius is used, but may still be used. This value must be between 0.0f and FLT_MAX.
Gets or sets the value to be used for the inner radius angle calculations. This value must be between 0.0f and /4.0 (which equals 45°).
Gets or sets the number of emitters defined by the class. Must be greater than 0.
Gets or sets the distance from that channels will be placed if is greater than 1. is only used with multi-channel emitters for matrix calculations. Must be greater than or equal to 0.0f.
Gets or sets the table of channel positions, expressed as an azimuth in radians along the channel radius with respect to the front orientation vector in the plane orthogonal to the top orientation vector. An azimuth of 2* specifies a channel is a low-frequency effects (LFE) channel. LFE channels are positioned at the emitter base and are calculated with respect to only, never . must have at least elements, but can be NULL if = 1. The table values must be within 0.0f to 2*. is used with multi-channel emitters for matrix calculations.
Gets or sets the volume-level distance curve, which is used only for matrix calculations. NULL specifies a specialized default curve that conforms to the inverse square law, such that when distance is between 0.0f and × 1.0f, no attenuation is applied. When distance is greater than × 1.0f, the amplification factor is (× 1.0f)/distance. At a distance of × 2.0f, the sound will be at half volume or -6 dB, at a distance of × 4.0f, the sound will be at one quarter volume or -12 dB, and so on. and are independent of each other. does not affect LFE channel volume.
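The attenuation figures quoted for the default curve above (half volume, about -6 dB, at 2× the scaler; quarter volume, about -12 dB, at 4×) follow from the (scaler / distance) amplification factor. A sketch of that default inverse-law behavior with the dB conversion shown explicitly; this is an illustration of the documented rule, not the library's implementation:

```python
import math

def default_volume_factor(distance: float, curve_distance_scaler: float) -> float:
    # No attenuation inside the scaler radius; inverse law beyond it.
    if distance <= curve_distance_scaler:
        return 1.0
    return curve_distance_scaler / distance

def to_db(amplitude: float) -> float:
    return 20.0 * math.log10(amplitude)

assert default_volume_factor(1.0, 1.0) == 1.0
assert abs(to_db(default_volume_factor(2.0, 1.0)) - (-6.02)) < 0.01   # half volume
assert abs(to_db(default_volume_factor(4.0, 1.0)) - (-12.04)) < 0.01  # quarter volume
```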
Gets or sets the LFE roll-off distance curve, or NULL to use default curve: [0.0f, ×1.0f], [ ×1.0f, 0.0f]. A NULL value for specifies a default curve that conforms to the inverse square law with distances <= clamped to no attenuation. and are independent of each other. does not affect non LFE channel volume.
Gets or sets the low-pass filter (LPF) direct-path coefficient distance curve, or NULL to use the default curve: [0.0f, 1.0f], [1.0f, 0.75f]. is only used for LPF direct-path calculations.
Gets or sets the LPF reverb-path coefficient distance curve, or NULL to use default curve: [0.0f, 0.75f], [1.0f, 0.75f]. is only used for LPF reverb path calculations.
Gets or sets the reverb send level distance curve, or NULL to use default curve: [0.0f, 1.0f], [1.0f, 0.0f].
Gets or sets the curve distance scaler that is used to scale normalized distance curves to user-defined world units, and/or to exaggerate their effect. This does not affect any other calculations. The value must be within the range FLT_MIN to FLT_MAX. is only used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Doppler shift scaler that is used to exaggerate Doppler shift effect. is only used for Doppler calculations and does not affect any other calculations. The value must be within the range 0.0f to FLT_MAX.
Defines a point of 3D audio reception.
A listener's front and top vectors must be orthonormal. To be considered orthonormal, a pair of vectors must
have a magnitude of 1 ± 1×10⁻⁵ and a dot product of 0 ± 1×10⁻⁵.
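The orthonormality requirement can be expressed as a simple check. A minimal sketch (Python for illustration; `is_orthonormal` is a hypothetical helper, not a library member):

```python
import math

TOLERANCE = 1e-5  # per the docs: magnitude 1 +/- 1e-5, dot product 0 +/- 1e-5

def is_orthonormal(front, top, tol=TOLERANCE):
    # Both vectors must be unit length and mutually perpendicular.
    dot = sum(f * t for f, t in zip(front, top))
    mag_front = math.sqrt(sum(f * f for f in front))
    mag_top = math.sqrt(sum(t * t for t in top))
    return (abs(mag_front - 1.0) <= tol and
            abs(mag_top - 1.0) <= tol and
            abs(dot) <= tol)

print(is_orthonormal((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))  # True
print(is_orthonormal((0.0, 0.0, 2.0), (0.0, 1.0, 0.0)))  # False (not unit length)
```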
Gets or sets the orientation of front direction. When is NULL OrientFront is used only for
matrix and delay calculations. When is not NULL OrientFront is used for matrix, LPF (both
direct and reverb paths), and reverb calculations. This value must be orthonormal with
when used.
Gets or sets the orientation of top direction, used only for matrix and delay calculations. This value must be
orthonormal with when used.
Gets or sets the position in user-defined world units. This value does not affect .
Gets or sets the velocity vector in user-defined world units per second, used only for doppler calculations. This
value does not affect .
Gets or sets the to use. Providing a listener cone will specify that additional calculations
are performed when determining the volume and filter DSP parameters for individual sound sources. A NULL
value specifies an omnidirectional sound and no cone processing is applied.
is only used for matrix, LPF (both direct and reverb paths), and reverb calculations.
Provides access to the X3DAudio functions.
Initializes a new instance of the class.
Assignment of channels to speaker positions. This value must not be zero.
Initializes a new instance of class.
Speed of sound, in user-defined world units per second. Use this value only for doppler
calculations. It must be greater than or equal to zero.
Assignment of channels to speaker positions. This value must not be zero.
Calculates DSP settings with respect to 3D parameters.
Represents the point of reception.
Represents the sound source.
Bitwise combination of specifying which 3D parameters to calculate.
Instance of the class that receives the calculation results.
Disposes the instance.
Destructor which calls .
X3DAUDIO_HANDLE is an opaque data structure. Because the operating system doesn't allocate any additional storage
for the 3D audio instance handle, you don't need to free or close it.
is the class for the XAudio2 object that manages all audio engine states, the audio
processing thread, the voice graph, and so forth.
The denominator of a processing quantum unit: audio is processed in 10 ms chunks (= 1/100 second).
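In other words, at a given sample rate one processing quantum covers sampleRate / 100 frames. A quick sanity check (illustrative Python; the helper name is made up):

```python
QUANTUM_DENOMINATOR = 100  # one quantum = 1/100 s = 10 ms

def frames_per_quantum(sample_rate):
    # Number of sample frames processed per 10 ms quantum.
    return sample_rate // QUANTUM_DENOMINATOR

print(frames_per_quantum(48000))  # 480
print(frames_per_quantum(44100))  # 441
```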
Minimum sample rate is 1000 Hz.
Maximum sample rate is 200 kHz.
The minimum frequency ratio is 1/1024.
Maximum frequency ratio is 1024.
The default value for the frequency ratio is 4.
The maximum number of supported channels is 64.
Value which indicates that the default number of channels should be used.
Value which indicates that the default sample rate should be used.
Value which can be used in combination with the method to commit all
changes.
Value which indicates that changes should be committed instantly.
Fired by XAudio2 just before an audio processing pass begins.
Fired by XAudio2 just after an audio processing pass ends.
Fired if a critical system error occurs that requires XAudio2 to be closed down and restarted.
Internal default ctor.
Initializes a new instance of the class.
Native pointer of the object.
The XAudio2 subversion to use.
Gets current resource usage details, such as available memory or CPU usage.
Gets the default device which can be used to create a mastering voice.
With XAudio2.7 the default device is 0 (an integer); with XAudio2.8 the default device is null.
Gets the of the XAudio2 object.
Creates a new instance of the class.
If no supported XAudio2 version is available, the CreateXAudio2 method throws an
.
A new instance.
Creates a new instance of the class.
If no supported XAudio2 version is available, the CreateXAudio2 method throws an
.
The to use.
A new instance.
Adds an to the engine callback list.
object to add to the engine
callback list.
HRESULT
Adds an to the engine callback list.
object to add to the engine
callback list.
Removes an from the engine callback list.
object to remove from the engine
callback list. If the given interface is present more than once in the list, only the first instance in the list
will be removed.
Creates and configures a source voice. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2.ixaudio2.createsourcevoice(v=vs.85).aspx.
If successful, returns a pointer to the new object.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24 or 32 bit containers)
- 24-bit integer PCM (either in 24 or 32 bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The sample
rate of a source voice must be between and .
that specify the behavior of the source voice. A flag can be
or a combination of one or more of the following.
Possible values are , and
. is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this
argument must be between and .
Client-provided callback interface, . This parameter is
optional and can be null.
List of structures that describe the set of destination voices for the
source voice. If is NULL, the send list defaults to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
source voice. This parameter is optional and can be null.
HRESULT
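The note that 20- and 24-bit samples may arrive "in 24 or 32 bit containers" corresponds to the WAVEFORMATEXTENSIBLE split between the container size (wBitsPerSample) and the valid bits (wValidBitsPerSample). A hedged sketch of that mapping (Python; the table and helper are illustrative, not part of the library):

```python
# (container bits, valid bits) combinations per supported integer PCM depth.
SUPPORTED_PCM_LAYOUTS = {
    8:  [(8, 8)],
    16: [(16, 16)],
    20: [(24, 20), (32, 20)],  # 20-bit samples in 24- or 32-bit containers
    24: [(24, 24), (32, 24)],  # 24-bit samples in 24- or 32-bit containers
    32: [(32, 32)],
}

def container_options(valid_bits):
    if valid_bits not in SUPPORTED_PCM_LAYOUTS:
        raise ValueError(f"{valid_bits}-bit integer PCM is not supported")
    return SUPPORTED_PCM_LAYOUTS[valid_bits]

print(container_options(20))  # [(24, 20), (32, 20)]
```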
Creates and configures a source voice. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2.ixaudio2.createsourcevoice(v=vs.85).aspx.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24 or 32 bit containers)
- 24-bit integer PCM (either in 24 or 32 bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The
sample rate of a source voice must be between and
.
that specify the behavior of the source voice. A flag can be
or a combination of one or more of the following.
Possible values are , and
. is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this
argument must be between and .
Client-provided callback interface, . This parameter is
optional and can be null.
List of structures that describe the set of destination voices for the
source voice. If is NULL, the send list defaults to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
source voice. This parameter is optional and can be null.
If successful, returns a pointer to the new object.
Creates and configures a source voice. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2.ixaudio2.createsourcevoice(v=vs.85).aspx.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24 or 32 bit containers)
- 24-bit integer PCM (either in 24 or 32 bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The
sample rate of a source voice must be between and
.
that specify the behavior of the source voice. A flag can be
or a combination of one or more of the following.
Possible values are , and
. is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this
argument must be between and .
Client-provided callback interface, . This parameter is
optional and can be null.
List of structures that describe the set of destination voices for the
source voice. If is NULL, the send list defaults to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
source voice. This parameter is optional and can be null.
If successful, returns a new object.
Creates and configures a source voice.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24 or 32 bit containers)
- 24-bit integer PCM (either in 24 or 32 bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The
sample rate of a source voice must be between and
.
that specify the behavior of the source voice. A flag can be
or a combination of one or more of the following.
Possible values are , and
. is not supported on Windows.
If successful, returns a new object.
Creates and configures a source voice.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24 or 32 bit containers)
- 24-bit integer PCM (either in 24 or 32 bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The
sample rate of a source voice must be between and
.
If successful, returns a new object.
Creates and configures a submix voice.
On success, returns a pointer to the new object.
Number of channels in the input audio data of the submix voice. The
must be less than or equal to .
Sample rate of the input audio data of the submix voice. This rate must be a multiple of
. InputSampleRate must be between and
.
Flags that specify the behavior of the submix voice. It can be or
.
An arbitrary number that specifies when this voice is processed with respect to other
submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices
that include a smaller value and before all other voices that include a larger value. Voices that include the
same value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal
value. This prevents audio being lost due to a submix cycle.
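The processing-stage ordering rule can be pictured as a simple graph constraint: sends are only legal "uphill" to a strictly larger stage value. Illustrative Python (the helper and voice names are hypothetical):

```python
def can_send(source_stage, destination_stage):
    # A submix voice may only send to a voice with a strictly greater
    # processing stage, which rules out cycles in the submix graph.
    return destination_stage > source_stage

# Voices with smaller stage values are processed first;
# equal values may be processed in any order.
voices = [("reverb_bus", 2), ("drums_bus", 1), ("music_bus", 1)]
processing_order = sorted(voices, key=lambda v: v[1])

print(can_send(1, 2))  # True: drums_bus may send to reverb_bus
print(can_send(2, 1))  # False: would allow a submix cycle
```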
List of structures that describe the set of destination voices for the
submix voice. If is NULL, the send list will default to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
submix voice. This parameter is optional and can be null.
HRESULT
Creates and configures a submix voice.
Number of channels in the input audio data of the submix voice. The
must be less than or equal to .
Sample rate of the input audio data of the submix voice. This rate must be a multiple of
. InputSampleRate must be between and
.
Flags that specify the behavior of the submix voice. It can be or
.
An arbitrary number that specifies when this voice is processed with respect to other
submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices
that include a smaller value and before all other voices that include a larger value. Voices that include the
same value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal
value. This prevents audio being lost due to a submix cycle.
List of structures that describe the set of destination voices for the
submix voice. If is NULL, the send list will default to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
submix voice. This parameter is optional and can be null.
On success, returns a pointer to the new object.
Creates and configures a submix voice.
Number of channels in the input audio data of the submix voice. The
must be less than or equal to .
Sample rate of the input audio data of the submix voice. This rate must be a multiple of
. InputSampleRate must be between and
.
Flags that specify the behavior of the submix voice. It can be or
.
An arbitrary number that specifies when this voice is processed with respect to other
submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices
that include a smaller value and before all other voices that include a larger value. Voices that include the
same value are processed in any order. A submix voice cannot send to another submix voice with a lower or equal
value. This prevents audio being lost due to a submix cycle.
List of structures that describe the set of destination voices for the
submix voice. If is NULL, the send list will default to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
submix voice. This parameter is optional and can be null.
On success, returns a new object.
Creates and configures a submix voice.
Number of channels in the input audio data of the submix voice. The
must be less than or equal to .
Sample rate of the input audio data of the submix voice. This rate must be a multiple of
. InputSampleRate must be between and
.
Flags that specify the behavior of the submix voice. It can be or
.
On success, returns a new object.
Creates and configures a mastering voice.
If successful, returns a pointer to the new object.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration setup.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio.
Specifying the default value of NULL (for XAudio2.8) or 0 (for XAudio2.7) causes
XAudio2 to select the global default audio device.
On XAudio2.7: Use the and the method to enumerate devices. Pass its index (valid range from 0 to ) to the argument.
On XAudio2.8: Use the class to enumerate objects. Pass its to the argument.
structure that describes an effect chain to use in the mastering
voice, or NULL to use no effects.
The audio stream category to use for this mastering voice.
HRESULT
Creates and configures a mastering voice.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration setup.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio.
Specifying the default value of NULL (for XAudio2.8) or 0 (for XAudio2.7) causes
XAudio2 to select the global default audio device.
On XAudio2.7: Use the and the method to enumerate devices. Pass its index (valid range from 0 to ) to the argument.
On XAudio2.8: Use the class to enumerate objects. Pass its to the argument.
structure that describes an effect chain to use in the mastering
voice, or NULL to use no effects.
The audio stream category to use for this mastering voice.
If successful, returns a pointer to the new object.
Creates and configures a mastering voice.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration setup.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
Identifier of the device to receive the output audio.
Specifying the default value of NULL (for XAudio2.8) or 0 (for XAudio2.7) causes
XAudio2 to select the global default audio device.
On XAudio2.7: Use the and the method to enumerate devices. Pass its index (valid range from 0 to ) to the argument.
On XAudio2.8: Use the class to enumerate objects. Pass its to the argument.
structure that describes an effect chain to use in the mastering
voice, or NULL to use no effects.
XAudio2.8 only: The audio stream category to use for this mastering voice.
If successful, returns a new object.
Creates and configures a mastering voice.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration setup.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
Identifier of the device to receive the output audio.
Specifying the default value of NULL (for XAudio2.8) or 0 (for XAudio2.7) causes
XAudio2 to select the global default audio device.
On XAudio2.7: Use the and the method to enumerate devices. Pass its index (valid range from 0 to ) to the argument.
On XAudio2.8: Use the class to enumerate objects. Pass its to the argument.
If successful, returns a new object.
Creates and configures a mastering voice.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration setup.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
If successful, returns a new object.
Creates and configures a mastering voice.
If successful, returns a new object.
Starts the audio processing thread.
HRESULT
Starts the audio processing thread.
Stops the audio processing thread.
Atomically applies a set of operations that are tagged with a given identifier.
Identifier of the set of operations to be applied. To commit all pending operations, pass
.
HRESULT
Atomically applies a set of operations that are tagged with a given identifier.
Identifier of the set of operations to be applied. To commit all pending operations, pass
.
Atomically applies a set of operations that are tagged with a given identifier.
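Conceptually, an operation set is a tag on deferred parameter changes; CommitChanges applies every pending change carrying that tag at once. A toy model of the mechanism (Python; the class is purely illustrative and not part of the library — in the native header the commit-all and commit-now identifiers both happen to be defined as 0):

```python
COMMIT_ALL = 0  # assumed commit-all identifier for this toy model

class OperationQueue:
    """Toy model of XAudio2 operation sets: calls tagged with an
    operation-set id are deferred until commit() is called with that id."""
    def __init__(self):
        self._pending = []  # list of (operation_set_id, action)

    def defer(self, operation_set, action):
        self._pending.append((operation_set, action))

    def commit(self, operation_set):
        still_pending = []
        for tag, action in self._pending:
            if operation_set == COMMIT_ALL or tag == operation_set:
                action()  # apply the deferred change
            else:
                still_pending.append((tag, action))
        self._pending = still_pending

q = OperationQueue()
log = []
q.defer(1, lambda: log.append("set volume"))
q.defer(2, lambda: log.append("start voice"))
q.commit(1)           # applies only operation set 1
print(log)            # ['set volume']
q.commit(COMMIT_ALL)  # applies everything still pending
print(log)            # ['set volume', 'start voice']
```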
Returns current resource usage details, such as available memory or CPU usage.
On success, pointer to an structure that is
returned.
HRESULT
Changes global debug logging options for XAudio2.
structure that contains the new debug configuration.
Reserved parameter. Must be NULL.
HRESULT
Changes global debug logging options for XAudio2.
structure that contains the new debug configuration.
Returns the default device.
The default device.
Initializes the engine callback.
Represents an audio data buffer.
Maximum non-infinite LoopCount.
Infinite Loop.
MaxBufferBytes. See .
Flags that provide additional information about the audio buffer.
May be or .
Size of the audio data, in bytes. Must be no larger than for PCM data.
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.xaudio2.xaudio2_buffer(v=vs.85).aspx.
Pointer to the audio data.
First sample in the buffer that should be played.
For XMA buffers this value must be a multiple of 128 samples.
Length of the region to be played, in samples.
A value of zero means to play the entire buffer, and, in this case, must be zero as well.
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.xaudio2.xaudio2_buffer(v=vs.85).aspx.
First sample of the region to be looped. The value of must be less than
+ .
can be less than . must be 0 if
is 0.
Length of the loop region, in samples.
The value of + must be greater than and
less than + .
must be zero if is 0.
If is not 0 then a loop length of zero indicates the entire sample should be looped.
Number of times to loop through the loop region.
This value can be between 0 and .
If LoopCount is zero no looping is performed and and must be 0.
To loop forever, set to .
Context value to be passed back in callbacks to the client. This may be .
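The loop-field constraints above can be collected into one validation routine. A hedged sketch (Python; the parameter names follow the XAUDIO2_BUFFER structure, and the "play entire buffer" case of PlayLength == 0 is deliberately not modeled):

```python
def validate_loop_region(play_begin, play_length,
                         loop_begin, loop_length, loop_count):
    # LoopCount == 0: no looping, so both loop fields must be zero.
    if loop_count == 0:
        return loop_begin == 0 and loop_length == 0
    # LoopBegin must lie before the end of the play region.
    if loop_begin >= play_begin + play_length:
        return False
    # LoopLength == 0 with a non-zero LoopCount loops the entire sample.
    if loop_length == 0:
        return True
    # Otherwise the loop end must fall after PlayBegin and
    # before PlayBegin + PlayLength.
    loop_end = loop_begin + loop_length
    return play_begin < loop_end < play_begin + play_length

print(validate_loop_region(0, 1000, 100, 400, 5))  # True
print(validate_loop_region(0, 1000, 100, 0, 0))    # False (loop fields must be 0)
```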
Initializes a new instance of the structure.
Returns a instance for the underlying .
Call
Frees the allocated memory.
Frees the allocated memory.
Provides data for the , the and the
event.
Initializes a new instance of the class.
The context pointer that was assigned to the member
of the structure when the buffer was submitted.
Gets the context pointer that was assigned to the member of the
structure when the buffer was submitted.
Flags that provide additional information about the audio buffer.
None
Indicates that there cannot be any buffers in the queue after this buffer. The only effect of this flag is to
suppress debug output warnings caused by starvation of the buffer queue.
XAudio2CriticalErrorEventArgs
Initializes a new instance of the class.
The error code.
The error code.
Describes device roles of an XAudio2 Device. Used in .
Device is not used as the default device for any applications.
Device is used in audio console applications.
Device is used to play multimedia.
Device is used for voice communication.
Device is used for games.
Device is the default device for all applications.
The role of the device is not valid.
XAudio2EngineCallback
Fired by XAudio2 just before an audio processing pass begins.
Fired by XAudio2 just after an audio processing pass ends.
Fired if a critical system error occurs that requires XAudio2 to be closed down and restarted.
XAudio2-COMException.
Initializes a new instance of the class.
Errorcode.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or
destination.
Throws an if the is not .
Errorcode.
Name of the interface which contains the COM-function which returned the specified
.
Name of the COM-function which returned the specified .
A mastering voice is used to represent the audio output device.
Initializes a new instance of the class.
Native pointer of the object.
The to use.
XAudio2.8 only: Gets the channel mask for this voice.
XAudio2.8 only: Returns the channel mask for this voice.
Returns the channel mask for this voice. This corresponds to the
member of the class.
HRESULT
Provides data for the event.
Initializes a new instance of the class.
The number of bytes that must be submitted immediately to avoid starvation.
Gets the number of bytes that must be submitted immediately to avoid starvation.
Defines values to use with XAudio2Create to specify available processors.
Processor 1
Processor 2
Processor 3
Processor 4
Processor 5
Processor 6
Processor 7
Processor 8
Processor 9
Processor 10
Processor 11
Processor 12
Processor 13
Processor 14
Processor 15
Processor 16
Processor 17
Processor 18
Processor 19
Processor 20
Processor 21
Processor 22
Processor 23
Processor 24
Processor 25
Processor 26
Processor 27
Processor 28
Processor 29
Processor 30
Processor 31
Processor 32
Any processor
Default processor for XAudio2.7, which is defined as .
Default processor for XAudio2.8, which is defined as .
Use a source voice to submit audio data to the XAudio2 processing pipeline. You must send voice data to a mastering
voice to be heard, either directly or through intermediate submix voices.
Gets the of the source voice.
Initializes a new instance of the class.
Native pointer of the object.
The to use.
Starts consumption and processing of audio by the voice. Delivers the result to any connected submix or mastering
voices, or to the output device.
Flags that control how the voice is started. Must be 0.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Starts consumption and processing of audio by the voice. Delivers the result to any connected submix or mastering
voices, or to the output device.
Flags that control how the voice is started. Must be 0.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Starts consumption and processing of audio by the voice. Delivers the result to any connected submix or mastering
voices, or to the output device.
Stops consumption of audio by the current voice.
Flags that control how the voice is stopped. Can be or
.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Stops consumption of audio by the current voice.
Stops consumption of audio by the current voice.
Flags that control how the voice is stopped. Can be or
.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Adds a new audio buffer to the voice queue.
Pointer to an structure to queue.
Pointer to an additional XAudio2BufferWma structure used when submitting WMA data.
HRESULT
See
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2sourcevoice.ixaudio2sourcevoice.submitsourcebuffer(v=vs.85).aspx.
Adds a new audio buffer to the voice queue.
structure to queue.
See
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2sourcevoice.ixaudio2sourcevoice.submitsourcebuffer(v=vs.85).aspx.
Removes all pending audio buffers from the voice queue. If the voice is started, the buffer that is currently
playing is not removed from the queue.
HRESULT
See
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2sourcevoice.ixaudio2sourcevoice.flushsourcebuffers(v=vs.85).aspx.
Removes all pending audio buffers from the voice queue. If the voice is started, the buffer that is currently
playing is not removed from the queue.
See
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2sourcevoice.ixaudio2sourcevoice.flushsourcebuffers(v=vs.85).aspx.
Notifies an XAudio2 voice that no more buffers are coming after the last one that is currently in its queue.
HRESULT
Notifies an XAudio2 voice that no more buffers are coming after the last one that is currently in its queue.
Stops looping the voice when it reaches the end of the current loop region.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Stops looping the voice when it reaches the end of the current loop region.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Stops looping the voice when it reaches the end of the current loop region.
Returns the voice's current cursor position data.
structure containing the state of the voice.
Returns the voice's current cursor position data.
XAudio2.8 only: Flags controlling which voice state data should be returned.
Valid values are or .
The default value is . If you specify
, GetState
returns only the buffer state, not the sampler state.
GetState takes roughly one-third as much time to complete when you specify
.
structure containing the state of the voice.
If the is not , the parameter will be ignored.
Sets the frequency adjustment ratio of the voice.
Frequency adjustment ratio. This value must be between and
the MaxFrequencyRatio parameter specified when the voice was created
.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Sets the frequency adjustment ratio of the voice.
Frequency adjustment ratio. This value must be between and
the MaxFrequencyRatio parameter specified when the voice was created
.
Identifies this call as part of a deferred batch. For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Sets the frequency adjustment ratio of the voice.
Frequency adjustment ratio. This value must be between and
the MaxFrequencyRatio parameter specified when the voice was created
.
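The frequency adjustment ratio is a resampling factor, so its musical effect is 12·log2(ratio) semitones. A small illustrative check (Python; the constants mirror the documented 1/1024 to 1024 range, and the helper name is made up):

```python
import math

MIN_FREQUENCY_RATIO = 1.0 / 1024.0
MAX_FREQUENCY_RATIO = 1024.0

def semitone_shift(frequency_ratio):
    # Pitch change in semitones produced by a given frequency ratio.
    if not (MIN_FREQUENCY_RATIO <= frequency_ratio <= MAX_FREQUENCY_RATIO):
        raise ValueError("frequency ratio out of range")
    return 12.0 * math.log2(frequency_ratio)

print(semitone_shift(2.0))  # 12.0  (one octave up)
print(semitone_shift(0.5))  # -12.0 (one octave down)
```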
Returns the frequency adjustment ratio of the voice.
Current frequency adjustment ratio if successful.
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was
created.
The new sample rate the voice should process submitted data at. Valid sample rates
are 1 kHz to 200 kHz.
HRESULT
Reconfigures the voice to consume source data at a different sample rate than the rate specified when the voice was
created.
The new sample rate the voice should process submitted data at. Valid sample rates
are 1 kHz to 200 kHz.
A submix voice is used primarily for performance improvements and effects processing.
Initializes a new instance of the class.
Native pointer of the object.
The to use.
Defines supported XAudio2 subversions.
XAudio2.7
XAudio2.8
Represents the base class from which , and
are derived.
Gets the XAudio2 Version.
Initializes a new instance of the class.
Native pointer of the object.
The to use.
Gets the of the .
These details include information about the number of input channels, the sample rate and the
.
Gets or sets the of the .
Gets or sets the volume of the . The default value is 1.0.
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
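The quoted maximum gain follows directly from the ±2^24 amplitude range: 20·log10(2^24) ≈ 144.5 dB. Illustrative Python (helper and constant names are made up):

```python
import math

MAX_VOLUME_LEVEL = 2.0 ** 24  # largest amplitude multiplier

def amplitude_to_db(amplitude):
    # Convert an amplitude multiplier to gain in decibels.
    return 20.0 * math.log10(amplitude)

print(round(amplitude_to_db(MAX_VOLUME_LEVEL), 1))  # 144.5
print(amplitude_to_db(1.0))                         # 0.0 (no attenuation or gain)
```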
Returns information about the creation flags, input channels, and sample rate of a voice.
object containing information about the voice.
HRESULT
Designates a new set of submix or mastering voices to receive the output of the voice.
VoiceSends structure which contains Output voices. If is null, the voice will send
its output to the current mastering voice. All of the voices in a send list must have the same input sample rate.
HRESULT
Designates a new set of submix or mastering voices to receive the output of the voice.
Array of s. If is null, the voice will send
its output to the current mastering voice.
All voices in the must have the same input sample rate.
Replaces the effect chain of the voice.
Describes the new effect chain to use.
If null is passed, the current effect chain is removed.
HRESULT
Replaces the effect chain of the voice.
Describes the new effect chain to use.
If null is passed, the current effect chain is removed.
Enables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Enables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Enables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Disables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Disables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Disables the effect at a given position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Returns the running state of the effect at a specified position in the effect chain of the voice.
Zero-based index of an effect in the effect chain of the voice.
Returns true if the effect is enabled. If the effect is disabled, returns false.
Returns whether the effect at the specified position in the effect chain is enabled.
Zero-based index of an effect in the effect chain of the voice.
Returns true if the effect is enabled. If the effect is disabled, returns false.
Sets parameters for a given effect in the voice's effect chain.
Zero-based index of an effect within the voice's effect chain.
New values of the effect-specific parameters.
Size of the array in bytes.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Sets parameters for a given effect in the voice's effect chain.
Effect parameter.
Zero-based index of an effect within the voice's effect chain.
New values of the effect-specific parameters.
Sets parameters for a given effect in the voice's effect chain.
Effect parameter.
Zero-based index of an effect within the voice's effect chain.
New values of the effect-specific parameters.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Returns the current effect-specific parameters of a given effect in the voice's effect chain.
Zero-based index of an effect within the voice's effect chain.
Returns the current values of the effect-specific parameters.
Size of the array in bytes.
HRESULT
Returns the current effect-specific parameters of a given effect in the voice's effect chain.
Effect parameters.
Zero-based index of an effect within the voice's effect chain.
Effect parameters value.
Sets the voice's filter parameters.
structure containing the filter
information.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Sets the voice's filter parameters.
structure containing the filter
information.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Gets the voice's filter parameters.
structure containing the filter
information.
HRESULT
Gets the voice's filter parameters.
structure containing the filter information.
Sets the filter parameters on one of this voice's sends.
The destination voice of the send whose filter parameters will be set.
structure containing the filter
information.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
Sets the filter parameters on one of this voice's sends.
The destination voice of the send whose filter parameters will be set.
structure containing the filter
information.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Sets the filter parameters on one of this voice's sends.
The destination voice of the send whose filter parameters will be set.
structure containing the filter
information.
Returns the filter parameters from one of this voice's sends.
The destination voice of the send whose filter parameters will be read.
structure containing the filter
information.
HRESULT
Returns the filter parameters from one of this voice's sends.
The destination voice of the send whose filter parameters will be read.
structure containing the filter information.
Sets the overall volume level for the voice.
Overall volume level to use. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Sets the overall volume level for the voice.
Overall volume level to use. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Gets the current overall volume level of the voice.
Returns the current overall volume level of the voice. See Remarks for more information on volume
levels.
HRESULT
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Gets the current overall volume level of the voice.
The current overall volume level of the voice. See Remarks for more information on volume levels.
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Sets the volume levels for the voice, per channel. This method is valid only for source and submix voices, because
mastering voices do not specify volume per channel.
Number of channels in the voice.
Array containing the new volumes of each channel in the voice. The array must have
elements. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Sets the volume levels for the voice, per channel. This method is valid only for source and submix voices, because
mastering voices do not specify volume per channel.
Number of channels in the voice.
Array containing the new volumes of each channel in the voice. The array must have
elements. See Remarks for more information on volume levels.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Sets the volume levels for the voice, per channel. This method is valid only for source and submix voices, because
mastering voices do not specify volume per channel.
Number of channels in the voice.
Array containing the new volumes of each channel in the voice. The array must have
elements. See Remarks for more information on volume levels.
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Returns the volume levels for the voice, per channel.
These settings are applied after the effect chain is applied.
This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Confirms the channel count of the voice.
Returns the current volume level of each channel in the voice. The array must have at least
elements.
See remarks for more information on volume levels.
HRESULT
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Returns the volume levels for the voice, per channel.
These settings are applied after the effect chain is applied.
This method is valid only for source and submix voices, because mastering voices do not specify volume per channel.
Confirms the channel count of the voice.
Returns the current volume level of each channel in the voice. The has at least
elements.
The master volume level is applied at different times depending on the type of voice.
For submix and mastering voices the volume level is applied just before the voice's built in filter and effect
chain is applied.
For source voices the master volume level is applied after the voice's filter and effect
chain is applied. Volume levels are expressed as floating-point amplitude multipliers
between -2^24 and 2^24, with a maximum
gain of 144.5 dB. A volume level of 1.0 means there is no attenuation or gain and 0 means silence.
Negative levels can be used to invert the audio's phase.
Sets the volume level of each channel of the final output for the voice. These channels are mapped to the input
channels of a specified destination voice.
Destination for which to set volume levels.
If the voice sends to a single target voice then specifying null will cause SetOutputMatrix to operate on that
target voice.
Confirms the output channel count of the voice. This is the number of channels that are
produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice.
The level sent from source channel S to destination channel D is specified in the form levelMatrix[SourceChannels ×
D + S].
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2voice.ixaudio2voice.setoutputmatrix(v=vs.85).aspx.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
HRESULT
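The levelMatrix layout described above can be sketched as a small indexing helper; the function and variable names below are illustrative, not part of the CSCore API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Index helper for the documented output-matrix layout: the level sent
// from source channel S to destination channel D is stored at
// levelMatrix[SourceChannels * D + S].
inline std::size_t MatrixIndex(std::size_t sourceChannels,
                               std::size_t d, std::size_t s) {
    return sourceChannels * d + s;
}

// Builds an identity (straight-through) matrix for equal channel counts:
// each source channel is routed to its counterpart at unity gain.
inline std::vector<float> IdentityMatrix(std::size_t channels) {
    std::vector<float> m(channels * channels, 0.0f);
    for (std::size_t c = 0; c < channels; ++c)
        m[MatrixIndex(channels, c, c)] = 1.0f;
    return m;
}
```

For a stereo-to-stereo routing this yields the four entries [1, 0, 0, 1]: left to left and right to right at unity, no cross-feed.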
Sets the volume level of each channel of the final output for the voice. These channels are mapped to the input
channels of a specified destination voice.
Destination for which to set volume levels.
If the voice sends to a single target voice then specifying null will cause SetOutputMatrix to operate on that
target voice.
Confirms the output channel count of the voice. This is the number of channels that are
produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice.
The level sent from source channel S to destination channel D is specified in the form levelMatrix[SourceChannels ×
D + S].
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2voice.ixaudio2voice.setoutputmatrix(v=vs.85).aspx.
Identifies this call as part of a deferred batch. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/ee415807(v=vs.85).aspx.
Sets the volume level of each channel of the final output for the voice. These channels are mapped to the input
channels of a specified destination voice.
Destination for which to set volume levels.
If the voice sends to a single target voice then specifying null will cause SetOutputMatrix to operate on that
target voice.
Confirms the output channel count of the voice. This is the number of channels that are
produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice.
The level sent from source channel S to destination channel D is specified in the form levelMatrix[SourceChannels ×
D + S].
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2voice.ixaudio2voice.setoutputmatrix(v=vs.85).aspx.
Gets the volume level of each channel of the final output for the voice. These channels are mapped to the input
channels of a specified destination voice.
The destination to retrieve the output matrix for.
Confirms the output channel count of the voice. This is the number of channels that are
produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice.
The level sent from source channel S to destination channel D is specified in the form levelMatrix[SourceChannels ×
D + S].
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2voice.ixaudio2voice.getoutputmatrix(v=vs.85).aspx.
HRESULT
Gets the volume level of each channel of the final output for the voice. These channels are mapped to the input
channels of a specified destination voice.
The destination to retrieve the output matrix for.
Confirms the output channel count of the voice. This is the number of channels that are
produced by the last effect in the chain.
Confirms the input channel count of the destination voice.
Array of [SourceChannels × DestinationChannels] volume levels sent to the destination voice.
The level sent from source channel S to destination channel D is specified in the form levelMatrix[SourceChannels ×
D + S].
For more details see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2voice.ixaudio2voice.getoutputmatrix(v=vs.85).aspx.
Destroys the voice. If necessary, stops the voice and removes it from the XAudio2 graph.
Disposes the and calls the method.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Provides data for the event.
Initializes a new instance of the class.
The context pointer that was assigned to the member
of the structure when the buffer was submitted.
The HRESULT code of the error encountered
Gets the HRESULT code of the error encountered.
is the class for the XAudio2 object that manages all audio engine states, the audio
processing thread, the voice graph, and so forth.
The denominator of a quantum unit. Audio is processed in 10 ms chunks (= 1/100 second).
Minimum sample rate is 1000 Hz.
Maximum sample rate is 200 kHz.
The minimum frequency ratio is 1/1024.
Maximum frequency ratio is 1024.
The default value for the frequency ratio is 4.
The maximum number of supported channels is 64.
Value which indicates that the default number of channels should be used.
Value which indicates that the default sample rate should be used.
Value which can be used in combination with the method to commit all changes.
Value which indicates that changes should be committed instantly.
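The quantum-unit constant above implies that each processing pass covers 1/100 of a second, so a sample rate that is a multiple of 100 yields a whole number of frames per quantum. A minimal sketch, with an illustrative constant mirroring the documented denominator:

```cpp
#include <cassert>

// XAudio2 processes audio in quanta of 1/100 second (10 ms).
const unsigned kQuantumDenominator = 100; // quanta per second

// Frames contained in one quantum at the given sample rate;
// e.g. 48000 Hz -> 480 frames per 10 ms quantum.
inline unsigned FramesPerQuantum(unsigned sampleRate) {
    return sampleRate / kQuantumDenominator;
}
```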
Initializes a new instance of the class.
Native pointer of the object.
Initializes a new instance of the class.
This constructor already calls . Don't call it a second time.
Initializes a new instance of the class.
Specifies whether the XAudio2 engine should be created in debug mode. Pass true to enable the debug
mode.
Specifies which CPU to use. Use as
default value.
This constructor already calls . Don't call it a second time.
Returns the number of available audio output devices.
Number of available audio output devices.
HRESULT
Returns the number of available audio output devices.
Number of available audio output devices.
Returns information about an audio output device.
Index of the device to be queried. This value must be less than the count returned by
.
structure.
HRESULT
Returns information about an audio output device.
Index of the device to be queried. This value must be less than the count returned by
.
structure.
Sets XAudio2 parameters and prepares XAudio2 for use.
Flags that specify the behavior of the XAudio2 object. This value must be 0.
Specifies which CPU to use. Use as default value.
HRESULT
Sets XAudio2 parameters and prepares XAudio2 for use.
Flags that specify the behavior of the XAudio2 object. This value must be 0.
Specifies which CPU to use. Use as default value.
Adds an to the engine callback list.
object to add to the engine
callback list.
HRESULT
Removes an from the engine callback list.
object to remove from the engine
callback list. If the given interface is present more than once in the list, only the first instance in the list
will be removed.
Creates and configures a source voice. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2.ixaudio2.createsourcevoice(v=vs.85).aspx.
If successful, returns a pointer to the new object.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24- or 32-bit containers)
- 24-bit integer PCM (either in 24- or 32-bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The sample
rate of a source voice must be between and .
that specify the behavior of the source voice. A flag can be
or a combination of one or more of the following.
Possible values are , and
. is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this
argument must be between and .
Client-provided callback interface, . This parameter is
optional and can be null.
List of structures that describe the set of destination voices for the
source voice. If is NULL, the send list defaults to a single output to the first mastering
voice created.
List of structures that describe an effect chain to use in the
source voice. This parameter is optional and can be null.
HRESULT
Creates and configures a submix voice.
On success, returns a pointer to the new object.
Number of channels in the input audio data of the submix voice. The
must be less than or equal to .
Sample rate of the input audio data of the submix voice. This rate must be a multiple of
. InputSampleRate must be between and
.
Flags that specify the behavior of the submix voice. It can be or
.
An arbitrary number that specifies when this voice is processed with respect to other
submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices
that include a smaller value and before all other voices that include a larger
value. Voices that include the same value are
processed in any order. A submix voice cannot send to another submix voice with a lower or equal
value. This prevents audio being lost due to a submix cycle.
List of structures that describe the set of destination voices for the
submix voice. If is NULL, the send list will default to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
submix voice. This parameter is optional and can be null.
HRESULT
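The ProcessingStage ordering rule above (a submix voice may only send to a submix voice with a strictly greater stage value, which prevents cycles) can be expressed as a one-line validity check; the names below are illustrative, not CSCore API:

```cpp
#include <cassert>

// A send between two submix voices is valid only if the destination's
// ProcessingStage value is strictly greater than the source's, ruling
// out cycles in the submix graph.
inline bool IsValidSubmixSend(unsigned fromStage, unsigned toStage) {
    return toStage > fromStage;
}
```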
Creates and configures a mastering voice.
If successful, returns a pointer to the new object.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio. Specifying the default value of 0 (zero)
causes XAudio2 to select the global default audio device.
structure that describes an effect chain to use in the mastering
voice, or NULL to use no effects.
Not valid for XAudio 2.7.
HRESULT
Starts the audio processing thread.
HRESULT
Stops the audio processing thread.
Atomically applies a set of operations that are tagged with a given identifier.
Identifier of the set of operations to be applied. To commit all pending operations, pass
.
HRESULT
Returns current resource usage details, such as available memory or CPU usage.
On success, pointer to an structure that is
returned.
HRESULT
Changes global debug logging options for XAudio2.
structure that contains the new debug configuration.
Reserved parameter. Must be NULL.
HRESULT
Returns the default device.
The default device.
is the class for the XAudio2 object that manages all audio engine states, the audio
processing thread, the voice graph, and so forth.
The denominator of a quantum unit. Audio is processed in 10 ms chunks (= 1/100 second).
Minimum sample rate is 1000 Hz.
Maximum sample rate is 200 kHz.
The minimum frequency ratio is 1/1024.
Maximum frequency ratio is 1024.
The default value for the frequency ratio is 4.
The maximum number of supported channels is 64.
Value which indicates that the default number of channels should be used.
Value which indicates that the default sample rate should be used.
Value which can be used in combination with the method to commit all changes.
Value which indicates that changes should be committed instantly.
Initializes a new instance of the class.
Native pointer of the object.
Initializes a new instance of the class.
Initializes a new instance of the class.
Specifies which CPU to use. Use as
default value.
Adds an to the engine callback list.
object to add to the engine
callback list.
HRESULT
Removes an from the engine callback list.
object to remove from the engine
callback list. If the given interface is present more than once in the list, only the first instance in the list
will be removed.
Creates and configures a source voice. For more information see
http://msdn.microsoft.com/en-us/library/windows/desktop/microsoft.directx_sdk.ixaudio2.ixaudio2.createsourcevoice(v=vs.85).aspx.
If successful, returns a pointer to the new object.
Pointer to a . The following formats are supported:
- 8-bit (unsigned) integer PCM
- 16-bit integer PCM (optimal format for XAudio2)
- 20-bit integer PCM (either in 24- or 32-bit containers)
- 24-bit integer PCM (either in 24- or 32-bit containers)
- 32-bit integer PCM
- 32-bit float PCM (preferred format after 16-bit integer)
The number of channels in a source voice must be less than or equal to . The sample
rate of a source voice must be between and .
that specify the behavior of the source voice. A flag can be
or a combination of one or more of the following.
Possible values are , and
. is not supported on Windows.
Highest allowable frequency ratio that can be set on this voice. The value for this
argument must be between and .
Client-provided callback interface, . This parameter is
optional and can be null.
List of structures that describe the set of destination voices for the
source voice. If is NULL, the send list defaults to a single output to the first mastering
voice created.
List of structures that describe an effect chain to use in the
source voice. This parameter is optional and can be null.
HRESULT
Creates and configures a submix voice.
On success, returns a pointer to the new object.
Number of channels in the input audio data of the submix voice. The
must be less than or equal to .
Sample rate of the input audio data of the submix voice. This rate must be a multiple of
. InputSampleRate must be between and
.
Flags that specify the behavior of the submix voice. It can be or
.
An arbitrary number that specifies when this voice is processed with respect to other
submix voices, if the XAudio2 engine is running other submix voices. The voice is processed after all other voices
that include a smaller value and before all other voices that include a larger
value. Voices that include the same value are
processed in any order. A submix voice cannot send to another submix voice with a lower or equal
value. This prevents audio being lost due to a submix cycle.
List of structures that describe the set of destination voices for the
submix voice. If is NULL, the send list will default to a single output to the first
mastering voice created.
List of structures that describe an effect chain to use in the
submix voice. This parameter is optional and can be null.
HRESULT
Creates and configures a mastering voice.
If successful, returns a pointer to the new object.
Number of channels the mastering voice expects in its input audio. must be less than
or equal to .
You can set InputChannels to , which causes XAudio2 to try to detect the system
speaker configuration.
Sample rate of the input audio data of the mastering voice. This rate must be a multiple of
. must be between
and .
You can set InputSampleRate to , with the default being determined by the current
platform.
Flags that specify the behavior of the mastering voice. Must be 0.
Identifier of the device to receive the output audio. Specifying the default value of NULL
causes XAudio2 to select the global default audio device.
structure that describes an effect chain to use in the mastering
voice, or NULL to use no effects.
The audio stream category to use for this mastering voice.
HRESULT
Starts the audio processing thread.
HRESULT
Stops the audio processing thread.
Atomically applies a set of operations that are tagged with a given identifier.
Identifier of the set of operations to be applied. To commit all pending operations, pass
.
HRESULT
Returns current resource usage details, such as available memory or CPU usage.
On success, pointer to an structure that is
returned.
HRESULT
Changes global debug logging options for XAudio2.
structure that contains the new debug configuration.
Reserved parameter. Must be NULL.
HRESULT
Returns the default device.
The default device.
Defines the format of waveform-audio data for formats having more than two channels or higher sample resolutions
than allowed by .
Can be used to define any format that can be defined by .
For more information see and
.
Returns the SubType-Guid of a . If the specified does
not contain a SubType-Guid, the gets converted to the equivalent SubType-Guid
using the method.
which gets used to determine the SubType-Guid.
SubType-Guid of the specified .
Gets the number of bits of precision in the signal.
Usually equal to . However, is the
container size and must be a multiple of 8, whereas can be any value not
exceeding the container size. For example, if the format uses 20-bit samples,
must be at least 24, but is 20.
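The relationship between valid bits and container size described above (the container must be a multiple of 8 bits and at least as large as the precision) can be sketched as a simple check; the function name is illustrative, not CSCore API:

```cpp
#include <cassert>

// A container size is valid for a given precision when it is a whole
// number of bytes and can hold all valid bits; e.g. 20 valid bits fit
// in a 24-bit or 32-bit container, but not in a 20-bit one.
inline bool IsValidContainer(unsigned containerBits, unsigned validBits) {
    return containerBits % 8 == 0
        && validBits > 0
        && validBits <= containerBits;
}
```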
Gets the number of samples contained in one compressed block of audio data. This value is used in buffer
estimation. This value is used with compressed formats that have a fixed number of samples within each block. This
value can be set to 0 if a variable number of samples is contained in each block of compressed audio data. In this
case, buffer estimation and position information needs to be obtained in other ways.
Gets a bitmask specifying the assignment of channels in the stream to speaker positions.
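A channel mask is consistent with a format only when its number of set bits equals the channel count. The sketch below assumes the standard Windows SPEAKER_* bit values (0x1 = front left, 0x2 = front right, 0x10/0x20 = back left/right); the helper itself is illustrative, not CSCore API:

```cpp
#include <bitset>
#include <cassert>

// Speaker-position bits as defined by the Windows channel-mask convention.
const unsigned kFrontLeft = 0x1, kFrontRight = 0x2;
const unsigned kBackLeft = 0x10, kBackRight = 0x20;

// The channel count implied by a mask is simply its population count.
inline unsigned ChannelsInMask(unsigned mask) {
    return static_cast<unsigned>(std::bitset<32>(mask).count());
}
```

A stereo stream would use kFrontLeft | kFrontRight (two set bits); a quadraphonic stream adds the two back speakers (four set bits).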
Subformat of the data, such as . The subformat information is similar to
that provided by the tag in the class's member.
Initializes a new instance of the class.
Sample rate of the waveform-audio. This value will get applied to the
property.
Bits per sample of the waveform-audio. This value will get applied to the
property and the property.
Number of channels of the waveform-audio. This value will get applied to the
property.
Subformat of the data. This value will get applied to the property.
Initializes a new instance of the class.
Samplerate of the waveform-audio. This value will get applied to the
property.
Bits per sample of the waveform-audio. This value will get applied to the
property and the property.
Number of channels of the waveform-audio. This value will get applied to the
property.
Subformat of the data. This value will get applied to the property.
Bitmask specifying the assignment of channels in the stream to speaker positions. This value
will get applied to the property.
Converts the instance to a raw instance by converting
the to the equivalent .
A simple instance.
Creates a new object that is a copy of the current instance.
A copy of the current instance.
Returns a string which describes the .
A string which describes the .
Defines the base for all audio streams which provide samples instead of raw byte data.
Compared to the , the provides samples instead of raw bytes.
That means that the and the properties
are expressed in samples.
Also the method provides samples instead of raw bytes.
Defines the base for all aggregators.
Defines the base for all aggregators.
Defines the base for all aggregators.
The type of data the aggregator provides.
The type of the aggregator.
Gets the underlying .
The underlying .
Defines the base for all audio streams which provide raw byte data.
Compared to the , the provides raw bytes instead of samples.
That means that the and the properties are
expressed in bytes.
Also the method provides raw bytes instead of samples.
Defines the base for all audio streams.
Gets a value indicating whether the supports seeking.
Gets the of the waveform-audio data.
Gets or sets the current position. The unit of this property depends on the implementation of this interface. Some
implementations may not support this property.
Gets the length of the waveform-audio data. The unit of this property depends on the implementation of this
interface. Some implementations may not support this property.
Exception class for all MM-APIs like waveOut or ACM.
Throws an if the is not
.
Error code.
Name of the function which returned the specified .
Gets the which describes the error.
Gets the name of the function which caused the error.
Initializes a new instance of the class.
Error code.
Name of the function which returned the specified .
Initializes a new instance of the class from serialization data.
The object that holds the serialized object data.
The StreamingContext object that supplies the contextual information about the source or destination.
Populates a with the data needed to serialize the target object.
The to populate with data.
The destination (see StreamingContext) for this serialization.
Base class for most of the sample sources.
Creates a new instance of the class.
Underlying base source which provides audio data.
Reads a sequence of samples from the and advances the position within the stream by
the number of samples read.
An array of floats. When this method returns, the contains the specified
float array with the values between and ( +
- 1) replaced by the floats read from the current source.
The zero-based offset in the at which to begin storing the data
read from the current stream.
The maximum number of samples to read from the current source.
The total number of samples read into the buffer.
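The read semantics described above can be sketched as a simple drain loop. This is a hedged usage example; the `source` variable stands for any implementation of the sample-source interface and its creation is omitted:

```csharp
// Sketch: read all samples from a sample source into a float buffer.
// 'source' is assumed to implement the Read method described above.
float[] buffer = new float[4096];
int read;
while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
{
    // 'read' samples are now stored in buffer[0 .. read - 1].
    // Process them here (metering, DSP, ...).
}
```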
Gets the of the waveform-audio data.
Gets or sets the position in samples.
Gets the length in samples.
Gets a value indicating whether the supports seeking.
Gets or sets the underlying sample source.
Gets or sets a value which indicates whether to dispose the
on calling .
Disposes the and the underlying .
Disposes the and the underlying .
True to release both managed and unmanaged resources; false to release only unmanaged
resources.
Destructor which calls .
Provides a few basic extensions.
Gets the length of a as a .
The source to get the length for.
The length of the specified as a value.
Gets the position of a as a value.
The source to get the position of.
The position of the specified as a value.
The source must support seeking to get or set the position.
Use the property to determine whether the stream supports seeking.
Otherwise a call to this method may result in an exception.
Sets the position of a as a value.
The source to set the new position for.
The new position as a value.
The source must support seeking to get or set the position.
Use the property to determine whether the stream supports seeking.
Otherwise a call to this method may result in an exception.
Converts a duration in raw elements to a value. For more information about "raw elements" see remarks.
The instance which provides the used
to convert the duration in "raw elements" to a value.
The duration in "raw elements" to convert to a value.
The duration as a value.
source
or
elementCount
The term "raw elements" describes the elements an audio source uses.
What type of unit an implementation of the interface uses depends on the implementation itself.
For example, a uses bytes while a uses samples.
That means that a provides its position, length,... in bytes
while a provides its position, length,... in samples.
To get the length or the position of a as a value, use the
or the property.
Internally this method uses the class.
Converts a duration in raw elements to a duration in milliseconds. For more information about "raw elements" see remarks.
The instance which provides the used
to convert the duration in "raw elements" to a duration in milliseconds.
The duration in "raw elements" to convert to duration in milliseconds.
The duration in milliseconds.
source
or
elementCount
The term "raw elements" describes the elements an audio source uses.
What type of unit an implementation of the interface uses depends on the implementation itself.
For example, a uses bytes while a uses samples.
That means that a provides its position, length,... in bytes
while a provides its position, length,... in samples.
To get the length or the position of a as a value, use the
or the property.
Internally this method uses the class.
Converts a duration as a to a duration in "raw elements". For more information about "raw elements" see remarks.
instance which provides the used to convert
the duration as a to a duration in "raw elements".
Duration as a to convert to a duration in "raw elements".
Duration in "raw elements".
source
The term "raw elements" describes the elements an audio source uses.
What type of unit an implementation of the interface uses depends on the implementation itself.
For example, a uses bytes while a uses samples.
That means that a provides its position, length,... in bytes
while a provides its position, length,... in samples.
To get the length or the position of a as a value, use the
or the property.
Internally this method uses the class.
Converts a duration in milliseconds to a duration in "raw elements". For more information about "raw elements" see remarks.
instance which provides the used to convert
the duration in milliseconds to a duration in "raw elements".
Duration in milliseconds to convert to a duration in "raw elements".
Duration in "raw elements".
The term "raw elements" describes the elements an audio source uses.
What type of unit an implementation of the interface uses depends on the implementation itself.
For example, a uses bytes while a uses samples.
That means that a provides its position, length,... in bytes
while a provides its position, length,... in samples.
To get the length or the position of a as a value, use the
or the property.
Internally this method uses the class.
source
milliseconds is less than zero.
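For a byte-based source, the element/time conversions above boil down to simple arithmetic on the format's data rate. A minimal sketch under that assumption (all names and values illustrative):

```csharp
// Sketch: for a byte-based source, "raw elements" are bytes, so
//   milliseconds -> bytes: ms * bytesPerSecond / 1000, kept block-aligned
//   bytes -> milliseconds: bytes * 1000 / bytesPerSecond
int bytesPerSecond = 176400;  // e.g. 16-bit stereo at 44.1 kHz
int blockAlign = 4;           // bytes per sample block

long MillisecondsToBytes(long ms)
{
    long bytes = ms * bytesPerSecond / 1000;
    return bytes - (bytes % blockAlign);   // stay on a block boundary
}

long BytesToMilliseconds(long bytes)
{
    return bytes * 1000 / bytesPerSecond;
}
```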
Creates a new file, writes all audio data of the to the file, and then closes the file. If the target file already exists, it is overwritten.
Source which provides the audio data to write to the file.
The file to write to.
source
Writes all audio data of the to a wavestream (including a wav header).
Source which provides the audio data to write to the .
to store the audio data in.
source
or
stream
Stream is not writeable.;stream
Writes all audio data of the to a stream. In comparison to the method,
this method won't encode the provided audio to any particular format. No wav, aiff,... header will be included.
The waveSource which provides the audio data to write to the .
The to store the audio data in.
waveSource
or
stream
Stream is not writeable.;stream
Checks the length of an array.
Type of the array.
The array to check. This parameter can be null.
The target length of the array.
A value which indicates whether the length of the array has to exactly match the specified .
Array which fits the specified requirements. Note that if a new array is created, the content of the old array is not copied to the returned array.
Blocks the current thread until the playback of the specified instance stops or the specified timeout expires.
The instance to wait for its playback to stop.
The number of milliseconds to wait. Pass to wait indefinitely.
true if the got stopped; false if the specified expired.
Blocks the current thread until the playback of the specified instance stops.
The instance to wait for its playback to stop.
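A usage sketch of the blocking wait described above. The `soundOut` variable and the exact member names are assumptions for illustration:

```csharp
// Sketch: start playback, then block the calling thread until it stops.
soundOut.Play();
// ... playback runs on its own thread ...
soundOut.WaitForStopped();                 // blocks indefinitely

// Or, with a timeout in milliseconds: returns false if the timeout
// expires before playback stops.
bool stopped = soundOut.WaitForStopped(5000);
```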
Base class for all wave aggregators.
Creates a new instance of class.
Creates a new instance of class.
Underlying base stream.
Gets or sets a value which indicates whether to dispose the
on calling .
Gets or sets the underlying base stream of the WaveAggregator.
Gets the output WaveFormat.
Reads a sequence of bytes from the and advances the position within the stream by the
number of bytes read.
An array of bytes. When this method returns, the contains the specified
byte array with the values between and ( +
- 1) replaced by the bytes read from the current source.
The zero-based byte offset in the at which to begin storing the data
read from the current stream.
The maximum number of bytes to read from the current source.
The total number of bytes read into the buffer.
Gets or sets the position of the source.
Gets the length of the source.
Gets a value indicating whether the supports seeking.
Disposes the source and releases all allocated resources.
Disposes the and releases all allocated resources.
True to release both managed and unmanaged resources; false to release only unmanaged resources.
Destructor which calls .
Defines the format of waveform-audio data.
Gets the number of channels in the waveform-audio data. Mono data uses one channel and stereo data uses two
channels.
Gets the sample rate, in samples per second (hertz).
Gets the required average data transfer rate, in bytes per second. For example, 16-bit stereo at 44.1 kHz has an
average data rate of 176,400 bytes per second (2 channels x 2 bytes per sample per channel x 44,100 samples per
second).
Gets the block alignment, in bytes. The block alignment is the minimum atomic unit of data. For PCM data, the block
alignment is the number of bytes used by a single sample, including data for both channels if the data is stereo.
For example, the block alignment for 16-bit stereo PCM is 4 bytes (2 channels x 2 bytes per sample).
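The two derived values described above follow directly from the basic PCM parameters. For 16-bit stereo at 44.1 kHz (variable names illustrative):

```csharp
int channels = 2;
int bitsPerSample = 16;
int sampleRate = 44100;

int blockAlign = channels * (bitsPerSample / 8);   // 2 x 2 = 4 bytes
int avgBytesPerSecond = sampleRate * blockAlign;   // 44100 x 4 = 176400
```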
Gets the number of bits used to store one sample.
Gets the size (in bytes) of extra information. This value is mainly used for marshalling.
Gets the number of bytes used to store one sample.
Gets the number of bytes used to store one block. This value equals multiplied by
.
Gets the waveform-audio format type.
Initializes a new instance of the class with a sample rate of 44100 Hz, 16 bits
per sample, 2 channels and PCM as the format type.
Initializes a new instance of the class with PCM as the format type.
Samples per second.
Number of bits used to store one sample.
Number of channels in the waveform-audio data.
Initializes a new instance of the class.
Samples per second.
Number of bits used to store one sample.
Number of channels in the waveform-audio data.
Format type or encoding of the wave format.
Initializes a new instance of the class.
Samples per second.
Number of bits used to store one sample.
Number of channels in the waveform-audio data.
Format type or encoding of the wave format.
Size (in bytes) of extra information. This value is mainly used for marshalling.
Converts a duration in milliseconds to a duration in bytes.
Duration in milliseconds to convert to a duration in bytes.
Duration in bytes.
Converts a duration in bytes to a duration in milliseconds.
Duration in bytes to convert to a duration in milliseconds.
Duration in milliseconds.
Indicates whether the current object is equal to another object of the same type.
The to compare with this .
true if the current object is equal to the other parameter; otherwise, false.
Returns a string which describes the .
A string which describes the .
Creates a new object that is a copy of the current instance.
A copy of the current instance.
Updates the - and the -property.