
java - Why aren't these interface-typed variables assigned newly instantiated objects?

Reposted · Author: 行者123 · Updated: 2023-12-02 07:21:41

I'm using this API and came across some example code that confuses me. I know I can assign an object to an interface-typed variable using "new", since an interface is a data type. What I don't understand in the code below is why the variables "cc" and "audioDecoder" are assigned the values they are. As far as I knew, these variables should be assigned new objects. Can someone explain what's going on here?

try {
    // open media file
    DefaultMediaPlayer player = new DefaultMediaPlayer("/home/me/walking.wav");

    // get some properties of the first audio stream
    IDecoder audioDecoder = player.getAudioStreamDecoder(0);
    ICodecContextWrapper cc = audioDecoder.getCodecContext();

    int sampleFormat = cc.getSampleFormat();
    int sampleRate = cc.getSampleRate();
    int bytesPerSample = AVSampleFormat.getBytesPerSample(sampleFormat);
    long channelLayout = cc.getChannelLayout();
    int channelCount = AVChannelLayout.getChannelCount(channelLayout);
    AudioFormat.Encoding encoding;

    if (AVSampleFormat.isPlanar(sampleFormat) || AVSampleFormat.isReal(sampleFormat))
        throw new LibavException("unsupported output sample format");
    else if (AVSampleFormat.isSigned(sampleFormat))
        encoding = AudioFormat.Encoding.PCM_SIGNED;
    else
        encoding = AudioFormat.Encoding.PCM_UNSIGNED;

    // create Java InputStream for audio stream raw data
    SampleInputStream sis = new SampleInputStream(sampleRate * bytesPerSample * channelCount, true);

    // create AudioInputStream from the SampleInputStream
    AudioInputStream audioStream = new AudioInputStream(sis, new AudioFormat(encoding, sampleRate,
            bytesPerSample * 8, channelCount, bytesPerSample * channelCount, sampleRate,
            ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder())), -1);

    // create adapter between Libav audio frames and the SampleInputStream
    Frame2AudioFrameAdapter resampler = new Frame2AudioFrameAdapter(channelLayout, channelLayout, sampleRate,
            sampleRate, sampleFormat, sampleFormat);

    // get audio mixer for the audio stream format
    PlaybackMixer audioMixer = PlaybackMixer.getMixer(audioStream.getFormat());

    // connect all streams
    audioDecoder.addFrameConsumer(resampler);
    resampler.addAudioFrameConsumer(sis);
    audioMixer.addInputStream(audioStream);

    // enable audio stream decoding
    player.setAudioStreamDecodingEnabled(0, true);

    // start playback
    audioMixer.play();
    player.play();

    // wait until the playback stops
    player.join();

    // release system resources
    player.close();
    resampler.dispose();
    PlaybackMixer.closeAllMixers();
} catch (Exception ex) {
    Logger.getLogger(PlaybackSample.class.getName()).log(Level.WARNING, "unable to play audio", ex);
}

Best Answer

If you read the API documentation, you'll see that the method DefaultMediaPlayer.getAudioStreamDecoder returns an instance of type IDecoder. That is why, in the source, the return value is assigned to the variable audioDecoder, which is declared with the interface type IDecoder.

// get some properties of the first audio stream 
IDecoder audioDecoder = player.getAudioStreamDecoder(0);
ICodecContextWrapper cc = audioDecoder.getCodecContext();

There is no rule that says an object can only be assigned to an interface-typed variable using new. You can also assign an object instance returned from a method.

Similarly, the method IDecoder.getCodecContext() returns an object of type ICodecContextWrapper, which is assigned to the variable cc.
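The idea can be sketched with a minimal, self-contained example. Note that Decoder, WavDecoder, and MediaPlayer below are hypothetical illustration types, not the real Libav bindings: the "new" happens inside the factory method, and the caller only ever sees the interface type.

```java
// Hypothetical interface, standing in for something like IDecoder.
interface Decoder {
    String codecName();
}

// A concrete implementation the caller never needs to name.
class WavDecoder implements Decoder {
    public String codecName() {
        return "pcm_s16le";
    }
}

// Stand-in for DefaultMediaPlayer: the method's declared return type
// is the interface, hiding the concrete class from the caller.
class MediaPlayer {
    Decoder getAudioStreamDecoder(int index) {
        return new WavDecoder(); // 'new' happens here, inside the method
    }
}

public class Demo {
    public static void main(String[] args) {
        MediaPlayer player = new MediaPlayer();
        // No 'new' at the call site -- the method supplies the instance,
        // exactly like player.getAudioStreamDecoder(0) in the question.
        Decoder decoder = player.getAudioStreamDecoder(0);
        System.out.println(decoder.codecName()); // prints "pcm_s16le"
    }
}
```

This is the usual factory pattern: the library can swap in a different Decoder implementation later without changing any calling code, because callers depend only on the interface.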

Regarding "java - Why aren't these interface-typed variables assigned newly instantiated objects?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/14132849/
