I am currently developing an app that needs to slow down the tempo of music. I searched around online, and the only really viable option on Android seems to be OpenSL ES. I started with the basics, so I just played an audio file, but for some reason I cannot change the tempo. I get the following error:
04-04 15:32:51.950: W/libOpenSLES(12848): Leaving Object::GetInterface (SL_RESULT_FEATURE_UNSUPPORTED)
I have checked whether this feature is supported, and the documentation says it is, so there is probably a mistake somewhere in my code. I have never worked with C++ before, so any help is much appreciated. My code is below:
/*
 * Copyright (C) 2010 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
/* This is a JNI example where we use native methods to play sounds
* using OpenSL ES. See the corresponding Java source file located at:
*
* src/com/example/nativeaudio/NativeAudio/NativeAudio.java
*/
#include <assert.h>
#include <jni.h>
#include <string.h>
// for __android_log_print(ANDROID_LOG_INFO, "YourApp", "formatted message");
// #include <android/log.h>
// for native audio
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
// for native asset manager
#include <sys/types.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
// pre-recorded sound clips, both are 8 kHz mono 16-bit signed little endian
static const char hello[] =
#include "hello_clip.h"
;
static const char android[] =
#include "android_clip.h"
;
// engine interfaces
static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine;
// output mix interfaces
static SLObjectItf outputMixObject = NULL;
static SLEnvironmentalReverbItf outputMixEnvironmentalReverb = NULL;
// aux effect on the output mix, used by the buffer queue player
static const SLEnvironmentalReverbSettings reverbSettings =
SL_I3DL2_ENVIRONMENT_PRESET_STONECORRIDOR;
// file descriptor player interfaces
static SLObjectItf fdPlayerObject = NULL;
static SLPlayItf fdPlayerPlay;
static SLSeekItf fdPlayerSeek;
static SLMuteSoloItf fdPlayerMuteSolo;
static SLVolumeItf fdPlayerVolume;
// synthesized sawtooth clip
#define SAWTOOTH_FRAMES 8000
static short sawtoothBuffer[SAWTOOTH_FRAMES];
// pointer and size of the next player buffer to enqueue, and number of remaining buffers
static short *nextBuffer;
static unsigned nextSize;
static int nextCount;
// playback rate (default 1x:1000)
static SLpermille playbackMinRate = 500;
static SLpermille playbackMaxRate = 2000;
static SLpermille playbackRateStepSize;
static SLPlaybackRateItf fdPlaybackRate;
// create the engine and output mix objects
void Java_com_example_nativeaudio_NativeAudio_createEngine(JNIEnv* env, jclass clazz)
{
SLresult result;
// create engine
result = slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// realize the engine
result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get the engine interface, which is needed in order to create other objects
result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// create output mix, with environmental reverb specified as a non-required interface
const SLInterfaceID ids[1] = {SL_IID_ENVIRONMENTALREVERB};
const SLboolean req[1] = {SL_BOOLEAN_FALSE};
result = (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 1, ids, req);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// realize the output mix
result = (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get the environmental reverb interface
// this could fail if the environmental reverb effect is not available,
// either because the feature is not present, excessive CPU load, or
// the required MODIFY_AUDIO_SETTINGS permission was not requested and granted
result = (*outputMixObject)->GetInterface(outputMixObject, SL_IID_ENVIRONMENTALREVERB,
&outputMixEnvironmentalReverb);
if (SL_RESULT_SUCCESS == result) {
result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
outputMixEnvironmentalReverb, &reverbSettings);
(void)result;
}
// ignore unsuccessful result codes for environmental reverb, as it is optional for this example
}
// expose the mute/solo APIs to Java for one of the 3 players
// expose the volume APIs to Java for one of the 3 players
// enable reverb on the buffer queue player
jboolean Java_com_example_nativeaudio_NativeAudio_enableReverb(JNIEnv* env, jclass clazz,
jboolean enabled)
{
SLresult result;
// we might not have been able to add environmental reverb to the output mix
if (NULL == outputMixEnvironmentalReverb) {
return JNI_FALSE;
}
return JNI_TRUE;
}
// create asset audio player
jboolean Java_com_example_nativeaudio_NativeAudio_createAssetAudioPlayer(JNIEnv* env, jclass clazz,
jobject assetManager, jstring filename)
{
SLresult result;
// convert Java string to UTF-8
const char *utf8 = (*env)->GetStringUTFChars(env, filename, NULL);
assert(NULL != utf8);
// use asset manager to open asset by filename
AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
assert(NULL != mgr);
AAsset* asset = AAssetManager_open(mgr, utf8, AASSET_MODE_UNKNOWN);
// release the Java string and UTF-8
(*env)->ReleaseStringUTFChars(env, filename, utf8);
// the asset might not be found
if (NULL == asset) {
return JNI_FALSE;
}
// open asset as file descriptor
off_t start, length;
int fd = AAsset_openFileDescriptor(asset, &start, &length);
assert(0 <= fd);
AAsset_close(asset);
// configure audio source
SLDataLocator_AndroidFD loc_fd = {SL_DATALOCATOR_ANDROIDFD, fd, start, length};
SLDataFormat_MIME format_mime = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
SLDataSource audioSrc = {&loc_fd, &format_mime};
// configure audio sink
SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};
// create audio player
const SLInterfaceID ids[3] = {SL_IID_SEEK, SL_IID_MUTESOLO, SL_IID_VOLUME};
const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &fdPlayerObject, &audioSrc, &audioSnk,
3, ids, req);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// realize the player
result = (*fdPlayerObject)->Realize(fdPlayerObject, SL_BOOLEAN_FALSE);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get the play interface
result = (*fdPlayerObject)->GetInterface(fdPlayerObject, SL_IID_PLAY, &fdPlayerPlay);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get the seek interface
result = (*fdPlayerObject)->GetInterface(fdPlayerObject, SL_IID_SEEK, &fdPlayerSeek);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get the mute/solo interface
result = (*fdPlayerObject)->GetInterface(fdPlayerObject, SL_IID_MUTESOLO, &fdPlayerMuteSolo);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get the volume interface
result = (*fdPlayerObject)->GetInterface(fdPlayerObject, SL_IID_VOLUME, &fdPlayerVolume);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// enable whole file looping
result = (*fdPlayerSeek)->SetLoop(fdPlayerSeek, SL_BOOLEAN_TRUE, 0, SL_TIME_UNKNOWN);
assert(SL_RESULT_SUCCESS == result);
(void)result;
// get playback rate interface
result = (*fdPlayerObject)->GetInterface(fdPlayerObject,
SL_IID_PLAYBACKRATE, &fdPlaybackRate);
assert(SL_RESULT_SUCCESS == result);
SLuint32 capa;
result = (*fdPlaybackRate)->GetRateRange(fdPlaybackRate, 0,
&playbackMinRate, &playbackMaxRate, &playbackRateStepSize, &capa);
assert(SL_RESULT_SUCCESS == result);
result = (*fdPlaybackRate)->SetPropertyConstraints(fdPlaybackRate,
SL_RATEPROP_PITCHCORAUDIO);
if (SL_RESULT_PARAMETER_INVALID == result) {
// LOGD("Parameter Invalid");
}
if (SL_RESULT_FEATURE_UNSUPPORTED == result) {
// LOGD("Feature Unsupported");
}
if (SL_RESULT_SUCCESS == result) {
assert(SL_RESULT_SUCCESS == result);
// LOGD("Success");
}
// result = (*fdPlaybackRate)->SetRate(fdPlaybackRate, playbackMaxRate);
// assert(SL_RESULT_SUCCESS == result);
SLpermille SLrate;
result = (*fdPlaybackRate)->GetRate(fdPlaybackRate, &SLrate);
assert(SL_RESULT_SUCCESS == result);
// enable whole file looping
result = (*fdPlayerSeek)->SetLoop(fdPlayerSeek, SL_BOOLEAN_FALSE, 0, SL_TIME_UNKNOWN);
assert(SL_RESULT_SUCCESS == result);
(void)result;
return JNI_TRUE;
}
/*JNIEXPORT void Java_com_example_stackoverflowcode_NativeAudio_setRate(JNIEnv* env, jclass clazz, jint rate) {
result = (*fdPlayerRate)->SetRate(fdPlayerRate, playbackMaxRate);
assert(SL_RESULT_SUCCESS == result);
}
*/
JNIEXPORT void Java_com_example_stackoverflowcode_NativeAudio_setRate(
JNIEnv* env, jclass clazz, jint rate) {
if (NULL != fdPlaybackRate) {
SLresult result;
result = (*fdPlaybackRate)->SetRate(fdPlaybackRate, rate);
assert(SL_RESULT_SUCCESS == result);
}
}
// set the playing state for the asset audio player
void Java_com_example_nativeaudio_NativeAudio_setPlayingAssetAudioPlayer(JNIEnv* env,
jclass clazz, jboolean isPlaying)
{
SLresult result;
// make sure the asset audio player was created
if (NULL != fdPlayerPlay) {
// set the player's state
result = (*fdPlayerPlay)->SetPlayState(fdPlayerPlay, isPlaying ?
SL_PLAYSTATE_PLAYING : SL_PLAYSTATE_PAUSED);
assert(SL_RESULT_SUCCESS == result);
(void)result;
}
}
// shut down the native audio system
void Java_com_example_nativeaudio_NativeAudio_shutdown(JNIEnv* env, jclass clazz)
{
// destroy file descriptor audio player object, and invalidate all associated interfaces
if (fdPlayerObject != NULL) {
(*fdPlayerObject)->Destroy(fdPlayerObject);
fdPlayerObject = NULL;
fdPlayerPlay = NULL;
fdPlayerSeek = NULL;
fdPlayerMuteSolo = NULL;
fdPlayerVolume = NULL;
}
// destroy output mix object, and invalidate all associated interfaces
if (outputMixObject != NULL) {
(*outputMixObject)->Destroy(outputMixObject);
outputMixObject = NULL;
outputMixEnvironmentalReverb = NULL;
}
// destroy engine object, and invalidate all associated interfaces
if (engineObject != NULL) {
(*engineObject)->Destroy(engineObject);
engineObject = NULL;
engineEngine = NULL;
}
}
Any help is greatly appreciated. Thanks.
Best Answer
I think a few steps are missing here:
1) Get the DynamicInterfaceManagementItf
2) Add the PlaybackRateItf through the DynamicInterfaceManagementItf
3) Get the interface on the object
// get dynamic interface
SLDynamicInterfaceManagementItf dynamicInterfaceManagementItf;
result = (*fdPlayerObject)->GetInterface(fdPlayerObject,
SL_IID_DYNAMICINTERFACEMANAGEMENT,
(void*) &dynamicInterfaceManagementItf);
CheckErr(result);
// add playback rate itf
result = (*dynamicInterfaceManagementItf)->AddInterface(
dynamicInterfaceManagementItf, SL_IID_PLAYBACKRATE,
SL_BOOLEAN_FALSE);
CheckErr(result);
// get the playbackrate interface
result = (*fdPlayerObject)->GetInterface(fdPlayerObject,
SL_IID_PLAYBACKRATE, &fdPlaybackRateItf);
CheckErr(result);
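As background for the values used above: OpenSL ES expresses playback rate in permille, where 1000 means normal speed, so the question's `playbackMinRate` of 500 and `playbackMaxRate` of 2000 correspond to half and double speed. On a device you would use the actual bounds and step size returned by `GetRateRange` before calling `SetRate`; the helper below is only an illustrative sketch of that clamping logic (it is not part of the OpenSL ES API, and the names are made up for the example):

```c
#include <assert.h>

/* Stand-in for SLpermille so the sketch compiles off-device. */
typedef short permille_t;

/* Clamp a requested playback rate (in permille, 1000 = 1x) to the
 * [minRate, maxRate] range reported by GetRateRange, snapping down to
 * the nearest multiple of the step size relative to minRate. */
static permille_t clamp_rate_permille(permille_t requested,
                                      permille_t minRate,
                                      permille_t maxRate,
                                      permille_t step)
{
    if (requested < minRate) requested = minRate;
    if (requested > maxRate) requested = maxRate;
    if (step > 1) {
        /* snap down onto the step grid anchored at minRate */
        requested = (permille_t)(minRate
                                 + ((requested - minRate) / step) * step);
    }
    return requested;
}
```

With the question's range of 500-2000 permille, a request of 2500 would clamp to 2000 (double speed), and a request of 1234 with a step size of 100 would snap to 1200.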
Hope this helps.
Regarding android - OpenSL ES playback rate on Android, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/22865834/