
java - How to add voice chat to an existing WebRTC Android video call codebase


I recently started learning WebRTC and managed to get an existing codebase working with my own signaling server; it runs fine.

The problem is that the WebRTC code I am using is video-only; it does not implement voice transmission.

Since I am not very familiar with WebRTC, I have not been able to add the voice feature myself.

Here is the RTCClient code (presumably this is the part where voice should be implemented, assuming the audio permission is in place):

    class RTCClient(
        context: Application,
        observer: PeerConnection.Observer
    ) {

        companion object {
            private const val LOCAL_TRACK_ID = "local_track"
            private const val LOCAL_STREAM_ID = "local_track"
        }

        private val rootEglBase: EglBase = EglBase.create()

        init {
            initPeerConnectionFactory(context)
        }

        private val iceServer = listOf(
            PeerConnection.IceServer.builder("stun:stun.l.google.com:19302")
                .createIceServer()
        )

        private val peerConnectionFactory by lazy { buildPeerConnectionFactory() }
        private val videoCapturer by lazy { getVideoCapturer(context) }
        private val localVideoSource by lazy { peerConnectionFactory.createVideoSource(false) }
        // add here something about voice ?
        private val peerConnection by lazy { buildPeerConnection(observer) }

        private fun initPeerConnectionFactory(context: Application) {
            val options = PeerConnectionFactory.InitializationOptions.builder(context)
                .setEnableInternalTracer(true)
                .setFieldTrials("WebRTC-H264HighProfile/Enabled/")
                .createInitializationOptions()
            PeerConnectionFactory.initialize(options)
        }

        private fun buildPeerConnectionFactory(): PeerConnectionFactory {
            return PeerConnectionFactory
                .builder()
                // add audio here (?)
                .setVideoDecoderFactory(DefaultVideoDecoderFactory(rootEglBase.eglBaseContext))
                .setVideoEncoderFactory(DefaultVideoEncoderFactory(rootEglBase.eglBaseContext, true, true))
                .setOptions(PeerConnectionFactory.Options().apply {
                    disableEncryption = true
                    disableNetworkMonitor = true
                })
                .createPeerConnectionFactory()
        }

        private fun buildPeerConnection(observer: PeerConnection.Observer) =
            peerConnectionFactory.createPeerConnection(
                iceServer,
                observer
            )

        private fun getVideoCapturer(context: Context) =
            Camera2Enumerator(context).run {
                deviceNames.find {
                    isFrontFacing(it)
                }?.let {
                    createCapturer(it, null)
                } ?: throw IllegalStateException()
            }

        fun initSurfaceView2(view: SurfaceViewRenderer) = view.run {
            setMirror(true)
            setEnableHardwareScaler(true)
            init(rootEglBase.eglBaseContext, null)
        }

        fun initSurfaceView(view: SurfaceViewRenderer) {
            Log.i("surfaceview", view.toString())
            view.setMirror(true)
            view.setEnableHardwareScaler(true)
            view.init(rootEglBase.eglBaseContext, null)
        }

        fun startLocalVideoCapture(localVideoOutput: SurfaceViewRenderer) {
            // implement voice transfer here (?)
            val surfaceTextureHelper = SurfaceTextureHelper.create(Thread.currentThread().name, rootEglBase.eglBaseContext)
            (videoCapturer as VideoCapturer).initialize(surfaceTextureHelper, localVideoOutput.context, localVideoSource.capturerObserver)
            videoCapturer.startCapture(320, 240, 60)
            val localVideoTrack = peerConnectionFactory.createVideoTrack(LOCAL_TRACK_ID, localVideoSource)
            localVideoTrack.addSink(localVideoOutput)
            val localStream = peerConnectionFactory.createLocalMediaStream(LOCAL_STREAM_ID)
            localStream.addTrack(localVideoTrack)
            peerConnection?.addStream(localStream)
        }

        private fun PeerConnection.call(sdpObserver: SdpObserver) {
            val constraints = MediaConstraints().apply {
                mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"))
            }

            createOffer(object : SdpObserver by sdpObserver {
                override fun onCreateSuccess(desc: SessionDescription?) {
                    setLocalDescription(object : SdpObserver {
                        override fun onSetFailure(p0: String?) {
                        }

                        override fun onSetSuccess() {
                        }

                        override fun onCreateSuccess(p0: SessionDescription?) {
                        }

                        override fun onCreateFailure(p0: String?) {
                        }
                    }, desc)
                    sdpObserver.onCreateSuccess(desc)
                }
            }, constraints)
        }

        private fun PeerConnection.answer(sdpObserver: SdpObserver) {
            val constraints = MediaConstraints().apply {
                mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"))
            }

            createAnswer(object : SdpObserver by sdpObserver {
                override fun onCreateSuccess(p0: SessionDescription?) {
                    setLocalDescription(object : SdpObserver {
                        override fun onSetFailure(p0: String?) {
                        }

                        override fun onSetSuccess() {
                        }

                        override fun onCreateSuccess(p0: SessionDescription?) {
                        }

                        override fun onCreateFailure(p0: String?) {
                        }
                    }, p0)
                    sdpObserver.onCreateSuccess(p0)
                }
            }, constraints)
        }

        fun call(sdpObserver: SdpObserver) = peerConnection?.call(sdpObserver)

        fun answer(sdpObserver: SdpObserver) = peerConnection?.answer(sdpObserver)

        fun onRemoteSessionReceived(sessionDescription: SessionDescription) {
            peerConnection?.setRemoteDescription(object : SdpObserver {
                override fun onSetFailure(p0: String?) {
                }

                override fun onSetSuccess() {
                }

                override fun onCreateSuccess(p0: SessionDescription?) {
                }

                override fun onCreateFailure(p0: String?) {
                }
            }, sessionDescription)
        }

        fun addIceCandidate(iceCandidate: IceCandidate?) {
            peerConnection?.addIceCandidate(iceCandidate)
        }
    }

The full code is here:
https://github.com/amrfarid140/webrtc-android-codelab/tree/65a22c1fc735cf00b42b4246148af8402089cbc7/mobile/app/src/main/java/me/amryousef/webrtc_demo
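
From what I have read so far, the audio side would presumably mirror the video setup: create an AudioSource and AudioTrack from the same PeerConnectionFactory, add the track to the local stream, and offer to receive audio in the SDP constraints. Below is only my untested guess at what the additions to RTCClient might look like (LOCAL_AUDIO_TRACK_ID and addLocalAudioTrack are placeholder names of my own, and I assume the RECORD_AUDIO permission is already granted):

    // (would go inside the companion object) placeholder id for the audio track
    private const val LOCAL_AUDIO_TRACK_ID = "local_audio_track"

    // an audio source created next to localVideoSource
    private val localAudioSource by lazy { peerConnectionFactory.createAudioSource(MediaConstraints()) }

    // would be called from startLocalVideoCapture right after createLocalMediaStream(),
    // so the audio track travels in the same local stream as the video track
    private fun addLocalAudioTrack(localStream: MediaStream) {
        val localAudioTrack = peerConnectionFactory.createAudioTrack(LOCAL_AUDIO_TRACK_ID, localAudioSource)
        localStream.addTrack(localAudioTrack)
    }

    // and, I assume, both call() and answer() would also need this constraint:
    // mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"))

If that is right, the remote audio track should then arrive through the PeerConnection.Observer's onAddStream callback together with the video, but I have not managed to verify this.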

Edit:
Even though this question is now 5 months old, it is still active.
I guess WebRTC really is that complicated :(

Best Answer

I would suggest switching to another WebRTC solution, for example: https://www.agora.io/en/ .
Or, even simpler, just add Agora's voice functionality to your existing project. The logic you need to add is straightforward.
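
For example, a rough sketch with the Agora Voice SDK for Android might look like the following (this follows the 3.x io.agora.rtc API and may differ in newer SDK versions; the appId, token, and channel name are placeholders you get from the Agora console):

    import android.content.Context
    import io.agora.rtc.IRtcEngineEventHandler
    import io.agora.rtc.RtcEngine

    class VoiceChat(context: Context, appId: String) {

        // Minimal event handler; a real app would also handle errors and user-offline events
        private val handler = object : IRtcEngineEventHandler() {
            override fun onUserJoined(uid: Int, elapsed: Int) {
                // a remote user has joined the channel and is sending audio
            }
        }

        private val engine: RtcEngine = RtcEngine.create(context, appId, handler)

        fun join(token: String?, channelName: String) {
            // uid 0 lets Agora assign a user id automatically
            engine.joinChannel(token, channelName, "", 0)
        }

        fun leave() {
            engine.leaveChannel()
            RtcEngine.destroy()
        }
    }

You would call join() when the call starts and leave() when it ends, alongside your existing video code.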

Hope this helps!

Regarding "java - How to add voice chat to an existing WebRTC Android video call codebase", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58758571/
