A. (Part 4) Android multi-client voice communication over UDP
Building on the previous three posts, this installment looks at multicast. Many thanks to https://blog.csdn.net/jspping/article/details/64438515 for the groundwork!
Multicast is implemented with MulticastSocket, which works much like DatagramSocket. As before, two threads do the work: a send thread, MultiSendThread, and a receive thread, MultiReceiveThread. Without further ado, the code:
(I) MultiSendThread:
(1) Initialize the MulticastSocket
// Port the sending socket is bound to
try {
    multicastSocket = new MulticastSocket(8082);
    // Class D multicast address; packets are sent to this group
    // (the receivers listen on port 10001)
    address = InetAddress.getByName("239.0.0.1");
} catch (IOException e) {
    e.printStackTrace();
}
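Optionally, the sender can also set how far the multicast packets travel. A minimal sketch (an addition, not in the original): the JDK default TTL is already 1, which confines the stream to the local subnet; raise it only if the audio must cross multicast-capable routers.

try {
    // Each IP hop decrements the TTL; 1 keeps packets on the local network
    multicastSocket.setTimeToLive(1);
} catch (IOException e) {
    e.printStackTrace();
}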
(2) Initialize the AudioRecord
int minBufferSize;
private static AcousticEchoCanceler aec;
private static AutomaticGainControl agc;
private static NoiseSuppressor nc;
AudioRecord audioRec;
byte[] buffer;

@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)
private void initAudio() {
    // Playback sample rate must match the recording sample rate
    int sampleRate = 44100;
    // Same encoding as the playback side
    int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
    // Mono input for recording (mono output is used for playback)
    int channelConfig = AudioFormat.CHANNEL_IN_MONO;
    minBufferSize = AudioRecord.getMinBufferSize(
            sampleRate,
            channelConfig, audioFormat);
    System.out.println("****RecordMinBufferSize = " + minBufferSize);
    audioRec = new AudioRecord(
            MediaRecorder.AudioSource.MIC,
            sampleRate,
            channelConfig,
            audioFormat,
            minBufferSize);
    buffer = new byte[minBufferSize];
    if (audioRec.getState() != AudioRecord.STATE_INITIALIZED) {
        return;
    }
    // AcousticEchoCanceler: removes the far-end signal that leaks back into the captured audio
    if (AcousticEchoCanceler.isAvailable()) {
        aec = AcousticEchoCanceler.create(audioRec.getAudioSessionId());
        if (aec != null) {
            aec.setEnabled(true);
        }
    }
    // AutomaticGainControl: normalizes the level of the captured signal
    if (AutomaticGainControl.isAvailable()) {
        agc = AutomaticGainControl.create(audioRec.getAudioSessionId());
        if (agc != null) {
            agc.setEnabled(true);
        }
    }
    // NoiseSuppressor: removes background noise from the captured signal
    if (NoiseSuppressor.isAvailable()) {
        nc = NoiseSuppressor.create(audioRec.getAudioSessionId());
        if (nc != null) {
            nc.setEnabled(true);
        }
    }
}
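For a sense of scale (back-of-the-envelope, not from the original): 16-bit mono PCM at 44100 Hz is 44100 * 2 = 88,200 bytes per second, so each buffer of minBufferSize bytes holds only a few tens of milliseconds of audio.

// Rough duration of one buffer at 44100 Hz, 16-bit mono (88200 bytes/s):
double secondsPerBuffer = minBufferSize / 88200.0; // e.g. ~0.04 s for 3528 bytes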
(3) Start recording and send the audio out in real time
@Override
public void run() {
    if (multicastSocket == null)
        return;
    try {
        audioRec.startRecording();
        while (true) {
            try {
                // Read one buffer of PCM from the microphone
                int length = audioRec.read(buffer, 0, minBufferSize);
                // Wrap it in a datagram
                DatagramPacket datagramPacket = new DatagramPacket(buffer, length);
                // Address it to the multicast group 239.0.0.1 ...
                datagramPacket.setAddress(address);
                // ... on port 10001, where the receivers listen
                datagramPacket.setPort(10001);
                multicastSocket.send(datagramPacket);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
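One caveat with sending minBufferSize bytes per datagram: anything larger than the network MTU (commonly ~1500 bytes) gets IP-fragmented, and losing a single fragment drops the whole packet. A hedged sketch of splitting each read into smaller datagrams (the 1400-byte chunk size is an assumption, not from the original):

final int CHUNK = 1400; // stay under a typical 1500-byte MTU
for (int off = 0; off < length; off += CHUNK) {
    int n = Math.min(CHUNK, length - off);
    DatagramPacket p = new DatagramPacket(buffer, off, n, address, 10001);
    multicastSocket.send(p);
}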
(II) MultiReceiveThread
(1)初始化MulticastSocket
// 接收數據時需要指定監聽的埠號
try {
multicastSocket = new MulticastSocket(10001);
// 創建組播ID地址
InetAddress address = InetAddress.getByName("239.0.0.1");
// 加入地址
multicastSocket.joinGroup(address);
} catch (IOException e) {
e.printStackTrace();
}
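An Android-specific pitfall worth noting here: many devices' Wi-Fi drivers filter out multicast packets to save power, so the receiver may hear nothing until a WifiManager.MulticastLock is held. A minimal sketch (assumes a Context is available; this is an addition, not part of the original code), which also requires the CHANGE_WIFI_MULTICAST_STATE permission in the manifest:

WifiManager wifi = (WifiManager) context.getApplicationContext()
        .getSystemService(Context.WIFI_SERVICE);
WifiManager.MulticastLock multicastLock = wifi.createMulticastLock("voice-chat");
multicastLock.acquire();
// ... receive multicast audio ...
multicastLock.release();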
(2) Initialize the AudioTrack
byte[] buffer;
AudioTrack audioTrk;

private void initAudioTracker() {
    // Play through the speaker via the media stream
    int streamType = AudioManager.STREAM_MUSIC;
    // Playback sample rate must match the recording sample rate
    int sampleRate = 44100;
    // Same encoding as the recording side
    int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
    // Streaming mode
    int mode = AudioTrack.MODE_STREAM;
    // Mono output for playback (mono input was used for recording)
    int channelConfig = AudioFormat.CHANNEL_OUT_MONO;
    int recBufSize = AudioTrack.getMinBufferSize(
            sampleRate,
            channelConfig,
            audioFormat);
    System.out.println("****playRecBufSize = " + recBufSize);
    audioTrk = new AudioTrack(
            streamType,
            sampleRate,
            channelConfig,
            audioFormat,
            recBufSize,
            mode);
    audioTrk.setStereoVolume(AudioTrack.getMaxVolume(),
            AudioTrack.getMaxVolume());
    buffer = new byte[recBufSize];
}
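Note that setStereoVolume() is deprecated from API 21 onward; a version-gated sketch (an addition, not in the original):

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    audioTrk.setVolume(AudioTrack.getMaxVolume());
} else {
    audioTrk.setStereoVolume(AudioTrack.getMaxVolume(), AudioTrack.getMaxVolume());
}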
(3) Start receiving and play the audio in real time
@Override
public void run() {
    if (multicastSocket == null)
        return;
    audioTrk.play();
    while (true) {
        try {
            // Datagram to receive into
            DatagramPacket datagramPacket = new DatagramPacket(buffer, buffer.length);
            // receive() blocks until a packet arrives
            multicastSocket.receive(datagramPacket);
            // Write the PCM payload straight to the AudioTrack
            audioTrk.write(datagramPacket.getData(), 0, datagramPacket.getLength());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
(III) Testing
MultiSendThread multiSendThread;
MultiReceiveThread multiReceiveThread;

@OnClick({R.id.btnSend, R.id.btnReceive})
public void onViewClicked(View view) {
    switch (view.getId()) {
        case R.id.btnSend:
            if (multiSendThread == null) {
                multiSendThread = new MultiSendThread();
            }
            new Thread(multiSendThread).start();
            break;
        case R.id.btnReceive:
            if (multiReceiveThread == null) {
                multiReceiveThread = new MultiReceiveThread();
            }
            new Thread(multiReceiveThread).start();
            break;
    }
}
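Both threads loop on while (true), so they only die with the process. A minimal sketch of a clean shutdown path (the running flag and stopThread() method are additions, not in the original):

private volatile boolean running = true;

public void stopThread() {
    running = false;
    // close() makes a blocked receive()/send() throw, unblocking the loop
    multicastSocket.close();
}

// and in run(): replace while (true) with while (running)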
B. Has anyone used the new AcousticEchoCanceler echo-cancellation API in Android 4.1?
A quick reference for using AcousticEchoCanceler:
1) Check whether the device supports AEC. Note that this check is not always reliable.
public static boolean isDeviceSupport()
{
    return AcousticEchoCanceler.isAvailable();
}
2) Initialize and enable AEC.
private AcousticEchoCanceler canceler;

public boolean initAEC(int audioSession)
{
    if (canceler != null)
    {
        return false;
    }
    canceler = AcousticEchoCanceler.create(audioSession);
    if (canceler == null)
    {
        return false; // create() can return null on unsupported devices
    }
    canceler.setEnabled(true);
    return canceler.getEnabled();
}
3) Enable/disable AEC.
public boolean setAECEnabled(boolean enable)
{
    if (null == canceler)
    {
        return false;
    }
    canceler.setEnabled(enable);
    return canceler.getEnabled();
}
4) Release AEC.
public boolean release()
{
    if (null == canceler)
    {
        return false;
    }
    canceler.setEnabled(false);
    canceler.release();
    canceler = null; // a released effect must not be reused
    return true;
}
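Putting the pieces together, a hypothetical calling sequence (names as defined above; the audioRecord instance is assumed to already exist):

if (isDeviceSupport())
{
    boolean enabled = initAEC(audioRecord.getAudioSessionId());
    Log.d("AEC", "echo canceler enabled: " + enabled);
}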
AcousticEchoCanceler needs a session id at creation time; here is a quick reference for how the calling layer obtains and wires one up:
1) When creating the AudioRecord, choose the first constructor argument (the audio source) accordingly.
if (chkNewDev())
{
    audioRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION, frequency, channelIN, audioEncoding, tmpSize);
}
else
{
    audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelIN, audioEncoding, tmpSize);
}
2) Once the audioRecord has been created, the corresponding session id is available via
audioRecord.getAudioSessionId()
3) When creating the AudioTrack, the session id also needs special handling.
if (chkNewDev() && audioRecord != null)
{
    audioTrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, frequency, channelOUT, audioEncoding, tmpSize, AudioTrack.MODE_STREAM, audioRecord.getAudioSessionId());
}
else
{
    audioTrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, frequency, channelOUT, audioEncoding, tmpSize, AudioTrack.MODE_STREAM);
}
Also, because these APIs are version-gated, devices running older releases need a fallback:
public static boolean chkNewDev()
{
    return android.os.Build.VERSION.SDK_INT >= 16;
}
Permission (in the manifest):
<uses-permission android:name="android.permission.RECORD_AUDIO" />
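On Android 6.0 (API 23) and above the manifest entry alone is not enough; RECORD_AUDIO is a dangerous permission and must also be requested at runtime. A minimal sketch (the request code 1 is arbitrary):

if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{ Manifest.permission.RECORD_AUDIO }, 1);
}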
Summary:
1) The AcousticEchoCanceler API added in newer Android versions makes it very quick to build VoIP-style echo cancellation. But given how uneven device support is, a third-party echo canceller is still worth having. Two main recommendations: the AEC/AECM in WebRTC, and Speex.
The author used WebRTC's echo cancellation in a project and found the results acceptable.
2) The code above was written purely from the official documentation; given the scarce material, its correctness is not guaranteed.
(Reposted.)
C. Is AudioTrack suitable for WebRTC echo cancellation on Android?
Yes, using AudioTrack with WebRTC echo cancellation on Android is suitable.
WebRTC's code layout is clear: under the webrtc\modules\audio_processing\aec directory you can find the GIPS AEC source files used for echo processing. By tracing the WebRTC code that each AEC source file depends on, you can identify all the related source and header files, which lets you extract the AEC from WebRTC and use it standalone. For convenience, split the needed code into two modules: a general audio-processing module, WebRTC_AUDIO, and a GIPS-AEC module. The WebRTC_AUDIO module contains the WebRTC audio-processing sources and headers that the AEC files depend on, while the GIPS-AEC module holds WebRTC's GIPS AEC sources proper. The GIPS-AEC module then performs the echo processing on top of the WebRTC_AUDIO module.
D. How do you implement video calling in Android development?
/**
 * Android video chat
 * 1. Initialize the SDK  2. Connect to the server  3. Log the user in
 * 4. Enter a room  5. Open local video  6. Request the peer's video
 */
public class VideoChatActivity extends Activity implements AnyChatBaseEvent
{
    private AnyChatCoreSDK anychat; // core SDK
    private SurfaceView remoteSurfaceView; // remote video
    private SurfaceView localSurfaceView; // local video

    private ConfigEntity configEntity;
    private boolean bSelfVideoOpened = false; // whether local video is open
    private boolean bOtherVideoOpened = false; // whether remote video is open
    private TimerTask mTimerTask; // timer task
    private Timer mTimer = new Timer(true);
    private Handler handler; // Handler used to refresh the live video continuously
    private List<String> userlist = new ArrayList<String>(); // online user list
    private int userid; // user ID

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_video_chat);
        remoteSurfaceView = (SurfaceView) findViewById(R.id.surface_remote);
        localSurfaceView = (SurfaceView) findViewById(R.id.surface_local);
        configEntity = ConfigService.LoadConfig(this); // load the video-call settings
        loginSystem(); // initialize the SDK and connect to the server
        handler = new Handler(){
            @Override
            public void handleMessage(Message msg){
                VideoChat(); // keep refreshing the live video frames
                super.handleMessage(msg);
            }
        };
        mTimerTask = new TimerTask(){
            public void run(){
                handler.sendEmptyMessage(0);
            }
        };
        mTimer.schedule(mTimerTask, 1000, 100);
    }
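    // Aside (a sketch, not part of the original): the Timer + Message pump above
    // can be replaced with the more idiomatic Handler.postDelayed refresh loop:
    private final Runnable refreshTask = new Runnable() {
        @Override
        public void run() {
            VideoChat(); // redraw the live video
            handler.postDelayed(this, 100); // reschedule every 100 ms
        }
    };
    // in onCreate(), instead of mTimer.schedule(...): handler.postDelayed(refreshTask, 1000);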
    // Initialize the SDK and connect to the server
    private void loginSystem(){
        if (anychat == null){
            anychat = new AnyChatCoreSDK();
            anychat.SetBaseEvent(this); // register the base event callbacks
            if (configEntity.useARMv6Lib != 0) // use the ARMv6 instruction-set library
                anychat.SetSDKOptionInt(AnyChatDefine.
                        BRAC_SO_CORESDK_USEARMV6LIB, 1);
            anychat.InitSDK(android.os.Build.VERSION.SDK_INT, 0); // initialize the SDK
        }
        anychat.Connect("demo.anychat.cn", 8906); // connect to the server
    }
    // Render the live video frames
    public void VideoChat(){
        if (!bOtherVideoOpened){
            if (anychat.GetCameraState(userid) == 2
                    && anychat.GetUserVideoWidth(userid) != 0){
                SurfaceHolder holder = remoteSurfaceView.getHolder();
                holder.setFormat(PixelFormat.RGB_565);
                holder.setFixedSize(anychat.GetUserVideoWidth(userid),
                        anychat.GetUserVideoHeight(userid));
                Surface s = holder.getSurface(); // get the video surface
                anychat.SetVideoPos(userid, s, 0, 0, 0, 0); // show the video via the API
                bOtherVideoOpened = true;
            }
        }
        if (!bSelfVideoOpened){
            if (anychat.GetCameraState(-1) == 2
                    && anychat.GetUserVideoWidth(-1) != 0){
                SurfaceHolder holder = localSurfaceView.getHolder();
                holder.setFormat(PixelFormat.RGB_565);
                holder.setFixedSize(anychat.GetUserVideoWidth(-1),
                        anychat.GetUserVideoHeight(-1));
                Surface s = holder.getSurface();
                anychat.SetVideoPos(-1, s, 0, 0, 0, 0);
                bSelfVideoOpened = true;
            }
        }
    }
    public void OnAnyChatConnectMessage(boolean bSuccess){
        if (!bSuccess){
            Toast.makeText(VideoChatActivity.this, "Failed to connect to the server; reconnecting, please wait...", Toast.LENGTH_SHORT).show();
            return;
        }
        anychat.Login("android", ""); // connected; log the user in
    }
    public void OnAnyChatLoginMessage(int dwUserId, int dwErrorCode){
        if (dwErrorCode == 0) {
            Toast.makeText(this, "Login successful!", Toast.LENGTH_SHORT).show();
            anychat.EnterRoom(1, ""); // logged in; enter the room
            ApplyVideoConfig();
        } else {
            Toast.makeText(this, "Login failed, error code: " + dwErrorCode, Toast.LENGTH_SHORT).show();
        }
    }
    public void OnAnyChatEnterRoomMessage(int dwRoomId, int dwErrorCode){
        if (dwErrorCode == 0) { // entered the room; open local audio and video
            Toast.makeText(this, "Entered the room", Toast.LENGTH_SHORT).show();
            anychat.UserCameraControl(-1, 1); // open local video
            anychat.UserSpeakControl(-1, 1); // open local audio
        } else {
            Toast.makeText(this, "Failed to enter the room, error code: " + dwErrorCode, Toast.LENGTH_SHORT).show();
        }
    }
    public void OnAnyChatOnlineUserMessage(int dwUserNum, int dwRoomId){
        if (dwRoomId == 1){
            int user[] = anychat.GetOnlineUser();
            if (user.length != 0){
                for (int i = 0; i < user.length; i++){
                    userlist.add(user[i] + "");
                }
                String temp = userlist.get(0);
                userid = Integer.parseInt(temp);
                anychat.UserCameraControl(userid, 1); // request the user's video
                anychat.UserSpeakControl(userid, 1); // request the user's audio
            }
            else {
                Toast.makeText(VideoChatActivity.this, "No users online", Toast.LENGTH_SHORT).show();
            }
        }
    }
    public void OnAnyChatUserAtRoomMessage(int dwUserId, boolean bEnter){
        if (bEnter) { // a new user entered the room
            userlist.add(dwUserId + "");
        }
        else { // a user left the room
            if (dwUserId == userid)
            {
                Toast.makeText(VideoChatActivity.this, "The video user has gone offline", Toast.LENGTH_SHORT).show();
                anychat.UserCameraControl(userid, 0); // stop the user's video
                anychat.UserSpeakControl(userid, 0); // stop the user's audio
                userlist.remove(userid + ""); // remove the user
                if (userlist.size() != 0)
                {
                    String temp = userlist.get(0);
                    userid = Integer.parseInt(temp);
                    anychat.UserCameraControl(userid, 1); // request another user's video
                    anychat.UserSpeakControl(userid, 1); // request another user's audio
                }
            }
            else {
                userlist.remove(dwUserId + ""); // remove the user
            }
        }
    }
    public void OnAnyChatLinkCloseMessage(int dwErrorCode){
        Toast.makeText(VideoChatActivity.this, "Connection closed, error: " + dwErrorCode, Toast.LENGTH_SHORT).show();
    }
    @Override
    protected void onDestroy(){ // the activity is exiting
        anychat.LeaveRoom(-1); // leave the room
        anychat.Logout(); // log out
        anychat.Release(); // release SDK resources
        mTimer.cancel();
        super.onDestroy();
    }
    // Apply video parameters from the config file
    private void ApplyVideoConfig(){
        if (configEntity.configMode == 1) // custom video parameter configuration
        {
            // Local video encoder bitrate (0 means quality-priority mode)
            anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_BITRATECTRL, configEntity.videoBitrate);
            if (configEntity.videoBitrate == 0)
            {
                // Local video encoding quality
                anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_QUALITYCTRL, configEntity.videoQuality);
            }
            // Local video encoder frame rate
            anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_FPSCTRL, configEntity.videoFps);
            // Local video key-frame (GOP) interval
            anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_GOPCTRL, configEntity.videoFps * 4);
            // Local video capture resolution
            anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_WIDTHCTRL, configEntity.resolution_width);
            anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_HEIGHTCTRL, configEntity.resolution_height);
            // Encoder preset (higher values mean better quality and more CPU usage)
            anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_PRESETCTRL, configEntity.videoPreset);
        }
        // Apply the video parameters
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_APPLYPARAM, configEntity.configMode);
        // P2P policy
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_NETWORK_P2PPOLITIC, configEntity.enableP2P);
        // Local video overlay mode
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_OVERLAY, configEntity.videoOverlay);
        // Echo cancellation
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_AUDIO_ECHOCTRL, configEntity.enableAEC);
        // Platform hardware codec
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_CORESDK_USEHWCODEC, configEntity.useHWCodec);
        // Video rotation mode
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_ROTATECTRL, configEntity.videorotatemode);
        // Smooth-playback mode
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_STREAM_SMOOTHPLAYMODE, configEntity.smoothPlayMode);
        // Video capture driver
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_CAPDRIVER, configEntity.videoCapDriver);
        // Local capture color-deviation correction
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_LOCALVIDEO_FIXCOLORDEVIA, configEntity.fixcolordeviation);
        // Video display driver
        anychat.SetSDKOptionInt(AnyChatDefine.BRAC_SO_VIDEOSHOW_DRIVERCTRL, configEntity.videoShowDriver);
    }
}