How to create a video and audio recorder using the JavaScript MediaRecorder API?
In this tutorial, you will learn how to build an audio and video recorder using the JavaScript MediaRecorder API. This can be accomplished with the help of WebRTC.
What is WebRTC?
WebRTC stands for Web Real-Time Communication. It lets us access and capture the webcam and microphone available on the user's device.
We can access the user's webcam and microphone through the browser's media devices API:
navigator.mediaDevices.getUserMedia(constraints)
By default, getUserMedia asks the user for permission to use the webcam. The function returns a promise: once you click "Allow" and give consent, the promise resolves, and the webcam is enabled on your system. If you deny permission, the promise is rejected, and you can handle that case in a catch block.
We can also pass constraints to the getUserMedia() function, for example to request video of a specific width or height.
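For example, a minimal sketch that requests a preferred video resolution and handles both outcomes of the returned promise could look like this (the exact width and height values are illustrative, not part of the recorder built in this tutorial):

// Request the camera and microphone with a preferred video resolution.
// The ideal width/height values below are illustrative assumptions.
navigator.mediaDevices.getUserMedia({
   audio: true,
   video: { width: { ideal: 1280 }, height: { ideal: 720 } }
})
.then((stream) => {
   // Permission granted: preview the stream in a <video> element.
   document.querySelector("video").srcObject = stream;
})
.catch((err) => {
   // Permission denied or no matching device available.
   console.error("getUserMedia failed:", err);
});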
Frontend Design
Our frontend will contain the following elements:
For video recording, the page will contain:
A video element that displays the live camera stream
A Start button that begins the video recording
A Stop button that stops the video recording stream
For audio recording, there will likewise be two buttons:
A Start button that begins the audio recording
A Stop button that stops the audio recording stream
We will add the Font Awesome CDN to provide the start and stop button icons, and to make the page more attractive we will add some CSS styling to the elements.
HTML Code
Example
<!DOCTYPE html>
<html>
<head>
   <title>Video & Audio Recorder</title>
   <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
   <style>
      body {
         text-align: center;
         color: red;
         font-size: 1.2em;
      }
      /* styling of start and stop buttons */
      #video_st, #video_en, #aud_st, #aud_en {
         margin-top: 10px;
         padding: 10px;
         border-radius: 4px;
         cursor: pointer;
      }
      #vidBox {
         background-color: grey;
      }
      /* video box styling */
      video {
         background-color: gray;
         display: block;
         margin: 6px auto;
         width: 520px;
         height: 240px;
      }
      /* audio box styling */
      audio {
         display: block;
         margin: 6px auto;
      }
      a {
         color: green;
      }
   </style>
</head>
<body>
   <h1 style="color:blue">Video-Audio recorder</h1>
   <div class="display-none" id="vid-recorder">
      <h3>Record Video</h3>
      <video autoplay id="vidBox"></video>
      <!-- click this button to start video recording -->
      <button type="button" id="video_st" onclick="start_video_Recording()"><i class="fa fa-play"></i></button>
      <!-- click this button to stop video recording -->
      <button type="button" id="video_en" disabled onclick="stop_Recording(this, document.getElementById('video_st'))"><i class="fa fa-stop"></i></button>
   </div>
   <br>
   <hr>
   <div class="display-none" id="audio_rec">
      <h3>Record Audio</h3>
      <!-- click this button to start audio recording -->
      <button type="button" id="aud_st" onclick="start_audio_Recording()"><i class="fa fa-play"></i></button>
      <!-- click this button to stop audio recording -->
      <button type="button" id="aud_en" disabled onclick="stop_Recording(this, document.getElementById('aud_st'))"><i class="fa fa-stop"></i></button>
   </div>
</body>
</html>
Clicking the video Start button calls the start_video_Recording() function, and the Stop button calls stop_Recording(). Similarly, for audio, clicking the Start button triggers start_audio_Recording(), and the Stop button again calls stop_Recording().
The start_video_Recording() function
Let us define a function that starts the video stream and records it.
function start_video_Recording() {
   // stores the recorded media
   let chunksArr = [];
   const startBtn = document.getElementById("video_st");
   const endBtn = document.getElementById("video_en");
   // permission to access camera and microphone
   navigator.mediaDevices.getUserMedia({ audio: true, video: true })
   .then((mediaStreamObj) => {
      // create a new MediaRecorder instance
      const medRec = new MediaRecorder(mediaStreamObj);
      window.mediaStream = mediaStreamObj;
      window.mediaRecorder = medRec;
      medRec.start();
      // when recorded data is available, push it into the chunksArr array
      medRec.ondataavailable = (e) => { chunksArr.push(e.data); };
      // when the recording stops, build a playable file from the chunks
      medRec.onstop = () => {
         const blobFile = new Blob(chunksArr, { type: "video/mp4" });
         chunksArr = [];
         // create a video element and store the recorded media in it
         const recMediaFile = document.createElement("video");
         recMediaFile.controls = true;
         const RecUrl = URL.createObjectURL(blobFile);
         // use the recorded URL as the source
         recMediaFile.src = RecUrl;
         document.getElementById("vid-recorder").append(recMediaFile);
      };
      document.getElementById("vidBox").srcObject = mediaStreamObj;
      // disable the start button and enable the stop button
      startBtn.disabled = true;
      endBtn.disabled = false;
   });
}
Pressing the Start button calls the function above, which uses the WebRTC camera and microphone API to request recording permission, enables the Stop button, and disables the Start button.
Pressing the Stop button calls the stop() function and stops all tracks of the media stream.
To record the media stream, we create a MediaRecorder instance and store both the media stream and the media recorder in global variables (window.mediaStream and window.mediaRecorder). When the recording stops, the onstop handler combines the recorded chunks into a Blob, creates a new video element, and stores the recorded media in it.
The start_audio_Recording() function follows the same pattern as start_video_Recording(), with a few necessary changes, as shown in the sketch below.
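For reference, here is the audio variant as it also appears in the complete example further down: the constraints request only audio (video: false), the recorded chunks are combined into an audio/mpeg Blob, and the result is appended to the audio_rec container as an audio element.

function start_audio_Recording() {
   // stores the recorded media
   let chunksArr = [];
   const startBtn = document.getElementById("aud_st");
   const endBtn = document.getElementById("aud_en");
   // permission to access the microphone only
   navigator.mediaDevices.getUserMedia({ audio: true, video: false })
   .then((mediaStream) => {
      const medRec = new MediaRecorder(mediaStream);
      window.mediaStream = mediaStream;
      window.mediaRecorder = medRec;
      medRec.start();
      // when recorded data is available, push it into the chunksArr array
      medRec.ondataavailable = (e) => { chunksArr.push(e.data); };
      // when the recording stops, build a playable audio file from the chunks
      medRec.onstop = () => {
         const blob = new Blob(chunksArr, { type: "audio/mpeg" });
         chunksArr = [];
         // create an audio element and store the recorded media in it
         const recMediaFile = document.createElement("audio");
         recMediaFile.controls = true;
         const RecUrl = URL.createObjectURL(blob);
         recMediaFile.src = RecUrl;
         document.getElementById("audio_rec").append(recMediaFile);
      };
      // disable the start button and enable the stop button
      startBtn.disabled = true;
      endBtn.disabled = false;
   });
}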
The stop_Recording() function
Now let us define a function that stops the recording.
function stop_Recording(end, start) {
   // stop the recorder
   window.mediaRecorder.stop();
   // stop all tracks of the media stream
   window.mediaStream.getTracks()
   .forEach((track) => { track.stop(); });
   // disable the stop button and enable the start button
   end.disabled = true;
   start.disabled = false;
}
This function stops the recorder and every media track held by the media stream.
Example
Let us add the functions above to the HTML code to make the video and audio recorders fully functional.
<!DOCTYPE html>
<html>
<head>
   <title>Video & Audio Recorder</title>
   <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
   <style>
      body {
         text-align: center;
         color: red;
         font-size: 1.2em;
      }
      /* video start & end, audio start & end button styling */
      #video_st, #video_en, #aud_st, #aud_en {
         margin-top: 10px;
         padding: 10px;
         border-radius: 4px;
         cursor: pointer;
      }
      #vidBox {
         background-color: grey;
      }
      video {
         background-color: gray;
         display: block;
         margin: 6px auto;
         width: 420px;
         height: 240px;
      }
      audio {
         display: block;
         margin: 6px auto;
      }
      a {
         color: green;
      }
   </style>
</head>
<body>
   <h1 style="color:blue">Video-Audio recorder</h1>
   <div class="display-none" id="vid-recorder">
      <h3>Record Video</h3>
      <video autoplay id="vidBox"></video>
      <button type="button" id="video_st" onclick="start_video_Recording()"><i class="fa fa-play"></i></button>
      <button type="button" id="video_en" disabled onclick="stop_Recording(this, document.getElementById('video_st'))"><i class="fa fa-stop"></i></button>
   </div>
   <br>
   <hr>
   <div class="display-none" id="audio_rec">
      <h3>Record Audio</h3>
      <button type="button" id="aud_st" onclick="start_audio_Recording()"><i class="fa fa-play"></i></button>
      <button type="button" id="aud_en" disabled onclick="stop_Recording(this, document.getElementById('aud_st'))"><i class="fa fa-stop"></i></button>
   </div>
   <script>
      //----------------------Video-------------------------------------
      function start_video_Recording() {
         // stores the recorded media
         let chunks = [];
         const startBtn = document.getElementById("video_st");
         const endBtn = document.getElementById("video_en");
         // access the camera and microphone
         navigator.mediaDevices.getUserMedia({ audio: true, video: true })
         .then((mediaStreamObj) => {
            // create a new MediaRecorder instance
            const medRec = new MediaRecorder(mediaStreamObj);
            window.mediaStream = mediaStreamObj;
            window.mediaRecorder = medRec;
            medRec.start();
            // when recorded data is available, push it into the chunks array
            medRec.ondataavailable = (e) => { chunks.push(e.data); };
            // stop the video recording
            medRec.onstop = () => {
               const blobFile = new Blob(chunks, { type: "video/mp4" });
               chunks = [];
               // create a video element and store the recorded media in it
               const recMediaFile = document.createElement("video");
               recMediaFile.controls = true;
               const RecUrl = URL.createObjectURL(blobFile);
               // use the recorded URL as the source
               recMediaFile.src = RecUrl;
               document.getElementById("vid-recorder").append(recMediaFile);
            };
            document.getElementById("vidBox").srcObject = mediaStreamObj;
            startBtn.disabled = true;
            endBtn.disabled = false;
         });
      }
      //--------------------Audio---------------------------------------
      function start_audio_Recording() {
         // stores the recorded media
         let chunksArr = [];
         const startBtn = document.getElementById("aud_st");
         const endBtn = document.getElementById("aud_en");
         // access the microphone only
         navigator.mediaDevices.getUserMedia({ audio: true, video: false })
         .then((mediaStream) => {
            const medRec = new MediaRecorder(mediaStream);
            window.mediaStream = mediaStream;
            window.mediaRecorder = medRec;
            medRec.start();
            // when recorded data is available, push it into the chunksArr array
            medRec.ondataavailable = (e) => { chunksArr.push(e.data); };
            // stop the audio recording
            medRec.onstop = () => {
               const blob = new Blob(chunksArr, { type: "audio/mpeg" });
               chunksArr = [];
               // create an audio element and store the recorded media in it
               const recMediaFile = document.createElement("audio");
               recMediaFile.controls = true;
               const RecUrl = URL.createObjectURL(blob);
               recMediaFile.src = RecUrl;
               document.getElementById("audio_rec").append(recMediaFile);
            };
            startBtn.disabled = true;
            endBtn.disabled = false;
         });
      }
      function stop_Recording(end, start) {
         // stop the recorder and all tracks of the media stream
         window.mediaRecorder.stop();
         window.mediaStream.getTracks()
         .forEach((track) => { track.stop(); });
         // disable the stop button and enable the start button
         end.disabled = true;
         start.disabled = false;
      }
   </script>
</body>
</html>
As the output shows, when the video Start button is clicked it calls start_video_Recording(), which in turn calls navigator.mediaDevices.getUserMedia(); the browser opens a permission prompt asking for access to the camera and microphone. The method returns a promise that resolves with the media stream. Once the audio/video stream is received, the code creates a MediaRecorder instance and starts recording by calling medRec.start().
With that, you have seen the complete process of creating a video and audio recorder using WebRTC.