Using HTML5 and FFmpeg to implement screen recording and camera capture on the front end


Some time ago I built a Windows desktop application with Vue + Electron that involved screen recording and camera capture. There are very few related resources online, so I'm writing this up for anyone who needs it.

If anything in this article is wrong, or there is a better approach, please leave a comment and let me know.

Background

**Technologies involved: Vue, Electron, FFmpeg, Node.js**

Two approaches to screen recording and camera capture are compared:

  • Using the HTML5 APIs

    Camera: mediaDevices (enumerate devices) + getUserMedia (get the stream) + MediaRecorder (record and save)

    Screen recording: getDisplayMedia (get the stream) + MediaRecorder (record and save)

  • Using FFmpeg + Node.js: FFmpeg is a very powerful open-source toolkit for audio and video processing, so it needs little introduction. Electron is built on Node.js and Chromium, which means it can use the Node.js API and almost all Node.js modules. In other words, we can launch FFmpeg as a command-line process to implement screen recording and camera capture; of course, FFmpeg can do far more than that.

HTML5 implementation

mediaDevices

  • Used to collect information about multimedia input and output devices available on the system

  • On success, the method resolves with the device list; a MediaStreamConstraints object containing a deviceId can then be passed to getUserMedia to select a specific device as the media source

  navigator.mediaDevices.enumerateDevices().then(deviceList => {
    // Each device has a kind of "audioinput", "audiooutput", or "videoinput",
    // plus a deviceId and a human-readable label
    console.log(deviceList)
  }).catch(err => console.log(err))
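Building on this, here is a small sketch that picks a camera out of the enumerated list and builds the constraints to target it by deviceId. The helper names (`camerasOf`, `constraintsFor`) are my own, not part of any API:

```javascript
// Hypothetical helpers: filter the enumerated devices down to cameras,
// and build MediaStreamConstraints that target one camera by deviceId.
function camerasOf(devices) {
  return devices.filter(d => d.kind === 'videoinput');
}

function constraintsFor(deviceId) {
  return { audio: true, video: { deviceId: { exact: deviceId } } };
}

// In a page, this feeds straight into getUserMedia:
// navigator.mediaDevices.enumerateDevices()
//   .then(list => {
//     const cam = camerasOf(list)[0];
//     return navigator.mediaDevices.getUserMedia(constraintsFor(cam.deviceId));
//   })
//   .then(stream => { videoElement.srcObject = stream; });
```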
 

getUserMedia

  • Provides an interface for accessing the user's hardware media devices (camera, microphone, etc.). With this interface, developers can access media devices without relying on any browser plug-ins.

  • This method returns a promise that resolves with the media stream. Assign the stream to a video element to watch while recording

navigator.mediaDevices.getUserMedia(MediaStreamConstraints).then(stream => {
    videoElement.srcObject = stream; // preview the live stream in a <video> element
  }, error => console.log(error));
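One practical detail the snippet above leaves out is releasing the camera when you are done. A minimal cleanup sketch (`stopStream` is my own name, not part of the API):

```javascript
// Hypothetical helper: stop every track on the element's stream so the
// camera indicator light turns off, then detach the stream.
function stopStream(videoElement) {
  const stream = videoElement.srcObject;
  if (stream) {
    stream.getTracks().forEach(track => track.stop());
    videoElement.srcObject = null;
  }
}
```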
 

getDisplayMedia

  • Uses the user's display, or part of it, as the source of a media stream; in other words, the display can be captured as a video stream

  • Screen recording relies mainly on this method. Like getUserMedia, it returns a promise; on success it resolves with the stream, and assigning that stream to a video element lets you watch while recording

  • **Note:** if you use this API in Chrome on the web, you need to enable "Experimental Web Platform features" at chrome://flags/

Electron is built on Node.js + Chromium. In Electron, the desktopCapturer module needs to be introduced: it enumerates the capturable screens and windows, and screen capture is done on top of that module.

navigator.mediaDevices.getDisplayMedia({ video: true })
  .then(stream => {
    videoElement.srcObject = stream;
  }, error => console.log(error));
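For the Electron path specifically, here is a hedged sketch of the desktopCapturer route mentioned above. The `mandatory`/`chromeMediaSource` constraint shape is Electron-specific, the helper names are mine, and in newer Electron versions desktopCapturer may only be usable from the main process (via IPC), so treat this as a sketch under those assumptions:

```javascript
// Hypothetical sketch of the Electron path: desktopCapturer lists capturable
// screens/windows, and the chosen source id is passed to getUserMedia via
// Electron-specific "mandatory" constraints.
function desktopConstraints(sourceId) {
  return {
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: sourceId
      }
    }
  };
}

// In an Electron renderer with nodeIntegration enabled:
async function getDesktopStream() {
  const { desktopCapturer } = require('electron');
  const sources = await desktopCapturer.getSources({ types: ['screen'] });
  return navigator.mediaDevices.getUserMedia(desktopConstraints(sources[0].id));
}
```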
 

MediaRecorder

  • Record and capture media, namely video and audio

  • The streams obtained from getDisplayMedia and getUserMedia can be recorded with MediaRecorder and saved as a file

let chunks = [];
this.recorder = new MediaRecorder(stream);
this.recorder.ondataavailable = e => {
  // Data arrives in chunks; collect them all instead of keeping only the last one
  if (e.data.size > 0) chunks.push(e.data);
};
this.recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  download.href = URL.createObjectURL(blob);
  download.download = 'capture.webm';
};
this.recorder.start();
// Calling this.recorder.stop() later fires onstop and finalizes the file
 

The second approach: FFmpeg + Node.js

Download an installation package from the official builds site: ffmpeg.zeranoe.com/builds/

Some basic parameters

  • -formats output all available formats
  • -f fmt force the format (audio or video)
  • -i filename specify the input file; on Linux this can also be a display such as :0.0 (for screen capture) or a camera device
  • -y overwrite existing output files
  • -t duration limit the recording duration to t
  • -fs limit_size set a file size limit
  • -itsoffset time_off set a time offset in seconds; this option affects all subsequent input files. The offset is added to the input file's timestamps; a positive offset means the corresponding stream is delayed by that many seconds. The [-]hh:mm:ss[.xxx] format is also supported.

  Audio:

  • -ab bitrate set the audio bitrate
  • -ar freq set the audio sample rate
  • -ac channels set the number of audio channels (default 1)

  Video:

  • -b bitrate set the video bitrate (default 200 kb/s)
  • -r fps set the frame rate (default 25)
  • -s size set the frame size as WxH (default 160x128); common size abbreviations can also be used directly

Screen recording related commands

 
# List available capture devices (Windows, DirectShow)
ffmpeg -list_devices true -f dshow -i dummy

# Record the full desktop to capture.mkv, overwriting any existing file (Windows, gdigrab)
ffmpeg -f gdigrab -i desktop capture.mkv -y

Calling FFmpeg from Node

cd into FFmpeg's bin folder and execute the screen-recording commands above.

Regarding stopping the recording: although FFmpeg stops when you press Q, when it is launched from code there is no visible command-line window to type into, and the process stays occupied for the whole recording, so no further commands can be entered that way. The only approach I found was to forcibly kill the process.

References

MDN developer.mozilla.org/zh-CN/docs/ developer.mozilla.org/en-US/docs/ developer.mozilla.org/zh-CN/docs/

W3C w3c.github.io/mediacaptur...