http://www.html5rocks.com/en/tutorials/getusermedia/intro/
Introduction
Audio/Video capture has been the "Holy Grail" of web development for a long time. For many years we've had to rely on browser plugins (Flash or Silverlight) to get the job done. Come on!
HTML5 to the rescue. It might not be apparent, but the rise of HTML5 has brought a surge of access to device hardware. Geolocation (GPS), the Orientation API (accelerometer), WebGL (GPU), and the Web Audio API (audio hardware) are perfect examples. These features are ridiculously powerful, exposing high-level JavaScript APIs that sit on top of the system's underlying hardware capabilities.
This tutorial introduces a new API, navigator.getUserMedia(), which allows web apps to access a user's camera and microphone.
The road to getUserMedia()
If you're not aware of its history, the way we arrived at the getUserMedia() API is an interesting tale.
Several variants of "Media Capture APIs" have evolved over the past few years. Many folks recognized the need to be able to access native devices on the web, but that led everyone and their mom to put together a new spec. Things got so messy that the W3C finally decided to form a working group. Their sole purpose? Make sense of the madness! The Device APIs Policy (DAP) Working Group has been tasked to consolidate + standardize the plethora of proposals.
I'll try to summarize what happened in 2011...
Round 1: HTML Media Capture
HTML Media Capture was the DAP's first go at standardizing media capture on the web. It works by overloading the <input type="file"> and adding new values for the accept parameter.
If you wanted to let users take a snapshot of themselves with the webcam, that's possible with capture=camera:

<input type="file" accept="image/*;capture=camera">
Recording video or audio is similar:

<input type="file" accept="video/*;capture=camcorder">
<input type="file" accept="audio/*;capture=microphone">
Kinda nice, right? I particularly like that it reuses a file input. Semantically, it makes a lot of sense. Where this particular "API" falls short is the ability to do realtime effects (e.g. render live webcam data to a <canvas> and apply WebGL filters). HTML Media Capture only allows you to record a media file or take a snapshot in time.
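To make that contrast concrete, here's a minimal sketch (my own illustration, not from the spec or this article; the element ids and FileReader usage are assumptions) of how you'd consume what HTML Media Capture hands you: a change event fires on the input once capture is finished, and all you get is a File.

<input type="file" accept="image/*;capture=camera" id="capture">
<img id="photo">
<script>
  // Illustrative only: HTML Media Capture delivers a finished File,
  // not a live stream, so the handler runs after the snapshot is taken.
  document.getElementById('capture').addEventListener('change', function(e) {
    var file = e.target.files[0];
    if (!file) {
      return;
    }
    var reader = new FileReader();
    reader.onload = function(evt) {
      // The result is a data URL of the finished snapshot -- no live frames.
      document.getElementById('photo').src = evt.target.result;
    };
    reader.readAsDataURL(file);
  }, false);
</script>

Notice the handler only runs once capture is done, which is exactly why live effects aren't possible with this approach.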
Support:
- Android 3.0 browser - one of the first implementations. Check out this video to see it in action.
- Chrome for Android (0.16)
- Firefox Mobile 10.0
- iOS 6 Safari and Chrome (partial support)
Round 2: device element
Many thought HTML Media Capture was too limiting, so a new spec emerged that supported any type of (future) device. Not surprisingly, the design called for a new element, the <device> element, which became the predecessor to getUserMedia().
Opera was among the first browsers to create initial implementations of video capture based on the <device> element. Soon after (the same day, to be precise), the WHATWG decided to scrap the <device> tag in favor of another up and comer, this time a JavaScript API called navigator.getUserMedia(). A week later, Opera put out new builds that included support for the updated getUserMedia() spec. Later that year, Microsoft joined the party by releasing a Lab for IE9 supporting the new spec.
Here's what <device> would have looked like:

<device type="media" onchange="update(this.data)"></device>
<video autoplay></video>
<script>
  function update(stream) {
    document.querySelector('video').src = stream.url;
  }
</script>
Support:
Unfortunately, no released browser ever included <device>. One less API to worry about, I guess :) <device> did have two great things going for it though: 1.) it was semantic, and 2.) it was easily extendible to support more than just audio/video devices.
Take a breath. This stuff moves fast!
Round 3: WebRTC
The <device> element eventually went the way of the Dodo.

The pace to find a suitable capture API accelerated thanks to the larger WebRTC (Web Real Time Communications) effort. That spec is overseen by the W3C WebRTC Working Group. Google, Opera, Mozilla, and a few others have implementations.
getUserMedia() is related to WebRTC because it's the gateway into that set of APIs. It provides the means to access the user's local camera/microphone stream.
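As a rough sketch of that relationship (my own example, not from this article, using the Chrome-prefixed names of the time), the stream returned by getUserMedia() is what you hand to a peer connection before any offer/answer signaling happens:

// Sketch only: webkit-prefixed APIs reflect Chrome at the time of writing.
var pc = new webkitRTCPeerConnection(null);

navigator.webkitGetUserMedia({audio: true, video: true}, function(stream) {
  // The local camera/mic stream becomes the media source for the connection;
  // the signaling/offer exchange would follow from here.
  pc.addStream(stream);
}, function(e) {
  console.log('getUserMedia() failed', e);
});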
Support:
getUserMedia() has been supported since Chrome 21, Opera 18, and Firefox 17.
Getting started
With navigator.getUserMedia(), we can finally tap into webcam and microphone input without a plugin. Camera access is now a call away, not an install away. It's baked directly into the browser. Excited yet?
Feature detection
Feature detecting is a simple check for the existence of navigator.getUserMedia:
function hasGetUserMedia() {
  return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

if (hasGetUserMedia()) {
  // Good to go!
} else {
  alert('getUserMedia() is not supported in your browser');
}
You can also use Modernizr to detect getUserMedia to avoid the vendor prefix dance yourself:
if (Modernizr.getusermedia) {
  var gUM = Modernizr.prefixed('getUserMedia', navigator);
  gUM({video: true}, function(stream) {
    // ...
  }, errorCallback);
}
Gaining access to an input device
To use the webcam or microphone, we need to request permission. The first parameter to getUserMedia() is an object specifying the details and requirements for each type of media you want to access. For example, if you want to access the webcam, the first parameter should be {video: true}. To use both the microphone and camera, pass {video: true, audio: true}:
<video autoplay></video>

<script>
  var errorCallback = function(e) {
    console.log('Reeeejected!', e);
  };

  // Not showing vendor prefixes.
  navigator.getUserMedia({video: true, audio: true}, function(localMediaStream) {
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(localMediaStream);

    // Note: onloadedmetadata doesn't fire in Chrome when using it with getUserMedia.
    // See crbug.com/110938.
    video.onloadedmetadata = function(e) {
      // Ready to go. Do some stuff.
    };
  }, errorCallback);
</script>
OK. So what's going on here? Media capture is a perfect example of new HTML5 APIs working together. It works in conjunction with our other HTML5 buddies, <audio> and <video>. Notice that we're not setting a src attribute or including <source> elements on the <video> element. Instead of feeding the video a URL to a media file, we're feeding it a Blob URL obtained from a LocalMediaStream object representing the webcam.
I'm also telling the <video> to autoplay, otherwise it would be frozen on the first frame. Adding controls also works as you'd expect.
If you want something that works cross-browser, try this:
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia ||
                         navigator.msGetUserMedia;

var video = document.querySelector('video');

if (navigator.getUserMedia) {
  navigator.getUserMedia({audio: true, video: true}, function(stream) {
    video.src = window.URL.createObjectURL(stream);
  }, errorCallback);
} else {
  video.src = 'somevideo.webm'; // fallback.
}
Setting media constraints (resolution, height, width)
The first parameter to getUserMedia() can also be used to specify more requirements (or constraints) on the returned media stream. For example, instead of just indicating you want basic access to video (e.g. {video: true}), you can additionally require the stream to be HD:
var hdConstraints = {
  video: {
    mandatory: {
      minWidth: 1280,
      minHeight: 720
    }
  }
};

navigator.getUserMedia(hdConstraints, successCallback, errorCallback);

...

var vgaConstraints = {
  video: {
    mandatory: {
      maxWidth: 640,
      maxHeight: 360
    }
  }
};

navigator.getUserMedia(vgaConstraints, successCallback, errorCallback);
For more configurations, see the constraints API.
Selecting a media source
In Chrome 30 or later, getUserMedia() also supports selecting the video/audio source using the MediaStreamTrack.getSources() API.
In this example, the last microphone and camera that's found is selected as the media stream source:
MediaStreamTrack.getSources(function(sourceInfos) {
  var audioSource = null;
  var videoSource = null;

  for (var i = 0; i != sourceInfos.length; ++i) {
    var sourceInfo = sourceInfos[i];
    if (sourceInfo.kind === 'audio') {
      console.log(sourceInfo.id, sourceInfo.label || 'microphone');
      audioSource = sourceInfo.id;
    } else if (sourceInfo.kind === 'video') {
      console.log(sourceInfo.id, sourceInfo.label || 'camera');
      videoSource = sourceInfo.id;
    } else {
      console.log('Some other kind of source: ', sourceInfo);
    }
  }

  sourceSelected(audioSource, videoSource);
});

function sourceSelected(audioSource, videoSource) {
  var constraints = {
    audio: {
      optional: [{sourceId: audioSource}]
    },
    video: {
      optional: [{sourceId: videoSource}]
    }
  };

  navigator.getUserMedia(constraints, successCallback, errorCallback);
}
Check out Sam Dutton's great demo of how to let users select the media source.
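In that spirit, here's a hedged sketch (the element ids and structure are my own, not Sam Dutton's actual demo code) of wiring the camera list into a <select> and restarting capture when the user picks a different source:

<select id="videoSource"></select>
<video autoplay></video>

<script>
  var videoSelect = document.getElementById('videoSource');
  var video = document.querySelector('video');

  // Populate the dropdown with the available cameras.
  MediaStreamTrack.getSources(function(sourceInfos) {
    sourceInfos.forEach(function(sourceInfo) {
      if (sourceInfo.kind === 'video') {
        var option = document.createElement('option');
        option.value = sourceInfo.id;
        option.text = sourceInfo.label || 'camera ' + (videoSelect.length + 1);
        videoSelect.appendChild(option);
      }
    });
    startStream(); // start with whichever camera is selected first.
  });

  function startStream() {
    var constraints = {
      video: {optional: [{sourceId: videoSelect.value}]}
    };
    // Not showing vendor prefixes.
    navigator.getUserMedia(constraints, function(stream) {
      video.src = window.URL.createObjectURL(stream);
    }, function(e) {
      console.log('getUserMedia() failed', e);
    });
  }

  videoSelect.onchange = startStream;
</script>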
Security
Some browsers throw up an infobar upon calling getUserMedia(), which gives users the option to grant or deny access to their camera/mic. The spec unfortunately is very quiet when it comes to security. For example, here is Chrome's permission dialog:
Permission dialog in Chrome
If your app is running from SSL (https://), this permission will be persistent. That is, users won't have to grant/deny access every time.
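A tiny illustrative check (my own, not from the article) can make that difference visible during development: if the page isn't served over https://, the grant won't stick.

// Sketch only: logs a heads-up when permission persistence won't apply.
if (location.protocol !== 'https:') {
  console.warn('Not running over SSL: users will be asked for camera/mic ' +
               'permission on every getUserMedia() call.');
}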
Providing fallback
For users that don't have support for getUserMedia(), one option is to fall back to an existing video file if the API isn't supported and/or the call fails for some reason:
// Not showing vendor prefixes or code that works cross-browser:

function fallback(e) {
  video.src = 'fallbackvideo.webm';
}

function success(stream) {
  video.src = window.URL.createObjectURL(stream);
}

if (!navigator.getUserMedia) {
  fallback();
} else {
  navigator.getUserMedia({video: true}, success, fallback);
}
Basic demo
Taking screenshots
The <canvas> API's ctx.drawImage(video, 0, 0) method makes it trivial to draw <video> frames to <canvas>. Of course, now that we have video input via getUserMedia(), it's just as easy to create a photo booth application with realtime video:
<video autoplay></video>
<img src="">
<canvas style="display:none;"></canvas>

<script>
  var video = document.querySelector('video');
  var canvas = document.querySelector('canvas');
  var ctx = canvas.getContext('2d');
  var localMediaStream = null;

  function snapshot() {
    if (localMediaStream) {
      ctx.drawImage(video, 0, 0);
      // "image/webp" works in Chrome.
      // Other browsers will fall back to image/png.
      document.querySelector('img').src = canvas.toDataURL('image/webp');
    }
  }

  video.addEventListener('click', snapshot, false);

  // Not showing vendor prefixes or code that works cross-browser.
  navigator.getUserMedia({video: true}, function(stream) {
    video.src = window.URL.createObjectURL(stream);
    localMediaStream = stream;
  }, errorCallback);
</script>
Applying Effects
CSS Filters
Using CSS Filters, we can apply some gnarly effects to the <video> as it is captured:
<style>
  /* Vendor prefixes not shown. */
  video {
    width: 307px;
    height: 250px;
    background: rgba(255, 255, 255, 0.5);
    border: 1px solid #ccc;
  }
  .grayscale {
    filter: grayscale(1);
  }
  .sepia {
    filter: sepia(1);
  }
  .blur {
    filter: blur(3px);
  }
  ...
</style>

<video autoplay></video>

<script>
  var idx = 0;
  var filters = ['grayscale', 'sepia', 'blur', 'brightness',
                 'contrast', 'hue-rotate', 'hue-rotate2', 'hue-rotate3',
                 'saturate', 'invert', ''];

  function changeFilter(e) {
    var el = e.target;
    el.className = '';
    var effect = filters[idx++ % filters.length]; // loop through filters.
    if (effect) {
      el.classList.add(effect);
    }
  }

  document.querySelector('video').addEventListener('click', changeFilter, false);
</script>
Click the video to cycle through CSS filters
WebGL Textures
One amazing use case for video capture is to render live input as a WebGL texture. Since I know absolutely nothing about WebGL (other than it's sweet), I'm going to suggest you give Jerome Etienne's tutorial and demo a look. It talks about how to use getUserMedia() and Three.js to render live video into WebGL.
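For the general flavor, here's a hedged sketch of the common Three.js pattern (my own condensation, not code from Jerome's tutorial; the script path and sizes are assumptions): treat the live <video> element as a texture source and re-upload it every frame.

<video autoplay></video>
<script src="three.min.js"></script>
<script>
  // Sketch only: assumes getUserMedia() has already attached a stream to
  // the <video> element, as in the earlier examples.
  var video = document.querySelector('video');

  var scene = new THREE.Scene();
  var camera = new THREE.PerspectiveCamera(45, 4 / 3, 0.1, 100);
  camera.position.z = 5;

  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(640, 480);
  document.body.appendChild(renderer.domElement);

  // The video element itself becomes the texture source.
  var texture = new THREE.Texture(video);
  texture.minFilter = THREE.LinearFilter; // avoid mipmap issues with non-power-of-two video.
  var mesh = new THREE.Mesh(
      new THREE.PlaneGeometry(4, 3),
      new THREE.MeshBasicMaterial({map: texture}));
  scene.add(mesh);

  (function render() {
    if (video.readyState === video.HAVE_ENOUGH_DATA) {
      texture.needsUpdate = true; // push the current frame to the GPU.
    }
    renderer.render(scene, camera);
    requestAnimationFrame(render);
  })();
</script>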
Using getUserMedia with the Web Audio API
One of my dreams is to build AutoTune in the browser with nothing more than open web technology!
Chrome supports live microphone input from getUserMedia() to the Web Audio API for real-time effects. Piping microphone input to the Web Audio API looks like this:
window.AudioContext = window.AudioContext ||
                      window.webkitAudioContext;

var context = new AudioContext();

navigator.getUserMedia({audio: true}, function(stream) {
  var microphone = context.createMediaStreamSource(stream);
  var filter = context.createBiquadFilter();

  // microphone -> filter -> destination.
  microphone.connect(filter);
  filter.connect(context.destination);
}, errorCallback);
Demos:
For more information, see Chris Wilson's post.
Conclusion
In general, device access on the web has been a tough cookie to crack. Many people have tried, few have succeeded. Most of the early ideas have never taken hold outside of a proprietary environment nor have they gained widespread adoption.
The real problem is that the web's security model is very different from the native world. For example, I probably don't want every Joe Shmoe web site to have random access to my video camera. It's a tough problem to get right.
Bridging frameworks like PhoneGap have helped push the boundary, but they're only a start and a temporary solution to an underlying problem. To make web apps competitive with their desktop counterparts, we need access to native devices.
getUserMedia() is but the first wave of access to new types of devices. I hope we'll continue to see more in the very near future!
Additional resources
- W3C specification
- Bruce Lawson's HTML5Doctor article
- Bruce Lawson's dev.opera.com article