Writing code that captures videos on Android

http://integratingstuff.com/2010/10/18/writing-code-that-captures-videos-on-android/

Although the Google guys did a good job on the Android documentation, the explanation on how to write code that captures videos is somewhat short.
In this tutorial, we are going to write an activity that is able to preview, start and stop video capturing, and give more explanation on it than the basic documentation does.
We are going to do this for Android 2.1, but after that, we will discuss differences with 2.2.
Finally, we will illustrate how the non-public setParameters API on MediaRecorder, which is undocumented in 2.1, can be called through reflection.
This article is aimed at Android developers.

Setting the permissions

Since we are going to use the camera, the following line will definitely need to be declared in our AndroidManifest file:

<uses-permission android:name="android.permission.CAMERA" />

If we don't specify this, we will get a “Permission Denied” exception as soon as we try to access the camera from our code.

It is also good practice to declare which features of the camera we are going to use:

<uses-feature android:name="android.hardware.camera"/>
<uses-feature android:name="android.hardware.camera.autofocus"/>

If we don't declare these, however, Android simply assumes that all camera features (camera, autofocus and flash) are used. So just to make things work, we don't need to declare them.

Since we are also going to record audio during the video capture, we declare:

<uses-permission android:name="android.permission.RECORD_AUDIO" />
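Putting the manifest pieces together, the relevant part of AndroidManifest.xml could look like the excerpt below (the package name is a placeholder of our own, not from the original article):

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
	package="com.example.videocapture">

	<!-- needed to access the camera from code -->
	<uses-permission android:name="android.permission.CAMERA" />
	<!-- needed because we also record audio -->
	<uses-permission android:name="android.permission.RECORD_AUDIO" />

	<!-- optional: declare which camera features we actually use -->
	<uses-feature android:name="android.hardware.camera" />
	<uses-feature android:name="android.hardware.camera.autofocus" />

	<!-- application and activity declarations go here as usual -->

</manifest>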

Setting up the camera preview

Before we discuss the actual video capturing, we are going to make sure that everything the camera sees is previewed on the screen.

A SurfaceView is a special type of view that basically gives you a surface to draw to. It is used in various scenarios, such as drawing 2D or 3D objects or playing back videos.

In this case, we are going to draw the camera input to such a SurfaceView, so the user can preview the video and see what they are recording.

We define a camera_surface.xml layout file in which we set up the SurfaceView:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
	android:layout_width="fill_parent" android:layout_height="fill_parent">
	<SurfaceView android:id="@+id/surface_camera"
		android:layout_width="fill_parent"
		android:layout_height="fill_parent"
		android:layout_centerInParent="true" />
</RelativeLayout>

The following activity then uses the SurfaceView from the layout XML above and starts rendering the camera input to the screen:

public class CustomVideoCamera extends Activity implements SurfaceHolder.Callback{

	private static final String TAG = "CAMERA_TUTORIAL";

	private SurfaceView surfaceView;
	private SurfaceHolder surfaceHolder;
	private Camera camera;
	private boolean previewRunning;

	@Override
	public void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.camera_surface);
		surfaceView = (SurfaceView) findViewById(R.id.surface_camera);
		surfaceHolder = surfaceView.getHolder();
		surfaceHolder.addCallback(this);
		surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	@Override
	public void surfaceCreated(SurfaceHolder holder) {
		camera = Camera.open();
		if (camera != null){
			Camera.Parameters params = camera.getParameters();
			camera.setParameters(params);
		}
		else {
			Toast.makeText(getApplicationContext(), "Camera not available!", Toast.LENGTH_LONG).show();
			finish();
		}
	}

	@Override
	public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
		if (previewRunning){
			camera.stopPreview();
		}
		Camera.Parameters p = camera.getParameters();
		p.setPreviewSize(width, height);
		p.setPreviewFormat(PixelFormat.JPEG);
		camera.setParameters(p);

		try {
			camera.setPreviewDisplay(holder);
			camera.startPreview();
			previewRunning = true;
		}
		catch (IOException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		}
	}

	@Override
	public void surfaceDestroyed(SurfaceHolder holder) {
		camera.stopPreview();
		previewRunning = false;
		camera.release();
	}
}

The main thing we are doing here is implementing a SurfaceHolder.Callback. This callback lets us intervene when our surface is created, changed (format or size changes) or destroyed. Without this callback, our screen would just remain black.
After the surface is created, we obviously want to display what the camera is seeing. First, we get a reference to the camera by calling the static method Camera.open(). We only need to do this once, so we put it in the surfaceCreated method.
The actual start of the preview happens in the surfaceChanged method. This is because this method is not only called right after surface creation (the first “change”), but also every time something essential about the surface changes, and we then want to stop the preview, change some parameters and restart it. For example, we use the passed width and height to set the preview size. By putting all of this in the surfaceChanged method, we make sure our preview always remains consistent with our surface.
When the surface is destroyed (this happens, for example, at onPause or onDestroy of the activity), we release the camera again, because otherwise other apps, like the native camera app, will start throwing “Camera already in use” exceptions.
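One caveat: not every device accepts an arbitrary preview size. An optional helper like the one below (our own addition, not part of the original code) asks the camera for its supported preview sizes, available since API level 5, and picks the one closest to the surface:

	private Camera.Size getBestPreviewSize(Camera.Parameters parameters, int width, int height) {
		Camera.Size best = null;
		for (Camera.Size candidate : parameters.getSupportedPreviewSizes()) {
			// pick the size whose dimensions are closest to the surface dimensions
			if (best == null
					|| Math.abs(candidate.width - width) + Math.abs(candidate.height - height)
						< Math.abs(best.width - width) + Math.abs(best.height - height)) {
				best = candidate;
			}
		}
		return best;
	}

In surfaceChanged we would then call p.setPreviewSize(best.width, best.height) instead of passing the surface dimensions straight through.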

On a final note,

surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

means that the surface will not own its buffers; this surface type is typically used for camera previews.

Note: Instead of making the activity implement the surface callback, you could also make a class that extends SurfaceView, make that one implement the Callback and use that subclass in the layout xml instead of the SurfaceView. If your activity is getting very long in terms of code, this might be a good thing to do.
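As a rough sketch of that approach (the class name, package and log message are our own; the preview-size handling from the activity is skipped for brevity):

import java.io.IOException;

import android.content.Context;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

	private Camera camera;

	public CameraPreview(Context context, AttributeSet attrs) {
		super(context, attrs);
		// register for the same surface lifecycle events the activity handled
		getHolder().addCallback(this);
		getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	public void surfaceCreated(SurfaceHolder holder) {
		// for brevity, no null check on Camera.open() here
		camera = Camera.open();
	}

	public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
		try {
			camera.setPreviewDisplay(holder);
			camera.startPreview();
		} catch (IOException e) {
			Log.e("CAMERA_TUTORIAL", "Could not start the camera preview", e);
		}
	}

	public void surfaceDestroyed(SurfaceHolder holder) {
		camera.stopPreview();
		camera.release();
	}
}

The layout XML would then reference the fully qualified class name (for example com.example.videocapture.CameraPreview) instead of SurfaceView, and the activity could expose the camera through a getter when the MediaRecorder needs it.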

Capturing the video

We now add the following fields and method to our activity; the method will be called when the user decides to start recording:

	private MediaRecorder mediaRecorder;
	private File tempFile;
	// the cache file name is not shown in the original snippet; any name will do
	private final String cacheFileName = "captured_video.tmp";
	private final int maxDurationInMs = 20000;
	private final long maxFileSizeInBytes = 500000;
	private final int videoFramesPerSecond = 20;

	public boolean startRecording(){
		try {
			camera.unlock();

			mediaRecorder = new MediaRecorder();

			mediaRecorder.setCamera(camera);
			mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
			mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

			mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);

			mediaRecorder.setMaxDuration(maxDurationInMs);

			tempFile = new File(getCacheDir(),cacheFileName);
			mediaRecorder.setOutputFile(tempFile.getPath());

			mediaRecorder.setVideoFrameRate(videoFramesPerSecond);
			mediaRecorder.setVideoSize(surfaceView.getWidth(), surfaceView.getHeight());

			mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
			mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);

			mediaRecorder.setPreviewDisplay(surfaceHolder.getSurface());

			mediaRecorder.setMaxFileSize(maxFileSizeInBytes);

			mediaRecorder.prepare();
			mediaRecorder.start();

			return true;
		} catch (IllegalStateException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
			return false;
		} catch (IOException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
			return false;
		}
	}

In this method we are preparing the MediaRecorder with all the necessary details.

First, we unlock the camera by calling camera.unlock(), so it can be used in a usable state by another process, in this case the media recording process.

Then we set all the properties of the MediaRecorder.
Two things are important here.
The first is the order in which the methods are called. For example, we need to set the sources before setting the encoders, and we have to set the encoders before calling prepare.
The second, and less documented, one is that ALL of these properties have to be set. prepare is a very sensitive and obscure method: its implementation is a native function that just returns an error code when something goes wrong. So, for example, if you forget to set maxDuration on the above mediaRecorder, you will get an obscure “prepare failed” error on most devices, without any hint that the missing maxDuration property is the cause. Many people assume these properties are optional and then run into these hard-to-debug errors.

After preparing the recorder, we start the actual recording.
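Since we set both a maximum duration and a maximum file size, it can be useful to know when one of those limits is hit. MediaRecorder lets us register an OnInfoListener for this; a minimal sketch, to be set on the mediaRecorder before start() (the cleanup shown is deliberately simplified), could look like this:

		mediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
			public void onInfo(MediaRecorder mr, int what, int extra) {
				if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED
						|| what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED) {
					// recording has already stopped at the limit; free the recorder
					// and give the camera back to the preview (a real app would also
					// update its recording state and UI here)
					mr.release();
					camera.lock();
				}
			}
		});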

Stop recording

Then we stop recording in the following method:

public void stopRecording(){
	mediaRecorder.stop();
	// free the recorder and reclaim the camera so the preview can keep using it
	mediaRecorder.release();
	camera.lock();
}

Besides stopping the recorder, we also release it and take back the camera lock, so the activity owns the camera again after the recording.

Note: To finish the activity, our methods still need to be linked to button actions. We leave this to the reader; the easiest way is probably to add start and stop buttons to the layout XML containing the SurfaceView and point their onClick attributes at methods on the activity that call startRecording and stopRecording respectively.
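As a hedged sketch of that wiring (the ids, labels and method names below are our own choices, not from the original article), the RelativeLayout in camera_surface.xml could get two extra buttons whose android:onClick attributes point at thin wrapper methods on the activity:

	<Button android:id="@+id/button_start"
		android:layout_width="wrap_content"
		android:layout_height="wrap_content"
		android:layout_alignParentBottom="true"
		android:layout_alignParentLeft="true"
		android:text="Start"
		android:onClick="onStartClicked" />
	<Button android:id="@+id/button_stop"
		android:layout_width="wrap_content"
		android:layout_height="wrap_content"
		android:layout_alignParentBottom="true"
		android:layout_alignParentRight="true"
		android:text="Stop"
		android:onClick="onStopClicked" />

The matching methods on the activity simply delegate to the recording methods:

	// the android:onClick attribute (available since Android 1.6) expects
	// a public method taking a single View argument
	public void onStartClicked(View view) {
		startRecording();
	}

	public void onStopClicked(View view) {
		stopRecording();
	}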

Android 2.1 vs 2.2

At the time of this writing, most Android devices are still running 2.1, and most developers aim for their apps to be compatible with 2.1 and above, which makes sense if one looks at the Android platform distribution numbers.

The reference documentation is already updated for 2.2 though.

If we take another look at the official instructions, we notice that we have gone through all the steps mentioned there. We clarified some of them, like “passing a fully initialized SurfaceHolder”, and we also took care of the “see MediaRecorder information” part.

But we also did some things differently. We were looking at the 2.2 instructions, and some of those methods are not yet available in 2.1.
In general, the camera API has been changing and improving at lightning speed. The downside is that old APIs get deprecated very quickly and that you can't simply use the latest API, since that would seriously hurt the potential number of customers for your app.

Portrait orientation

In 2.2, the setDisplayOrientation method is there, but it isn't in 2.1. In fact, portrait mode for capturing videos through the API is only supported since 2.2, as clearly stated in the “New Developer APIs” paragraph of the Android 2.2 highlights.
So, for our activity, it is necessary to specify the following in its manifest declaration:

android:screenOrientation="landscape"

Otherwise, it is likely that the camera output will be rotated 90 degrees relative to what the user is seeing (this can be tweaked by setting the rotation parameter on the camera, but hacking around to make the camera work in portrait mode on 2.1 is outside the scope of this tutorial).
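Concretely, the activity entry in the manifest would then look something like this (using the activity class from this tutorial):

	<activity android:name=".CustomVideoCamera"
		android:screenOrientation="landscape" />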

Reconnect

reconnect is another method that does not exist yet in 2.1, so we are obviously not calling it.

PixelFormat.JPEG

This constant, which we are using in our activity above, is already deprecated in 2.2. But since ImageFormat.JPEG, the suggested replacement, does not exist yet in 2.1, we are forced to use the deprecated API.

Calling the undocumented setParameters method on MediaRecorder

In 2.2, MediaRecorder has setters for these properties: setVideoEncodingBitRate, setAudioEncodingBitRate, setAudioChannels and setAudioSamplingRate.
In 2.1, these properties can't be set through the public API.

If we take a look at the VideoCamera implementation in the Android source code, in the 2.1 tree, we find code like:

mMediaRecorder.setParameters(String.format("video-param-encoding-bitrate=%d", mProfile.mVideoBitrate));
mMediaRecorder.setParameters(String.format("audio-param-encoding-bitrate=%d", mProfile.mAudioBitrate));
mMediaRecorder.setParameters(String.format("audio-param-number-of-channels=%d", mProfile.mAudioChannels));
mMediaRecorder.setParameters(String.format("audio-param-sampling-rate=%d", mProfile.mAudioSamplingRate));

Unfortunately, although it is present on all 2.1 devices as far as I know, the setParameters method is not part of the public API, so 2.1 developers are left out in the cold there.
Luckily, there is a workaround.

When preparing the MediaRecorder, before calling prepare(), you can add the following lines:

Method[] methods = mediaRecorder.getClass().getMethods();
for (Method method: methods){
	if (method.getName().equals("setParameters")){
		try {
			method.invoke(mediaRecorder, String.format("video-param-encoding-bitrate=%d", 360000));
			method.invoke(mediaRecorder, String.format("audio-param-encoding-bitrate=%d", 23450));
			method.invoke(mediaRecorder, String.format("audio-param-number-of-channels=%d", 1));
			method.invoke(mediaRecorder, String.format("audio-param-sampling-rate=%d",8000));
		} catch (IllegalArgumentException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		} catch (IllegalAccessException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		} catch (InvocationTargetException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		}
	}
}

Through reflection, we iterate over the available methods of the MediaRecorder. When we find the setParameters method, we invoke it, which has the same effect as the code in the Camera app from the Android 2.1 source tree.
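On 2.2 and higher you would of course use the public setters instead. One hedged way to support both versions is to branch on the platform version (this assumes building against the 2.2 SDK while keeping minSdkVersion at 7; setParametersThroughReflection is a hypothetical helper of our own wrapping the reflection loop above, and the values are the same example values):

	if (Build.VERSION.SDK_INT >= 8) {
		// Android 2.2+ (API level 8): the documented setters exist
		mediaRecorder.setVideoEncodingBitRate(360000);
		mediaRecorder.setAudioEncodingBitRate(23450);
		mediaRecorder.setAudioChannels(1);
		mediaRecorder.setAudioSamplingRate(8000);
	} else {
		// Android 2.1: fall back to the reflection-based workaround above
		setParametersThroughReflection(mediaRecorder);
	}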

 
