The first thing that happens when the Camera is opened is the preview (viewfinder) action, so let us start the analysis from the Camera app. Every application with a capture feature has to implement the SurfaceHolder.Callback interface for its preview, providing the three methods surfaceCreated, surfaceChanged and surfaceDestroyed, and declare a SurfaceView to serve as the preview window. Below is the relevant source code from the stock Camera app:
SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
SurfaceHolder holder = preview.getHolder();
holder.addCallback(this);
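To make the surrounding structure clearer, here is a minimal sketch of how an app typically wires these three callbacks together. Class and resource names such as CameraPreviewActivity and R.layout.main are placeholders for this sketch, not the stock app's code:

// Sketch only. Imports assumed: android.app.Activity, android.hardware.Camera,
// android.os.Bundle, android.view.SurfaceHolder, android.view.SurfaceView, java.io.IOException
public class CameraPreviewActivity extends Activity implements SurfaceHolder.Callback {
    private Camera mCamera;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
        SurfaceHolder holder = preview.getHolder();
        holder.addCallback(this);              // receive surfaceCreated/Changed/Destroyed
    }

    public void surfaceCreated(SurfaceHolder holder) {
        mCamera = Camera.open();               // connect to the camera service
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        try {
            mCamera.setPreviewDisplay(holder); // hand the preview surface to the camera
        } catch (IOException e) {
            // ignored in this sketch
        }
        mCamera.startPreview();                // lower layers begin filling the surface buffer
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        mCamera.stopPreview();
        mCamera.release();
        mCamera = null;
    }
}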
The camera's preview surface buffer also has to be set. The stock app sets the Camera's preview surface inside surfaceChanged(), so that the preview data captured by the lower layers can be delivered continuously into this surface buffer:
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
mSurfaceHolder = holder;
// The mCameraDevice will be null if it fails to connect to the camera
// hardware. In this case we will show a dialog and then finish the
// activity, so it's OK to ignore it.
if (mCameraDevice == null) return;
// Sometimes surfaceChanged is called after onPause or before onResume.
// Ignore it.
if (mPausing || isFinishing()) return;
// Set preview display if the surface is being created. Preview was
// already started. Also restart the preview if display rotation has
// changed. Sometimes this happens when the device is held in portrait
// and camera app is opened. Rotation animation takes some time and
// display rotation in onCreate may not be what we want.
if (mCameraState == PREVIEW_STOPPED) {
startPreview();
startFaceDetection();
} else {
if (Util.getDisplayRotation(this) != mDisplayRotation) {
setDisplayOrientation();
}
if (holder.isCreating()) {
// Set preview display if the surface is being created and preview
// was already started. That means preview display was set to null
// and we need to set it now.
setPreviewDisplay(holder);
}
}
}
Once the parameters above are set, startPreview() can be called to start the viewfinder preview.
startPreview() is also handed down layer by layer until it reaches the camera's server side, CameraService. The call path looks like this:
Camera.java (app) -> Camera.java (framework) -> android_hardware_camera.cpp (JNI) -> Camera.cpp (client) -> CameraService.cpp (server) -> CameraHardwareInterface (HAL interface)
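The first hop of this chain can be seen in the framework's android.hardware.Camera class: startPreview() is declared as a native method, and setPreviewDisplay(SurfaceHolder) only extracts the underlying Surface before crossing into JNI. Roughly (a simplified excerpt; the exact code varies between Android versions):

// android.hardware.Camera (framework, simplified excerpt)
public final void setPreviewDisplay(SurfaceHolder holder) throws IOException {
    if (holder != null) {
        setPreviewDisplay(holder.getSurface()); // pass the Surface down through JNI
    } else {
        setPreviewDisplay((Surface)null);
    }
}
private native final void setPreviewDisplay(Surface surface);

public native final void startPreview();        // implemented in android_hardware_Camera.cpp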
On the CameraService side the preview request is handled and the call enters the HAL layer:
status_t CameraService::Client::startPreview() {
enableMsgType(CAMERA_MSG_PREVIEW_METADATA);
return startCameraMode(CAMERA_PREVIEW_MODE);
}
It first enables the preview message type toward the HAL layer, and then actually starts the preview:
status_t CameraService::Client::startCameraMode(camera_mode mode) {
switch(mode) {
case CAMERA_PREVIEW_MODE:
if (mSurface == 0 && mPreviewWindow == 0) {
LOG1("mSurface is not set yet.");
// still able to start preview in this case.
}
return startPreviewMode();
}
}
status_t CameraService::Client::startPreviewMode() {
LOG1("startPreviewMode");
status_t result = NO_ERROR;
// if preview has been enabled, nothing needs to be done
if (mHardware->previewEnabled()) {
return NO_ERROR;
}
if (mPreviewWindow != 0) {
native_window_set_scaling_mode(mPreviewWindow.get(),
NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW);
native_window_set_buffers_transform(mPreviewWindow.get(),
mOrientation);
}
mHardware->setPreviewWindow(mPreviewWindow);
result = mHardware->startPreview();
return result;
}
The call then goes down into the HAL layer, and data is delivered continuously into the SurfaceView's buffer through callback functions. Because preview frames are fairly large, the data is not carried all the way up to the upper layers; instead it is copied directly between two buffers: the one in which the lower layer captures the data, and the SurfaceView buffer used for display.
Let us look at how the preview callbacks are handled.
First of all, when the Camera client successfully connects to the server, a dataCallback is registered:
CameraService::Client::Client(const sp<CameraService>& cameraService,
const sp<ICameraClient>& cameraClient,
const sp<CameraHardwareInterface>& hardware,
int cameraId, int cameraFacing, int clientPid) {
......
mHardware->setCallbacks(notifyCallback,
dataCallback,
dataCallbackTimestamp,
(void *)cameraId);
}
As introduced in the previous article, once the client and the server are connected a new Client is created and returned. In the Client's constructor, three callbacks are registered with the camera HAL: notifyCallback, dataCallback and dataCallbackTimestamp, which hand data from the lower layers back up for processing. Let us look at how the data callback is handled:
void CameraService::Client::dataCallback(int32_t msgType,
const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
case CAMERA_MSG_PREVIEW_FRAME:
client->handlePreviewData(msgType, dataPtr, metadata);
break;
.......
}
}
void CameraService::Client::handlePreviewData(int32_t msgType,
const sp<IMemory>& mem,
camera_frame_metadata_t *metadata) {
sp<ICameraClient> c = mCameraClient;
.......
if (c != 0) {
// Is the received frame copied out or not?
if (flags & CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK) {
LOG2("frame is copied");
copyFrameAndPostCopiedFrame(msgType, c, heap, offset, size, metadata);
} else {
LOG2("frame is forwarded");
mLock.unlock();
c->dataCallback(msgType, mem, metadata);
}
} else {
mLock.unlock();
}
}
copyFrameAndPostCopiedFrame is the function that performs the hand-off of preview data between the two buffers:
void CameraService::Client::copyFrameAndPostCopiedFrame(
int32_t msgType, const sp<ICameraClient>& client,
const sp<IMemoryHeap>& heap, size_t offset, size_t size,
camera_frame_metadata_t *metadata) {
......
previewBuffer = mPreviewBuffer;
memcpy(previewBuffer->base(), (uint8_t *)heap->base() + offset, size);
sp<MemoryBase> frame = new MemoryBase(previewBuffer, 0, size);
if (frame == 0) {
LOGE("failed to allocate space for frame callback");
mLock.unlock();
return;
}
mLock.unlock();
client->dataCallback(msgType, frame, metadata);
}
The data is wrapped into a frame and the client-side callback is then invoked: client->dataCallback(msgType, frame, metadata);
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
camera_frame_metadata_t *metadata)
{
sp<CameraListener> listener;
{
Mutex::Autolock _l(mLock);
listener = mListener;
}
if (listener != NULL) {
listener->postData(msgType, dataPtr, metadata);
}
}
Remember that a listener was set in the JNI layer during initialization?
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
jobject weak_this, jint cameraId)
{
sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
context->incStrong(thiz);
camera->setListener(context);
}
Continuing with listener->postData(msgType, dataPtr, metadata):
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
camera_frame_metadata_t *metadata)
{
......
switch (dataMsgType) {
case CAMERA_MSG_VIDEO_FRAME:
// should never happen
break;
default:
LOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
copyAndPost(env, dataPtr, dataMsgType);
break;
}
}
Continuing with copyAndPost(env, dataPtr, dataMsgType):
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        .......
        if (heapBase != NULL) {
            .......
            LOGV("Allocating callback buffer");
            obj = env->NewByteArray(size);
            .......
            env->SetByteArrayRegion(obj, 0, size, data);
        } else {
            LOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
To explain the key part above: a Java byte array obj is created first, the buffered data is copied into it, and CallStaticVoidMethod is the JNI way for native code to call a static Java method. What finally gets executed is postEventFromNative() in the framework's Camera.java:
private static void postEventFromNative(Object camera_ref,
int what, int arg1, int arg2, Object obj)
{
Camera c = (Camera)((WeakReference)camera_ref).get();
if (c == null)
return;
if (c.mEventHandler != null) {
Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
c.mEventHandler.sendMessage(m);
}
}
public void handleMessage(Message msg) {
switch(msg.what) {
case CAMERA_MSG_SHUTTER:
if (mShutterCallback != null) {
mShutterCallback.onShutter();
}
return;
case CAMERA_MSG_RAW_IMAGE:
if (mRawImageCallback != null) {
mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
}
return;
case CAMERA_MSG_COMPRESSED_IMAGE:
if (mJpegCallback != null) {
mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
}
return;
case CAMERA_MSG_PREVIEW_FRAME:
if (mPreviewCallback != null) {
PreviewCallback cb = mPreviewCallback;
if (mOneShot) {
// Clear the callback variable before the callback
// in case the app calls setPreviewCallback from
// the callback function
mPreviewCallback = null;
} else if (!mWithBuffer) {
// We're faking the camera preview mode to prevent
// the app from being flooded with preview frames.
// Set to oneshot mode again.
setHasPreviewCallback(true, false);
}
cb.onPreviewFrame((byte[])msg.obj, mCamera);
}
return;
}
}
}
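On the application side, then, CAMERA_MSG_PREVIEW_FRAME ends up in whatever PreviewCallback the app has registered; the mOneShot and mWithBuffer flags above correspond to setOneShotPreviewCallback, setPreviewCallback and setPreviewCallbackWithBuffer. A minimal usage sketch, assuming the default NV21 preview format and that mCamera is the app's already-opened Camera instance:

// Reuse app-supplied buffers so each frame does not allocate a new byte[]
// (imports assumed: android.hardware.Camera, android.graphics.ImageFormat)
Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // data holds one preview frame; process it here, then return the
        // buffer to the camera so it can be filled with the next frame
        camera.addCallbackBuffer(data);
    }
});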