Throughout the history of Android development, the Camera APIs have been a constant source of complaints: anyone who has used them knows they feel over-configured, bloated, hard to use, and hard to understand. The official API iteration path shows that Google has kept trying to improve the developer experience around the camera. To date there have been three generations: Camera (deprecated), Camera2, and CameraX.

The original Camera API was deprecated as of Android 5.0, and Camera2 is notoriously difficult to use; many who tried it declared it worse than the old Camera API. Hence CameraX: it is still built on top of Camera2, but with a much friendlier API surface. It is part of the Jetpack libraries and is currently the officially recommended camera solution. So if a new project involves the camera APIs, or you plan to upgrade old Camera code, use CameraX directly.

This article explores how to use CameraX in Jetpack Compose.
Setting Up CameraX
First, add the dependencies:
dependencies {
def camerax_version = "1.3.0-alpha04"
// implementation "androidx.camera:camera-core:${camerax_version}" // optional: camera-camera2 already includes camera-core
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
implementation "androidx.camera:camera-video:${camerax_version}"
implementation "androidx.camera:camera-view:${camerax_version}"
implementation "androidx.camera:camera-extensions:${camerax_version}"
}
Note: the latest version of each library can be found here: https://developer.android.com/jetpack/androidx/releases/camera?hl=zh-cn
Since using the camera requires requesting the CAMERA permission at runtime, also add the Accompanist permissions library:
val accompanist_version = "0.31.2-alpha"
implementation "com.google.accompanist:accompanist-permissions:$accompanist_version"
Note: the latest version of this library can be found here: https://github.com/google/accompanist/releases
Then remember to declare the camera permission in AndroidManifest.xml:
<manifest .. >
<uses-permission android:name="android.permission.CAMERA" />
..
</manifest>
CameraX has the following minimum requirements:
- Android API level 21
- Android Architecture Components 1.1.1
For a lifecycle-aware Activity, use FragmentActivity or AppCompatActivity.
Camera Preview with CameraX
Let's look at how CameraX handles camera preview.
Creating a PreviewView
Jetpack Compose does not currently provide a dedicated composable for camera preview, so the approach is to wrap the classic View-based PreviewView in an AndroidView:
@Composable
private fun CameraPreviewExample() {
Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding: PaddingValues ->
AndroidView(
modifier = Modifier
.fillMaxSize()
.padding(innerPadding),
factory = { context ->
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
scaleType = PreviewView.ScaleType.FILL_START
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}
}
)
}
}
Two properties set on the PreviewView here deserve attention: the implementation mode and the scale type.

1. Implementation mode

- PERFORMANCE is the default mode. PreviewView uses a SurfaceView to display the video stream, but falls back to a TextureView in certain cases. A SurfaceView has a dedicated drawing surface, which is more likely to be implemented as a hardware overlay by the internal hardware compositor, especially when no other UI elements (such as buttons) sit on top of the preview video. Rendering via a hardware overlay lets video frames bypass the GPU path, lowering power consumption and latency.
- COMPATIBLE mode. Here PreviewView uses a TextureView which, unlike a SurfaceView, has no dedicated drawing surface. Video must therefore be rendered through blending before it can be displayed. During this extra step, the app can do additional processing, such as scaling and rotating the video without restriction.
Note: in the default PERFORMANCE mode, if the device does not support SurfaceView, PreviewView falls back to using TextureView. PreviewView also falls back to TextureView when the API level is 24 or lower, when the camera hardware level is CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY, or when Preview.getTargetRotation() differs from the PreviewView's display rotation.
Do not use PERFORMANCE mode if Preview.Builder.setTargetRotation(int) is set to a value different from the display rotation, because SurfaceView does not support arbitrary rotation. Do not use it if the PreviewView needs to be animated either: SurfaceView animation is not supported on API level 24 or lower. Also, for the preview stream state available from getPreviewStreamState, the PreviewView.StreamState.STREAMING state may be reported earlier when this mode is used.
Clearly, PERFORMANCE is the mode to use when performance matters; choose COMPATIBLE when you need the flexibility of a TextureView.
2. Scale type

When the preview video resolution does not match the aspect ratio of the target PreviewView, the content must be fitted by letterboxing or cropping. PreviewView provides the following scale types:

- FIT_CENTER, FIT_START, and FIT_END letterbox the video. The entire video content is scaled (up or down) to the largest size that fits in the target PreviewView. While the whole video frame stays visible, parts of the screen may be left blank. The frame is aligned to the center, start, or end of the target view depending on which of the three you choose.
- FILL_CENTER, FILL_START, and FILL_END crop the video. If the video's aspect ratio does not match the PreviewView's, only part of the content is visible, but the video still fills the entire PreviewView.
Note: the main purpose of the scale type is to keep the preview free of stretching and distortion. With the older Camera or Camera2 APIs, my usual approach was to query the camera's list of supported preview resolutions, pick one, and then size the preview view to match that resolution's aspect ratio.
For example, the left image below shows a normal preview, while the right image shows a stretched, distorted one:

This makes for a very poor experience; the biggest problem is that you lose what-you-see-is-what-you-get (the saved photo or video does not match what the preview showed).
Take a 4:3 frame displayed on a 16:9 preview surface: without any handling, stretching is guaranteed:

The image below shows the effect of the different scale types:

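The letterbox/crop behavior above comes down to which of the two per-axis scale factors gets applied. As a rough sketch in pure Kotlin (the frame and view sizes are hypothetical examples, not CameraX API), FIT_* modes use the smaller factor and FILL_* modes the larger:

```kotlin
import kotlin.math.max
import kotlin.math.min

// Compute the displayed size of a camera frame inside a view.
// fit = true  -> FIT_*  (letterbox: whole frame visible, blank bars possible)
// fit = false -> FILL_* (crop: view fully covered, frame edges may be cut)
fun displayedSize(frameW: Int, frameH: Int, viewW: Int, viewH: Int, fit: Boolean): Pair<Int, Int> {
    val scaleW = viewW.toFloat() / frameW
    val scaleH = viewH.toFloat() / frameH
    val scale = if (fit) min(scaleW, scaleH) else max(scaleW, scaleH)
    return Pair((frameW * scale).toInt(), (frameH * scale).toInt())
}

fun main() {
    // A 4:3 frame (1200x900) shown in a 16:9 view (1920x1080):
    println(displayedSize(1200, 900, 1920, 1080, fit = true))  // (1440, 1080): bars on the sides
    println(displayedSize(1200, 900, 1920, 1080, fit = false)) // (1920, 1440): cropped top/bottom
}
```

Either way the uniform scale preserves the frame's aspect ratio, which is exactly why neither family of scale types distorts the image.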
When using PreviewView, you cannot do any of the following:

- Create a SurfaceTexture to set on the TextureView and the Preview.SurfaceProvider.
- Retrieve the SurfaceTexture from the TextureView and set it on the Preview.SurfaceProvider.
- Get the Surface from the SurfaceView and set it on the Preview.SurfaceProvider.

If any of these happens, Preview will stop streaming frames to the PreviewView.
Binding to the Lifecycle with CameraController
After creating the PreviewView, attach a LifecycleCameraController to it and bind the controller to the lifecycle:
@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun CameraPreviewExample() {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val cameraController = remember { LifecycleCameraController(context) }
Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding: PaddingValues ->
AndroidView(
modifier = Modifier
.fillMaxSize()
.padding(innerPadding),
factory = { context ->
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
scaleType = PreviewView.ScaleType.FILL_START
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}.also { previewView ->
previewView.controller = cameraController
cameraController.bindToLifecycle(lifecycleOwner)
}
},
onReset = {},
onRelease = {
cameraController.unbind()
}
)
}
}
Note that in the code above, the controller is attached to the PreviewView and bound to the lifecycle inside the AndroidView factory, and unbound again in onRelease.
Requesting the Permission
The preview composable should only be shown once the app has been granted the camera permission; otherwise show a placeholder composable. Reference code for requesting the permission:
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun ExampleCameraScreen() {
val cameraPermissionState = rememberPermissionState(android.Manifest.permission.CAMERA)
LaunchedEffect(key1 = Unit) {
if (!cameraPermissionState.status.isGranted && !cameraPermissionState.status.shouldShowRationale) {
cameraPermissionState.launchPermissionRequest()
}
}
if (cameraPermissionState.status.isGranted) { // Camera permission granted: show the preview UI
CameraPreviewExample()
} else { // Not granted: show the no-permission screen
NoCameraPermissionScreen(cameraPermissionState = cameraPermissionState)
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(cameraPermissionState: PermissionState) {
// In this screen you should notify the user that the permission
// is required and maybe offer a button to start another camera permission request
Column(horizontalAlignment = Alignment.CenterHorizontally) {
val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
// The user previously denied the permission: explain why the app needs it
"Without the camera permission this feature cannot work properly."
} else {
// First-time request
"This feature needs the camera permission. Tap to grant it."
}
Text(textToShow)
Spacer(Modifier.height(8.dp))
Button(onClick = { cameraPermissionState.launchPermissionRequest() }) { Text("Request permission") }
}
}
For more on runtime permission requests in Compose, see the Accompanist documentation for Jetpack Compose; it will not be repeated here.
Full-Screen Setup
To display the camera preview full screen, without the top status bar, you can add the following to the Activity's onCreate() before setContent:
if (isFullScreen) {
requestWindowFeature(Window.FEATURE_NO_TITLE)
// This must be set, otherwise it does not take effect.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
window.attributes.layoutInDisplayCutoutMode =
WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
}
WindowCompat.setDecorFitsSystemWindows(window, false)
val windowInsetsController = WindowCompat.getInsetsController(window, window.decorView)
windowInsetsController.hide(WindowInsetsCompat.Type.statusBars()) // Hide the status bar
windowInsetsController.hide(WindowInsetsCompat.Type.navigationBars()) // Hide the navigation bar
// Make the navigation bar transparent and transient: hidden, revealed by swipe, floating above the content
windowInsetsController.systemBarsBehavior = WindowInsetsController.BEHAVIOR_SHOW_TRANSIENT_BARS_BY_SWIPE
}
This code usually works; if it does not, try adjusting the theme as well:
// themes.xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
<style name="Theme.MyComposeApplication" parent="android:Theme.Material.Light.NoActionBar.Fullscreen" >
<item name="android:statusBarColor">@android:color/transparent</item>
<item name="android:navigationBarColor">@android:color/transparent</item>
<item name="android:windowTranslucentStatus">true</item>
</style>
</resources>
Taking Photos with CameraX
CameraX provides two main takePicture() overloads:

- takePicture(Executor, OnImageCapturedCallback): delivers the captured image as an in-memory buffer.
- takePicture(OutputFileOptions, Executor, OnImageSavedCallback): saves the captured image to the provided file location.

Let's add a FloatingActionButton that triggers the capture:
@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun CameraPreviewExample() {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val cameraController = remember { LifecycleCameraController(context) }
Scaffold(
modifier = Modifier.fillMaxSize(),
floatingActionButton = {
FloatingActionButton(onClick = { takePhoto(context, cameraController) }) {
Icon(
imageVector = ImageVector.vectorResource(id = R.drawable.ic_camera_24),
contentDescription = "Take picture"
)
}
},
floatingActionButtonPosition = FabPosition.Center,
) { innerPadding: PaddingValues ->
AndroidView(
modifier = Modifier
.fillMaxSize()
.padding(innerPadding),
factory = { context ->
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
scaleType = PreviewView.ScaleType.FILL_START
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}.also { previewView ->
previewView.controller = cameraController
cameraController.bindToLifecycle(lifecycleOwner)
}
},
onReset = {},
onRelease = {
cameraController.unbind()
}
)
}
}
fun takePhoto(context: Context, cameraController: LifecycleCameraController) {
val mainExecutor = ContextCompat.getMainExecutor(context)
// Create time stamped name and MediaStore entry.
val name = SimpleDateFormat(FILENAME, Locale.CHINA)
.format(System.currentTimeMillis())
val contentValues = ContentValues().apply {
put(MediaStore.MediaColumns.DISPLAY_NAME, name)
put(MediaStore.MediaColumns.MIME_TYPE, PHOTO_TYPE)
if(Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
val appName = context.resources.getString(R.string.app_name)
put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/${appName}")
}
}
// Create output options object which contains file + metadata
val outputOptions = ImageCapture.OutputFileOptions
.Builder(context.contentResolver, MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
contentValues)
.build()
cameraController.takePicture(outputOptions, mainExecutor, object : ImageCapture.OnImageSavedCallback {
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
val savedUri = outputFileResults.savedUri
Log.d(TAG, "Photo capture succeeded: $savedUri")
context.notifySystem(savedUri)
}
override fun onError(exception: ImageCaptureException) {
Log.e(TAG, "Photo capture failed: ${exception.message}", exception)
}
}
)
context.showFlushAnimation()
}
In takePhoto(), a timestamped MediaStore entry is created via OutputFileOptions, and onImageSaved() delivers the Uri of the saved photo.
If you want to run your own saving logic after capture, or not save at all and just display the image, use the overload with OnImageCapturedCallback instead:
fun takePhoto2(context: Context, cameraController: LifecycleCameraController) {
val mainExecutor = ContextCompat.getMainExecutor(context)
cameraController.takePicture(mainExecutor, object : ImageCapture.OnImageCapturedCallback() {
override fun onCaptureSuccess(image: ImageProxy) {
Log.e(TAG, "onCaptureSuccess: ${image.imageInfo}")
// Process the captured image here
try {
// The supported format is ImageFormat.YUV_420_888 or PixelFormat.RGBA_8888.
val bitmap = image.toBitmap()
Log.e(TAG, "onCaptureSuccess bitmap: ${bitmap.width} x ${bitmap.height}")
} catch (e: Exception) {
Log.e(TAG, "onCaptureSuccess Exception: ${e.message}")
}
image.close() // Always close the ImageProxy when done
}
})
context.showFlushAnimation()
}
Inside this callback you can call ImageProxy.toBitmap() to obtain a Bitmap, but it only supports the YUV_420_888 and RGBA_8888 formats; for JPEG output you have to decode the bytes yourself:
fun takePhoto2(context: Context, cameraController: LifecycleCameraController) {
val mainExecutor = ContextCompat.getMainExecutor(context)
cameraController.takePicture(mainExecutor, object : ImageCapture.OnImageCapturedCallback() {
override fun onCaptureSuccess(image: ImageProxy) {
Log.e(TAG, "onCaptureSuccess: ${image.format}")
// Process the captured image here
try {
var bitmap: Bitmap? = null
// The supported format is ImageFormat.YUV_420_888 or PixelFormat.RGBA_8888.
if (image.format == ImageFormat.YUV_420_888 || image.format == PixelFormat.RGBA_8888) {
bitmap = image.toBitmap()
} else if (image.format == ImageFormat.JPEG) {
val planes = image.planes
val buffer = planes[0].buffer // For ImageFormat.JPEG, image.getPlanes() returns a single plane, so index 0 holds the data
val size = buffer.remaining()
val bytes = ByteArray(size)
buffer.get(bytes, 0, size)
// ImageFormat.JPEG can be decoded straight into a Bitmap
bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
if (bitmap != null) {
Log.e(TAG, "onCaptureSuccess bitmap: ${bitmap.width} x ${bitmap.height}")
}
} catch (e: Exception) {
Log.e(TAG, "onCaptureSuccess Exception: ${e.message}")
}
image.close() // Always close the ImageProxy when done
}
})
context.showFlushAnimation()
}
If the image arrives in a YUV format, besides calling toBitmap() directly, you can also convert the YUV data to JPEG (for example with android.graphics.YuvImage) and then decode that.
The complete code for the examples above:
@Composable
fun ExampleCameraNavHost() {
val navController = rememberNavController()
NavHost(navController, startDestination = "CameraScreen") {
composable("CameraScreen") {
ExampleCameraScreen(navController = navController)
}
composable("ImageScreen") {
ImageScreen(navController = navController)
}
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun ExampleCameraScreen(navController: NavHostController) {
val cameraPermissionState = rememberPermissionState(Manifest.permission.CAMERA)
LaunchedEffect(key1 = Unit) {
if (!cameraPermissionState.status.isGranted && !cameraPermissionState.status.shouldShowRationale) {
cameraPermissionState.launchPermissionRequest()
}
}
if (cameraPermissionState.status.isGranted) { // Camera permission granted: show the preview UI
CameraPreviewExample(navController)
} else { // Not granted: show the no-permission screen
NoCameraPermissionScreen(cameraPermissionState = cameraPermissionState)
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(cameraPermissionState: PermissionState) {
// In this screen you should notify the user that the permission
// is required and maybe offer a button to start another camera permission request
Column(horizontalAlignment = Alignment.CenterHorizontally) {
val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
// The user previously denied the permission: explain why the app needs it
"Without the camera permission this feature cannot work properly."
} else {
// First-time request
"This feature needs the camera permission. Tap to grant it."
}
Text(textToShow)
Spacer(Modifier.height(8.dp))
Button(onClick = { cameraPermissionState.launchPermissionRequest() }) { Text("Request permission") }
}
}
private const val TAG = "CameraXBasic"
private const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
private const val PHOTO_TYPE = "image/jpeg"
@OptIn(ExperimentalComposeUiApi::class)
@Composable
private fun CameraPreviewExample(navController: NavHostController) {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val cameraController = remember { LifecycleCameraController(context) }
Scaffold(
modifier = Modifier.fillMaxSize(),
floatingActionButton = {
FloatingActionButton(onClick = {
takePhoto(context, cameraController, navController)
// takePhoto2(context, cameraController, navController)
// takePhoto3(context, cameraController, navController)
}) {
Icon(
imageVector = ImageVector.vectorResource(id = R.drawable.ic_camera_24),
contentDescription = "Take picture"
)
}
},
floatingActionButtonPosition = FabPosition.Center,
) { innerPadding: PaddingValues ->
AndroidView(
modifier = Modifier
.fillMaxSize()
.padding(innerPadding),
factory = { context ->
cameraController.imageCaptureMode = CAPTURE_MODE_MINIMIZE_LATENCY
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(MATCH_PARENT, MATCH_PARENT)
scaleType = PreviewView.ScaleType.FILL_CENTER
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}.also { previewView ->
previewView.controller = cameraController
cameraController.bindToLifecycle(lifecycleOwner)
}
},
onReset = {},
onRelease = {
cameraController.unbind()
}
)
}
}
fun takePhoto(context: Context, cameraController: LifecycleCameraController, navController: NavHostController) {
val mainExecutor = ContextCompat.getMainExecutor(context)
// Create time stamped name and MediaStore entry.
val name = SimpleDateFormat(FILENAME, Locale.CHINA)
.format(System.currentTimeMillis())
val contentValues = ContentValues().apply {
put(MediaStore.MediaColumns.DISPLAY_NAME, name)
put(MediaStore.MediaColumns.MIME_TYPE, PHOTO_TYPE)
if(Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
val appName = context.resources.getString(R.string.app_name)
put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/${appName}")
}
}
// Create output options object which contains file + metadata
val outputOptions = ImageCapture.OutputFileOptions
.Builder(context.contentResolver, MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
contentValues)
.build()
cameraController.takePicture(outputOptions, mainExecutor, object : ImageCapture.OnImageSavedCallback {
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
val savedUri = outputFileResults.savedUri
Log.d(TAG, "Photo capture succeeded: $savedUri")
context.notifySystem(savedUri)
navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
navController.navigate("ImageScreen")
}
override fun onError(exception: ImageCaptureException) {
Log.e(TAG, "Photo capture failed: ${exception.message}", exception)
}
}
)
context.showFlushAnimation()
}
fun takePhoto2(context: Context, cameraController: LifecycleCameraController, navController: NavHostController) {
val mainExecutor = ContextCompat.getMainExecutor(context)
cameraController.takePicture(mainExecutor, object : ImageCapture.OnImageCapturedCallback() {
override fun onCaptureSuccess(image: ImageProxy) {
Log.e(TAG, "onCaptureSuccess: ${image.format}")
// Process the captured image here
val scopeWithNoEffect = CoroutineScope(SupervisorJob())
scopeWithNoEffect.launch {
val savedUri = withContext(Dispatchers.IO) {
try {
var bitmap: Bitmap? = null
// The supported format is ImageFormat.YUV_420_888 or PixelFormat.RGBA_8888.
if (image.format == ImageFormat.YUV_420_888 || image.format == PixelFormat.RGBA_8888) {
bitmap = image.toBitmap()
} else if (image.format == ImageFormat.JPEG) {
val planes = image.planes
val buffer = planes[0].buffer // For ImageFormat.JPEG, image.getPlanes() returns a single plane, so index 0 holds the data
val size = buffer.remaining()
val bytes = ByteArray(size)
buffer.get(bytes, 0, size)
// ImageFormat.JPEG can be decoded straight into a Bitmap
bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
}
bitmap?.let {
// Save the bitmap to a file
val photoFile = File(
context.getOutputDirectory(),
SimpleDateFormat(FILENAME, Locale.CHINA).format(System.currentTimeMillis()) + ".jpg"
)
BitmapUtilJava.saveBitmap(it, photoFile.absolutePath, 100)
val savedUri = Uri.fromFile(photoFile)
savedUri
}
} catch (e: Exception) {
if (e is CancellationException) throw e
Log.e(TAG, "onCaptureSuccess Exception: ${e.message}")
null
}
}
image.close() // Always close the ImageProxy when done
mainExecutor.execute {
context.notifySystem(savedUri)
navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
navController.navigate("ImageScreen")
}
}
}
})
context.showFlushAnimation()
}
fun takePhoto3(context: Context, cameraController: LifecycleCameraController, navController: NavHostController) {
val photoFile = File(
context.getOutputDirectory(),
SimpleDateFormat(FILENAME, Locale.CHINA).format(System.currentTimeMillis()) + ".jpg"
)
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
val mainExecutor = ContextCompat.getMainExecutor(context)
cameraController.takePicture(outputOptions, mainExecutor, object: ImageCapture.OnImageSavedCallback {
override fun onError(exception: ImageCaptureException) {
// Note: do not call onError(exception) here again - that would recurse infinitely
Log.e(TAG, "Take photo error:", exception)
}
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
val savedUri = Uri.fromFile(photoFile)
Log.d(TAG, "Photo capture succeeded: $savedUri")
context.notifySystem(savedUri)
navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
navController.navigate("ImageScreen")
}
})
context.showFlushAnimation()
}
// Flash animation
private fun Context.showFlushAnimation() {
// We can only change the foreground Drawable using API level 23+ API
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
// Display flash animation to indicate that photo was captured
if (this is Activity) {
val decorView = window.decorView
decorView.postDelayed({
decorView.foreground = ColorDrawable(android.graphics.Color.WHITE)
decorView.postDelayed({ decorView.foreground = null }, ANIMATION_FAST_MILLIS)
}, ANIMATION_SLOW_MILLIS)
}
}
}
// Send a system broadcast so the gallery picks up the new photo
private fun Context.notifySystem(savedUri: Uri?) {
// Implicit broadcasts are ignored on devices running API level >= 24, so if you only target API 24+ you can remove this
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
sendBroadcast(Intent(Camera.ACTION_NEW_PICTURE, savedUri))
}
}
private fun Context.getOutputDirectory(): File {
val mediaDir = externalMediaDirs.firstOrNull()?.let {
File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir
}
// ImageScreen.kt: screen that displays the captured photo
@Composable
fun ImageScreen(navController: NavHostController) {
val context = LocalContext.current
var imageBitmap by remember { mutableStateOf<ImageBitmap?>(null) }
val savedUri = navController.previousBackStackEntry?.savedStateHandle?.get<Uri>("savedUri")
// Decode off the main thread; LaunchedEffect avoids launching a coroutine during composition
LaunchedEffect(savedUri) {
savedUri?.let { uri ->
withContext(Dispatchers.IO) {
val bitmap = BitmapUtilJava.getBitmapFromUri(context, uri)
imageBitmap = BitmapUtilJava.scaleBitmap(bitmap, 1920, 1080).asImageBitmap()
}
}
}
imageBitmap?.let {
Image(it,
contentDescription = null,
modifier = Modifier.fillMaxWidth(),
contentScale = ContentScale.Crop
)
}
}
// A few utility methods used above (from BitmapUtilJava, written in Java)
public static void saveBitmap(Bitmap mBitmap, String filePath, int quality) {
File f = new File(filePath);
FileOutputStream fOut = null;
try {
fOut = new FileOutputStream(f);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
mBitmap.compress(Bitmap.CompressFormat.JPEG, quality, fOut);
try {
if (fOut != null) {
fOut.flush();
}
} catch (IOException e) {
e.printStackTrace();
}
}
/**
* Scale an image by the larger of the width and height ratios.
*
* @param bitmap the bitmap to scale
* @param widthSize the target width, usually the screen width
* @param heightSize the target height, usually the screen height
*/
public static Bitmap scaleBitmap(Bitmap bitmap, int widthSize, int heightSize) {
int bmpW = bitmap.getWidth();
int bmpH = bitmap.getHeight();
float scaleW = ((float) widthSize) / bmpW;
float scaleH = ((float) heightSize) / bmpH;
// Scale by the larger of the two ratios
float max = Math.max(scaleW, scaleH);
Matrix matrix = new Matrix();
matrix.postScale(max, max);
return Bitmap.createBitmap(bitmap, 0, 0, bmpW, bmpH, matrix, true);
}
/**
* Returns a Bitmap for the given Uri.
* @param context
* @param uri
* @return the decoded Bitmap, or null on failure
*/
public static Bitmap getBitmapFromUri(Context context, Uri uri){
try {
// This also works:
// BitmapFactory.decodeStream(context.getContentResolver().openInputStream(uri));
return MediaStore.Images.Media.getBitmap(context.getContentResolver(), uri);
}catch (Exception e){
e.printStackTrace();
return null;
}
}
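The Java scaleBitmap() above always takes the larger of the two ratios, so the result covers the target on both axes (fill-and-crop behavior). The size arithmetic alone, sketched in pure Kotlin with a hypothetical photo size:

```kotlin
import kotlin.math.max

// Mirrors the ratio logic of scaleBitmap(): scale by the larger of the
// width and height ratios so the result covers the target size.
fun scaledSize(bmpW: Int, bmpH: Int, targetW: Int, targetH: Int): Pair<Int, Int> {
    val scale = max(targetW.toFloat() / bmpW, targetH.toFloat() / bmpH)
    return Pair((bmpW * scale).toInt(), (bmpH * scale).toInt())
}

fun main() {
    // A 4000x3000 photo scaled for a 1920x1080 screen:
    println(scaledSize(4000, 3000, 1920, 1080)) // (1920, 1440)
}
```

Both dimensions end up at least as large as the target, so the image can be cropped to fit without blank bars.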
Note: Bitmap operations are expensive and should run on a background coroutine dispatcher, off the main thread; harden this code yourself before using it in production.
CameraProvider vs. CameraController
The official CameraX documentation and samples actually come in two flavors: one based on CameraController and one based on CameraProvider.
To decide which implementation suits you, here are the advantages of each:
| CameraController | CameraProvider |
|---|---|
| Requires little setup code | Allows greater control |
| Lets CameraX handle more of the setup, so features like tap-to-focus and pinch-to-zoom work automatically | Since the app developer handles the setup, there is more room for customization, such as enabling output image rotation or setting the output image format in ImageAnalysis |
| Requires a PreviewView for the camera preview, which lets CameraX offer seamless end-to-end integration, as in the ML Kit integration that maps ML model result coordinates (such as face bounding boxes) directly onto preview coordinates | Can use a custom Surface for the camera preview, which is more flexible, e.g. reusing existing Surface code as input to other parts of the app |
Taking Photos with CameraProvider
For convenience, first add an extension function to obtain the ProcessCameraProvider:
private suspend fun Context.getCameraProvider(): ProcessCameraProvider {
return ProcessCameraProvider.getInstance(this).await()
}
With CameraProvider, takePhoto() works against an ImageCapture use case instead of the controller:
private fun takePhoto(
context: Context,
imageCapture: ImageCapture,
onImageCaptured: (Uri) -> Unit,
onError: (ImageCaptureException) -> Unit
) {
val photoFile = File(
context.getOutputDirectory(),
SimpleDateFormat(FILENAME, Locale.CHINA).format(System.currentTimeMillis()) + ".jpg"
)
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()
val mainExecutor = ContextCompat.getMainExecutor(context)
imageCapture.takePicture(outputOptions, mainExecutor, object: ImageCapture.OnImageSavedCallback {
override fun onError(exception: ImageCaptureException) {
Log.e(TAG, "Take photo error:", exception)
onError(exception)
}
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
val savedUri = Uri.fromFile(photoFile)
onImageCaptured(savedUri)
context.notifySystem(savedUri)
}
})
context.showFlushAnimation()
}
Calling code:
@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun CameraPreviewExample2(navController: NavHostController) {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val previewView = remember { PreviewView(context) }
// Create Preview UseCase.
val preview = remember {
Preview.Builder().build().apply {
setSurfaceProvider(previewView.surfaceProvider)
}
}
val imageCapture: ImageCapture = remember { ImageCapture.Builder().build() }
val cameraSelector = remember { CameraSelector.DEFAULT_BACK_CAMERA } // Select default back camera.
var pCameraProvider: ProcessCameraProvider? by remember { mutableStateOf(null) } // remembered so the reference survives recomposition
LaunchedEffect(cameraSelector) {
val cameraProvider = context.getCameraProvider()
cameraProvider.unbindAll() // Unbind UseCases before rebinding.
// Bind UseCases to camera. This function returns a camera
// object which can be used to perform operations like zoom, flash, and focus.
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageCapture)
pCameraProvider = cameraProvider
}
Scaffold(
modifier = Modifier.fillMaxSize(),
floatingActionButton = {
FloatingActionButton(onClick = {
takePhoto(
context,
imageCapture = imageCapture,
onImageCaptured = { savedUri ->
Log.d(TAG, "Photo capture succeeded: $savedUri")
context.notifySystem(savedUri)
navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
navController.navigate("ImageScreen")
},
onError = {
Log.e(TAG, "Photo capture failed: ${it.message}", it)
}
)
}) {
Icon(
imageVector = ImageVector.vectorResource(id = R.drawable.ic_camera_24),
contentDescription = "Take picture"
)
}
},
floatingActionButtonPosition = FabPosition.Center,
) { innerPadding: PaddingValues ->
AndroidView(
modifier = Modifier
.fillMaxSize()
.padding(innerPadding),
factory = { previewView },
onReset = {},
onRelease = {
pCameraProvider?.unbindAll()
}
)
}
}
The biggest difference here is that the Preview and ImageCapture use cases are created explicitly and bound to the lifecycle with cameraProvider.bindToLifecycle() inside a LaunchedEffect, rather than letting a CameraController wire everything up.
Common CameraX Settings
Setting the Capture Mode
Whether you use CameraController or CameraProvider, you can configure the capture mode.
CameraX supports the following capture modes:
- CAPTURE_MODE_MINIMIZE_LATENCY: minimizes the latency of image capture.
- CAPTURE_MODE_MAXIMIZE_QUALITY: maximizes the image quality of the capture.
- CAPTURE_MODE_ZERO_SHUTTER_LAG: zero-shutter-lag mode, available since 1.2. Compared with the default CAPTURE_MODE_MINIMIZE_LATENCY, it shortens latency significantly so you never miss the shot.

The capture mode defaults to CAPTURE_MODE_MINIMIZE_LATENCY.
Zero-shutter lag uses a ring buffer that stores the three most recently captured frames. When the user presses the capture button, CameraX calls takePicture(), and the ring buffer retrieves the frame whose timestamp is closest to the moment the button was pressed.
Before enabling zero-shutter lag, use CameraInfo.isZslSupported() to determine whether the device meets these requirements:

- Targets Android 6.0 or higher (API level 23+).
- Supports PRIVATE reprocessing.

If the device does not meet the minimum requirements, CameraX falls back to CAPTURE_MODE_MINIMIZE_LATENCY.
Zero-shutter lag is only available for the image capture use case. You cannot enable it for video capture or with camera extensions. Finally, because using the flash adds latency, zero-shutter lag does not work when the flash is on or in auto mode.
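The ring-buffer behavior described above can be modeled in a few lines of plain Kotlin. This is a conceptual sketch only, nothing here is CameraX API: hold the N most recent frames and, on capture, pick the one whose timestamp is closest to the shutter press:

```kotlin
// Conceptual model of a zero-shutter-lag ring buffer (not CameraX API):
// keep the N most recent frame timestamps; on capture, return the frame
// closest in time to the moment the shutter was pressed.
class FrameRingBuffer(private val capacity: Int = 3) {
    private val frames = ArrayDeque<Long>() // frame timestamps, oldest first

    fun onFrame(timestampMs: Long) {
        if (frames.size == capacity) frames.removeFirst() // evict the oldest frame
        frames.addLast(timestampMs)
    }

    fun capture(shutterPressMs: Long): Long? =
        frames.minByOrNull { kotlin.math.abs(it - shutterPressMs) }
}

fun main() {
    val buffer = FrameRingBuffer()
    listOf(100L, 133L, 166L, 200L, 233L).forEach(buffer::onFrame)
    // Shutter pressed at t=210 ms; the buffer holds [166, 200, 233], and 200 is closest.
    println(buffer.capture(210L)) // 200
}
```

Because the frame already exists when the shutter is pressed, the perceived capture latency is essentially zero.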
With CameraController:
cameraController.imageCaptureMode = CAPTURE_MODE_MINIMIZE_LATENCY
With CameraProvider:
val imageCapture: ImageCapture = remember {
ImageCapture.Builder().setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG).build()
}
Setting the Flash
The default flash mode is FLASH_MODE_OFF. The other modes are:

- FLASH_MODE_ON: the flash is always on.
- FLASH_MODE_AUTO: the flash fires automatically in low-light conditions.

With CameraController:
cameraController.imageCaptureFlashMode = ImageCapture.FLASH_MODE_AUTO
With CameraProvider:
ImageCapture.Builder()
.setFlashMode(ImageCapture.FLASH_MODE_AUTO)
.build()
Selecting a Camera
In CameraX, camera selection is done through a CameraSelector.
Selecting a camera with CameraController:

var cameraController = LifecycleCameraController(baseContext)
// val selector = CameraSelector.Builder()
//     .requireLensFacing(CameraSelector.LENS_FACING_BACK).build()
val selector = CameraSelector.DEFAULT_BACK_CAMERA // equivalent to the commented code above
cameraController.cameraSelector = selector

Selecting a camera with CameraProvider:

val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA
cameraProvider.unbindAll()
var camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCases)
Tap to Focus
When the camera preview is on screen, a common control is to set the focus where the user taps the preview. With CameraController you can track the state of tap-to-focus over the lifecycle of a PreviewView:
// CameraX: track the state of tap-to-focus over the Lifecycle of a PreviewView,
// with handlers you can define for focused, not focused, and failed states.
val tapToFocusStateObserver = Observer { state ->
when (state) {
CameraController.TAP_TO_FOCUS_NOT_STARTED ->
Log.d(TAG, "tap-to-focus init")
CameraController.TAP_TO_FOCUS_STARTED ->
Log.d(TAG, "tap-to-focus started")
CameraController.TAP_TO_FOCUS_FOCUSED ->
Log.d(TAG, "tap-to-focus finished (focus successful)")
CameraController.TAP_TO_FOCUS_NOT_FOCUSED ->
Log.d(TAG, "tap-to-focus finished (focused unsuccessful)")
CameraController.TAP_TO_FOCUS_FAILED ->
Log.d(TAG, "tap-to-focus failed")
}
}
cameraController.getTapToFocusState().observe(this, tapToFocusStateObserver)
With CameraController, tap-to-focus itself is handled automatically whenever a PreviewView is attached.
With CameraProvider, implementing tap-to-focus takes the following steps:

1. Set up a gesture detector to handle tap events.
2. For a tap event, create a MeteringPoint with MeteringPointFactory.createPoint().
3. From the MeteringPoint, create a FocusMeteringAction.
4. On the Camera's CameraControl object (returned by bindToLifecycle()), call startFocusAndMetering(), passing in the FocusMeteringAction.
5. Optionally, respond to the FocusMeteringResult.
6. Set the gesture detector to respond to touch events in PreviewView.setOnTouchListener().
// CameraX: implement tap-to-focus with CameraProvider.
// Define a gesture detector to respond to tap events and call
// startFocusAndMetering on CameraControl. If you want to use a
// coroutine with await() to check the result of focusing, see the
// "Android development concepts" section above.
val gestureDetector = GestureDetectorCompat(context,
object : SimpleOnGestureListener() {
override fun onSingleTapUp(e: MotionEvent): Boolean {
val previewView = previewView ?: return false
val camera = camera ?: return false
val meteringPointFactory = previewView.meteringPointFactory
val focusPoint = meteringPointFactory.createPoint(e.x, e.y)
val meteringAction = FocusMeteringAction
.Builder(focusPoint).build()
lifecycleScope.launch {
val focusResult = camera.cameraControl
.startFocusAndMetering(meteringAction).await()
if (!focusResult.isFocusSuccessful) {
Log.d(TAG, "tap-to-focus failed")
}
}
return true
}
}
)
...
// Set the gestureDetector in a touch listener on the PreviewView.
previewView.setOnTouchListener { _, event ->
// See the pinch-to-zoom scenario for the scaleGestureDetector definition.
var didConsume = scaleGestureDetector.onTouchEvent(event)
if (!scaleGestureDetector.isInProgress) {
didConsume = gestureDetector.onTouchEvent(event)
}
didConsume
}
Pinch to Zoom
Zooming the preview is another common direct manipulation of the camera preview. With more and more cameras on devices, users also expect the camera with the best focal length to be selected automatically as they zoom.
Similar to tap-to-focus, CameraController handles pinch-to-zoom automatically when a PreviewView is attached; you can observe the zoom state over its lifecycle:
// CameraX: track the state of pinch-to-zoom over the Lifecycle of
// a PreviewView, logging the linear zoom ratio.
val pinchToZoomStateObserver = Observer { state ->
val zoomRatio = state.getZoomRatio()
Log.d(TAG, "ptz-zoom-ratio $zoomRatio")
}
cameraController.getZoomState().observe(this, pinchToZoomStateObserver)
With CameraController, the pinch gesture itself requires no extra code.
With CameraProvider, implementing pinch-to-zoom takes the following steps:

1. Set up a scale gesture detector to handle pinch events.
2. Get the ZoomState from the Camera.CameraInfo object; the Camera instance is returned when you call bindToLifecycle().
3. If the ZoomState has a zoomRatio value, save it as the current zoom ratio; otherwise use the camera's default zoom ratio (1.0).
4. Multiply the current zoom ratio by the scaleFactor to get the new zoom ratio, and pass it to CameraControl.setZoomRatio().
5. Set the gesture detector to respond to touch events in PreviewView.setOnTouchListener().
// CameraX: implement pinch-to-zoom with CameraProvider.
// Define a scale gesture detector to respond to pinch events and call
// setZoomRatio on CameraControl.
val scaleGestureDetector = ScaleGestureDetector(context,
object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
override fun onScale(detector: ScaleGestureDetector): Boolean {
val camera = camera ?: return false
val zoomState = camera.cameraInfo.zoomState
val currentZoomRatio: Float = zoomState.value?.zoomRatio ?: 1f
camera.cameraControl.setZoomRatio(
detector.scaleFactor * currentZoomRatio
)
return true
}
}
)
...
// Set the scaleGestureDetector in a touch listener on the PreviewView.
previewView.setOnTouchListener { _, event ->
var didConsume = scaleGestureDetector.onTouchEvent(event)
if (!scaleGestureDetector.isInProgress) {
// See the tap-to-focus scenario for the gestureDetector definition.
didConsume = gestureDetector.onTouchEvent(event)
}
didConsume
}
Recording Video with CameraX
A capture system typically records video and audio streams, compresses them, muxes the two streams, and then writes the resulting stream to disk.

VideoCapture API Overview
In CameraX, the solution for video capture is the VideoCapture use case:

CameraX video capture includes a few high-level architectural components:

- A SurfaceProvider for the video source.
- An AudioSource for the audio source.
- Two encoders to encode and compress video and audio.
- A media muxer to mux the two streams.
- A file saver to write out the result.

Note: VideoCapture is implemented in CameraX's camera-video library, available in 1.1.0-alpha10 and later. The CameraX VideoCapture API is not final and may change over time.

- VideoCapture is the top-level use-case class. It is bound to a LifecycleOwner via a CameraSelector, together with other CameraX use cases.
- Recorder is a VideoOutput implementation tightly coupled with VideoCapture; it performs the actual video and audio capture. An app creates recordings through a Recorder.
- PendingRecording configures a recording, offering options such as enabling audio and setting an event listener. You must use a Recorder to create a PendingRecording. A PendingRecording does not record anything by itself.
- Recording performs the actual recording. You must use a PendingRecording to create a Recording.
The diagram below shows the relationships between these objects:

Legend:

1. Create a Recorder with a QualitySelector.
2. Configure the Recorder with one of the OutputOptions.
3. Enable audio with withAudioEnabled() if needed.
4. Call start() with a VideoRecordEvent listener to begin recording.
5. Use pause()/resume()/stop() on the Recording to control the recording.
6. Respond to VideoRecordEvents inside the event listener.

The detailed API list lives in current.txt inside the source tree.
Recording Video with CameraProvider
With CameraProvider, recording a video roughly involves creating a Recorder with a QualitySelector, binding a VideoCapture use case, and then driving a Recording object.
Creating a QualitySelector
An app uses a QualitySelector to configure the video resolution.
CameraX's Recorder supports the following predefined Qualities for video resolution:
- Quality.UHD, for 4K ultra HD video (2160p)
- Quality.FHD, for full HD video (1080p)
- Quality.HD, for HD video (720p)
- Quality.SD, for SD video (480p)
Note that CameraX can also choose other resolutions when authorized by the app. The exact video size for each option depends on the capabilities of the camera and the encoder. For details, see the documentation for CamcorderProfile.
You can create a QualitySelector in one of the following ways:

- Use fromOrderedList() to provide several preferred resolutions, along with a fallback strategy in case none of them is supported. CameraX can determine the best fallback match based on the selected camera's capabilities; for details, see QualitySelector's FallbackStrategy specification. For example, the following code requests the highest supported recording resolution and, if none of the requested resolutions is supported, authorizes CameraX to pick the one closest to Quality.SD:
val qualitySelector = QualitySelector.fromOrderedList(
listOf(Quality.UHD, Quality.FHD, Quality.HD, Quality.SD),
FallbackStrategy.lowerQualityOrHigherThan(Quality.SD))
- 首先查询相机支持的分辨率,然后使用 QualitySelector::from() 从受支持的分辨率中进行选择:
val cameraInfo = cameraProvider.availableCameraInfos.filter {
Camera2CameraInfo
.from(it)
.getCameraCharacteristic(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_BACK
}
val supportedQualities = QualitySelector.getSupportedQualities(cameraInfo[0])
val filteredQualities = arrayListOf(Quality.UHD, Quality.FHD, Quality.HD, Quality.SD)
.filter { supportedQualities.contains(it) }
// Use a simple ListView with the id of simple_quality_list_view
viewBinding.simpleQualityListView.apply {
adapter = ArrayAdapter(context,
android.R.layout.simple_list_item_1,
filteredQualities.map { it.qualityToString() })
// Set up the user interaction to manually show or hide the system UI.
setOnItemClickListener { _, _, position, _ ->
// Inside View.OnClickListener,
// convert Quality.* constant to QualitySelector
val qualitySelector = QualitySelector.from(filteredQualities[position])
// Create a new Recorder/VideoCapture for the new quality
// and bind to lifecycle
val recorder = Recorder.Builder()
.setQualitySelector(qualitySelector).build()
// ...
}
}
// A helper function to translate Quality to a string
fun Quality.qualityToString() : String {
return when (this) {
Quality.UHD -> "UHD"
Quality.FHD -> "FHD"
Quality.HD -> "HD"
Quality.SD -> "SD"
else -> throw IllegalArgumentException()
}
}
请注意,QualitySelector.getSupportedQualities() 返回的功能保证适用于 VideoCapture 用例,或 VideoCapture 与 Preview 的用例组合;与 ImageCapture 或 ImageAnalysis 用例一起绑定时,如果所需相机不支持请求的组合,CameraX 绑定可能会失败。
创建并绑定 VideoCapture 对象
具有 QualitySelector 对象后,就可以创建 VideoCapture 对象并执行绑定了:
val recorder = Recorder.Builder()
.setExecutor(cameraExecutor)
.setQualitySelector(QualitySelector.from(Quality.FHD))
.build()
val videoCapture = VideoCapture.withOutput(recorder)
try {
// Bind use cases to camera
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, videoCapture)
} catch(exc: Exception) {
Log.e(TAG, "Use case binding failed", exc)
}
注意:目前无法配置最终的视频编解码器和容器格式。
配置并生成 Recording 对象
接下来就可以配置 OutputOptions 并生成 Recording 对象来开始录制了:
val name = SimpleDateFormat(FILENAME_FORMAT, Locale.US)
.format(System.currentTimeMillis())
val contentValues = ContentValues().apply {
put(MediaStore.MediaColumns.DISPLAY_NAME, name)
put(MediaStore.MediaColumns.MIME_TYPE, "video/mp4")
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/CameraX-Video")
}
}
// Create MediaStoreOutputOptions for our recorder
val mediaStoreOutputOptions = MediaStoreOutputOptions
.Builder(contentResolver, MediaStore.Video.Media.EXTERNAL_CONTENT_URI)
.setContentValues(contentValues)
.build()
// 2. Configure Recorder and Start recording to the mediaStoreOutput.
val recording = videoCapture.output
.prepareRecording(context, mediaStoreOutputOptions)
.withAudioEnabled() // 启用音频
.start(ContextCompat.getMainExecutor(this), captureListener) // 启动并注册录制事件监听
Recorder 支持以下几种 OutputOptions:
- FileDescriptorOutputOptions:用于捕获到 FileDescriptor 中。
- FileOutputOptions:用于捕获到 File 中。
- MediaStoreOutputOptions:用于捕获到 MediaStore 中。
无论使用哪种 OutputOptions 类型,您都可以通过 setFileSizeLimit() 设置文件大小上限。
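例如,下面的片段演示了如何创建一个写入应用私有目录的 FileOutputOptions 并限制文件大小(文件路径与大小取值仅为示意):

```kotlin
// 输出到应用私有目录下的 demo.mp4(路径仅作演示)
val file = File(context.filesDir, "demo.mp4")
val fileOutputOptions = FileOutputOptions.Builder(file)
    // 限制文件大小约为 100 MB,达到上限后录制会自动完成(Finalize)
    .setFileSizeLimit(100L * 1024 * 1024)
    .build()
```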
暂停、恢复和停止
当您调用 start() 后,PendingRecording 会返回一个 Recording 对象,应用可以使用该对象来完成捕获或执行其他操作,例如暂停或恢复:
- pause():用于暂停当前的活跃录制。
- resume():用于恢复已暂停的活跃录制。
- stop():用于完成录制并清空所有关联的录制对象。
请注意,无论录制处于暂停状态还是活跃状态,您都可以调用 stop() 来终止 Recording:
if (recording != null) {
// Stop the current recording session.
recording.stop()
recording = null
return
}
..
recording = ..
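暂停与恢复的用法和 stop() 类似,下面是一个示意片段(recording 假定为当前活跃的 Recording 对象):

```kotlin
// 暂停当前活跃录制,事件监听器会收到 VideoRecordEvent.Pause
recording?.pause()
// 恢复已暂停的录制,事件监听器会收到 VideoRecordEvent.Resume
recording?.resume()
// 无论处于暂停还是活跃状态,都可以调用 stop() 结束录制
recording?.stop()
```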
事件监听
如果您已使用 PendingRecording.start() 注册了事件监听器,Recording 就会通过 VideoRecordEvent 传递事件通知。
一旦在相应相机设备上开始录制,CameraX 就会发送 VideoRecordEvent.Start 事件,随后还会持续发送:
- VideoRecordEvent.Status:用于录制统计信息,例如当前文件的大小和录制的时间跨度。
- VideoRecordEvent.Finalize:用于录制结果,会包含最终文件的 URI 以及任何相关错误等信息。
在您的应用收到表示录制会话成功的 Finalize 事件后,便可以从 OutputOptions 中指定的位置访问录制的视频了。示例:
recording = videoCapture.output
.prepareRecording(context, mediaStoreOutputOptions)
.withAudioEnabled()
.start(ContextCompat.getMainExecutor(context)) { recordEvent ->
when(recordEvent) {
is VideoRecordEvent.Start -> {
}
is VideoRecordEvent.Status -> {
}
is VideoRecordEvent.Pause -> {
}
is VideoRecordEvent.Resume -> {
}
is VideoRecordEvent.Finalize -> {
if (!recordEvent.hasError()) {
val msg = "Video capture succeeded: ${recordEvent.outputResults.outputUri}"
context.showToast(msg)
Log.d(TAG, msg)
} else {
recording?.close()
recording = null
Log.e(TAG, "video capture ends with error", recordEvent.cause)
}
}
}
}
完整示例代码
以下是在 Compose 中使用 CameraProvider 拍摄视频的完整示例代码:
// CameraProvider 拍摄视频示例
private const val TAG = "CameraXVideo"
private const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"
@Composable
fun CameraVideoExample(navController: NavHostController) {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val previewView = remember { PreviewView(context) }
// Create Preview UseCase.
val preview = remember {
Preview.Builder().build().apply { setSurfaceProvider(previewView.surfaceProvider) }
}
var cameraSelector by remember { mutableStateOf(CameraSelector.DEFAULT_BACK_CAMERA) }
// Create VideoCapture UseCase.
val videoCapture = remember(cameraSelector) {
val qualitySelector = QualitySelector.from(Quality.FHD)
val recorder = Recorder.Builder()
.setExecutor(ContextCompat.getMainExecutor(context))
.setQualitySelector(qualitySelector)
.build()
VideoCapture.withOutput(recorder)
}
// Bind UseCases
LaunchedEffect(cameraSelector) {
try {
val cameraProvider = context.getCameraProvider()
cameraProvider.unbindAll()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, videoCapture)
} catch(exc: Exception) {
Log.e(TAG, "Use case binding failed", exc)
}
}
var recording: Recording? = null
var isRecording by remember { mutableStateOf(false) }
var time by remember { mutableStateOf(0L) }
Scaffold(
modifier = Modifier.fillMaxSize(),
floatingActionButton = {
FloatingActionButton(onClick = {
if (!isRecording) {
isRecording = true
recording?.stop()
time = 0L
recording = startRecording(context, videoCapture,
onFinished = { savedUri ->
if (savedUri != Uri.EMPTY) {
val msg = "Video capture succeeded: $savedUri"
context.showToast(msg)
Log.d(TAG, msg)
navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
navController.navigate("VideoPlayerScreen")
}
},
onProgress = { time = it },
onError = {
isRecording = false
recording?.close()
recording = null
time = 0L
Log.e(TAG, "video capture ends with error", it)
}
)
} else {
isRecording = false
recording?.stop()
recording = null
time = 0L
}
}) {
val iconId = if (!isRecording) R.drawable.ic_start_record_36
else R.drawable.ic_stop_record_36
Icon(
imageVector = ImageVector.vectorResource(id = iconId),
tint = Color.Red,
contentDescription = "Capture Video"
)
}
},
floatingActionButtonPosition = FabPosition.Center,
) { innerPadding: PaddingValues ->
Box(modifier = Modifier
.padding(innerPadding)
.fillMaxSize()) {
AndroidView(
modifier = Modifier.fillMaxSize(),
factory = { previewView },
)
if (time > 0 && isRecording) {
Text(text = "${SimpleDateFormat("mm:ss", Locale.CHINA).format(time)} s",
modifier = Modifier.align(Alignment.TopCenter),
color = Color.Red,
fontSize = 16.sp
)
}
if (!isRecording) {
IconButton(
onClick = {
cameraSelector = when(cameraSelector) {
CameraSelector.DEFAULT_BACK_CAMERA -> CameraSelector.DEFAULT_FRONT_CAMERA
else -> CameraSelector.DEFAULT_BACK_CAMERA
}
},
modifier = Modifier
.align(Alignment.TopEnd)
.padding(bottom = 32.dp)
) {
Icon(
painter = painterResource(R.drawable.ic_switch_camera),
contentDescription = "",
tint = Color.Green,
modifier = Modifier.size(36.dp)
)
}
}
}
}
}
@SuppressLint("MissingPermission")
private fun startRecording(
context: Context,
videoCapture: VideoCapture<Recorder>,
onFinished: (Uri) -> Unit,
onProgress: (Long) -> Unit,
onError: (Throwable?) -> Unit
): Recording{
// Create and start a new recording session.
val name = SimpleDateFormat(FILENAME_FORMAT, Locale.CHINA)
.format(System.currentTimeMillis())
val contentValues = ContentValues().apply {
put(MediaStore.MediaColumns.DISPLAY_NAME, name)
put(MediaStore.MediaColumns.MIME_TYPE, "video/mp4")
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/CameraX-Video")
}
}
val mediaStoreOutputOptions = MediaStoreOutputOptions
.Builder(context.contentResolver, MediaStore.Video.Media.EXTERNAL_CONTENT_URI)
.setContentValues(contentValues)
.build()
return videoCapture.output
.prepareRecording(context, mediaStoreOutputOptions)
.withAudioEnabled() // 启用音频
.start(ContextCompat.getMainExecutor(context)) { recordEvent ->
when(recordEvent) {
is VideoRecordEvent.Start -> {}
is VideoRecordEvent.Status -> {
val duration = recordEvent.recordingStats.recordedDurationNanos / 1000 / 1000
onProgress(duration)
}
is VideoRecordEvent.Pause -> {}
is VideoRecordEvent.Resume -> {}
is VideoRecordEvent.Finalize -> {
if (!recordEvent.hasError()) {
val savedUri = recordEvent.outputResults.outputUri
onFinished(savedUri)
} else {
onError(recordEvent.cause)
}
}
}
}
}
private suspend fun Context.getCameraProvider(): ProcessCameraProvider {
return ProcessCameraProvider.getInstance(this).await()
}
路由和权限配置:
@Composable
fun CameraVideoCaptureNavHost() {
val navController = rememberNavController()
NavHost(navController, startDestination = "CameraVideoScreen") {
composable("CameraVideoScreen") {
CameraVideoScreen(navController = navController)
}
composable("VideoPlayerScreen") {
VideoPlayerScreen(navController = navController)
}
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun CameraVideoScreen(navController: NavHostController) {
val multiplePermissionsState = rememberMultiplePermissionsState(
listOf(
Manifest.permission.CAMERA,
Manifest.permission.RECORD_AUDIO,
)
)
LaunchedEffect(Unit) {
if (!multiplePermissionsState.allPermissionsGranted) {
multiplePermissionsState.launchMultiplePermissionRequest()
}
}
if (multiplePermissionsState.allPermissionsGranted) {
CameraVideoExample(navController)
} else {
NoCameraPermissionScreen(multiplePermissionsState)
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(permissionState: MultiplePermissionsState) {
Column(modifier = Modifier.padding(10.dp)) {
Text(
getTextToShowGivenPermissions(
permissionState.revokedPermissions, // 被拒绝/撤销的权限列表
permissionState.shouldShowRationale
),
fontSize = 16.sp
)
Spacer(Modifier.height(8.dp))
Button(onClick = { permissionState.launchMultiplePermissionRequest() }) {
Text("请求权限")
}
}
}
@OptIn(ExperimentalPermissionsApi::class)
private fun getTextToShowGivenPermissions(
permissions: List<PermissionState>,
shouldShowRationale: Boolean
): String {
val size = permissions.size
if (size == 0) return ""
val textToShow = StringBuilder().apply { append("以下权限:
") }
for (i in permissions.indices) {
textToShow.append(permissions[i].permission).apply {
if (i == size - 1) append("
") else append(", ")
}
}
textToShow.append(
if (shouldShowRationale) {
" 需要被授权,以保证应用功能正常使用."
} else {
" 未获得授权. 应用功能将不能正常使用."
}
)
return textToShow.toString()
}
为了查看录制的视频,我们在另一个路由屏幕中使用 Google 的 ExoPlayer 库来播放视频,添加依赖:
implementation "com.google.android.exoplayer:exoplayer:2.18.7"
// 展示拍摄视频
@Composable
fun VideoPlayerScreen(navController: NavHostController) {
val savedUri = navController.previousBackStackEntry?.savedStateHandle?.get<Uri>("savedUri")
val context = LocalContext.current
val exoPlayer = savedUri?.let {
remember(context) {
ExoPlayer.Builder(context).build().apply {
setMediaItem(MediaItem.fromUri(savedUri))
prepare()
}
}
}
Box(
modifier = Modifier.fillMaxSize()
) {
AndroidView(
factory = { context ->
StyledPlayerView(context).apply {
player = exoPlayer
setShowFastForwardButton(false)
setShowNextButton(false)
setShowPreviousButton(false)
setShowRewindButton(false)
controllerHideOnTouch = true
controllerShowTimeoutMs = 200
}
},
modifier = Modifier.fillMaxSize()
)
}
// 离开该界面时释放播放器资源
DisposableEffect(Unit) {
onDispose {
exoPlayer?.release()
}
}
}
使用 CameraController 拍摄视频
借助 CameraX 的 CameraController,可以更轻松地实现视频拍摄,它将多个用例的创建与配置整合到了一个类中。
如果使用 CameraController 拍摄视频,需要先启用 VIDEO_CAPTURE 用例:
// CameraX: Enable VideoCapture UseCase on CameraController.
cameraController.setEnabledUseCases(VIDEO_CAPTURE)
如果您想开始录制视频,可以调用 CameraController.startRecording() 方法;停止录制则调用返回的 Recording 对象的 stop() 方法。下面是一个启动/停止录制的示例:
@SuppressLint("MissingPermission")
@androidx.annotation.OptIn(ExperimentalVideo::class)
private fun startStopVideo(context: Context, cameraController: LifecycleCameraController): Recording {
// Define the File options for saving the video.
val name = SimpleDateFormat(FILENAME_FORMAT, Locale.CHINA)
.format(System.currentTimeMillis())+".mp4"
val outputFileOptions = FileOutputOptions
.Builder(File(context.filesDir, name))
.build()
// Call startRecording on the CameraController.
return cameraController.startRecording(
outputFileOptions,
AudioConfig.create(true), // 开启音频
ContextCompat.getMainExecutor(context),
) { videoRecordEvent ->
when(videoRecordEvent) {
is VideoRecordEvent.Start -> {}
is VideoRecordEvent.Status -> {}
is VideoRecordEvent.Pause -> {}
is VideoRecordEvent.Resume -> {}
is VideoRecordEvent.Finalize -> {
if (!videoRecordEvent.hasError()) {
val savedUri = videoRecordEvent.outputResults.outputUri
val msg = "Video capture succeeded: $savedUri"
context.showToast(msg)
Log.d(TAG, msg)
} else {
Log.e(TAG, "video capture ends with error", videoRecordEvent.cause)
}
}
}
}
}
可以看到,使用 CameraController 拍摄视频比使用 CameraProvider 简单不少,省去了手动创建和绑定 VideoCapture 用例的过程。
完整示例代码
以下是在 Compose 中使用 CameraController 拍摄视频的完整示例代码:
// CameraController 拍摄视频示例
private const val TAG = "CameraXVideo"
private const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"
@androidx.annotation.OptIn(ExperimentalVideo::class)
@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun CameraVideoExample2(navController: NavHostController) {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val cameraController = remember { LifecycleCameraController(context) }
var recording: Recording? = null
var time by remember { mutableStateOf(0L) }
Scaffold(
modifier = Modifier.fillMaxSize(),
floatingActionButton = {
FloatingActionButton(onClick = {
if (!cameraController.isRecording) {
recording?.stop()
time = 0L
recording = startRecording(context, cameraController,
onFinished = { savedUri ->
if (savedUri != Uri.EMPTY) {
val msg = "Video capture succeeded: $savedUri"
context.showToast(msg)
Log.d(TAG, msg)
navController.currentBackStackEntry?.savedStateHandle?.set("savedUri", savedUri)
navController.navigate("VideoPlayerScreen")
}
},
onProgress = { time = it },
onError = {
recording?.close()
recording = null
time = 0L
Log.e(TAG, "video capture ends with error", it)
}
)
} else {
recording?.stop()
recording = null
time = 0L
}
}) {
val iconId = if (!cameraController.isRecording) R.drawable.ic_start_record_36
else R.drawable.ic_stop_record_36
Icon(
imageVector = ImageVector.vectorResource(id = iconId),
tint = Color.Red,
contentDescription = "Capture Video"
)
}
},
floatingActionButtonPosition = FabPosition.Center,
) { innerPadding: PaddingValues ->
Box(modifier = Modifier
.padding(innerPadding)
.fillMaxSize()) {
AndroidView(
modifier = Modifier.fillMaxSize(),
factory = { context ->
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT)
scaleType = PreviewView.ScaleType.FILL_CENTER
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}.also { previewView ->
cameraController.cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
previewView.controller = cameraController
cameraController.bindToLifecycle(lifecycleOwner)
// cameraController.cameraInfo?.let {
// val supportedQualities = QualitySelector.getSupportedQualities(it)
// }
cameraController.setEnabledUseCases(VIDEO_CAPTURE) // 启用 VIDEO_CAPTURE UseCase
cameraController.videoCaptureTargetQuality = Quality.FHD
}
},
onReset = {},
onRelease = {
cameraController.unbind()
}
)
if (time > 0 && cameraController.isRecording) {
Text(text = "${SimpleDateFormat("mm:ss", Locale.CHINA).format(time)} s",
modifier = Modifier.align(Alignment.TopCenter),
color = Color.Red,
fontSize = 16.sp
)
}
if (!cameraController.isRecording) {
IconButton(
onClick = {
cameraController.cameraSelector = when(cameraController.cameraSelector) {
CameraSelector.DEFAULT_BACK_CAMERA -> CameraSelector.DEFAULT_FRONT_CAMERA
else -> CameraSelector.DEFAULT_BACK_CAMERA
}
},
modifier = Modifier
.align(Alignment.TopEnd)
.padding(bottom = 32.dp)
) {
Icon(
painter = painterResource(R.drawable.ic_switch_camera),
contentDescription = "",
tint = Color.Green,
modifier = Modifier.size(36.dp)
)
}
}
}
}
}
@SuppressLint("MissingPermission")
@androidx.annotation.OptIn(ExperimentalVideo::class)
private fun startRecording(
context: Context,
cameraController: LifecycleCameraController,
onFinished: (Uri) -> Unit,
onProgress: (Long) -> Unit,
onError: (Throwable?) -> Unit,
): Recording {
// Define the File options for saving the video.
val name = SimpleDateFormat(FILENAME_FORMAT, Locale.CHINA)
.format(System.currentTimeMillis())+".mp4"
val outputFileOptions = FileOutputOptions
.Builder(File(context.getOutputDirectory(), name))
.build()
// Call startRecording on the CameraController.
return cameraController.startRecording(
outputFileOptions,
AudioConfig.create(true), // 开启音频
ContextCompat.getMainExecutor(context),
) { videoRecordEvent ->
when(videoRecordEvent) {
is VideoRecordEvent.Start -> {}
is VideoRecordEvent.Status -> {
val duration = videoRecordEvent.recordingStats.recordedDurationNanos / 1000 / 1000
onProgress(duration)
}
is VideoRecordEvent.Pause -> {}
is VideoRecordEvent.Resume -> {}
is VideoRecordEvent.Finalize -> {
if (!videoRecordEvent.hasError()) {
val savedUri = videoRecordEvent.outputResults.outputUri
onFinished(savedUri)
context.notifySystem(savedUri, outputFileOptions.file)
} else {
onError(videoRecordEvent.cause)
}
}
}
}
}
private fun Context.getOutputDirectory(): File {
val mediaDir = externalMediaDirs.firstOrNull()?.let {
File(it, resources.getString(R.string.app_name)).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists()) mediaDir else filesDir
}
// 发送系统广播
private fun Context.notifySystem(savedUri: Uri?, file: File) {
// 对于运行API级别>=24的设备,将忽略隐式广播,因此,如果您只针对24+级API,则可以删除此语句
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.N) {
sendBroadcast(Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, savedUri)) //刷新单个文件
} else {
MediaScannerConnection.scanFile(this, arrayOf(file.absolutePath), null, null)
}
}
ImageAnalysis
图像分析用例为您的应用提供可供 CPU 访问的图像,您可以对这些图像执行图像处理、计算机视觉或机器学习推断。应用需要实现一个 ImageAnalysis.Analyzer,其 analyze() 方法会对每一帧运行。
如需在您的应用中使用图像分析,请按以下步骤操作:
- 构建 ImageAnalysis 用例。
- 创建 ImageAnalysis.Analyzer。
- 为 ImageAnalysis 配置分析器。
- 将 lifecycleOwner、cameraSelector 和 ImageAnalysis 用例绑定到生命周期(ProcessCameraProvider.bindToLifecycle())。
绑定后,CameraX 会立即将图像发送到已注册的分析器。完成分析后,调用 ImageAnalysis.clearAnalyzer() 或解绑 ImageAnalysis 用例即可停止分析。
构建 ImageAnalysis 用例
图像输出参数:
- 格式:CameraX 可通过 setOutputImageFormat(int) 支持 YUV_420_888 和 RGBA_8888,默认格式为 YUV_420_888。
- Resolution 和 AspectRatio:您可以设置其中一个参数,但请注意,您不能同时设置这两个值。
- 旋转角度。
- 目标名称:使用该参数进行调试。
图像流控制:
- 后台执行器Executor
- 图像队列深度
- 背压策略
以下是构建 ImageAnalysis 用例并设置分析器的示例代码:
private fun getImageAnalysis(): ImageAnalysis {
val imageAnalysis = ImageAnalysis.Builder()
.setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
.setTargetResolution(Size(1280, 720))
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build()
val executor = Executors.newSingleThreadExecutor()
imageAnalysis.setAnalyzer(executor, ImageAnalysis.Analyzer { imageProxy ->
val rotationDegrees = imageProxy.imageInfo.rotationDegrees
Log.e(TAG, "ImageAnalysis.Analyzer: imageProxy.format = ${imageProxy.format}")
// insert your code here.
if (imageProxy.format == ImageFormat.YUV_420_888 || imageProxy.format == PixelFormat.RGBA_8888) {
val bitmap = imageProxy.toBitmap()
}
// ...
// after done, release the ImageProxy object
imageProxy.close()
})
return imageAnalysis
}
val imageAnalysis = getImageAnalysis()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageCapture, imageAnalysis)
注意:
应用可以设置分辨率或宽高比,但不能同时设置这两个值。确切的输出分辨率取决于应用请求的大小(或宽高比)和硬件功能,并可能与请求的大小或宽高比不同。如需了解分辨率匹配算法,请参阅有关 setTargetResolution() 的文档
应用可以将输出图像像素配置为采用 YUV(默认)或 RGBA 颜色空间。设置 RGBA 输出格式时,CameraX 会在内部将图像从 YUV 颜色空间转换为 RGBA 颜色空间,并将图像位打包到 ImageProxy 第一个平面(其他两个平面未使用)的 ByteBuffer 中,序列如下:
ImageProxy.getPlanes()[0].buffer[0]: alpha
ImageProxy.getPlanes()[0].buffer[1]: red
ImageProxy.getPlanes()[0].buffer[2]: green
ImageProxy.getPlanes()[0].buffer[3]: blue
...
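在 RGBA_8888 输出格式下,可以直接用第一个平面的 ByteBuffer 构造 Bitmap。下面是一个示意写法(未处理 rowStride 与像素宽度不一致的填充情况,仅适用于行对齐的图像;较新版本的 CameraX 也直接提供了 ImageProxy.toBitmap(),这里只演示手动方式):

```kotlin
// 将 RGBA_8888 格式的 ImageProxy 转为 Bitmap(示意实现)
fun ImageProxy.rgbaToBitmap(): Bitmap {
    val buffer = planes[0].buffer // RGBA 数据全部位于第一个平面
    buffer.rewind()
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    // 将 RGBA 字节按行拷贝进 Bitmap 的像素缓冲区
    bitmap.copyPixelsFromBuffer(buffer)
    return bitmap
}
```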
操作模式
当应用的分析流水线无法满足 CameraX 的帧速率要求时,您可以将 CameraX 配置为通过以下其中一种方式丢帧:
- 非阻塞(默认):在该模式下,执行器始终会将最新的图像缓存到图像缓冲区(与深度为 1 的队列相似),与此同时,应用会分析上一个图像。如果 CameraX 在应用完成处理之前收到新图像,则新图像会保存到同一缓冲区,并覆盖上一个图像。请注意,在这种情况下,ImageAnalysis.Builder.setImageQueueDepth() 不起任何作用,缓冲区内容始终会被覆盖。您可以通过使用 STRATEGY_KEEP_ONLY_LATEST 调用 setBackpressureStrategy() 来启用该非阻塞模式。如需详细了解执行器的相关影响,请参阅 STRATEGY_KEEP_ONLY_LATEST 的参考文档。
- 阻塞:在该模式下,内部执行器可以向内部图像队列添加多个图像,并仅在队列已满时才开始丢帧。阻塞作用于整个相机设备:如果相机设备具有多个绑定用例,那么在 CameraX 处理这些图像时,所有这些用例都会被阻塞。例如,如果预览和图像分析都已绑定到某个相机设备,那么在 CameraX 处理图像时,相应预览也会被阻塞。您可以通过将 STRATEGY_BLOCK_PRODUCER 传递到 setBackpressureStrategy() 来启用阻塞模式,还可以通过 ImageAnalysis.Builder.setImageQueueDepth() 配置图像队列深度。
如果分析器延迟低且性能高,在这种情况下用于分析图像的总时间低于 CameraX 帧的时长(例如,60fps 用时 16 毫秒),那么上述两种操作模式均可提供顺畅的总体体验。在某些情况下,阻塞模式仍非常有用,例如在处理非常短暂的系统抖动时。
如果分析器延迟高且性能高,则需要结合使用阻塞模式和较长的队列来抵补延迟。但请注意,在这种情况下,应用仍可以处理所有帧。
如果分析器延迟高且耗时长(分析器无法处理所有帧),非阻塞模式可能更为适用,因为在这种情况下,系统必须针对分析路径进行丢帧,但要让其他同时绑定的用例仍能看到所有帧。
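作为对照,前面的示例使用的是非阻塞模式,下面是启用阻塞模式并配置队列深度的一个示意配置(队列深度取值仅作演示):

```kotlin
val imageAnalysis = ImageAnalysis.Builder()
    // 阻塞模式:队列满后相机管道会被阻塞而不是丢帧覆盖
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_BLOCK_PRODUCER)
    // 内部图像队列最多缓存 6 帧,用于抵补短暂的分析延迟抖动
    .setImageQueueDepth(6)
    .build()
```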
ML Kit Analyzer(机器学习套件分析器)
Google 的 机器学习套件可提供设备端机器学习 Vision API,用于检测人脸、扫描条形码、为图片加标签等。借助机器学习套件分析器,您可以更轻松地将机器学习套件与 CameraX 应用集成。
机器学习套件分析器(MlKitAnalyzer)是 ImageAnalysis.Analyzer 接口的一个实现。
实现机器学习套件分析器
如需实现机器学习套件分析器,建议使用 CameraController 类,它可与 PreviewView 搭配使用来显示界面元素。
若要将机器学习套件分析器与 CameraController 搭配使用,需要在创建 MlKitAnalyzer 时传入以下内容:
- 机器学习套件 Detector 的列表,CameraX 将按顺序依次调用。
- 用于确定机器学习套件输出坐标的目标坐标系:
  - COORDINATE_SYSTEM_VIEW_REFERENCED:转换后的 PreviewView 坐标。
  - COORDINATE_SYSTEM_ORIGINAL:原始的 ImageAnalysis 流坐标。
- 用于调用 Consumer 回调并将 MlKitAnalyzer.Result 传递给应用的 Executor。
- CameraX 在有新的机器学习套件输出内容时调用的 Consumer。
使用机器学习套件分析器需要添加依赖:
def camerax_version = "1.3.0-alpha04"
implementation "androidx.camera:camera-mlkit-vision:${camerax_version}"
二维码/条形码识别
添加 ML Kit 条形码依赖库:
implementation 'com.google.mlkit:barcode-scanning:17.1.0'
下面是使用示例:
private const val TAG = "MLKitAnalyzer"
@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun MLKitAnalyzerCameraExample(navController: NavHostController) {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val cameraController = remember { LifecycleCameraController(context) }
AndroidView(
modifier = Modifier.fillMaxSize(),
factory = { context ->
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT)
scaleType = PreviewView.ScaleType.FILL_CENTER
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}.also { previewView ->
previewView.controller = cameraController
cameraController.bindToLifecycle(lifecycleOwner)
cameraController.imageAnalysisBackpressureStrategy = ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST
cameraController.setBarcodeAnalyzer(context) { result ->
navController.currentBackStackEntry?.savedStateHandle?.set("result", result)
navController.navigate("ResultScreen")
}
}
},
onReset = {},
onRelease = {
cameraController.unbind()
}
)
}
private fun LifecycleCameraController.setBarcodeAnalyzer(
context: Context,
onFound: (String?) -> Unit
) {
// create BarcodeScanner object
val options = BarcodeScannerOptions.Builder()
.setBarcodeFormats(Barcode.FORMAT_QR_CODE,
Barcode.FORMAT_AZTEC, Barcode.FORMAT_DATA_MATRIX, Barcode.FORMAT_PDF417,
Barcode.FORMAT_CODABAR, Barcode.FORMAT_CODE_39, Barcode.FORMAT_CODE_93,
Barcode.FORMAT_EAN_8, Barcode.FORMAT_EAN_13, Barcode.FORMAT_ITF,
Barcode.FORMAT_UPC_A, Barcode.FORMAT_UPC_E
)
.build()
val barcodeScanner = BarcodeScanning.getClient(options)
setImageAnalysisAnalyzer(
ContextCompat.getMainExecutor(context),
MlKitAnalyzer(
listOf(barcodeScanner),
COORDINATE_SYSTEM_VIEW_REFERENCED,
ContextCompat.getMainExecutor(context)
) { result: MlKitAnalyzer.Result? ->
val value = result?.getValue(barcodeScanner)
value?.let { list ->
if (list.size > 0) {
list.forEach { barCode ->
Log.e(TAG, "format:${barCode.format}, displayValue:${barCode.displayValue}")
context.showToast("识别到:${barCode.displayValue}")
}
val res = list[0].displayValue
if (!res.isNullOrEmpty()) onFound(res)
}
}
}
)
}
在上面的代码示例中,机器学习套件分析器会将以下内容传递给 BarcodeScanner:
- 基于代表目标坐标系的 COORDINATE_SYSTEM_VIEW_REFERENCED 的转换 Matrix。
- 相机帧。
如果使用 COORDINATE_SYSTEM_VIEW_REFERENCED,MlKitAnalyzer 返回的检测结果坐标会自动转换到 PreviewView 的坐标系,可以直接用于界面绘制。
还可以通过 Barcode 对象获取边界框、角点、原始值以及按类型解析后的结构化内容,例如:
for (barcode in barcodes) {
val bounds = barcode.boundingBox
val corners = barcode.cornerPoints
val rawValue = barcode.rawValue
val valueType = barcode.valueType
// See API reference for complete list of supported types
when (valueType) {
Barcode.TYPE_WIFI -> {
val ssid = barcode.wifi!!.ssid
val password = barcode.wifi!!.password
val type = barcode.wifi!!.encryptionType
}
Barcode.TYPE_URL -> {
val title = barcode.url!!.title
val url = barcode.url!!.url
}
}
}
更多受支持的条形码类型及字段,请参阅 Barcode 的 API 参考文档。
路由和权限配置:
@Composable
fun ExampleMLKitAnalyzerNavHost() {
val navController = rememberNavController()
NavHost(navController, startDestination = "MLKitAnalyzerCameraScreen") {
composable("MLKitAnalyzerCameraScreen") {
MLKitAnalyzerCameraScreen(navController = navController)
}
composable("ResultScreen") {
ResultScreen(navController = navController)
}
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun MLKitAnalyzerCameraScreen(navController: NavHostController) {
val cameraPermissionState = rememberPermissionState(Manifest.permission.CAMERA)
LaunchedEffect(Unit) {
if (!cameraPermissionState.status.isGranted && !cameraPermissionState.status.shouldShowRationale) {
cameraPermissionState.launchPermissionRequest()
}
}
if (cameraPermissionState.status.isGranted) { // 相机权限已授权, 显示预览界面
MLKitAnalyzerCameraExample(navController)
} else { // 未授权,显示未授权页面
NoCameraPermissionScreen(cameraPermissionState = cameraPermissionState)
}
}
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun NoCameraPermissionScreen(cameraPermissionState: PermissionState) {
// In this screen you should notify the user that the permission
// is required and maybe offer a button to start another camera permission request
Column(horizontalAlignment = Alignment.CenterHorizontally) {
val textToShow = if (cameraPermissionState.status.shouldShowRationale) {
// 如果用户之前选择了拒绝该权限,应当向用户解释为什么应用程序需要这个权限
"未获取相机授权将导致该功能无法正常使用。"
} else {
// 首次请求授权
"该功能需要使用相机权限,请点击授权。"
}
Text(textToShow)
Spacer(Modifier.height(8.dp))
Button(onClick = { cameraPermissionState.launchPermissionRequest() }) { Text("请求权限") }
}
}
// 展示识别结果
@Composable
fun ResultScreen(navController: NavHostController) {
val result = navController.previousBackStackEntry?.savedStateHandle?.get<String>("result")
result?.let {
Box(modifier = Modifier.fillMaxSize()) {
Text("$it", fontSize = 18.sp, modifier = Modifier.align(Alignment.Center))
}
}
}
更多相关内容请参考:https://developers.google.cn/ml-kit/vision/barcode-scanning/android?hl=zh-cn
文字识别
添加依赖:
dependencies {
// To recognize Latin script
implementation 'com.google.mlkit:text-recognition:16.0.0'
// To recognize Chinese script
implementation 'com.google.mlkit:text-recognition-chinese:16.0.0'
// To recognize Devanagari script
implementation 'com.google.mlkit:text-recognition-devanagari:16.0.0'
// To recognize Japanese script
implementation 'com.google.mlkit:text-recognition-japanese:16.0.0'
// To recognize Korean script
implementation 'com.google.mlkit:text-recognition-korean:16.0.0'
}
示例代码:
private fun LifecycleCameraController.setTextAnalyzer(
context: Context,
onFound: (String) -> Unit
) {
var called = false
// val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS) // 拉丁文
val recognizer = TextRecognition.getClient(ChineseTextRecognizerOptions.Builder().build()) // 中文
setImageAnalysisAnalyzer(
ContextCompat.getMainExecutor(context),
MlKitAnalyzer(
listOf(recognizer),
COORDINATE_SYSTEM_VIEW_REFERENCED,
ContextCompat.getMainExecutor(context)
) { result: MlKitAnalyzer.Result? ->
val value = result?.getValue(recognizer)
value?.let { resultText ->
val sb = StringBuilder()
for (block in resultText.textBlocks) {
val blockText = block.text
sb.append(blockText).append("\n")
val blockCornerPoints = block.cornerPoints
val blockFrame = block.boundingBox
for (line in block.lines) {
val lineText = line.text
val lineCornerPoints = line.cornerPoints
val lineFrame = line.boundingBox
for (element in line.elements) {
val elementText = element.text
val elementCornerPoints = element.cornerPoints
val elementFrame = element.boundingBox
}
}
}
val res = sb.toString()
if (res.isNotEmpty() && !called) {
Log.e(TAG, "$res")
context.showToast("识别到:$res")
onFound(res)
called = true
}
}
}
)
}
文本识别器会将文本细分为块(Block)、行(Line)、元素(Element)和符号(Symbol)。大致说来:
- 块(Block)是一组连续的文本行,例如段落或列。
- 行(Line)是同一轴上的一组连续的字词。
- 元素(Element)是同一轴上一组连续的字母数字字符,在大多数拉丁语言中即一个"单词",在其他语言中则为单个字符。
- 符号(Symbol)是单个字母数字字符,在大多数拉丁语言中是同一轴上的单个字母,在其他语言中则为单个字符。
下图按从大到小的顺序突出显示了这些层级:第一个以青色突出显示的是文本块(Block),第二组以蓝色突出显示的是文本行(Line),第三组以深蓝色突出显示的是单词(Element)。

对于所有检测到的块、线条、元素和符号,该 API 会返回边界框、边角、旋转信息、置信度分数、可识别的语言和识别的文本。
更多相关内容请参考:https://developers.google.cn/ml-kit/vision/text-recognition/v2/android?hl=zh-cn
人脸识别
添加依赖:
dependencies {
// Use this dependency to bundle the model with your app
implementation 'com.google.mlkit:face-detection:16.1.5'
}
在对图片应用人脸检测之前,如果要更改人脸检测器的默认设置,请使用 FaceDetectorOptions 对象指定这些设置。您可以更改以下设置:
| 设置 | 功能 |
|---|---|
| setPerformanceMode() | PERFORMANCE_MODE_FAST(默认)或 PERFORMANCE_MODE_ACCURATE:在检测速度与准确度之间取舍。 |
| setLandmarkMode() | LANDMARK_MODE_NONE(默认)或 LANDMARK_MODE_ALL:是否尝试识别眼睛、耳朵、鼻子、脸颊、嘴巴等面部"特征点"。 |
| setContourMode() | CONTOUR_MODE_NONE(默认)或 CONTOUR_MODE_ALL:是否检测面部特征的轮廓,仅会检测图片中最突出的人脸的轮廓。 |
| setClassificationMode() | CLASSIFICATION_MODE_NONE(默认)或 CLASSIFICATION_MODE_ALL:是否将人脸分类为"微笑""睁眼"等类别。 |
| setMinFaceSize() | float(默认值 0.1f):设置所需的最小脸部大小,表示为头部宽度与图片宽度的比率。 |
| enableTracking() | false(默认)或 true:是否为人脸分配 ID,以用于跨图片跟踪人脸。 |

请注意,启用轮廓检测后,仅会检测一张人脸,因此人脸跟踪不会产生有用的结果。为此,若要加快检测速度,请勿同时启用轮廓检测和人脸跟踪。
例如:
// High-accuracy landmark detection and face classification
val highAccuracyOpts = FaceDetectorOptions.Builder()
.setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
.setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
.setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
.build()
// Real-time contour detection
val realTimeOpts = FaceDetectorOptions.Builder()
.setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
.build()
// 获取 FaceDetector 实例
val detector = FaceDetection.getClient(highAccuracyOpts)
// Or, to use the default option:
// val detector = FaceDetection.getClient();
如果人脸检测操作成功,系统会向成功监听器传递一组 Face 对象,每个对象代表一张检测到的人脸。对于每张人脸,可以获取其边界框、旋转角度,以及(如果启用了相应功能)特征点、轮廓、分类概率和跟踪 ID:
for (face in faces) {
val bounds = face.boundingBox
val rotY = face.headEulerAngleY // Head is rotated to the right rotY degrees
val rotZ = face.headEulerAngleZ // Head is tilted sideways rotZ degrees
// If landmark detection was enabled (mouth, ears, eyes, cheeks, and nose available):
val leftEar = face.getLandmark(FaceLandmark.LEFT_EAR)
leftEar?.let {
val leftEarPos = leftEar.position
}
// If contour detection was enabled:
val leftEyeContour = face.getContour(FaceContour.LEFT_EYE)?.points
val upperLipBottomContour = face.getContour(FaceContour.UPPER_LIP_BOTTOM)?.points
// If classification was enabled:
if (face.smilingProbability != null) {
val smileProb = face.smilingProbability
}
if (face.rightEyeOpenProbability != null) {
val rightEyeOpenProb = face.rightEyeOpenProbability
}
// If face tracking was enabled:
if (face.trackingId != null) {
val id = face.trackingId
}
}
示例代码:
@OptIn(ExperimentalComposeUiApi::class)
@Composable
fun MLKitFaceDetectorExample() {
val context = LocalContext.current
val lifecycleOwner = LocalLifecycleOwner.current
val cameraController = remember { LifecycleCameraController(context) }
var faces by remember { mutableStateOf(listOf<Face>()) }
val bounds = remember(faces) {
faces.map { face -> face.boundingBox }
}
val points = remember(faces) { getPoints(faces) }
Box(modifier = Modifier.fillMaxSize()) {
AndroidView(
modifier = Modifier.fillMaxSize(),
factory = { context ->
PreviewView(context).apply {
setBackgroundColor(Color.White.toArgb())
layoutParams = LinearLayout.LayoutParams(
ViewGroup.LayoutParams.MATCH_PARENT,
ViewGroup.LayoutParams.MATCH_PARENT
)
scaleType = PreviewView.ScaleType.FILL_CENTER
implementationMode = PreviewView.ImplementationMode.COMPATIBLE
}.also { previewView ->
previewView.controller = cameraController
cameraController.bindToLifecycle(lifecycleOwner)
cameraController.imageAnalysisBackpressureStrategy =
ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST
cameraController.setFaceDetectorAnalyzer(context) { faces = it }
}
},
onReset = {},
onRelease = {
cameraController.unbind()
}
)
Canvas(modifier = Modifier.fillMaxSize()) {
bounds.forEach { rect ->
drawRect(
Color.Red,
size = Size(rect.width().toFloat(), rect.height().toFloat()),
topLeft = Offset(x = rect.left.toFloat(), y = rect.top.toFloat()),
style = Stroke(width = 5f)
)
}
points.forEach { point ->
drawCircle(
Color.Green,
radius = 2.dp.toPx(),
center = Offset(x = point.x, y = point.y),
)
}
}
}
}
private fun LifecycleCameraController.setFaceDetectorAnalyzer(
context: Context,
onFound: (List<Face>) -> Unit
) {
// High-accuracy landmark detection and face classification
val highAccuracyOpts = FaceDetectorOptions.Builder()
.setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
.setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
.enableTracking()
.build()
// Real-time contour detection
val realTimeOpts = FaceDetectorOptions.Builder()
.setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
.build()
val detector = FaceDetection.getClient(highAccuracyOpts)
setImageAnalysisAnalyzer(
ContextCompat.getMainExecutor(context),
MlKitAnalyzer(
listOf(detector),
COORDINATE_SYSTEM_VIEW_REFERENCED,
ContextCompat.getMainExecutor(context)
) { result: MlKitAnalyzer.Result? ->
val value = result?.getValue(detector)
value?.let { onFound(it) }
}
)
}
// All landmarks
private val landMarkTypes = intArrayOf(
FaceLandmark.MOUTH_BOTTOM,
FaceLandmark.MOUTH_RIGHT,
FaceLandmark.MOUTH_LEFT,
FaceLandmark.RIGHT_EYE,
FaceLandmark.LEFT_EYE,
FaceLandmark.RIGHT_EAR,
FaceLandmark.LEFT_EAR,
FaceLandmark.RIGHT_CHEEK,
FaceLandmark.LEFT_CHEEK,
FaceLandmark.NOSE_BASE
)
private fun getPoints(faces: List<Face>) : List<PointF> {
val points = mutableListOf<PointF>()
for (face in faces) {
landMarkTypes.forEach { landMarkType ->
face.getLandmark(landMarkType)?.let {
points.add(it.position)
}
}
}
return points
}
效果:

更多相关内容请参考:https://developers.google.cn/ml-kit/vision/face-detection/android?hl=zh-cn
CameraX 其他高级配置选项
CameraXConfig
为简单起见,CameraX 具有适合大多数使用场景的默认配置(例如内部执行器和处理程序)。但是,如果您的应用有特殊要求或希望自定义这些配置,可使用 CameraXConfig 接口来实现。
借助 CameraXConfig,应用可以确定以下行为:
- 使用 setAvailableCamerasLimiter() 优化启动延迟时间。
- 使用 setCameraExecutor() 向 CameraX 提供应用执行器。
- 使用 setSchedulerHandler() 替换默认调度器处理程序。
- 使用 setMinimumLoggingLevel() 更改日志记录级别。
以下程序说明了如何使用 CameraXConfig:
1. 使用您的自定义配置创建一个 CameraXConfig 对象。
2. 在 Application 中实现 CameraXConfig.Provider 接口,并在 getCameraXConfig() 中返回 CameraXConfig 对象。
3. 将 Application 类添加到 AndroidManifest.xml 文件中。
例如,以下代码示例将 CameraX 日志记录限制为仅记录错误消息:
class CameraApplication : Application(), CameraXConfig.Provider {
override fun getCameraXConfig(): CameraXConfig {
return CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
.setMinimumLoggingLevel(Log.ERROR).build()
}
}
如果您的应用需要在设置 CameraX 配置后了解该配置,请保留所设置的 CameraXConfig 对象的本地副本。
摄像头限制器
在第一次调用 ProcessCameraProvider.getInstance() 时,CameraX 会枚举和查询设备上可用摄像头的特性。由于 CameraX 需要与硬件组件通信,因此对每个摄像头执行此过程可能需要较长时间,尤其是在低端设备上。如果您的应用只使用设备上的特定摄像头(例如默认的前置摄像头),则可以将 CameraX 设置为忽略其他摄像头,从而缩短应用所用摄像头的启动延迟时间。
如果传递给 CameraXConfig.Builder.setAvailableCamerasLimiter() 的 CameraSelector 滤除了某个摄像头,CameraX 在运行时会假定该摄像头不存在。例如,以下代码会限制应用只使用默认的后置摄像头:
class MainApplication : Application(), CameraXConfig.Provider {
override fun getCameraXConfig(): CameraXConfig {
return CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
.setAvailableCamerasLimiter(CameraSelector.DEFAULT_BACK_CAMERA)
.build()
}
}
线程
构建 CameraX 时所采用的很多平台 API 都要求阻塞与硬件之间的进程间通信 (IPC),此类通信有时可能需要数百毫秒的响应时间。因此,CameraX 仅从后台线程调用这些 API,从而避免主线程发生阻塞,使界面保持流畅。CameraX 会在内部管理这些后台线程,因此这类行为显得比较透明。但是,某些应用需要严格控制线程。
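对于需要自行控制线程的应用,下面给出一个示意性的配置草图:假设在 Application 中通过 CameraXConfig 提供自定义执行器与调度处理程序(类名 CameraApp、cameraExecutor 等均为示例假设,并非原文代码):

```kotlin
import android.app.Application
import android.os.Handler
import android.os.HandlerThread
import androidx.camera.camera2.Camera2Config
import androidx.camera.core.CameraXConfig
import java.util.concurrent.Executors

// 示例类名 CameraApp 为假设,需在 AndroidManifest.xml 中注册
class CameraApp : Application(), CameraXConfig.Provider {
    // CameraX 的内部回调将运行在此执行器上,而非默认的内部线程池
    private val cameraExecutor = Executors.newSingleThreadExecutor()
    // 用于内部调度任务的 HandlerThread
    private val schedulerThread = HandlerThread("CameraScheduler").apply { start() }

    override fun getCameraXConfig(): CameraXConfig =
        CameraXConfig.Builder.fromConfig(Camera2Config.defaultConfig())
            .setCameraExecutor(cameraExecutor)
            .setSchedulerHandler(Handler(schedulerThread.looper))
            .build()
}
```

注意自定义执行器后,应用需要自行负责这些线程的生命周期管理。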
自动选择
CameraX 会根据运行您的应用的设备自动提供专用的功能。例如,如果您未指定分辨率或您指定的分辨率不受支持,CameraX 会自动确定要使用的最佳分辨率。所有这些操作均由库进行处理,无需您编写设备专属代码。
CameraX 的目标是成功初始化摄像头会话。这意味着,CameraX 会根据设备功能降低分辨率和宽高比。发生这种情况的原因如下:
- 设备不支持请求的分辨率。
- 设备存在兼容性问题,例如需要特定分辨率才能正常运行的旧设备。
- 在某些设备上,某些格式仅在某些宽高比下可用。
- 对于 JPEG 或视频编码,设备首选“最近的 mod16”。如需了解详情,请参阅 SCALER_STREAM_CONFIGURATION_MAP。
尽管 CameraX 会创建并管理会话,您也应始终在代码中检查用例输出所返回的图片大小,并进行相应调整。
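结合上文"应检查用例输出所返回的图片大小",下面是一个示意片段(imageAnalysis、executor 为上下文中已有对象的假设):

```kotlin
// 示意:不要假定请求的分辨率一定生效,应以实际输出为准
imageAnalysis.setAnalyzer(executor) { imageProxy ->
    val actualWidth = imageProxy.width
    val actualHeight = imageProxy.height
    // 若实际尺寸与预期不符,在此按 actualWidth x actualHeight 进行缩放或剪裁
    imageProxy.close() // 必须关闭 ImageProxy,否则不会收到后续帧
}
```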
旋转
默认情况下,在用例创建期间,摄像头的旋转角度会设置为与默认的显示屏旋转角度保持一致。在此默认情况下,CameraX 会生成输出,确保应用与您预期在预览中看到的内容保持一致。通过在配置用例对象时传入当前显示屏方向或在创建用例对象之后动态传入显示屏方向,您可以将旋转角度更改为自定义值以支持多显示屏设备。
您的应用可以使用配置设置来设置目标旋转角度。然后,即使生命周期处于运行状态,应用也可以通过使用用例 API 中的方法(例如 ImageAnalysis.setTargetRotation())更新旋转设置。您可以在应用锁定为纵向模式时执行上述操作,这样就无需重新配置旋转角度,但是照片或分析用例需要了解设备当前的旋转角度。例如,用例可能需要了解旋转角度才能以正确的方向进行人脸检测,或者将照片设置为横向或纵向。
存储所拍摄图片的数据时可能不会包含旋转信息。Exif 数据包含旋转信息,以便图库应用在保存后以正确的屏幕方向显示图片。
如需以正确的屏幕方向显示预览数据,您可以使用 Preview.PreviewOutput() 的元数据输出来创建转换。
以下代码示例展示了如何为屏幕方向事件设置旋转角度:
override fun onCreate() {
val imageCapture = ImageCapture.Builder().build()
val orientationEventListener = object : OrientationEventListener(this as Context) {
override fun onOrientationChanged(orientation : Int) {
// Monitors orientation values to determine the target rotation value
val rotation : Int = when (orientation) {
in 45..134 -> Surface.ROTATION_270
in 135..224 -> Surface.ROTATION_180
in 225..314 -> Surface.ROTATION_90
else -> Surface.ROTATION_0
}
imageCapture.targetRotation = rotation
}
}
orientationEventListener.enable()
}
每个用例都会根据设定的旋转角度直接旋转图片数据,或者向用户提供未旋转图片数据的旋转元数据。
- Preview:提供元数据输出,以便使用 Preview.getTargetRotation() 了解目标分辨率的旋转设置。
- ImageAnalysis:提供元数据输出,以便了解图片缓冲区坐标相对于显示坐标的位置。
- ImageCapture:更改图片 Exif 元数据、缓冲区或同时更改两者,从而反映旋转设置。更改的值取决于 HAL 实现。
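以 ImageAnalysis 为例,下面的示意片段展示了如何读取旋转元数据(context 为假设的上下文对象):

```kotlin
val imageAnalysis = ImageAnalysis.Builder().build()
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context)) { imageProxy ->
    // rotationDegrees 表示需将缓冲区旋转多少度才能与目标方向一致
    val rotationDegrees = imageProxy.imageInfo.rotationDegrees
    // 例如:将 rotationDegrees 一并传给 ML Kit 等图像处理库
    imageProxy.close()
}
```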
更多旋转相关内容请参考:https://developer.android.google.cn/training/camerax/orientation-rotation?hl=zh-cn
剪裁矩形
默认情况下,剪裁矩形是完整的缓冲区矩形,您可通过 ViewPort 和 UseCaseGroup 对其进行自定义。通过对用例进行分组和设置视口,CameraX 可以保证一个组中所有用例的剪裁矩形都指向摄像头传感器中的同一个区域。
以下代码段展示了这两个类的使用方法:
val viewPort = ViewPort.Builder(Rational(width, height), display.rotation).build()
val useCaseGroup = UseCaseGroup.Builder()
.addUseCase(preview)
.addUseCase(imageAnalysis)
.addUseCase(imageCapture)
.setViewPort(viewPort)
.build()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, useCaseGroup)
以下代码段展示了如何获取 PreviewView 的 ViewPort:
val viewport = findViewById<PreviewView>(R.id.preview_view).viewPort
在前面的示例中,应用通过 ImageAnalysis 和 ImageCapture 获取的剪裁矩形,与最终用户在 PreviewView 中看到的内容相同(假设 PreviewView 的缩放类型为默认值 FILL_CENTER)。将剪裁矩形和旋转角度应用到输出缓冲区后,各用例中的图片内容便一致,只是分辨率可能不同。
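作为参考,下面的示意片段展示了如何在 ImageAnalysis 输出中读取与视口对应的剪裁矩形(imageAnalysis、executor 为假设对象):

```kotlin
imageAnalysis.setAnalyzer(executor) { imageProxy ->
    // cropRect 即缓冲区中与 ViewPort 对应的区域,
    // 只处理该区域内的像素即可保证与预览所见一致
    val cropRect: Rect = imageProxy.cropRect
    imageProxy.close()
}
```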
选择可用摄像头
CameraX 会根据应用的要求和用例自动选择最佳摄像头设备。如果您希望使用自动选择的设备以外的其他设备,有以下几种选项供您选择:
- 使用 CameraSelector.DEFAULT_FRONT_CAMERA 请求默认的前置摄像头。
- 使用 CameraSelector.DEFAULT_BACK_CAMERA 请求默认的后置摄像头。
- 使用 CameraSelector.Builder.addCameraFilter() 按 CameraCharacteristics 过滤可用设备列表。
注意:摄像头设备必须经过系统识别,并显示在 CameraManager.getCameraIdList() 中,然后才可供使用。
此外,每个原始设备制造商 (OEM) 都必须自行选择是否支持外接摄像头设备。因此,在尝试使用任何外接摄像头之前,请务必检查 CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL 是否为 CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_EXTERNAL。
以下代码示例展示了如何创建 CameraSelector 来影响设备选择:
fun selectExternalOrBestCamera(provider: ProcessCameraProvider):CameraSelector? {
val cam2Infos = provider.availableCameraInfos.map {
Camera2CameraInfo.from(it)
}.sortedByDescending {
// HARDWARE_LEVEL is Int type, with the order of:
// LEGACY < LIMITED < FULL < LEVEL_3 < EXTERNAL
it.getCameraCharacteristic(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
}
return when {
cam2Infos.isNotEmpty() -> {
CameraSelector.Builder()
.addCameraFilter {
it.filter { camInfo ->
// cam2Infos[0] is either EXTERNAL or best built-in camera
val thisCamId = Camera2CameraInfo.from(camInfo).cameraId
thisCamId == cam2Infos[0].cameraId
}
}.build()
}
else -> null
}
}
// create a CameraSelector for the USB camera (or highest level internal camera)
val selector = selectExternalOrBestCamera(processCameraProvider)
processCameraProvider.bindToLifecycle(this, selector, preview, analysis)
同时选择多个摄像头
从 CameraX 1.3 开始,您还可以同时选择多个摄像头。 例如,您可以对前置和后置摄像头进行绑定,以便从两个视角同时拍摄照片或录制视频。
使用并发摄像头功能时,设备可以同时运行两个不同镜头朝向的摄像头,或同时运行两个后置摄像头。以下代码块展示了如何在调用 bindToLifecycle() 时设置两个摄像头,以及如何从返回的 ConcurrentCamera 对象中获取两个 Camera 对象:
// Build ConcurrentCameraConfig
val primary = ConcurrentCamera.SingleCameraConfig(
primaryCameraSelector,
useCaseGroup,
lifecycleOwner
)
val secondary = ConcurrentCamera.SingleCameraConfig(
secondaryCameraSelector,
useCaseGroup,
lifecycleOwner
)
val concurrentCamera = cameraProvider.bindToLifecycle(
listOf(primary, secondary)
)
val primaryCamera = concurrentCamera.cameras[0]
val secondaryCamera = concurrentCamera.cameras[1]
摄像头分辨率
您可以选择让 CameraX 根据设备功能、设备支持的硬件级别、用例和所提供的宽高比组合设置图片分辨率。或者,您也可以在支持相应配置的用例中设置特定目标分辨率或特定宽高比。
自动分辨率
CameraX 可以根据 ProcessCameraProvider.bindToLifecycle() 中指定的用例自动确定最佳分辨率设置。请尽可能在单次 bindToLifecycle() 调用中,指定单个会话中需要同时运行的所有用例。
图片拍摄和图片分析用例的默认宽高比为 4:3。
对于具有可配置宽高比的用例,可让应用根据界面设计来指定所需的宽高比。CameraX 会按照请求的宽高比生成输出,并尽可能匹配设备支持的宽高比。如果没有任何支持的完全匹配分辨率,则选择满足最多条件的分辨率。也就是说,应用会决定摄像头在应用中的显示方式,CameraX 则会决定最佳摄像头分辨率设置,以满足不同设备的具体要求。
例如,应用可以执行以下任一操作:
- 为用例指定 4:3 或 16:9 的目标分辨率
- 指定自定义分辨率,CameraX 会尝试查找与该分辨率最接近的分辨率
- 为 ImageCapture 指定剪裁宽高比
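例如,下面的示意代码为预览和拍照用例指定 16:9 的目标宽高比(在较新的 CameraX 版本中,官方建议改用 ResolutionSelector 完成同类配置):

```kotlin
val preview = Preview.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .build()
val imageCapture = ImageCapture.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .build()
// 具体输出分辨率仍由 CameraX 按设备支持情况决定
```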
CameraX 会自动选择内部 Camera2 Surface 分辨率,下表列出了这些分辨率:

指定分辨率
使用 setTargetResolution(Size resolution) 方法设置具体分辨率,如以下代码示例所示:
val imageAnalysis = ImageAnalysis.Builder()
.setTargetResolution(Size(1280, 720))
.build()
您无法针对同一个用例同时设置目标宽高比和目标分辨率。如果这样做,则会在构建配置对象时抛出 IllegalArgumentException。
分辨率 Size 需要在按目标旋转角度旋转后的坐标系中表示。例如,自然方向为纵向的设备在请求纵向图片时可以指定 480x640;同一设备旋转 90 度并以横向为目标方向时,则可以指定 640x480。
目标分辨率会尝试制定图片分辨率的下限。实际的图片分辨率是最接近的可用分辨率,其大小不小于由摄像头实现所决定的目标分辨率。
但是,如果不存在等于或大于目标分辨率的分辨率,就会从小于目标分辨率的可用分辨率中选择最接近的一个。与提供的 Size 具有相同宽高比的分辨率,其优先级高于宽高比不同的分辨率。
CameraX 会根据请求应用最合适的分辨率。如果主要需求是满足宽高比要求,则仅指定 setTargetAspectRatio,CameraX 会根据设备确定合适的具体分辨率;如果主要需求是指定具体分辨率以提高图片处理效率(例如根据设备处理能力处理较小或中等大小的图片),则使用 setTargetResolution()。
注意:如果使用 setTargetResolution(),可能会得到宽高比与其他用例不匹配的缓冲区。如果宽高比必须匹配,请检查两个用例返回的缓冲区尺寸,然后剪裁或缩放其中一个以与另一个匹配。
如果您的应用需要精确的分辨率,请参阅 createCaptureSession() 内的表格,以确定每个硬件级别支持的最大分辨率。如需查看当前设备支持的特定分辨率,请参阅 StreamConfigurationMap.getOutputSizes(int)。
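如需在运行时查询当前设备支持的输出尺寸,可参考下面这个基于 Camera2 API 的示意片段(context 为假设的上下文对象):

```kotlin
val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
val cameraId = manager.cameraIdList.first()
val characteristics = manager.getCameraCharacteristics(cameraId)
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
// 查询 JPEG 输出支持的全部尺寸
val jpegSizes = map?.getOutputSizes(ImageFormat.JPEG)
```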
如果您的应用在 Android 10 或更高版本上运行,您可以使用 isSessionConfigurationSupported() 来验证特定的 SessionConfiguration。
控制摄像头输出
CameraX 不仅让您可以视需要为每个单独的用例配置摄像头输出,还实现了以下接口,从而支持所有绑定用例中通用的摄像头操作:
- 利用 CameraControl,您可以配置通用摄像头功能。
- 利用 CameraInfo,您可以查询这些通用摄像头功能的状态。
以下是 CameraControl 支持的摄像头操作:
- 变焦
- 手电筒
- 对焦和测光(点按即可对焦)
- 曝光补偿
获取 CameraControl 和 CameraInfo 的实例
使用 ProcessCameraProvider.bindToLifecycle() 创建 Camera 实例后,即可获取 CameraControl 和 CameraInfo 实例,如以下代码示例所示:
val camera = processCameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview)
// For performing operations that affect all outputs.
val cameraControl = camera.cameraControl
// For querying information and states.
val cameraInfo = camera.cameraInfo
例如,您可以在调用 bindToLifecycle() 后执行变焦和其他 CameraControl 操作。停止或销毁用于绑定相机实例的 activity 后,CameraControl 将无法再执行操作,并返回失败的 ListenableFuture。
注意:如果 LifecycleOwner 被停止或销毁,Camera 就会关闭,之后变焦、手电筒、对焦和测光以及曝光补偿控件的所有状态更改均会还原成默认值。
变焦
CameraControl 提供了两种更改变焦级别的方法:
- setZoomRatio() 用于按变焦比例设置变焦。该比率必须在 CameraInfo.getZoomState().getValue().getMinZoomRatio() 到 CameraInfo.getZoomState().getValue().getMaxZoomRatio() 的范围内,否则该函数会返回失败的 ListenableFuture。
- setLinearZoom() 使用 0 到 1.0 之间的线性变焦值设置当前变焦操作。
线性变焦的优势在于,它可以使视野范围 (FOV) 随变焦的变化而缩放,因此非常适合与 Slider 视图搭配使用。
这两个 API 返回的都是 ListenableFuture,应用可借此在完成具有指定变焦值的重复拍摄请求时收到通知。
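下面的示意片段演示了这两种变焦方式(camera 为 bindToLifecycle() 返回对象的假设):

```kotlin
// 按变焦比例设置:取允许范围内的一个值
camera.cameraInfo.zoomState.value?.let { zoomState ->
    val ratio = (zoomState.minZoomRatio + zoomState.maxZoomRatio) / 2f
    camera.cameraControl.setZoomRatio(ratio)
}
// 或使用 0..1 的线性值,适合与 Slider 联动
camera.cameraControl.setLinearZoom(0.5f)
```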
手电筒
您可以通过 CameraControl.enableTorch(boolean) 启用或停用手电筒(也称为闪光灯)。启用手电筒后,无论闪光灯模式设置如何,手电筒在拍照和拍视频时都会保持开启状态。仅当手电筒被停用时,ImageCapture.flashMode 设置才会生效。
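一个手电筒开关的示意片段(camera、lifecycleOwner 为假设对象):

```kotlin
if (camera.cameraInfo.hasFlashUnit()) {
    camera.cameraControl.enableTorch(true) // 返回 ListenableFuture<Void>
}
// 观察手电筒状态以更新界面
camera.cameraInfo.torchState.observe(lifecycleOwner) { state ->
    val isOn = state == TorchState.ON
    // ……根据 isOn 更新按钮状态……
}
```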
对焦和测光
MeteringPoint
首先,使用 MeteringPointFactory.createPoint(float x, float y) 创建 MeteringPoint。MeteringPoint 表示摄像头 Surface 上的单个点,以标准化形式存储,因此可以轻松转换为传感器坐标,用于指定 AF/AE/AWB 区域。
下面的代码演示了如何通过不同的 MeteringPointFactory 创建 MeteringPoint:
// Use PreviewView.getMeteringPointFactory if PreviewView is used for preview.
previewView.setOnTouchListener { _, motionEvent ->
    val meteringPoint = previewView.meteringPointFactory
        .createPoint(motionEvent.x, motionEvent.y)
    …
    true
}
// Use DisplayOrientedMeteringPointFactory if SurfaceView / TextureView is used for
// preview. Please note that if the preview is scaled or cropped in the View,
// it’s the application's responsibility to transform the coordinates properly
// so that the width and height of this factory represents the full Preview FOV.
// And the (x,y) passed to create MeteringPoint might need to be adjusted with
// the offsets.
val meteringPointFactory = DisplayOrientedMeteringPointFactory(
surfaceView.display,
camera.cameraInfo,
surfaceView.width,
surfaceView.height
)
// Use SurfaceOrientedMeteringPointFactory if the point is specified in
// ImageAnalysis ImageProxy.
val meteringPointFactory = SurfaceOrientedMeteringPointFactory(
imageWidth,
imageHeight,
imageAnalysis)
startFocusAndMetering 和 FocusMeteringAction
如需调用 startFocusAndMetering(),应用必须构建 FocusMeteringAction,其中包含一个或多个由各种 MeteringPointFactory 创建的 MeteringPoint。下面的代码演示了此过程:
val meteringPoint1 = meteringPointFactory.createPoint(x1, y1)
val meteringPoint2 = meteringPointFactory.createPoint(x2, y2)
val action = FocusMeteringAction.Builder(meteringPoint1) // default AF|AE|AWB
// Optionally add meteringPoint2 for AF/AE.
.addPoint(meteringPoint2, FLAG_AF | FLAG_AE)
// The action is canceled in 3 seconds (if not set, default is 5s).
.setAutoCancelDuration(3, TimeUnit.SECONDS)
.build()
val result = cameraControl.startFocusAndMetering(action)
// Adds listener to the ListenableFuture if you need to know the focusMetering result.
result.addListener({
// result.get().isFocusSuccessful returns if the auto focus is successful or not.
}, ContextCompat.getMainExecutor(this))
如上面的代码所示,startFocusAndMetering() 会接受一个 FocusMeteringAction,其中包含一个用于 AF/AE/AWB 测光区域的 MeteringPoint,以及另一个仅用于 AF 和 AE 的 MeteringPoint。
在内部,CameraX 会将其转换为 Camera2 MeteringRectangles,并设置相应的 CONTROL_AF_REGIONS / CONTROL_AE_REGIONS / CONTROL_AWB_REGIONS 参数来发起拍摄请求。
由于并非所有设备都支持 AF/AE/AWB 和多个区域,CameraX 会尽最大努力执行 FocusMeteringAction:使用支持数量上限内、按添加顺序排列的 MeteringPoint,并忽略超出上限的其余各点。
曝光补偿
当应用需要对自动曝光 (AE) 输出结果以外的曝光值 (EV) 进行微调时,曝光补偿很有用。CameraX 将按以下方式组合曝光补偿值,以确定当前图片条件下所需的曝光:
Exposure = ExposureCompensationIndex * ExposureCompensationStep
CameraX 以 CameraControl.setExposureCompensationIndex() 的形式提供此功能,将索引值作为参数。
当索引值为正值时,会调亮图片;当索引值为负值时,会调暗图片。应用可以按下文所述,通过 CameraInfo.ExposureState.exposureCompensationRange 查询支持的范围。
CameraX 仅保留最新的未完成 setExposureCompensationIndex() 请求;在上一个请求尚未执行时多次调用该函数,会导致之前的请求被取消。
下面的代码段设置了曝光补偿索引,并注册一个回调,以便知晓曝光更改请求何时被执行:
camera.cameraControl.setExposureCompensationIndex(exposureCompensationIndex)
.addListener({
// Get the current exposure compensation index, it might be
// different from the asked value in case this request was
// canceled by a newer setting request.
val currentExposureIndex = camera.cameraInfo.exposureState.exposureCompensationIndex
…
}, mainExecutor)
应用可以通过 CameraInfo.getExposureState() 检索当前的 ExposureState,其中包括:
- 对曝光补偿控制的可支持性。
- 当前的曝光补偿指数。
- 曝光补偿索引范围。
- 用于计算曝光补偿值的曝光补偿步骤。
例如,下面的代码会使用当前 ExposureState 的值来初始化曝光 SeekBar 的设置:
val exposureState = camera.cameraInfo.exposureState
binding.seekBar.apply {
isEnabled = exposureState.isExposureCompensationSupported
max = exposureState.exposureCompensationRange.upper
min = exposureState.exposureCompensationRange.lower
progress = exposureState.exposureCompensationIndex
}
CameraX Extensions API
CameraX 提供了一个 Extensions API,用于访问设备制造商在各种 Android 设备上实现的扩展。如需查看支持的扩展模式列表,请参阅相机扩展。
如需查看支持扩展的设备列表,请参阅支持的设备。
扩展架构
下图显示了相机扩展程序架构。

CameraX 应用可以通过 CameraX Extensions API 使用扩展。CameraX Extensions API 可用于管理可用扩展的查询、配置扩展相机会话以及与相机扩展 OEM 库的通信。这样,您的应用就可以使用夜间、HDR、自动、焦外成像或脸部照片修复等功能。
依赖项
CameraX Extensions API 是在 camera-extensions 库中实现的,该库依赖于 CameraX 的核心模块(core、camera2、lifecycle):
dependencies {
def camerax_version = "1.3.0-alpha04"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
//the CameraX Extensions library
implementation "androidx.camera:camera-extensions:${camerax_version}"
...
}
启用图片拍摄和预览的扩展程序
在使用 Extensions API 之前,请使用 ExtensionsManager.getInstanceAsync(Context, CameraProvider) 检索 ExtensionsManager 实例,以便查询扩展的可用性信息;然后检索支持扩展的 CameraSelector,并在调用 bindToLifecycle() 时用它启用图片拍摄和预览用例的扩展。
注意:在 ImageCapture 和 Preview 上启用扩展后,如果您将 ImageCapture 和 Preview 用作 bindToLifecycle() 的参数,则您可以选择的相机数量可能会受到限制。如果找不到支持扩展的相机,ExtensionsManager#getExtensionEnabledCameraSelector() 会抛出异常。
如需实现图片拍摄和预览用例的扩展,请参阅以下代码示例:
import androidx.camera.extensions.ExtensionMode
import androidx.camera.extensions.ExtensionsManager
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
val lifecycleOwner = this
val cameraProviderFuture = ProcessCameraProvider.getInstance(applicationContext)
cameraProviderFuture.addListener({
// Obtain an instance of a process camera provider
// The camera provider provides access to the set of cameras associated with the device.
// The camera obtained from the provider will be bound to the activity lifecycle.
val cameraProvider = cameraProviderFuture.get()
val extensionsManagerFuture =
ExtensionsManager.getInstanceAsync(applicationContext, cameraProvider)
extensionsManagerFuture.addListener({
// Obtain an instance of the extensions manager
// The extensions manager enables a camera to use extension capabilities available on
// the device.
val extensionsManager = extensionsManagerFuture.get()
// Select the camera
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
// Query if extension is available.
// Not all devices will support extensions or might only support a subset of
// extensions.
if (extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT)) {
// Unbind all use cases before enabling different extension modes.
try {
cameraProvider.unbindAll()
// Retrieve a night extension enabled camera selector
val nightCameraSelector =
extensionsManager.getExtensionEnabledCameraSelector(
cameraSelector,
ExtensionMode.NIGHT
)
// Bind image capture and preview use cases with the extension enabled camera
// selector.
val imageCapture = ImageCapture.Builder().build()
val preview = Preview.Builder().build()
// Connect the preview to receive the surface the camera outputs the frames
// to. This will allow displaying the camera frames in either a TextureView
// or SurfaceView. The SurfaceProvider can be obtained from the PreviewView.
preview.setSurfaceProvider(surfaceProvider)
// Returns an instance of the camera bound to the lifecycle
// Use this camera object to control various operations with the camera
// Example: flash, zoom, focus metering etc.
val camera = cameraProvider.bindToLifecycle(
lifecycleOwner,
nightCameraSelector,
imageCapture,
preview
)
} catch (e: Exception) {
Log.e(TAG, "Use case binding failed", e)
}
}
}, ContextCompat.getMainExecutor(this))
}, ContextCompat.getMainExecutor(this))
}
停用扩展程序
如需停用供应商扩展,请取消绑定所有用例,然后使用常规相机选择器重新绑定图片拍摄和预览用例。例如,使用 CameraSelector.DEFAULT_BACK_CAMERA 重新绑定到后置摄像头。
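示意代码如下(标识符沿用上文启用扩展的示例):

```kotlin
// 先取消绑定使用扩展版选择器的用例
cameraProvider.unbindAll()
// 再用常规相机选择器重新绑定
val camera = cameraProvider.bindToLifecycle(
    lifecycleOwner,
    CameraSelector.DEFAULT_BACK_CAMERA,
    imageCapture,
    preview
)
```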
参考:https://developer.android.google.cn/training/camerax?hl=zh-cn