Android: Notes on Encoding and Decoding with MediaCodec


Part 1: Using MediaCodec to encode and decode on Android


This article covers two things: calling the MediaCodec class on Android to hardware-decode a video file, and hardware-encoding image data held as byte[] into a video file.

1. How MediaCodec encodes and decodes

Reference: https://developer.android.com/reference/android/media/MediaCodec.html

The workflow, taking encoding as the example: first initialize the hardware encoder, configuring the output format, the video's width and height, bit rate, frame rate, key-frame interval and so on. This step is called configure. Then start the encoder, which puts it in a usable state, ready to accept data. Next comes the running phase. Two buffer queues are maintained, InputBuffers and OutputBuffers: the client repeatedly dequeues an input buffer (dequeueInputBuffer), fills it with the image data to be encoded, and queues it back for processing. The hardware encoder works asynchronously; once a result is ready it places the data in an output buffer and notifies the client, which dequeues that output buffer, takes the data out, and releases the buffer. Termination is signaled by setting the end-of-stream flag. When encoding is done, call stop() to stop the encoder and then release() to free it completely, which ends the whole flow.
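For illustration, here is a minimal sketch of that synchronous encode loop. The MIME type, resolution, and the frame-source/sink helpers (nextFrameOrNull, presentationTimeUsFor, writeToMuxerOrStream) are assumptions for the sketch, not part of the original article:

// Sketch only: feed raw YUV frames to an H.264 encoder until end-of-stream.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean inputDone = false, outputDone = false;
while (!outputDone) {
    if (!inputDone) {
        int inIndex = encoder.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            byte[] frame = nextFrameOrNull();   // hypothetical frame source
            if (frame == null) {                // no more frames: signal EOS
                encoder.queueInputBuffer(inIndex, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                ByteBuffer in = encoder.getInputBuffer(inIndex);
                in.put(frame);
                encoder.queueInputBuffer(inIndex, 0, frame.length,
                        presentationTimeUsFor(frame), 0);  // hypothetical timestamp helper
            }
        }
    }
    int outIndex = encoder.dequeueOutputBuffer(info, 10_000);
    if (outIndex >= 0) {
        ByteBuffer out = encoder.getOutputBuffer(outIndex);
        writeToMuxerOrStream(out, info);        // hypothetical sink, e.g. MediaMuxer
        encoder.releaseOutputBuffer(outIndex, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            outputDone = true;
        }
    }
}
encoder.stop();
encoder.release();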

2. A sample video decoder

The code below is adapted from the blog post "Android: MediaCodec视频文件硬件解码" (hardware decoding of video files with MediaCodec).

package com.example.guoheng_iri.helloworld;

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.media.Image;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.util.Log;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.LinkedBlockingQueue;

public class VideoDecode {
    private static final String TAG = "VideoToFrames";
    private static final boolean VERBOSE = true;
    private static final long DEFAULT_TIMEOUT_US = 10000;

    private static final int COLOR_FormatI420 = 1;
    private static final int COLOR_FormatNV21 = 2;

    public static final int FILE_TypeI420 = 1;
    public static final int FILE_TypeNV21 = 2;
    public static final int FILE_TypeJPEG = 3;

    private final int decodeColorFormat = MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible;

    private int outputImageFileType = -1;
    private String OUTPUT_DIR;

    public int ImageWidth = 0;
    public int ImageHeight = 0;

    MediaExtractor extractor = null;
    MediaCodec decoder = null;
    MediaFormat mediaFormat;

    public void setSaveFrames(String dir, int fileType) throws IOException {
        if (fileType != FILE_TypeI420 && fileType != FILE_TypeNV21 && fileType != FILE_TypeJPEG) {
            throw new IllegalArgumentException("only support FILE_TypeI420 " + "and FILE_TypeNV21 " + "and FILE_TypeJPEG");
        }
        outputImageFileType = fileType;
        File theDir = new File(dir);
        if (!theDir.exists()) {
            theDir.mkdirs();
        } else if (!theDir.isDirectory()) {
            throw new IOException("Not a directory");
        }
        OUTPUT_DIR = theDir.getAbsolutePath() + "/";
    }

    public void VideoDecodePrepare(String videoFilePath) {
        extractor = null;
        decoder = null;
        try {
            File videoFile = new File(videoFilePath);
            extractor = new MediaExtractor();
            extractor.setDataSource(videoFile.toString());
            int trackIndex = selectTrack(extractor);
            if (trackIndex < 0) {
                throw new RuntimeException("No video track found in " + videoFilePath);
            }
            extractor.selectTrack(trackIndex);
            mediaFormat = extractor.getTrackFormat(trackIndex);
            String mime = mediaFormat.getString(MediaFormat.KEY_MIME);
            decoder = MediaCodec.createDecoderByType(mime);
            showSupportedColorFormat(decoder.getCodecInfo().getCapabilitiesForType(mime));
            if (isColorFormatSupported(decodeColorFormat, decoder.getCodecInfo().getCapabilitiesForType(mime))) {
                mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, decodeColorFormat);
                Log.i(TAG, "set decode color format to type " + decodeColorFormat);
            } else {
                Log.i(TAG, "unable to set decode color format, color format type " + decodeColorFormat + " not supported");
            }
            decoder.configure(mediaFormat, null, null, 0);
            decoder.start();
        } catch (IOException ioe) {
            throw new RuntimeException("failed to init decoder", ioe);
        }
    }

    public void close() {
        decoder.stop();
        decoder.release();
        if (extractor != null) {
            extractor.release();
            extractor = null;
        }
    }

    public void execute() {   // named "excuate" in the original; renamed to fix the typo
        try {
            decodeFramesToImage(decoder, extractor, mediaFormat);
        } finally {
            // release the decoder and the extractor
            close();
        }
    }

    private void showSupportedColorFormat(MediaCodecInfo.CodecCapabilities caps) {
        System.out.print("supported color format: ");
        for (int c : caps.colorFormats) {
            System.out.print(c + "\t");
        }
        System.out.println();
    }

    private boolean isColorFormatSupported(int colorFormat, MediaCodecInfo.CodecCapabilities caps) {
        for (int c : caps.colorFormats) {
            if (c == colorFormat) {
                return true;
            }
        }
        return false;
    }

    public void decodeFramesToImage(MediaCodec decoder, MediaExtractor extractor, MediaFormat mediaFormat) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean sawInputEOS = false;
        boolean sawOutputEOS = false;
        final int width = mediaFormat.getInteger(MediaFormat.KEY_WIDTH);
        final int height = mediaFormat.getInteger(MediaFormat.KEY_HEIGHT);
        ImageWidth = width;
        ImageHeight = height;
        int outputFrameCount = 0;
        while (!sawOutputEOS) {
            if (!sawInputEOS) {
                int inputBufferId = decoder.dequeueInputBuffer(DEFAULT_TIMEOUT_US);
                if (inputBufferId >= 0) {
                    ByteBuffer inputBuffer = decoder.getInputBuffer(inputBufferId);
                    // read one chunk of compressed video into the input buffer; its size is sampleSize
                    int sampleSize = extractor.readSampleData(inputBuffer, 0);
                    if (sampleSize < 0) {
                        decoder.queueInputBuffer(inputBufferId, 0, 0, 0L, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        sawInputEOS = true;
                    } else {
                        long presentationTimeUs = extractor.getSampleTime();
                        decoder.queueInputBuffer(inputBufferId, 0, sampleSize, presentationTimeUs, 0);
                        extractor.advance(); // move on to the next sample in the file
                    }
                }
            }
            int outputBufferId = decoder.dequeueOutputBuffer(info, DEFAULT_TIMEOUT_US);
            if (outputBufferId >= 0) {
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    sawOutputEOS = true;
                }
                boolean doRender = (info.size != 0);
                if (doRender) {
                    outputFrameCount++;
                    Image image = decoder.getOutputImage(outputBufferId);
                    System.out.println("image format: " + image.getFormat());
                    if (outputImageFileType != -1) {
                        String fileName;
                        switch (outputImageFileType) {
                            case FILE_TypeI420:
                                fileName = OUTPUT_DIR + String.format("frame_%05d_I420_%dx%d.yuv", outputFrameCount, width, height);
                                dumpFile(fileName, getDataFromImage(image, COLOR_FormatI420));
                                break;
                            case FILE_TypeNV21:
                                fileName = OUTPUT_DIR + String.format("frame_%05d_NV21_%dx%d.yuv", outputFrameCount, width, height);
                                dumpFile(fileName, getDataFromImage(image, COLOR_FormatNV21));
                                break;
                            case FILE_TypeJPEG:
                                fileName = OUTPUT_DIR + String.format("frame_%05d.jpg", outputFrameCount);
                                compressToJpeg(fileName, image);
                                break;
                        }
                    }
                    image.close();
                    decoder.releaseOutputBuffer(outputBufferId, true);
                }
            }
        }
    }

    private static int selectTrack(MediaExtractor extractor) {
        int numTracks = extractor.getTrackCount();
        for (int i = 0; i < numTracks; i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("video/")) {
                if (VERBOSE) {
                    Log.d(TAG, "Extractor selected track " + i + " (" + mime + "): " + format);
                }
                return i;
            }
        }
        return -1;
    }

    private static boolean isImageFormatSupported(Image image) {
        int format = image.getFormat();
        switch (format) {
            case ImageFormat.YUV_420_888:
            case ImageFormat.NV21:
            case ImageFormat.YV12:
                return true;
        }
        return false;
    }

    public static byte[] getGrayFromData(Image image, int colorFormat) {
        if (colorFormat != COLOR_FormatI420 && colorFormat != COLOR_FormatNV21) {
            throw new IllegalArgumentException("only support COLOR_FormatI420 " + "and COLOR_FormatNV21");
        }
        if (!isImageFormatSupported(image)) {
            throw new RuntimeException("can't convert Image to byte array, format " + image.getFormat());
        }
        // plane 0 is the Y (luma) plane, i.e. the grayscale image
        Image.Plane[] planes = image.getPlanes();
        int i = 0;
        ByteBuffer buffer = planes[i].getBuffer();
        byte[] data = new byte[buffer.remaining()];
        buffer.get(data, 0, data.length);
        if (VERBOSE) Log.v(TAG, "Finished reading data from plane " + i);
        return data;
    }

    public static byte[] getDataFromImage(Image image, int colorFormat) {
        if (colorFormat != COLOR_FormatI420 && colorFormat != COLOR_FormatNV21) {
            throw new IllegalArgumentException("only support COLOR_FormatI420 " + "and COLOR_FormatNV21");
        }
        if (!isImageFormatSupported(image)) {
            throw new RuntimeException("can't convert Image to byte array, format " + image.getFormat());
        }
        Rect crop = image.getCropRect();
        int format = image.getFormat();
        int width = crop.width();
        int height = crop.height();
        Image.Plane[] planes = image.getPlanes();
        byte[] data = new byte[width * height * ImageFormat.getBitsPerPixel(format) / 8];
        byte[] rowData = new byte[planes[0].getRowStride()];
        int channelOffset = 0;
        int outputStride = 1;
        for (int i = 0; i < planes.length; i++) {
            switch (i) {
                case 0:
                    channelOffset = 0;
                    outputStride = 1;
                    break;
                case 1:
                    if (colorFormat == COLOR_FormatI420) {
                        channelOffset = width * height;
                        outputStride = 1;
                    } else if (colorFormat == COLOR_FormatNV21) {
                        // NV21 stores V first, so the U plane lands at offset width*height + 1
                        channelOffset = width * height + 1;
                        outputStride = 2;
                    }
                    break;
                case 2:
                    if (colorFormat == COLOR_FormatI420) {
                        channelOffset = (int) (width * height * 1.25);
                        outputStride = 1;
                    } else if (colorFormat == COLOR_FormatNV21) {
                        // NOTE: the source text breaks off mid-statement here; the rest of this
                        // method is restored from the blog post the article credits above.
                        channelOffset = width * height;
                        outputStride = 2;
                    }
                    break;
            }
            // copy one plane into the output array, honoring row and pixel strides
            ByteBuffer buffer = planes[i].getBuffer();
            int rowStride = planes[i].getRowStride();
            int pixelStride = planes[i].getPixelStride();
            int shift = (i == 0) ? 0 : 1;       // chroma planes are subsampled by 2
            int w = width >> shift;
            int h = height >> shift;
            buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));
            for (int row = 0; row < h; row++) {
                int length;
                if (pixelStride == 1 && outputStride == 1) {
                    length = w;
                    buffer.get(data, channelOffset, length);
                    channelOffset += length;
                } else {
                    length = (w - 1) * pixelStride + 1;
                    buffer.get(rowData, 0, length);
                    for (int col = 0; col < w; col++) {
                        data[channelOffset] = rowData[col * pixelStride];
                        channelOffset += outputStride;
                    }
                }
                if (row < h - 1) {
                    buffer.position(buffer.position() + rowStride - length);
                }
            }
        }
        return data;
    }

    // The two small file helpers referenced above are also missing from the
    // truncated text; these are the versions from the same source.
    private void compressToJpeg(String fileName, Image image) {
        FileOutputStream outStream;
        try {
            outStream = new FileOutputStream(fileName);
        } catch (IOException ioe) {
            throw new RuntimeException("Unable to create output file " + fileName, ioe);
        }
        Rect rect = image.getCropRect();
        YuvImage yuvImage = new YuvImage(getDataFromImage(image, COLOR_FormatNV21), ImageFormat.NV21, rect.width(), rect.height(), null);
        yuvImage.compressToJpeg(rect, 100, outStream);
    }

    private static void dumpFile(String fileName, byte[] data) {
        FileOutputStream outStream;
        try {
            outStream = new FileOutputStream(fileName);
        } catch (IOException ioe) {
            throw new RuntimeException("Unable to create output file " + fileName, ioe);
        }
        try {
            outStream.write(data);
            outStream.close();
        } catch (IOException ioe) {
            throw new RuntimeException("failed writing data to file " + fileName, ioe);
        }
    }
}
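To tie the class together, here is a hedged sketch of how it might be driven. The file paths are placeholders, and the decode loop blocks, so it belongs off the main thread:

// Hypothetical driver: decode a clip and dump each frame as a JPEG.
new Thread(() -> {
    VideoDecode vd = new VideoDecode();
    try {
        vd.setSaveFrames("/sdcard/decoded_frames", VideoDecode.FILE_TypeJPEG);
        vd.VideoDecodePrepare("/sdcard/test.mp4");
        vd.execute();  // runs the decode loop until end-of-stream, then calls close()
    } catch (IOException e) {
        Log.e("VideoToFrames", "decode failed", e);
    }
}).start();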

Part 2: Notes on using logs on Android

I: Building inside the platform source tree

1. Include the header (the angle-bracketed name was stripped from the original; for in-tree code of this era it is cutils/log.h):

    #include <cutils/log.h>

2. Define the LOG_TAG macro:

    #define LOG_TAG "MY LOG TAG"

3. Link against the log library by adding this to Android.mk:

    LOCAL_SHARED_LIBRARIES += libcutils

After that you can call LOGD directly to print log messages.

II: Building with the NDK

1. Include the header:

    #include <android/log.h>

2. Define LOG_TAG and a LOGD macro:

    #define LOG_TAG "MY LOG TAG"
    #define LOGD(...) __android_log_print(ANDROID_LOG_DEBUG, LOG_TAG, __VA_ARGS__)

3. Link against the log library by adding this to Android.mk:

    LOCAL_LDLIBS := -llog

After that you can call LOGD directly to print log messages.

III: In Java code

1. Import the package:

    import android.util.Log;

2. Use it:

    private static final String TAG = "your_tag";
    Log.d(TAG, "show something");

While the program runs, the output can be viewed with the logcat command in adb shell, or in the LogCat window of ADT in Eclipse.

Part 3: Android summary notes

The four major Android components:

Activity: the on-screen interface
Service: background services
BroadcastReceiver: the broadcast mechanism
ContentProvider: data storage and sharing

The Intent class: starts components and carries information between them.

Intents drive the interaction between Activities, Receivers, and Services. An Intent is created with the no-argument constructor and filled in with an action, category, data, and extras. Call startActivity(Intent intent) from an Activity to switch screens; Context.startService(Intent intent) to start a service; Context.registerReceiver(BroadcastReceiver, IntentFilter) to register a broadcast receiver; then sendBroadcast() sends an unordered broadcast and sendOrderedBroadcast() an ordered one. A small sketch of these calls follows.
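The target Activity, extra key, and action string below are made up for illustration:

// Start another screen, passing a string extra along.
Intent intent = new Intent(this, DetailActivity.class);   // hypothetical Activity
intent.putExtra("item_id", "42");                         // hypothetical extra key
startActivity(intent);

// Fire an unordered broadcast with a custom action.
Intent broadcast = new Intent("com.example.ACTION_PING"); // hypothetical action
sendBroadcast(broadcast);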

The Handler class: sends and processes messages, and works with the main thread to update the UI. A Message or Runnable is delivered through a MessageQueue (first in, first out) and dispatched by a Looper; the payload can be a Message object or a Runnable, and messages are received by overriding handleMessage(Message msg). A sketch follows.
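A minimal sketch: a Handler bound to the main looper, with a worker thread posting a result back (textView and doSlowWork are hypothetical):

// Handler on the main thread; handleMessage runs on the UI thread.
private final Handler uiHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        if (msg.what == 1) {
            textView.setText((String) msg.obj);  // hypothetical TextView field
        }
    }
};

void startWork() {
    new Thread(() -> {
        String result = doSlowWork();            // hypothetical blocking call
        uiHandler.obtainMessage(1, result).sendToTarget();
    }).start();
}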

The SharedPreferences class: typically used for first-run settings, or small per-screen preferences such as fonts. It is a small local key-value store shared within the app. Obtain an instance through Context.getSharedPreferences(); everything inside is stored as key-value pairs. Calling edit() on the SharedPreferences object returns a SharedPreferences.Editor; add data with putString() and commit it with commit(); read data back with getString(). A sketch follows.
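A minimal write-then-read sketch; "app_prefs" and "username" are illustrative names:

// Write and read a small preference.
SharedPreferences prefs = context.getSharedPreferences("app_prefs", Context.MODE_PRIVATE);
prefs.edit()
     .putString("username", "alice")
     .commit();                                  // or apply() for an asynchronous write

String name = prefs.getString("username", "");  // second argument is the default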

ViewPager: the class that implements swipeable screens.

Attach an OnPageChangeListener to observe page changes.

Implementation flow:

① declare the ViewPager control in the layout file;

② bind the control in code;

③ subclass the abstract PagerAdapter class to build an adapter and hand it the data source;

④ in the adapter, implement two abstract methods and override two more: getCount(), the number of pages, and isViewFromObject(), whether a view belongs to the given key object; plus destroyItem(), which tears down the view at a position, and instantiateItem(), which creates the view for a position. A skeleton follows this list.
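A skeleton PagerAdapter over a list of pre-built views (class and field names are illustrative; the support-library import depends on the project):

import android.view.View;
import android.view.ViewGroup;
import androidx.viewpager.widget.PagerAdapter;  // or android.support.v4.view.PagerAdapter on older projects
import java.util.List;

public class SimplePagerAdapter extends PagerAdapter {
    private final List<View> pages;

    public SimplePagerAdapter(List<View> pages) {
        this.pages = pages;
    }

    @Override
    public int getCount() {
        return pages.size();
    }

    @Override
    public boolean isViewFromObject(View view, Object key) {
        return view == key;   // instantiateItem returns the view itself as the key
    }

    @Override
    public Object instantiateItem(ViewGroup container, int position) {
        View page = pages.get(position);
        container.addView(page);
        return page;
    }

    @Override
    public void destroyItem(ViewGroup container, int position, Object key) {
        container.removeView((View) key);
    }
}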

The Timer and TimerTask classes:

Timer is the scheduler; create one with the no-argument constructor and call Timer.schedule(TimerTask task, long delay) to run a task after a delay. When the task has run, cancel the timer with Timer.cancel(). TimerTask is abstract: instantiating it requires overriding run(); the work goes inside run(), and it already executes on a background thread.

Custom Views: the classes involved are Paint, Canvas, Rect, RectF ("Spec, SpecF" in the original, presumably a typo), Path, View.MeasureSpec, Timer, and TimerTask.

View is abstract and is used by subclassing. After binding the view in the layout file, set the custom View's attributes in code. In the custom View, override onMeasure to measure the dimensions given by the layout and save them with View's setMeasuredDimension(); override onDraw to paint. To draw animated graphics, use Timer.schedule(TimerTask, long delay, long period), which re-runs run() every period milliseconds after the initial delay; put the expensive work in run(), then refresh the drawing with invalidate() from the main thread or postInvalidate() from a background thread.

Databases:

Common databases include Oracle, a large commercial database that must be installed and configured; MySQL, a mid-size database that also needs installing and configuring but is free; and SQLite, a small, free, embedded database with a low memory footprint (major version 3). A SQLite database is created through a SQLiteOpenHelper subclass and operated on through SQLiteDatabase. The helper is abstract: subclass it, override its two callbacks, and from the subclass constructor invoke the helper constructor (Context context, String name, SQLiteDatabase.CursorFactory factory, int version) to create the database. Create tables in onCreate() and handle version upgrades in onUpgrade(). To operate on the database, call execSQL() (and the query methods) with SQL statements, for example:

    create table student (_id integer primary key autoincrement, name text);
    insert into student (_id, name) values (1, 'zx');
    delete from student where _id = 1;
    update student set _id = 2 where name = 'zx';
    select * from student;

A helper sketch follows.
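A minimal SQLiteOpenHelper; the database name, table, and upgrade policy are illustrative:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class StudentDbHelper extends SQLiteOpenHelper {
    public StudentDbHelper(Context context) {
        super(context, "school.db", null, 1);   // hypothetical database name and version
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("create table student (_id integer primary key autoincrement, name text)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("drop table if exists student");  // naive upgrade: recreate the table
        onCreate(db);
    }
}

Usage: new StudentDbHelper(context).getWritableDatabase() returns the SQLiteDatabase on which execSQL() and the query methods run.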

Optimizing ListView and GridView adapters:

Wrap the row's controls in a holder class. When the view is requested, check whether the convertView already exists: if not, inflate it from the layout file, create the holder, and attach its address with setTag(); if it does exist, reuse it via getTag(). A sketch follows.
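The classic convertView + ViewHolder pattern inside an adapter's getView(); R.layout.row_item and R.id.title are illustrative resource names:

@Override
public View getView(int position, View convertView, ViewGroup parent) {
    ViewHolder holder;
    if (convertView == null) {
        convertView = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.row_item, parent, false);
        holder = new ViewHolder();
        holder.title = (TextView) convertView.findViewById(R.id.title);
        convertView.setTag(holder);                  // remember the looked-up views
    } else {
        holder = (ViewHolder) convertView.getTag();  // reuse; skip findViewById
    }
    holder.title.setText(getItem(position).toString());
    return convertView;
}

static class ViewHolder {
    TextView title;
}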

Reflection:

Ways to store data: shared preferences, databases, files, the network, and content providers.

Broadcasts:

A broadcast needs a receiver, a sender, and a channel. By how they are sent, broadcasts are either ordered or unordered. In an ordered broadcast, receivers take turns according to their priority; each receiver is also a sender that passes the message on to those behind it, and along the way it may add information or stop the broadcast entirely. In an unordered broadcast, the receivers are independent and all get the message straight from the sender; nothing can be added in transit and the broadcast cannot be stopped. Receivers must be registered before a broadcast is sent, either dynamically or statically. Dynamic registration happens in code through a Context method; every kind of broadcast can be registered this way, the registration lives and dies with the application, and it is cheaper on memory than static registration. Static registration happens in the manifest; some system broadcasts cannot be registered statically, and the registration's lifecycle follows the system, so the receiver runs from system start and costs more memory. A receiver subclasses BroadcastReceiver and implements the abstract onReceive(), where the data arrives through the callback. Note: registration must precede sending, and an explicitly targeted broadcast needs no intent filter. A dynamic-registration sketch follows.
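Dynamically registering and unregistering a receiver inside an Activity; the action string is hypothetical:

// Register in onStart(), unregister symmetrically in onStop().
private final BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        Log.d("Demo", "got broadcast: " + intent.getAction());
    }
};

@Override
protected void onStart() {
    super.onStart();
    registerReceiver(receiver, new IntentFilter("com.example.ACTION_PING")); // hypothetical action
}

@Override
protected void onStop() {
    super.onStop();
    unregisterReceiver(receiver);  // avoid leaking the receiver
}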

Android layouts (nine kinds):

Linear layout: horizontal or vertical; its key feature is weight, which divides space among the children proportionally.
Relative layout: children are placed relative to the parent or to sibling views.
Absolute layout: fixed absolute positions; practically obsolete.
Table layout: children sit in rows (TableRow), with the columns formed by the controls.
Frame layout: each child is stacked over the previous ones; typically used for overlays such as a pause button.
Grid layout ("风格布局" in the original, likely a typo for 网格布局): children may span rows and columns and wrap when a line is full.
Side-sliding menus: a main menu plus left and right secondary menus.
Pull-to-refresh: pull down to refresh, pull up to load more.
Drawer layout.

Android versions and their API levels (corrected):

1.6 → 4; 2.1 → 7; 3.0 → 11; 4.0.3 → 15; 4.3 → 18; 5.0 → 21; 5.1 → 22; 6.0 → 23; 7.0 → 24; 7.1 → 25.

The four layers of the Android stack:

Application layer: Java; where app development happens.
Framework layer: Java and C; views, manager classes, and the other APIs apps build on.
Libraries and runtime layer: C and C++; the native frameworks, browser engine, and similar components.
Kernel layer: Linux and C; the drivers.

The four major components:

Activity: a screen; implements the interaction between the program and the user; has its own lifecycle (seven callbacks) and four launch modes.

Service: background services.

BroadcastReceiver: three elements, the sender, the receiver, and the channel (an Intent). Types: ordered (receivers take turns; data can be passed along and intercepted) and unordered. Registration: static registration in the manifest is a persistent listener with a higher memory cost and a lifecycle tied to the system (some system broadcasts cannot be registered statically); dynamic registration in code covers every broadcast type, is a temporary listener, costs less memory, and lives with the application.

ContentProvider: does not itself store data; it fronts one of the five storage approaches. Its points: ① it puts a uniform interface in front of data access; ② it allows data to be accessed across applications; ③ it is how the contacts, messages, audio, and video on an Android device are read or modified. Data is accessed through a ContentResolver, which supports insert, delete, update, and query.

Further topics, left as headings in the original notes: animation; IO streams; serialization; AlertDialog; Set implementations; detecting battery level; custom SurfaceView; custom View (the difference between the three constructors); Message (Handler.obtainMessage() vs. new Message() vs. Message.obtain()); HttpURLConnection for network access; GridView; async tasks; animation; abstract classes and interfaces; reflection; cloning; serialization; implementing side sliding; databases; Socket; Gson parsing; the difference between AsyncTask and a plain worker thread; WebView; version updates; rounding the corners of photos; Collection vs. Collections; SQL statements; MVP vs. MVC; the difference between TCP and UDP; the one-tap-share flow; understanding the HTTP protocol; accessing the network without a framework; List vs. Set; the custom-View workflow; features of linear layout; how ViewPager works; ways of starting a Service; Activity launch modes; XML parsing.

Part 4: Android Multimedia framework notes (22): MediaCodec in C++, from creation to start(), and the state transitions

This installment digs into the C++ side of the source. The agenda:

mediacodec.h
CreateByType
init
the BufferInfo inner class
the configure process
start

BufferInfo corresponds to a struct in MediaCodec.h:

// create by 逆流的鱼yuiop on 2016/12/11, blog: http://blog.csdn.net/hejjunlin
struct BufferInfo {
    uint32_t mBufferID;
    sp<ABuffer> mData;
    sp<ABuffer> mEncryptedData;
    sp<IMemory> mSharedEncryptedBuffer;
    sp<AMessage> mNotify;
    sp<AMessage> mFormat;
    bool mOwnedByClient;
};

The declarations in MediaCodec.h live under frameworks/av/include/media/stagefright (the template arguments, stripped by the page extraction, are restored below from the AOSP header of this era):

namespace android {

struct ABuffer;
struct AMessage;
struct AReplyToken;
struct AString;
struct CodecBase;
struct IBatteryStats;
struct ICrypto;
class IMemory;
struct MemoryDealer;
class IResourceManagerClient;
class IResourceManagerService;
struct PersistentSurface;
struct SoftwareRenderer;
struct Surface;

struct MediaCodec : public AHandler {
    enum ConfigureFlags {
        CONFIGURE_FLAG_ENCODE = 1,
    };

    enum BufferFlags {
        BUFFER_FLAG_SYNCFRAME   = 1,
        BUFFER_FLAG_CODECCONFIG = 2,
        BUFFER_FLAG_EOS         = 4,
    };

    enum {
        CB_INPUT_AVAILABLE = 1,
        CB_OUTPUT_AVAILABLE = 2,
        CB_ERROR = 3,
        CB_OUTPUT_FORMAT_CHANGED = 4,
        CB_RESOURCE_RECLAIMED = 5,
    };

    static const pid_t kNoPid = -1;

    static sp<MediaCodec> CreateByType(
            const sp<ALooper> &looper, const char *mime, bool encoder,
            status_t *err = NULL, pid_t pid = kNoPid);

    static sp<MediaCodec> CreateByComponentName(
            const sp<ALooper> &looper, const char *name,
            status_t *err = NULL, pid_t pid = kNoPid);

    static sp<PersistentSurface> CreatePersistentInputSurface();

    status_t configure(
            const sp<AMessage> &format,
            const sp<Surface> &nativeWindow,
            const sp<ICrypto> &crypto,
            uint32_t flags);

    status_t setCallback(const sp<AMessage> &callback);

    status_t setOnFrameRenderedNotification(const sp<AMessage> &notify);

    status_t createInputSurface(sp<IGraphicBufferProducer> *bufferProducer);

    status_t setInputSurface(const sp<PersistentSurface> &surface);

    status_t start();

    // Returns to a state in which the component remains allocated but
    // unconfigured.
    status_t stop();

    // Resets the codec to the INITIALIZED state. Can be called after an error
    // has occurred to make the codec usable.
    status_t reset();

    // Client MUST call release before releasing final reference to this
    // object.
    status_t release();

    status_t flush();

    status_t queueInputBuffer(
            size_t index, size_t offset, size_t size,
            int64_t presentationTimeUs, uint32_t flags,
            AString *errorDetailMsg = NULL);

    status_t queueSecureInputBuffer(
            size_t index, size_t offset,
            const CryptoPlugin::SubSample *subSamples, size_t numSubSamples,
            const uint8_t key[16], const uint8_t iv[16],
            CryptoPlugin::Mode mode,
            int64_t presentationTimeUs, uint32_t flags,
            AString *errorDetailMsg = NULL);

    status_t dequeueInputBuffer(size_t *index, int64_t timeoutUs = 0ll);

    status_t dequeueOutputBuffer(
            size_t *index, size_t *offset, size_t *size,
            int64_t *presentationTimeUs, uint32_t *flags,
            int64_t timeoutUs = 0ll);

    status_t renderOutputBufferAndRelease(size_t index, int64_t timestampNs);
    status_t renderOutputBufferAndRelease(size_t index);

    status_t releaseOutputBuffer(size_t index);

    status_t signalEndOfInputStream();

    status_t getOutputFormat(sp<AMessage> *format) const;
    status_t getInputFormat(sp<AMessage> *format) const;

    status_t getWidevineLegacyBuffers(Vector<sp<ABuffer> > *buffers) const;

    status_t getInputBuffers(Vector<sp<ABuffer> > *buffers) const;
    status_t getOutputBuffers(Vector<sp<ABuffer> > *buffers) const;

    status_t getOutputBuffer(size_t index, sp<ABuffer> *buffer);
    status_t getOutputFormat(size_t index, sp<AMessage> *format);
    status_t getInputBuffer(size_t index, sp<ABuffer> *buffer);

    status_t setSurface(const sp<Surface> &nativeWindow);

    status_t requestIDRFrame();

    // Notification will be posted once there "is something to do", i.e.
    // an input/output buffer has become available, a format change is
    // pending, an error is pending.
    void requestActivityNotification(const sp<AMessage> &notify);

    status_t getName(AString *componentName) const;

    status_t setParameters(const sp<AMessage> &params);

    // Create a MediaCodec notification message from a list of rendered or dropped render infos
    // by adding rendered frame information to a base notification message. Returns the number
    // of frames that were rendered.
    static size_t CreateFramesRenderedMessage(
            std::list<FrameRenderTracker::Info> done, sp<AMessage> &msg);

protected:
    virtual ~MediaCodec();
    virtual void onMessageReceived(const sp<AMessage> &msg);

private:
    // used by ResourceManagerClient
    status_t reclaim(bool force = false);
    friend struct ResourceManagerClient;

private:
    enum State {
        UNINITIALIZED,
        INITIALIZING,
        INITIALIZED,
        CONFIGURING,
        CONFIGURED,
        STARTING,
        STARTED,
        FLUSHING,
        FLUSHED,
        STOPPING,
        RELEASING,
    };

    enum {
        kPortIndexInput  = 0,
        kPortIndexOutput = 1,
    };

    enum {
        kWhatInit                        = 'init',
        kWhatConfigure                   = 'conf',
        kWhatSetSurface                  = 'sSur',
        kWhatCreateInputSurface          = 'cisf',
        kWhatSetInputSurface             = 'sisf',
        kWhatStart                       = 'strt',
        kWhatStop                        = 'stop',
        kWhatRelease                     = 'rele',
        kWhatDequeueInputBuffer          = 'deqI',
        kWhatQueueInputBuffer            = 'queI',
        kWhatDequeueOutputBuffer         = 'deqO',
        kWhatReleaseOutputBuffer         = 'relO',
        kWhatSignalEndOfInputStream      = 'eois',
        kWhatGetBuffers                  = 'getB',
        kWhatFlush                       = 'flus',
        kWhatGetOutputFormat             = 'getO',
        kWhatGetInputFormat              = 'getI',
        kWhatDequeueInputTimedOut        = 'dITO',
        kWhatDequeueOutputTimedOut       = 'dOTO',
        kWhatCodecNotify                 = 'codc',
        kWhatRequestIDRFrame             = 'ridr',
        kWhatRequestActivityNotification = 'racN',
        kWhatGetName                     = 'getN',
        kWhatSetParameters               = 'setP',
        kWhatSetCallback                 = 'setC',
        kWhatSetNotification             = 'setN',
    };

    enum {
        kFlagUsesSoftwareRenderer       = 1,
        kFlagOutputFormatChanged        = 2,
        kFlagOutputBuffersChanged       = 4,
        kFlagStickyError                = 8,
        kFlagDequeueInputPending        = 16,
        kFlagDequeueOutputPending       = 32,
        kFlagIsSecure                   = 64,
        kFlagSawMediaServerDie          = 128,
        kFlagIsEncoder                  = 256,
        kFlagGatherCodecSpecificData    = 512,
        kFlagIsAsync                    = 1024,
        kFlagIsComponentAllocated       = 2048,
        kFlagPushBlankBuffersOnShutdown = 4096,
    };

    struct BufferInfo {
        uint32_t mBufferID;
        sp<ABuffer> mData;
        sp<ABuffer> mEncryptedData;
        sp<IMemory> mSharedEncryptedBuffer;
        sp<AMessage> mNotify;
        sp<AMessage> mFormat;
        bool mOwnedByClient;
    };

    struct ResourceManagerServiceProxy : public IBinder::DeathRecipient {
        ResourceManagerServiceProxy(pid_t pid);
        ~ResourceManagerServiceProxy();

        void init();

        // implements DeathRecipient
        virtual void binderDied(const wp<IBinder>& /*who*/);

        void addResource(
                int64_t clientId,
                const sp<IResourceManagerClient> client,
                const Vector<MediaResource> &resources);

        void removeResource(int64_t clientId);

        bool reclaimResource(const Vector<MediaResource> &resources);

    private:
        Mutex mLock;
        sp<IResourceManagerService> mService;
        pid_t mPid;
    };

    State mState;
    bool mReleasedByResourceManager;
    sp<ALooper> mLooper;
    sp<ALooper> mCodecLooper;
    sp<CodecBase> mCodec;
    AString mComponentName;
    sp<AReplyToken> mReplyID;
    uint32_t mFlags;
    status_t mStickyError;
    sp<Surface> mSurface;
    SoftwareRenderer *mSoftRenderer;

    sp<AMessage> mOutputFormat;
    sp<AMessage> mInputFormat;
    sp<AMessage> mCallback;
    sp<AMessage> mOnFrameRenderedNotification;
    sp<MemoryDealer> mDealer;

    sp<IResourceManagerClient> mResourceManagerClient;
    sp<ResourceManagerServiceProxy> mResourceManagerService;

    bool mBatteryStatNotified;
    bool mIsVideo;
    int32_t mVideoWidth;
    int32_t mVideoHeight;
    int32_t mRotationDegrees;

    // initial create parameters
    AString mInitName;
    bool mInitNameIsType;
    bool mInitIsEncoder;

    // configure parameter
    sp<AMessage> mConfigureMsg;

    // Used only to synchronize asynchronous getBufferAndFormat
    // across all the other (synchronous) buffer state change
    // operations, such as de/queueIn/OutputBuffer, start and
    // stop/flush/reset/release.
    Mutex mBufferLock;

    List<size_t> mAvailPortBuffers[2];
    Vector<BufferInfo> mPortBuffers[2];

    int32_t mDequeueInputTimeoutGeneration;
    sp<AReplyToken> mDequeueInputReplyID;

    int32_t mDequeueOutputTimeoutGeneration;
    sp<AReplyToken> mDequeueOutputReplyID;

    sp<ICrypto> mCrypto;

    List<sp<ABuffer> > mCSD;

    sp<AMessage> mActivityNotify;

    bool mHaveInputSurface;
    bool mHavePendingInputBuffers;

    MediaCodec(const sp<ALooper> &looper, pid_t pid);

    static status_t PostAndAwaitResponse(
            const sp<AMessage> &msg, sp<AMessage> *response);

    void PostReplyWithError(const sp<AReplyToken> &replyID, int32_t err);

    status_t init(const AString &name, bool nameIsType, bool encoder);

    void setState(State newState);
    void returnBuffersToCodec();
    void returnBuffersToCodecOnPort(int32_t portIndex);
    size_t updateBuffers(int32_t portIndex, const sp<AMessage> &msg);
    status_t onQueueInputBuffer(const sp<AMessage> &msg);
    status_t onReleaseOutputBuffer(const sp<AMessage> &msg);
    ssize_t dequeuePortBuffer(int32_t portIndex);

    status_t getBufferAndFormat(
            size_t portIndex, size_t index,
            sp<ABuffer> *buffer, sp<AMessage> *format);

    bool handleDequeueInputBuffer(const sp<AReplyToken> &replyID, bool newRequest = false);
    bool handleDequeueOutputBuffer(const sp<AReplyToken> &replyID, bool newRequest = false);
    void cancelPendingDequeueOperations();

    void extractCSD(const sp<AMessage> &format);
    status_t queueCSDInputBuffer(size_t bufferIndex);

    status_t handleSetSurface(const sp<Surface> &surface);
    status_t connectToSurface(const sp<Surface> &surface);
    status_t disconnectFromSurface();

    void postActivityNotificationIfPossible();

    void onInputBufferAvailable();
    void onOutputBufferAvailable();
    void onError(status_t err, int32_t actionCode, const char *detail = NULL);
    void onOutputFormatChanged();

    status_t onSetParameters(const sp<AMessage> &params);

    status_t amendOutputFormatWithCodecSpecificData(const sp<ABuffer> &buffer);
    void updateBatteryStat();
    bool isExecuting() const;

    uint64_t getGraphicBufferSize();
    void addResource(const String8 &type, const String8 &subtype, uint64_t value);

    bool hasPendingBuffer(int portIndex);
    bool hasPendingBuffer();

    /* called to get the last codec error when the sticky flag is set.
     * if no such codec error is found, returns UNKNOWN_ERROR.
     */
    inline status_t getStickyError() const {
        return mStickyError != 0 ? mStickyError : UNKNOWN_ERROR;
    }

    inline void setStickyError(status_t err) {
        mFlags |= kFlagStickyError;
        mStickyError = err;
    }

    DISALLOW_EVIL_CONSTRUCTORS(MediaCodec);
};

}  // namespace android

CreateByType:

// static
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const char *mime, bool encoder, status_t *err, pid_t pid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid);  // this is where the MediaCodec object is actually allocated

    const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL;  // NULL deallocates codec.
}

Next, the init process:

status_t MediaCodec::init(const AString &name, bool nameIsType, bool encoder) {
    mResourceManagerService->init();

    // save init parameters for reset
    mInitName = name;
    mInitNameIsType = nameIsType;
    mInitIsEncoder = encoder;

    // Current video decoders do not return from OMX_FillThisBuffer
    // quickly, violating the OpenMAX specs; until that is remedied
    // we need to invest in an extra looper to free the main event
    // queue.

    if (nameIsType || !strncasecmp(name.c_str(), "omx.", 4)) {   // "omx." matched
        mCodec = new ACodec;        // instantiate ACodec
    } else if (!nameIsType
            && !strncasecmp(name.c_str(), "android.filter.", 15)) {
        mCodec = new MediaFilter;   // instantiate MediaFilter
    } else {
        return NAME_NOT_FOUND;
    }

    bool secureCodec = false;
    if (nameIsType && !strncasecmp(name.c_str(), "video/", 6)) {
        mIsVideo = true;
    } else {
        AString tmp = name;
        if (tmp.endsWith(".secure")) {
            secureCodec = true;
            tmp.erase(tmp.size() - 7, 7);
        }
        const sp<IMediaCodecList> mcl = MediaCodecList::getInstance();
        if (mcl == NULL) {
            mCodec = NULL;  // remove the codec.
            return NO_INIT; // if called from Java should raise IOException
        }
        ssize_t codecIdx = mcl->findCodecByName(tmp.c_str());
        if (codecIdx >= 0) {
            const sp<MediaCodecInfo> info = mcl->getCodecInfo(codecIdx);
            Vector<AString> mimes;
            info->getSupportedMimes(&mimes);
            for (size_t i = 0; i < mimes.size(); i++) {
                if (mimes[i].startsWith("video/")) {
                    mIsVideo = true;
                    break;
                }
            }
        }
    }

    if (mIsVideo) {
        // video codec needs dedicated looper
        if (mCodecLooper == NULL) {
            mCodecLooper = new ALooper;
            mCodecLooper->setName("CodecLooper");   // name the looper "CodecLooper"
            mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
        }
        mCodecLooper->registerHandler(mCodec);
    } else {
        mLooper->registerHandler(mCodec);
    }

    mLooper->registerHandler(this);

    mCodec->setNotificationMessage(new AMessage(kWhatCodecNotify, this));

    sp<AMessage> msg = new AMessage(kWhatInit, this);
    msg->setString("name", name);
    msg->setInt32("nameIsType", nameIsType);

    if (nameIsType) {
        msg->setInt32("encoder", encoder);
    }

    status_t err;
    Vector<MediaResource> resources;
    const char *type = secureCodec ? kResourceSecureCodec : kResourceNonSecureCodec;
    const char *subtype = mIsVideo ? kResourceVideoCodec : kResourceAudioCodec;
    resources.push_back(MediaResource(String8(type), String8(subtype), 1));
    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
        }

        sp<AMessage> response;
        err = PostAndAwaitResponse(msg, &response);
        if (!isResourceError(err)) {
            break;
        }
    }
    return err;
}

The configure process:

status_t MediaCodec::configure(
        const sp<AMessage> &format,
        const sp<Surface> &surface,
        const sp<ICrypto> &crypto,
        uint32_t flags) {
    sp<AMessage> msg = new AMessage(kWhatConfigure, this);

    if (mIsVideo) {
        format->findInt32("width", &mVideoWidth);
        format->findInt32("height", &mVideoHeight);
        if (!format->findInt32("rotation-degrees", &mRotationDegrees)) {
            mRotationDegrees = 0;
        }
    }

    msg->setMessage("format", format);
    msg->setInt32("flags", flags);
    msg->setObject("surface", surface);

    if (crypto != NULL) {
        msg->setPointer("crypto", crypto.get());
    }

    // save msg for reset
    mConfigureMsg = msg;

    status_t err;
    Vector<MediaResource> resources;
    const char *type = (mFlags & kFlagIsSecure) ?
            kResourceSecureCodec : kResourceNonSecureCodec;
    const char *subtype = mIsVideo ? kResourceVideoCodec : kResourceAudioCodec;
    resources.push_back(MediaResource(String8(type), String8(subtype), 1));
    // Don't know the buffer size at this point, but it's fine to use 1 because
    // the reclaimResource call doesn't consider the requester's buffer size for now.
    resources.push_back(MediaResource(String8(kResourceGraphicMemory), 1));
    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
        }

        sp<AMessage> response;
        err = PostAndAwaitResponse(msg, &response);
        if (err != OK && err != INVALID_OPERATION) {
            // MediaCodec now set state to UNINITIALIZED upon any fatal error.
            // To maintain backward-compatibility, do a reset() to put codec
            // back into INITIALIZED state.
            // But don't reset if the err is INVALID_OPERATION, which means
            // the configure failure is due to wrong state.
            ALOGE("configure failed with err 0x%08x, resetting...", err);
            reset();
        }
        if (!isResourceError(err)) {
            break;
        }
    }
    return err;
}

The start process:

status_t MediaCodec::start() {
    sp<AMessage> msg = new AMessage(kWhatStart, this);

    status_t err;
    Vector<MediaResource> resources;
    const char *type = (mFlags & kFlagIsSecure) ?
            kResourceSecureCodec : kResourceNonSecureCodec;
    const char *subtype = mIsVideo ? kResourceVideoCodec : kResourceAudioCodec;
    resources.push_back(MediaResource(String8(type), String8(subtype), 1));
    // Don't know the buffer size at this point, but it's fine to use 1 because
    // the reclaimResource call doesn't consider the requester's buffer size for now.
    resources.push_back(MediaResource(String8(kResourceGraphicMemory), 1));
    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
            // Recover codec from previous error before retry start.
            err = reset();
            if (err != OK) {
                ALOGE("retrying start: failed to reset codec");
                break;
            }
            sp<AMessage> response;
            err = PostAndAwaitResponse(mConfigureMsg, &response);
            if (err != OK) {
                ALOGE("retrying start: failed to configure codec");
                break;
            }
        }

        sp<AMessage> response;
        err = PostAndAwaitResponse(msg, &response);
        if (!isResourceError(err)) {
            break;
        }
    }
    return err;
}

The stop process:

status_t MediaCodec::stop() {
    sp<AMessage> msg = new AMessage(kWhatStop, this);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

On the native side there is a matching AMessage.cpp, with corresponding AHandler.cpp and ALooper.cpp; together they form the C++ counterpart of the Java message-loop machinery, and the method names mirror the Java layer.

Every message is handled in onMessageReceived(), which drives MediaCodec's state transitions. The excerpt below, lightly repaired, comes from the error-handling path of that method (kWhatCodecNotify / CodecBase::kWhatError); err, actionCode, and sendErrorResponse are locals of that handler:

// How an error reported by the codec moves the state machine.
switch (mState) {
    case INITIALIZING:   // while initializing
    {
        setState(UNINITIALIZED);
        break;
    }

    case CONFIGURING:    // while configuring
    {
        setState(actionCode == ACTION_CODE_FATAL ?
                UNINITIALIZED : INITIALIZED);
        break;
    }

    case STARTING:       // while starting
    {
        setState(actionCode == ACTION_CODE_FATAL ?
                UNINITIALIZED : CONFIGURED);
        break;
    }

    case STOPPING:       // while stopping
    case RELEASING:      // while releasing
    {
        // Ignore the error, assuming we'll still get
        // the shutdown complete notification.
        sendErrorResponse = false;

        if (mFlags & kFlagSawMediaServerDie) {
            // MediaServer died, there definitely won't
            // be a shutdown complete notification after
            // all.
            // note that we're directly going from
            // STOPPING->UNINITIALIZED, instead of the
            // usual STOPPING->INITIALIZED state.
            setState(UNINITIALIZED);
            if (mState == RELEASING) {
                mComponentName.clear();
            }
            (new AMessage)->postReply(mReplyID);
        }
        break;
    }

    case FLUSHING:       // while flushing
    {
        if (actionCode == ACTION_CODE_FATAL) {
            setState(UNINITIALIZED);
        } else {
            setState((mFlags & kFlagIsAsync) ? FLUSHED : STARTED);
        }
        break;
    }

    case FLUSHED:
    case STARTED:
    {
        sendErrorResponse = false;

        setStickyError(err);
        postActivityNotificationIfPossible();

        cancelPendingDequeueOperations();

        if (mFlags & kFlagIsAsync) {
            onError(err, actionCode);
        }
        switch (actionCode) {
            case ACTION_CODE_TRANSIENT:
                break;
            case ACTION_CODE_RECOVERABLE:
                setState(INITIALIZED);
                break;
            default:
                setState(UNINITIALIZED);
                break;
        }
        break;
    }

    default:
    {
        sendErrorResponse = false;

        setStickyError(err);
        postActivityNotificationIfPossible();

        // actionCode in an uninitialized state is always fatal.
        if (mState == UNINITIALIZED) {
            actionCode = ACTION_CODE_FATAL;
        }
        if (mFlags & kFlagIsAsync) {
            onError(err, actionCode);
        }
        switch (actionCode) {
            case ACTION_CODE_TRANSIENT:
                break;
            case ACTION_CODE_RECOVERABLE:
                setState(INITIALIZED);
                break;
            default:
                setState(UNINITIALIZED);
                break;
        }
        break;
    }
}

Part 5: Android WebView notes

1. Add the permission: AndroidManifest.xml must declare "android.permission.INTERNET", otherwise the WebView fails with a "web page not available" error.

2. Create a WebView in your Activity: WebView webView = new WebView(this);

3. Configure the WebView:

If the pages being loaded use JavaScript, the WebView must enable it:

    webView.getSettings().setJavaScriptEnabled(true);

To make touch focus work:

    requestFocus();

To hide the scroll bars:

    this.setScrollBarStyle(SCROLLBARS_OUTSIDE_OVERLAY);

4. Point the WebView at a page:

For pages on the Internet, call webView.loadUrl(""); local files are stored in the assets directory.

5. If you want link clicks handled by your own code rather than opened in the system browser, attach a WebViewClient listener to the WebView and override some of its methods:

shouldOverrideUrlLoading: responds to hyperlinks in the page; when a link is tapped, the WebViewClient calls this method and passes in the tapped URL;
onLoadResource;
onPageStarted;
onPageFinished;
onReceivedError;
onReceivedHttpAuthRequest.

A sketch follows.
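A minimal sketch of such a client; the log tag is illustrative, and this uses the pre-API-24 shouldOverrideUrlLoading(WebView, String) signature:

// Keep navigation inside the WebView and log page completion.
webView.setWebViewClient(new WebViewClient() {
    @Override
    public boolean shouldOverrideUrlLoading(WebView view, String url) {
        view.loadUrl(url);   // load the tapped link in this WebView
        return true;         // true = handled here; don't open the system browser
    }

    @Override
    public void onPageFinished(WebView view, String url) {
        Log.d("WebViewDemo", "finished loading " + url);
    }
});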

6. After following links through many pages, pressing the system Back key will by default finish() the whole Activity and exit the browser. To make Back step the WebView's history backwards instead of exiting, handle and consume the key event in the current Activity by overriding Activity's onKeyDown(int keyCode, KeyEvent event) method:

覆盖Activity类的onKeyDown(int keyCoder,KeyEvent event)方法。

public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (webView.canGoBack() && keyCode == KeyEvent.KEYCODE_BACK) {
        webView.goBack();  // goBack() returns the WebView to the previous page
        return true;
    }
    return false;
}
