Introduction
Android now supports devices with 512MB of RAM. This documentation is intended to help OEMs optimize and configure Android 4.4 for low-memory devices. Several of these optimizations are generic enough that they can be applied to previous releases as well.
Android 4.4 platform optimizations
Improved memory management
- Validated memory-saving kernel configurations: Kernel Same-page Merging (KSM), and Swap to ZRAM.
- Kill cached processes if about to be uncached and too large.
- Don’t allow large services to put themselves back into A Services (so they can’t cause the launcher to be killed).
- Kill processes (even ordinarily unkillable ones such as the current IME) that get too large in idle maintenance.
- Serialize the launch of background services.
- Tuned memory use of low-RAM devices: tighter out-of-memory (OOM) adjustment levels, smaller graphics caches, etc.
Reduced system memory
- Trimmed system_server and SystemUI processes (saved several MBs).
- Preloaded dex caches in Dalvik (saved several MBs).
- Validated JIT-off option (saves up to 1.5MB per process).
- Reduced per-process font cache overhead.
- Introduced ArrayMap/ArraySet and used them extensively in the framework as a lighter-footprint replacement for HashMap/HashSet (a usage sketch follows this list).
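As an illustration of the lighter-footprint collections, a minimal sketch of dropping ArrayMap in where a small HashMap would otherwise be used (the class and keys here are hypothetical, assuming API level 19+):

```java
import android.util.ArrayMap;

public class CacheBudgets {
    // ArrayMap keeps its keys and values in packed arrays rather than
    // per-entry hash buckets, which lowers overhead for small maps.
    public static ArrayMap<String, Integer> defaults() {
        ArrayMap<String, Integer> budgets = new ArrayMap<>();
        budgets.put("thumbnailCacheMb", 2);
        budgets.put("bitmapCacheMb", 4);
        return budgets;
    }
}
```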
Procstats
Added a new Developer Option that shows memory state and application memory usage, ranked by how often processes run and how much memory they consume.
API
Added ActivityManager.isLowRamDevice() to let applications detect when they are running on a low-memory device and choose to disable large-RAM features.
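For reference, a minimal sketch of how an application might branch on this API; the activity and feature flag are hypothetical:

```java
import android.app.Activity;
import android.app.ActivityManager;
import android.content.Context;
import android.os.Bundle;

public class MainActivity extends Activity {
    private boolean mEnableLargeRamFeatures;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ActivityManager am =
                (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        // isLowRamDevice() reflects the ro.config.low_ram property set by the OEM.
        mEnableLargeRamFeatures = !am.isLowRamDevice();
    }
}
```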
Memory tracking
Added a new memtrack HAL to track graphics memory allocations, added more information to dumpsys meminfo, and clarified the meminfo summaries (for example, reported free RAM now includes the RAM of cached processes, so that OEMs don't try to optimize the wrong thing).
Build-time configuration
Enable Low Ram Device flag
We are introducing a new API, ActivityManager.isLowRamDevice(), for applications to determine whether they should turn off specific memory-intensive features that work poorly on low-memory devices.
For 512MB devices, this API is expected to return "true". It is driven by the following system property, set in the device makefile:
PRODUCT_PROPERTY_OVERRIDES += ro.config.low_ram=true
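A quick way to confirm the flag made it into a built image (a spot check, not part of the official flow):

```
adb shell getprop ro.config.low_ram
```

On a properly configured 512MB device this prints true, and ActivityManager.isLowRamDevice() will return true to applications.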
Disable JIT
System-wide JIT memory usage depends on the number of applications running and the code footprint of those applications. The JIT establishes a maximum translated code cache size and touches the pages within it as needed. JIT costs somewhere between 3MB and 6MB across a typical running system.
Large apps tend to max out the code cache fairly quickly (which by default has been 1MB). On average, JIT cache usage runs somewhere between 100KB and 200KB per app. Reducing the maximum size of the cache can help somewhat with memory usage, but if it is set too low it will send the JIT into a thrashing mode. For really low-memory devices, we recommend disabling the JIT entirely.
This can be achieved by adding the following line to the product makefile:
PRODUCT_PROPERTY_OVERRIDES += dalvik.vm.jit.codecachesize=0
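To confirm the override is present on a built image, one possible spot check:

```
adb shell getprop dalvik.vm.jit.codecachesize
```

This should print 0 when the JIT is disabled this way.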
Launcher configuration
Ensure the default wallpaper configured on the launcher is not a live wallpaper. Low-memory devices should not preinstall any live wallpapers.
Kernel configuration
Tuning kernel/ActivityManager to reduce direct reclaim
Direct reclaim happens when a process or the kernel tries to allocate a page of memory (either directly or by faulting in a new page) and the kernel has used all available free memory. This requires the kernel to block the allocation while it frees up a page, which often requires disk I/O to flush out a dirty file-backed page, or waiting for lowmemorykiller to kill a process. It can result in extra I/O in any thread, including a UI thread.
To avoid direct reclaim, the kernel has watermarks that trigger kswapd, or background reclaim. This is a thread that tries to free up pages so that the next time a real thread allocates a page it can succeed quickly.
The default threshold for triggering background reclaim is fairly low, around 2MB on a 2GB device and 636KB on a 512MB device, and the kernel reclaims only a few MB of memory in background reclaim. This means any process that quickly allocates more than a few megabytes is going to hit direct reclaim.
Support for a new kernel tunable is added in the android-3.4 kernel branch as patch 92189d47f66c67e5fd92eafaa287e153197a454f ("add extra free kbytes tunable"). Cherry-picking this patch to a device's kernel will allow ActivityManager to tell the kernel to try to keep 3 full-screen 32 bpp buffers of memory free.
These thresholds can be configured via the framework config.xml:
<!-- Device configuration setting the /proc/sys/vm/extra_free_kbytes tunable in the kernel (if it exists). A high value will increase the amount of memory that the kernel tries to keep free, reducing allocation time and causing the lowmemorykiller to kill earlier. A low value allows more memory to be used by processes but may cause more allocations to block waiting on disk I/O or lowmemorykiller. Overrides the default value chosen by ActivityManager based on screen size. 0 prevents keeping any extra memory over what the kernel keeps by default. -1 keeps the default. -->
<integer name="config_extraFreeKbytesAbsolute">-1</integer>
<!-- Device configuration adjusting the /proc/sys/vm/extra_free_kbytes tunable in the kernel (if it exists). 0 uses the default value chosen by ActivityManager. A positive value will increase the amount of memory that the kernel tries to keep free, reducing allocation time and causing the lowmemorykiller to kill earlier. A negative value allows more memory to be used by processes but may cause more allocations to block waiting on disk I/O or lowmemorykiller. Directly added to the default value chosen by ActivityManager based on screen size. -->
<integer name="config_extraFreeKbytesAdjust">0</integer>
Tuning LowMemoryKiller
ActivityManager configures the thresholds of the LowMemoryKiller to match its expectation of the working set of file-backed pages (cached pages) required to run the processes in each priority level bucket. If a device has high requirements for the working set, for example if the vendor UI requires more memory or if more services have been added, the thresholds can be increased.
The thresholds can be reduced if too much memory is being reserved for file backed pages, so that background processes are being killed long before disk thrashing would occur due to the cache getting too small.
<!-- Device configuration setting the minfree tunable in the lowmemorykiller in the kernel. A high value will cause the lowmemorykiller to fire earlier, keeping more memory in the file cache and preventing I/O thrashing, but allowing fewer processes to stay in memory. A low value will keep more processes in memory but may cause thrashing if set too low. Overrides the default value chosen by ActivityManager based on screen size and total memory for the largest lowmemorykiller bucket, and scaled proportionally to the smaller buckets. -1 keeps the default. -->
<integer name="config_lowMemoryKillerMinFreeKbytesAbsolute">-1</integer>
<!-- Device configuration adjusting the minfree tunable in the lowmemorykiller in the kernel. A high value will cause the lowmemorykiller to fire earlier, keeping more memory in the file cache and preventing I/O thrashing, but allowing fewer processes to stay in memory. A low value will keep more processes in memory but may cause thrashing if set too low. Directly added to the default value chosen by ActivityManager based on screen size and total memory for the largest lowmemorykiller bucket, and scaled proportionally to the smaller buckets. 0 keeps the default. -->
<integer name="config_lowMemoryKillerMinFreeKbytesAdjust">0</integer>
KSM (Kernel samepage merging)
KSM is a kernel thread that runs in the background and compares pages in memory that have been marked MADV_MERGEABLE by user-space. If two pages are found to be the same, the KSM thread merges them back into a single copy-on-write page of memory.
KSM will save memory over time on a running system, deduplicating memory at the cost of CPU cycles, which could have an impact on battery life. You should measure whether this power tradeoff is worth the memory savings you get by enabling KSM.
To test KSM, we recommend looking at long-running devices (several hours of use) and seeing whether KSM makes any noticeable improvement on launch times and rendering times.
To enable KSM, enable CONFIG_KSM in the kernel and then add the following lines to your init.<device>.rc file:
write /sys/kernel/mm/ksm/pages_to_scan 100
write /sys/kernel/mm/ksm/sleep_millisecs 500
write /sys/kernel/mm/ksm/run 1
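After the device has run for a while, the standard KSM counters under /sys/kernel/mm/ksm give a first look at how much is actually being merged:

```
adb shell cat /sys/kernel/mm/ksm/pages_shared    # merged pages currently in use
adb shell cat /sys/kernel/mm/ksm/pages_sharing   # mappings pointing at merged pages (rough savings)
adb shell cat /sys/kernel/mm/ksm/pages_unshared  # unique pages repeatedly scanned
adb shell cat /sys/kernel/mm/ksm/full_scans      # completed passes over mergeable memory
```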
Once KSM is enabled, a few utilities help with debugging, namely procrank, librank, and ksminfo. These utilities let you see which KSM memory is mapped to which process and which processes use the most KSM memory. Once you have found a chunk of memory that looks worth exploring, you can use the hat utility if it is a duplicate object on the Dalvik heap.
Swap to zRAM
zRAM swap can increase the amount of memory available in the system by compressing memory pages and putting them in a dynamically allocated swap area of memory.
Again, since this is trading off CPU time for a small increase in memory, you should be careful about measuring the performance impact zRAM swap has on your system.
Android handles swap to zRAM at several levels:
- First, the following kernel options must be enabled to use zRAM swap effectively:
CONFIG_SWAP
CONFIG_CGROUP_MEM_RES_CTLR
CONFIG_CGROUP_MEM_RES_CTLR_SWAP
CONFIG_ZRAM
- Then, you should add a line that looks like this to your fstab:
/dev/block/zram0 none swap defaults zramsize=<size in bytes>,swapprio=<swap partition priority>
zramsize is mandatory and indicates how much uncompressed memory you want the zram area to hold. Compression ratios in the 30-50% range are usually observed.
swapprio is optional and not needed if you don't have more than one swap area.
- By default, the Linux kernel swaps in 8 pages of memory at a time. When using zRAM, the incremental cost of reading 1 page at a time is negligible and may help if the device is under extreme memory pressure. To read only 1 page at a time, add the following to your init.rc:
write /proc/sys/vm/page-cluster 0
- In your init.rc, after the mount_all /fstab.X line, add (a complete example follows this list):
swapon_all /fstab.X
- The memory cgroups are automatically configured at boot time if the feature is enabled in the kernel.
- If memory cgroups are available, the ActivityManager will mark lower priority threads as being more swappable than other threads. If memory is needed, the Android kernel will start migrating memory pages to zRAM swap, giving a higher priority to those memory pages that have been marked by ActivityManager.
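Putting the pieces together, a hedged example for a 512MB device; the 256MB size, swap priority, and file names are illustrative only:

```
# fstab.<device>: 256MB of uncompressed pages held in zram
/dev/block/zram0 none swap defaults zramsize=268435456,swapprio=5

# init.<device>.rc additions
write /proc/sys/vm/page-cluster 0
# ...and after the mount_all /fstab.<device> line:
swapon_all /fstab.<device>
```

After boot, adb shell cat /proc/swaps should list /dev/block/zram0 with the configured size, confirming the swap area is active.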
Carveouts, Ion and Contiguous Memory Allocation (CMA)
It is especially important on low memory devices to be mindful about carveouts, especially those that will not always be fully utilized -- for example a carveout for secure video playback. There are several solutions to minimizing the impact of your carveout regions that depend on the exact requirements of your hardware.
If hardware permits discontiguous memory allocations, the ion system heap allows memory allocations from system memory, eliminating the need for a carveout. It also attempts to make large allocations to eliminate TLB pressure on peripherals. If memory regions must be contiguous or confined to a specific address range, the contiguous memory allocator (CMA) can be used.
This creates a carveout that the system can also use for movable pages. When the region is needed, movable pages are migrated out of it, which lets the system use a large carveout for other purposes when it is free. CMA can be used directly, or more simply via ion, by using the ion CMA heap.
Application optimization tips
- Review Managing Your App's Memory and the past Android Developers Blog posts on the same topic.
- Check for and remove any unused assets from preinstalled apps with development/tools/findunused (this should help make the apps smaller).
- Use the PNG format for assets, especially those with transparent areas.
- If writing native code, use calloc() rather than malloc() followed by memset().
- Don't enable code that writes Parcel data to disk and reads it later.
- Don't subscribe to every installed package; instead, use ssp filtering. Add filtering like the following (a complete receiver sketch follows this list):
<data android:scheme="package" android:ssp="com.android.pkg1" />
<data android:scheme="package" android:ssp="com.myapp.act1" />
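For context, a sketch of how those data filters sit inside a manifest-declared receiver; the receiver class name, the choice of actions, and the package names are illustrative:

```xml
<receiver android:name=".PackageChangeReceiver">
    <intent-filter>
        <action android:name="android.intent.action.PACKAGE_ADDED" />
        <action android:name="android.intent.action.PACKAGE_REMOVED" />
        <!-- ssp filtering delivers the broadcast only for the listed packages
             instead of waking this process for every package on the device. -->
        <data android:scheme="package" android:ssp="com.android.pkg1" />
        <data android:scheme="package" android:ssp="com.myapp.act1" />
    </intent-filter>
</receiver>
```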
Understand the various process states in Android
- SERVICE and SERVICE_RESTARTING: Applications that make themselves run in the background for their own reasons. This is the most common problem apps have: running in the background too much. (%duration * pss) is probably a good "badness" metric, although this set is so focused that just using %duration is probably better, to reflect the fact that we simply don't want them running at all.
- IMPORTANT_FOREGROUND through RECEIVER: Applications running in the background (not directly interacting with the user) for any reason. These all add memory load to the system. In this case the (%duration * pss) badness value is probably the best ordering of such processes, because many of them will always be running for good reason, and their pss size is then very important as part of their memory load.
- PERSISTENT: Persistent system processes. Track pss to watch for these processes getting too large.
- TOP: The process the user is currently interacting with. Again, pss is the important metric here, showing how much memory load the app creates while in use.
- HOME through CACHED_EMPTY: All of the processes at the bottom of the list are ones the system keeps around in case they are needed again, but they can be freely killed at any time and re-created if needed. These are the basis for how the memory state is computed: normal, moderate, low, or critical, depending on how many of these processes the system can keep around. The key metric for these processes is again pss; they should reduce their memory footprint as much as possible while in this state, to allow the maximum total number of processes to be kept around. A well-behaved app generally has a pss footprint that is significantly smaller in this state than when it is TOP.
- TOP vs. CACHED_ACTIVITY and CACHED_ACTIVITY_CLIENT: The difference in pss between when a process is TOP and when it is in either of these cached states is the best data for seeing how well it releases memory when going into the background. Excluding the CACHED_EMPTY state makes this data better, since it removes cases where the process was started for some reason other than showing UI and so never carries the UI overhead it acquires when interacting with the user.
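The per-state %duration and pss figures discussed above come from procstats; one way to pull them for offline analysis (the three-hour window and package name are examples):

```
adb shell dumpsys procstats --hours 3             # aggregate stats for the last 3 hours
adb shell dumpsys procstats com.android.systemui  # stats for a single package
```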
Analysis
Analyzing app startup time
Use "adb shell am start
" with the -P
or --start-profiler
option to run the profiler when your app starts. This will start the profiler almost immediately after your process is forked from zygote, before any of your code is loaded into it.
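For example, launching a hypothetical activity with profiling enabled; the component name and trace path are placeholders:

```
adb shell am start -W --start-profiler /data/local/tmp/launch.trace \
    -n com.example.app/.MainActivity
```

The resulting trace can be opened with traceview to inspect what runs during startup.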
Analyze using bugreports
Bug reports now contain various information that can be used for debugging, including checkin output from the batterystats, netstats, procstats, and usagestats services. You can find them with lines like this:
------ CHECKIN BATTERYSTATS (dumpsys batterystats --checkin) ------
7,0,h,-2558644,97,1946288161,3,2,0,340,4183
7,0,h,-2553041,97,1946288161,3,2,0,340,4183
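A simple way to capture a bug report and locate these sections (the output file name is arbitrary):

```
adb bugreport > bugreport.txt
grep -n "CHECKIN" bugreport.txt    # jump to the checkin sections listed above
```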
Check for any persistent processes
Reboot the device and check the running processes.
Run the device for a few hours and check the processes again; there should not be any unexpected long-running processes.
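A minimal way to compare the process list over time; which processes count as expected is device-specific:

```
adb shell ps > processes_boot.txt    # snapshot shortly after boot
# ...several hours of normal use later...
adb shell ps > processes_later.txt   # compare against the first snapshot
```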
Run longevity tests
Run for longer durations and track the memory of the process. Does it increase? Does it stay constant? Create Canonical use cases and run longevity tests on these scenarios
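Process memory can be sampled periodically with dumpsys meminfo; the package name below is a placeholder:

```
adb shell dumpsys meminfo com.example.app    # note the PSS totals and repeat over time
```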