Android disk caching

 
When developing an app, especially an image-heavy one, you cannot avoid storing images. A memory cache is essential and has already come up in several earlier posts; a disk cache is just as necessary.

I have recently been working on a set-top-box application, and the box's sdcard is so poor that it slowed the whole program down. A small complaint: a card this bad should not have been shipped at all.

The image downloader uses a disk cache. The Android source tree already contains a reusable cache class, DiskLruCache, and it is a good choice.
Let's go straight to the code.

For the full integration, look at the Apollo music player (its source is on git), or simply search for DiskLruCache.
This caching strategy is usable in most situations and even comes with a fade-in effect, so it is a good choice. In my case, though, the app crashed: it exited outright on a JNI exception, most likely because several threads were operating on the same cache file at once, so limiting the number of threads that touch the disk looks necessary.
Likewise, if every image being loaded is large, an OutOfMemoryError is unavoidable; a Weibo-style client where every timeline picture is full size will hit it. So the class cannot be used completely as-is and needs some adaptation (one common mitigation for the large-image case is sketched right below, and a way to serialize disk writes follows addBitmapToCache further down).
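One common mitigation for the large-image problem, not part of the code in this post, is to cap the decoded dimensions when reading bytes back out of the disk cache. The helper below is only a sketch of the standard inSampleSize technique; decodeSampledBitmap, reqWidth and reqHeight are names invented here for illustration.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapDecodeUtil {

    /**
     * Decodes image bytes (e.g. read from a disk cache entry) while capping
     * the result near reqWidth x reqHeight, so a screen full of large
     * pictures does not exhaust the heap.
     */
    public static Bitmap decodeSampledBitmap(final byte[] data, final int reqWidth,
            final int reqHeight) {
        // First pass: read only the image bounds.
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeByteArray(data, 0, data.length, options);

        // Second pass: decode with a power-of-two sample size.
        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeByteArray(data, 0, data.length, options);
    }

    private static int calculateInSampleSize(final BitmapFactory.Options options,
            final int reqWidth, final int reqHeight) {
        int inSampleSize = 1;
        while (options.outHeight / (inSampleSize * 2) >= reqHeight
                && options.outWidth / (inSampleSize * 2) >= reqWidth) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }
}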

When I find some time, I will package the whole cache into a library project so it can be used directly by other apps.

Relevant constants from the ImageCache class:

    /**
     * Default disk cache size 10MB
     */
    private static final int DISK_CACHE_SIZE = 1024 * 1024 * 10;

    /**
     * Compression settings when writing images to disk cache
     */
    private static final CompressFormat COMPRESS_FORMAT = CompressFormat.JPEG;

    /**
     * Disk cache index to read from
     */
    private static final int DISK_CACHE_INDEX = 0;

    /**
     * Image compression quality
     */
    private static final int COMPRESS_QUALITY = 98;

The initialization below needs to run off the UI thread:
public void initDiskCache(final Context context) {
        // Set up disk cache
        if (mDiskCache == null || mDiskCache.isClosed()) {
            File diskCacheDir = getDiskCacheDir(context, TAG);
            if (diskCacheDir != null) {
                if (!diskCacheDir.exists()) {
                    diskCacheDir.mkdirs();
                }
                if (getUsableSpace(diskCacheDir) > DISK_CACHE_SIZE) {
                    try {
                        mDiskCache = DiskLruCache.open(diskCacheDir, 1, 1, DISK_CACHE_SIZE);
                    } catch (final IOException e) {
                        diskCacheDir = null;
                    }
                }
            }
        }
    }
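For example, this can be kicked off once from a plain background thread at startup. This is just one way to do it; mImageCache is assumed to be the ImageCache instance and the call site is assumed to be an Activity or Service.

// initDiskCache touches the filesystem, so keep it off the UI thread.
new Thread(new Runnable() {
    @Override
    public void run() {
        mImageCache.initDiskCache(getApplicationContext());
    }
}).start();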

public static final File getDiskCacheDir(final Context context, final String uniqueName) {
        final String cachePath = Environment.MEDIA_MOUNTED.equals(Environment
                .getExternalStorageState()) || !isExternalStorageRemovable() ? getExternalCacheDir(
                context).getPath() : context.getCacheDir().getPath();

        return new File(cachePath + File.separator + uniqueName);
    }
Don't worry if some of the helper methods above are not shown here.
getExternalCacheDir(context).getPath() returns a cache directory on the sdcard. Since this disk cache is size-bounded anyway, if you don't need a large cache it is better to use context.getCacheDir().getPath(), a directory under /data/data/; that at least keeps the app responsive, because some sdcards really are that bad.
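For completeness, the missing helpers can be reconstructed roughly as below, along the lines of the Android bitmap-caching sample. This sketch assumes API level 9 or higher (Environment.isExternalStorageRemovable() is not available earlier) and is not the exact code used in the post.

import java.io.File;

import android.content.Context;
import android.os.Environment;
import android.os.StatFs;

final class CacheDirUtils {

    /** Whether the external storage (sdcard) can be removed. API 9+. */
    public static boolean isExternalStorageRemovable() {
        return Environment.isExternalStorageRemovable();
    }

    /** App-specific cache directory on external storage; may be null if unmounted. */
    public static File getExternalCacheDir(final Context context) {
        return context.getExternalCacheDir();
    }

    /** Free space, in bytes, on the partition containing the given path. */
    public static long getUsableSpace(final File path) {
        final StatFs stats = new StatFs(path.getPath());
        return (long) stats.getBlockSize() * (long) stats.getAvailableBlocks();
    }
}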


The strategy below is roughly: download the image, store it in the cache, generate a sequence number when the entry is committed, and record the operation in the journal file.

The externally called method for fetching a cached image; it is checked before any download starts.
public final Bitmap getCachedBitmap(final String data) {
        if (data == null) {
            return null;
        }
        Bitmap cachedImage = getBitmapFromMemCache(data);
        if (cachedImage == null) {
            cachedImage = getBitmapFromDiskCache(data);
        }
        if (cachedImage != null) {
            addBitmapToMemCache(data, cachedImage);
            return cachedImage;
        }
        return null;
    }
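Putting the pieces together on a worker thread might look like the sketch below; mImageCache and downloadBitmap are hypothetical names standing in for the cache instance and the network call.

// Check both cache levels first and only download on a miss.
Bitmap bitmap = mImageCache.getCachedBitmap(url);
if (bitmap == null) {
    bitmap = downloadBitmap(url); // hypothetical network fetch
    if (bitmap != null) {
        mImageCache.addBitmapToCache(url, bitmap);
    }
}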

Reading from the disk cache:
public final Bitmap getBitmapFromDiskCache(final String data) {
        if (data == null) {
            return null;
        }

        // Check the memory cache first so we go to the disk cache less
        // often
        if (getBitmapFromMemCache(data) != null) {
            return getBitmapFromMemCache(data);
        }

        while (mPauseDiskAccess) {
            // Busy-wait until disk access is un-paused
        }
        final String key = hashKeyForDisk(data); // the key is an MD5 hash of the URL
        if (mDiskCache != null) {
            InputStream inputStream = null;
            try {
                final DiskLruCache.Snapshot snapshot = mDiskCache.get(key);
                if (snapshot != null) {
                    inputStream = snapshot.getInputStream(DISK_CACHE_INDEX);
                    if (inputStream != null) {
                        final Bitmap bitmap = BitmapFactory.decodeStream(inputStream);
                        if (bitmap != null) {
                            return bitmap;
                        }
                    }
                }
            } catch (final IOException e) {
                Log.e(TAG, "getBitmapFromDiskCache - " + e);
            } finally {
                try {
                    if (inputStream != null) {
                        inputStream.close();
                    }
                } catch (final IOException e) {
                }
            }
        }
        return null;
    }
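The hashKeyForDisk call above is not shown in this post. A typical implementation (as in the Apollo and BitmapFun samples) hashes the URL with MD5, which also guarantees the key contains no spaces or newlines, as DiskLruCache requires. A sketch:

// requires java.security.MessageDigest and java.security.NoSuchAlgorithmException
public static String hashKeyForDisk(final String key) {
    try {
        final MessageDigest digest = MessageDigest.getInstance("MD5");
        digest.update(key.getBytes());
        return bytesToHexString(digest.digest());
    } catch (final NoSuchAlgorithmException e) {
        // Fall back to the plain hash code if MD5 is somehow unavailable.
        return String.valueOf(key.hashCode());
    }
}

private static String bytesToHexString(final byte[] bytes) {
    final StringBuilder builder = new StringBuilder();
    for (final byte b : bytes) {
        final String hex = Integer.toHexString(0xFF & b);
        if (hex.length() == 1) {
            builder.append('0');
        }
        builder.append(hex);
    }
    return builder.toString();
}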

Adding an image to the cache, not only to the memory cache but also to the disk cache:
public void addBitmapToCache(final String data, final Bitmap bitmap) {
        if (data == null || bitmap == null) {
            return;
        }

        // Add to memory cache
        addBitmapToMemCache(data, bitmap);

        // Add to disk cache
        if (mDiskCache != null) {
            final String key = hashKeyForDisk(data);
            OutputStream out = null;
            try {
                final DiskLruCache.Snapshot snapshot = mDiskCache.get(key);
                if (snapshot == null) {
                    final DiskLruCache.Editor editor = mDiskCache.edit(key);
                    if (editor != null) {
                        out = editor.newOutputStream(DISK_CACHE_INDEX);
                        bitmap.compress(COMPRESS_FORMAT, COMPRESS_QUALITY, out);
                        editor.commit();
                        out.close();
                        flush();
                    }
                } else {
                    snapshot.getInputStream(DISK_CACHE_INDEX).close();
                }
            } catch (final IOException e) {
                Log.e(TAG, "addBitmapToCache - " + e);
            } finally {
                try {
                    if (out != null) {
                        out.close();
                        out = null;
                    }
                } catch (final IOException e) {
                    Log.e(TAG, "addBitmapToCache - " + e);
                } catch (final IllegalStateException e) {
                    Log.e(TAG, "addBitmapToCache - " + e);
                }
            }
        }
    }
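To address the crash from concurrent access to the same cache file mentioned at the top, one option is to funnel every disk write through a single-threaded executor instead of letting each download thread call addBitmapToCache directly. The fragment below is only a sketch of that idea; mDiskIoExecutor and addBitmapToCacheAsync are invented names and would live inside ImageCache.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A single worker thread keeps DiskLruCache edits from racing each other.
private final ExecutorService mDiskIoExecutor = Executors.newSingleThreadExecutor();

/** Queues the disk write instead of performing it on the calling thread. */
public void addBitmapToCacheAsync(final String data, final Bitmap bitmap) {
    mDiskIoExecutor.execute(new Runnable() {
        @Override
        public void run() {
            addBitmapToCache(data, bitmap);
        }
    });
}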


====================================

Below is the DiskLruCache source itself, taken from the Android 4.1 (JB) source tree:

import java.io.BufferedInputStream;
import java.io.BufferedWriter;
import java.io.Closeable;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.StringWriter;
import java.io.Writer;
import java.lang.reflect.Array;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * Taken from the JB source code, can be found in:
 * libcore/luni/src/main/java/libcore/io/DiskLruCache.java or direct link:
 * https://android.googlesource.com/platform/libcore/+/android-4.1.1_r1/luni/src/main/java/libcore/io/DiskLruCache.java
 * <p>
 * A cache that uses a bounded amount of space on a filesystem. Each cache
 * entry has a string key and a fixed number of values. Values are byte
 * sequences, accessible as streams or files. Each value must be between
 * {@code 0} and {@code Integer.MAX_VALUE} bytes in length.
 * <p>
 * The cache stores its data in a directory on the filesystem. This directory
 * must be exclusive to the cache; the cache may delete or overwrite files from
 * its directory. It is an error for multiple processes to use the same cache
 * directory at the same time.
 * <p>
 * This cache limits the number of bytes that it will store on the filesystem.
 * When the number of stored bytes exceeds the limit, the cache will remove
 * entries in the background until the limit is satisfied. The limit is not
 * strict: the cache may temporarily exceed it while waiting for files to be
 * deleted. The limit does not include filesystem overhead or the cache journal
 * so space-sensitive applications should set a conservative limit.
 * <p>
 * Clients call {@link #edit} to create or update the values of an entry. An
 * entry may have only one editor at one time; if a value is not available to be
 * edited then {@link #edit} will return null.
 * <ul>
 * <li>When an entry is being <strong>created</strong> it is necessary to supply
 * a full set of values; the empty value should be used as a placeholder if
 * necessary.
 * <li>When an entry is being <strong>edited</strong>, it is not necessary to
 * supply data for every value; values default to their previous value.
 * </ul>
 * Every {@link #edit} call must be matched by a call to {@link Editor#commit}
 * or {@link Editor#abort}. Committing is atomic: a read observes the full set
 * of values as they were before or after the commit, but never a mix of values.
 * <p>
 * Clients call {@link #get} to read a snapshot of an entry. The read will
 * observe the value at the time that {@link #get} was called. Updates and
 * removals after the call do not impact ongoing reads.
 * <p>
 * This class is tolerant of some I/O errors. If files are missing from the
 * filesystem, the corresponding entries will be dropped from the cache. If an
 * error occurs while writing a cache value, the edit will fail silently.
 * Callers should handle other problems by catching {@code IOException} and
 * responding appropriately.
 */
public final class DiskLruCache implements Closeable {
    static final String JOURNAL_FILE = "journal";

    static final String JOURNAL_FILE_TMP = "journal.tmp";

    static final String MAGIC = "libcore.io.DiskLruCache";

    static final String VERSION_1 = "1";

    static final long ANY_SEQUENCE_NUMBER = -1;

    private static final String CLEAN = "CLEAN";

    private static final String DIRTY = "DIRTY";

    private static final String REMOVE = "REMOVE";

    private static final String READ = "READ";

    private static final Charset UTF_8 = Charset.forName("UTF-8");

    private static final int IO_BUFFER_SIZE = 8 * 1024;

    /*
     * This cache uses a journal file named "journal". A typical journal file
     * looks like this:
     *
     *     libcore.io.DiskLruCache
     *     1
     *     100
     *     2
     *
     *     CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054
     *     DIRTY 335c4c6028171cfddfbaae1a9c313c52
     *     CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342
     *     REMOVE 335c4c6028171cfddfbaae1a9c313c52
     *     DIRTY 1ab96a171faeeee38496d8b330771a7a
     *     CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234
     *     READ 335c4c6028171cfddfbaae1a9c313c52
     *     READ 3400330d1dfc7f3f7f4b8d4d803dfcf6
     *
     * The first five lines of the journal form its header. They are the
     * constant string "libcore.io.DiskLruCache", the disk cache's version,
     * the application's version, the value count, and a blank line.
     *
     * Each of the subsequent lines in the file is a record of the state of a
     * cache entry. Each line contains space-separated values: a state, a key,
     * and optional state-specific values.
     *   o DIRTY lines track that an entry is actively being created or
     *     updated. Every successful DIRTY action should be followed by a
     *     CLEAN or REMOVE action. DIRTY lines without a matching CLEAN or
     *     REMOVE indicate that temporary files may need to be deleted.
     *   o CLEAN lines track a cache entry that has been successfully
     *     published and may be read. A publish line is followed by the
     *     lengths of each of its values.
     *   o READ lines track accesses for LRU.
     *   o REMOVE lines track entries that have been deleted.
     *
     * The journal file is appended to as cache operations occur. The journal
     * may occasionally be compacted by dropping redundant lines. A temporary
     * file named "journal.tmp" will be used during compaction; that file
     * should be deleted if it exists when the cache is opened.
     */

    private final File directory;

    private final File journalFile;

    private final File journalFileTmp;

    private final int appVersion;

    private final long maxSize;

    private final int valueCount;

    private long size = 0;

    private Writer journalWriter;

    private final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<String, Entry>(0,
            0.75f, true);

    private int redundantOpCount;

    /**
     * To differentiate between old and current snapshots, each entry is given a
     * sequence number each time an edit is committed. A snapshot is stale if
     * its sequence number is not equal to its entry's sequence number.
     */
    private long nextSequenceNumber = 0;

    /* From java.util.Arrays */
    @SuppressWarnings("unchecked")
    private static <T> T[] copyOfRange(final T[] original, final int start, final int end) {
        final int originalLength = original.length; // For exception priority
                                                    // compatibility.
        if (start > end) {
            throw new IllegalArgumentException();
        }
        if (start < 0 || start > originalLength) {
            throw new ArrayIndexOutOfBoundsException();
        }
        final int resultLength = end - start;
        final int copyLength = Math.min(resultLength, originalLength - start);
        final T[] result = (T[])Array.newInstance(original.getClass().getComponentType(),
                resultLength);
        System.arraycopy(original, start, result, 0, copyLength);
        return result;
    }

    /**
     * Returns the remainder of 'reader' as a string, closing it when done.
     */
    public static String readFully(final Reader reader) throws IOException {
        try {
            final StringWriter writer = new StringWriter();
            final char[] buffer = new char[1024];
            int count;
            while ((count = reader.read(buffer)) != -1) {
                writer.write(buffer, 0, count);
            }
            return writer.toString();
        } finally {
            reader.close();
        }
    }

    /**
     * Returns the ASCII characters up to but not including the next "\r\n", or
     * "\n".
     * 
     * @throws java.io.EOFException if the stream is exhausted before the next
     *             newline character.
     */
    public static String readAsciiLine(final InputStream in) throws IOException {
        // TODO: support UTF-8 here instead

        final StringBuilder result = new StringBuilder(80);
        while (true) {
            final int c = in.read();
            if (c == -1) {
                throw new EOFException();
            } else if (c == '\n') {
                break;
            }

            result.append((char)c);
        }
        final int length = result.length();
        if (length > 0 && result.charAt(length - 1) == '\r') {
            result.setLength(length - 1);
        }
        return result.toString();
    }

    /**
     * Closes 'closeable', ignoring any checked exceptions. Does nothing if
     * 'closeable' is null.
     */
    public static void closeQuietly(final Closeable closeable) {
        if (closeable != null) {
            try {
                closeable.close();
            } catch (final RuntimeException rethrown) {
                throw rethrown;
            } catch (final Exception ignored) {
            }
        }
    }

    /**
     * Recursively delete everything in {@code dir}.
     */
    // TODO: this should specify paths as Strings rather than as Files
    public static void deleteContents(final File dir) throws IOException {
        final File[] files = dir.listFiles();
        if (files == null) {
            throw new IllegalArgumentException("not a directory: " + dir);
        }
        for (final File file : files) {
            if (file.isDirectory()) {
                deleteContents(file);
            }
            if (!file.delete()) {
                throw new IOException("failed to delete file: " + file);
            }
        }
    }

    /** This cache uses a single background thread to evict entries. */
    private final ExecutorService executorService = new ThreadPoolExecutor(0, 1, 60L,
            TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

    private final Callable<Void> cleanupCallable = new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            synchronized (DiskLruCache.this) {
                if (journalWriter == null) {
                    return null; // closed
                }
                trimToSize();
                if (journalRebuildRequired()) {
                    rebuildJournal();
                    redundantOpCount = 0;
                }
            }
            return null;
        }
    };

    private DiskLruCache(final File directory, final int appVersion, final int valueCount,
            final long maxSize) {
        this.directory = directory;
        this.appVersion = appVersion;
        journalFile = new File(directory, JOURNAL_FILE);
        journalFileTmp = new File(directory, JOURNAL_FILE_TMP);
        this.valueCount = valueCount;
        this.maxSize = maxSize;
    }

    /**
     * Opens the cache in {@code directory}, creating a cache if none exists
     * there.
     * 
     * @param directory a writable directory
     * @param appVersion
     * @param valueCount the number of values per cache entry. Must be positive.
     * @param maxSize the maximum number of bytes this cache should use to store
     * @throws IOException if reading or writing the cache directory fails
     */
    public static DiskLruCache open(final File directory, final int appVersion,
            final int valueCount, final long maxSize) throws IOException {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        if (valueCount <= 0) {
            throw new IllegalArgumentException("valueCount <= 0");
        }

        // prefer to pick up where we left off
        DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
        if (cache.journalFile.exists()) {
            try {
                cache.readJournal();
                cache.processJournal();
                cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true),
                        IO_BUFFER_SIZE);
                return cache;
            } catch (final IOException journalIsCorrupt) {
                // System.logW("DiskLruCache " + directory + " is corrupt: "
                // + journalIsCorrupt.getMessage() + ", removing");
                cache.delete();
            }
        }

        // create a new empty cache
        directory.mkdirs();
        cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
        cache.rebuildJournal();
        return cache;
    }

    private void readJournal() throws IOException {
        final InputStream in = new BufferedInputStream(new FileInputStream(journalFile),
                IO_BUFFER_SIZE);
        try {
            final String magic = readAsciiLine(in);
            final String version = readAsciiLine(in);
            final String appVersionString = readAsciiLine(in);
            final String valueCountString = readAsciiLine(in);
            final String blank = readAsciiLine(in);
            if (!MAGIC.equals(magic) || !VERSION_1.equals(version)
                    || !Integer.toString(appVersion).equals(appVersionString)
                    || !Integer.toString(valueCount).equals(valueCountString) || !"".equals(blank)) {
                throw new IOException("unexpected journal header: [" + magic + ", " + version
                        + ", " + valueCountString + ", " + blank + "]");
            }

            while (true) {
                try {
                    readJournalLine(readAsciiLine(in));
                } catch (final EOFException endOfJournal) {
                    break;
                }
            }
        } finally {
            closeQuietly(in);
        }
    }

    private void readJournalLine(final String line) throws IOException {
        final String[] parts = line.split(" ");
        if (parts.length < 2) {
            throw new IOException("unexpected journal line: " + line);
        }

        final String key = parts[1];
        if (parts[0].equals(REMOVE) && parts.length == 2) {
            lruEntries.remove(key);
            return;
        }

        Entry entry = lruEntries.get(key);
        if (entry == null) {
            entry = new Entry(key);
            lruEntries.put(key, entry);
        }

        if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) {
            entry.readable = true;
            entry.currentEditor = null;
            entry.setLengths(copyOfRange(parts, 2, parts.length));
        } else if (parts[0].equals(DIRTY) && parts.length == 2) {
            entry.currentEditor = new Editor(entry);
        } else if (parts[0].equals(READ) && parts.length == 2) {
            // this work was already done by calling lruEntries.get()
        } else {
            throw new IOException("unexpected journal line: " + line);
        }
    }

    /**
     * Computes the initial size and collects garbage as a part of opening the
     * cache. Dirty entries are assumed to be inconsistent and will be deleted.
     */
    private void processJournal() throws IOException {
        deleteIfExists(journalFileTmp);
        for (final Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext();) {
            final Entry entry = i.next();
            if (entry.currentEditor == null) {
                for (int t = 0; t < valueCount; t++) {
                    size += entry.lengths[t];
                }
            } else {
                entry.currentEditor = null;
                for (int t = 0; t < valueCount; t++) {
                    deleteIfExists(entry.getCleanFile(t));
                    deleteIfExists(entry.getDirtyFile(t));
                }
                i.remove();
            }
        }
    }

    /**
     * Creates a new journal that omits redundant information. This replaces the
     * current journal if it exists.
     */
    private synchronized void rebuildJournal() throws IOException {
        if (journalWriter != null) {
            journalWriter.close();
        }

        final Writer writer = new BufferedWriter(new FileWriter(journalFileTmp), IO_BUFFER_SIZE);
        writer.write(MAGIC);
        writer.write("\n");
        writer.write(VERSION_1);
        writer.write("\n");
        writer.write(Integer.toString(appVersion));
        writer.write("\n");
        writer.write(Integer.toString(valueCount));
        writer.write("\n");
        writer.write("\n");

        for (final Entry entry : lruEntries.values()) {
            if (entry.currentEditor != null) {
                writer.write(DIRTY + ' ' + entry.key + '\n');
            } else {
                writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
            }
        }

        writer.close();
        journalFileTmp.renameTo(journalFile);
        journalWriter = new BufferedWriter(new FileWriter(journalFile, true), IO_BUFFER_SIZE);
    }

    private static void deleteIfExists(final File file) throws IOException {
        // try {
        // Libcore.os.remove(file.getPath());
        // } catch (ErrnoException errnoException) {
        // if (errnoException.errno != OsConstants.ENOENT) {
        // throw errnoException.rethrowAsIOException();
        // }
        // }
        if (file.exists() && !file.delete()) {
            throw new IOException();
        }
    }

    /**
     * Returns a snapshot of the entry named {@code key}, or null if it doesn't
     * exist or is not currently readable. If a value is returned, it is moved to
     * the head of the LRU queue.
     */
    public synchronized Snapshot get(final String key) throws IOException {
        checkNotClosed();
        validateKey(key);
        final Entry entry = lruEntries.get(key);
        if (entry == null) {
            return null;
        }

        if (!entry.readable) {
            return null;
        }

        /*
         * Open all streams eagerly to guarantee that we see a single published
         * snapshot. If we opened streams lazily then the streams could come
         * from different edits.
         */
        final InputStream[] ins = new InputStream[valueCount];
        try {
            for (int i = 0; i < valueCount; i++) {
                ins[i] = new FileInputStream(entry.getCleanFile(i));
            }
        } catch (final FileNotFoundException e) {
            // a file must have been deleted manually!
            return null;
        }

        redundantOpCount++;
        journalWriter.append(READ + ' ' + key + '\n');
        if (journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }

        return new Snapshot(key, entry.sequenceNumber, ins);
    }

    /**
     * Returns an editor for the entry named {@code key}, or null if another
     * edit is in progress.
     */
    public Editor edit(final String key) throws IOException {
        return edit(key, ANY_SEQUENCE_NUMBER);
    }

    private synchronized Editor edit(final String key, final long expectedSequenceNumber)
            throws IOException {
        checkNotClosed();
        validateKey(key);
        Entry entry = lruEntries.get(key);
        if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER
                && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) {
            return null; // snapshot is stale
        }
        if (entry == null) {
            entry = new Entry(key);
            lruEntries.put(key, entry);
        } else if (entry.currentEditor != null) {
            return null; // another edit is in progress
        }

        final Editor editor = new Editor(entry);
        entry.currentEditor = editor;

        // flush the journal before creating files to prevent file leaks
        journalWriter.write(DIRTY + ' ' + key + '\n');
        journalWriter.flush();
        return editor;
    }

    /**
     * Returns the directory where this cache stores its data.
     */
    public File getDirectory() {
        return directory;
    }

    /**
     * Returns the maximum number of bytes that this cache should use to store
     * its data.
     */
    public long maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of bytes currently being used to store the values in
     * this cache. This may be greater than the max size if a background
     * deletion is pending.
     */
    public synchronized long size() {
        return size;
    }

    private synchronized void completeEdit(final Editor editor, final boolean success)
            throws IOException {
        final Entry entry = editor.entry;
        if (entry.currentEditor != editor) {
            throw new IllegalStateException();
        }

        // if this edit is creating the entry for the first time, every index
        // must have a value
        if (success && !entry.readable) {
            for (int i = 0; i < valueCount; i++) {
                if (!entry.getDirtyFile(i).exists()) {
                    editor.abort();
                    throw new IllegalStateException("edit didn't create file " + i);
                }
            }
        }

        for (int i = 0; i < valueCount; i++) {
            final File dirty = entry.getDirtyFile(i);
            if (success) {
                if (dirty.exists()) {
                    final File clean = entry.getCleanFile(i);
                    dirty.renameTo(clean);
                    final long oldLength = entry.lengths[i];
                    final long newLength = clean.length();
                    entry.lengths[i] = newLength;
                    size = size - oldLength + newLength;
                }
            } else {
                deleteIfExists(dirty);
            }
        }

        redundantOpCount++;
        entry.currentEditor = null;
        if (entry.readable | success) {
            entry.readable = true;
            journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
            if (success) {
                entry.sequenceNumber = nextSequenceNumber++;
            }
        } else {
            lruEntries.remove(entry.key);
            journalWriter.write(REMOVE + ' ' + entry.key + '\n');
        }

        if (size > maxSize || journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }
    }

    /**
     * We only rebuild the journal when it will halve the size of the journal
     * and eliminate at least 2000 ops.
     */
    private boolean journalRebuildRequired() {
        final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000;
        return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD
                && redundantOpCount >= lruEntries.size();
    }

    /**
     * Drops the entry for {@code key} if it exists and can be removed. Entries
     * actively being edited cannot be removed.
     * 
     * @return true if an entry was removed.
     */
    public synchronized boolean remove(final String key) throws IOException {
        checkNotClosed();
        validateKey(key);
        final Entry entry = lruEntries.get(key);
        if (entry == null || entry.currentEditor != null) {
            return false;
        }

        for (int i = 0; i < valueCount; i++) {
            final File file = entry.getCleanFile(i);
            if (!file.delete()) {
                throw new IOException("failed to delete " + file);
            }
            size -= entry.lengths[i];
            entry.lengths[i] = 0;
        }

        redundantOpCount++;
        journalWriter.append(REMOVE + ' ' + key + '\n');
        lruEntries.remove(key);

        if (journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }

        return true;
    }

    /**
     * Returns true if this cache has been closed.
     */
    public boolean isClosed() {
        return journalWriter == null;
    }

    private void checkNotClosed() {
        if (journalWriter == null) {
            throw new IllegalStateException("cache is closed");
        }
    }

    /**
     * Force buffered operations to the filesystem.
     */
    public synchronized void flush() throws IOException {
        checkNotClosed();
        trimToSize();
        journalWriter.flush();
    }

    /**
     * Closes this cache. Stored values will remain on the filesystem.
     */
    @Override
    public synchronized void close() throws IOException {
        if (journalWriter == null) {
            return; // already closed
        }
        for (final Entry entry : new ArrayList<Entry>(lruEntries.values())) {
            if (entry.currentEditor != null) {
                entry.currentEditor.abort();
            }
        }
        trimToSize();
        journalWriter.close();
        journalWriter = null;
    }

    private void trimToSize() throws IOException {
        while (size > maxSize) {
            // Map.Entry<String, Entry> toEvict = lruEntries.eldest();
            final Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next();
            remove(toEvict.getKey());
        }
    }

    /**
     * Closes the cache and deletes all of its stored values. This will delete
     * all files in the cache directory including files that weren't created by
     * the cache.
     */
    public void delete() throws IOException {
        close();
        deleteContents(directory);
    }

    private void validateKey(final String key) {
        if (key.contains(" ") || key.contains("\n") || key.contains("\r")) {
            throw new IllegalArgumentException("keys must not contain spaces or newlines: \"" + key
                    + "\"");
        }
    }

    private static String inputStreamToString(final InputStream in) throws IOException {
        return readFully(new InputStreamReader(in, UTF_8));
    }

    /**
     * A snapshot of the values for an entry.
     */
    public final class Snapshot implements Closeable {
        private final String key;

        private final long sequenceNumber;

        private final InputStream[] ins;

        private Snapshot(final String key, final long sequenceNumber, final InputStream[] ins) {
            this.key = key;
            this.sequenceNumber = sequenceNumber;
            this.ins = ins;
        }

        /**
         * Returns an editor for this snapshot's entry, or null if either the
         * entry has changed since this snapshot was created or if another edit
         * is in progress.
         */
        public Editor edit() throws IOException {
            return DiskLruCache.this.edit(key, sequenceNumber);
        }

        /**
         * Returns the unbuffered stream with the value for {@code index}.
         */
        public InputStream getInputStream(final int index) {
            return ins[index];
        }

        /**
         * Returns the string value for {@code index}.
         */
        public String getString(final int index) throws IOException {
            return inputStreamToString(getInputStream(index));
        }

        @Override
        public void close() {
            for (final InputStream in : ins) {
                closeQuietly(in);
            }
        }
    }

    /**
     * Edits the values for an entry.
     */
    public final class Editor {
        private final Entry entry;

        private boolean hasErrors;

        private Editor(final Entry entry) {
            this.entry = entry;
        }

        /**
         * Returns an unbuffered input stream to read the last committed value,
         * or null if no value has been committed.
         */
        public InputStream newInputStream(final int index) throws IOException {
            synchronized (DiskLruCache.this) {
                if (entry.currentEditor != this) {
                    throw new IllegalStateException();
                }
                if (!entry.readable) {
                    return null;
                }
                return new FileInputStream(entry.getCleanFile(index));
            }
        }

        /**
         * Returns the last committed value as a string, or null if no value has
         * been committed.
         */
        public String getString(final int index) throws IOException {
            final InputStream in = newInputStream(index);
            return in != null ? inputStreamToString(in) : null;
        }

        /**
         * Returns a new unbuffered output stream to write the value at
         * {@code index}. If the underlying output stream encounters errors when
         * writing to the filesystem, this edit will be aborted when
         * {@link #commit} is called. The returned output stream does not throw
         * IOExceptions.
         */
        public OutputStream newOutputStream(final int index) throws IOException {
            synchronized (DiskLruCache.this) {
                if (entry.currentEditor != this) {
                    throw new IllegalStateException();
                }
                return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index)));
            }
        }

        /**
         * Sets the value at {@code index} to {@code value}.
         */
        public void set(final int index, final String value) throws IOException {
            Writer writer = null;
            try {
                writer = new OutputStreamWriter(newOutputStream(index), UTF_8);
                writer.write(value);
            } finally {
                closeQuietly(writer);
            }
        }

        /**
         * Commits this edit so it is visible to readers. This releases the edit
         * lock so another edit may be started on the same key.
         */
        public void commit() throws IOException {
            if (hasErrors) {
                completeEdit(this, false);
                remove(entry.key); // the previous entry is stale
            } else {
                completeEdit(this, true);
            }
        }

        /**
         * Aborts this edit. This releases the edit lock so another edit may be
         * started on the same key.
         */
        public void abort() throws IOException {
            completeEdit(this, false);
        }

        private class FaultHidingOutputStream extends FilterOutputStream {
            private FaultHidingOutputStream(final OutputStream out) {
                super(out);
            }

            @Override
            public void write(final int oneByte) {
                try {
                    out.write(oneByte);
                } catch (final IOException e) {
                    hasErrors = true;
                }
            }

            @Override
            public void write(final byte[] buffer, final int offset, final int length) {
                try {
                    out.write(buffer, offset, length);
                } catch (final IOException e) {
                    hasErrors = true;
                }
            }

            @Override
            public void close() {
                try {
                    out.close();
                } catch (final IOException e) {
                    hasErrors = true;
                }
            }

            @Override
            public void flush() {
                try {
                    out.flush();
                } catch (final IOException e) {
                    hasErrors = true;
                }
            }
        }
    }

    private final class Entry {
        private final String key;

        /** Lengths of this entry's files. */
        private final long[] lengths;

        /** True if this entry has ever been published */
        private boolean readable;

        /** The ongoing edit or null if this entry is not being edited. */
        private Editor currentEditor;

        /**
         * The sequence number of the most recently committed edit to this
         * entry.
         */
        private long sequenceNumber;

        private Entry(final String key) {
            this.key = key;
            lengths = new long[valueCount];
        }

        public String getLengths() throws IOException {
            final StringBuilder result = new StringBuilder();
            for (final long size : lengths) {
                result.append(' ').append(size);
            }
            return result.toString();
        }

        /**
         * Set lengths using decimal numbers like "10123".
         */
        private void setLengths(final String[] strings) throws IOException {
            if (strings.length != valueCount) {
                throw invalidLengths(strings);
            }

            try {
                for (int i = 0; i < strings.length; i++) {
                    lengths[i] = Long.parseLong(strings[i]);
                }
            } catch (final NumberFormatException e) {
                throw invalidLengths(strings);
            }
        }

        private IOException invalidLengths(final String[] strings) throws IOException {
            throw new IOException("unexpected journal line: " + Arrays.toString(strings));
        }

        public File getCleanFile(final int i) {
            return new File(directory, key + "." + i);
        }

        public File getDirtyFile(final int i) {
            return new File(directory, key + "." + i + ".tmp");
        }
    }
}
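For reference, the raw DiskLruCache API boils down to open/edit/commit for writes and get for reads. A minimal standalone sketch, with error handling trimmed and arbitrary directory and size values:

import java.io.File;
import java.io.IOException;

public final class DiskLruCacheDemo {
    public static void main(final String[] args) throws IOException {
        final File dir = new File(System.getProperty("java.io.tmpdir"), "dlc-demo");
        // appVersion 1, one value per entry, 10 MB limit.
        final DiskLruCache cache = DiskLruCache.open(dir, 1, 1, 10 * 1024 * 1024);

        // Write: every edit() must be matched by commit() or abort().
        final DiskLruCache.Editor editor = cache.edit("somekey");
        if (editor != null) {
            editor.set(0, "hello disk cache");
            editor.commit();
        }

        // Read: get() returns a snapshot of the committed values, or null.
        final DiskLruCache.Snapshot snapshot = cache.get("somekey");
        if (snapshot != null) {
            System.out.println(snapshot.getString(0));
            snapshot.close();
        }

        cache.close();
    }
}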