jemalloc
jemalloc is a memory allocator library. It primarily provides implementations of malloc, calloc, realloc, and free. These are standard C functions, so every C standard library (glibc, the VC runtime, and so on) already implements them. jemalloc is an alternative implementation outside the standard library; tcmalloc is another well-known example. The implementation inside glibc is ptmalloc.
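Because these are the standard entry points, jemalloc can usually be swapped in without recompiling, by preloading it with the dynamic linker. A minimal sketch, assuming the install prefix used later in this article:

```shell
# Run an arbitrary program with jemalloc's malloc/free instead of ptmalloc.
# The library path is an assumption; adjust it to your install prefix.
LD_PRELOAD=/usr/local/jemalloc-1.0.0/lib/libjemalloc.so.0 ls
```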
Installing jemalloc
# autoconf
# mkdir obj
# cd obj
# ../configure --prefix=/usr/local/jemalloc-1.0.0
# make
# make install
├── bin
│ └── pprof
├── include
│ └── jemalloc
│ ├── jemalloc_defs.h
│ └── jemalloc.h
├── lib
│ ├── libjemalloc_pic.a
│ ├── libjemalloc.so -> libjemalloc.so.0
│ └── libjemalloc.so.0
└── share
└── man
└── man3
└── jemalloc.3
7 directories, 7 files
After installation, jemalloc provides only two headers: jemalloc.h and jemalloc_defs.h. We will only use jemalloc.h.
A default build enables the following features:
Tiny object support
#define JEMALLOC_TINY
Thread-specific cache layer
#define JEMALLOC_TCACHE
Lazy locking
#define JEMALLOC_LAZY_LOCK
To build with assertions and sanity checks, or with allocation profiling, configure with:
# ../configure --enable-debug --prefix=/usr/local/jemalloc-1.0.0
# ../configure --enable-prof --prefix=/usr/local/jemalloc-1.0.0
Runtime configuration
Runtime options come from two sources: the /etc/jemalloc.conf symbolic link and an environment variable.
/etc/jemalloc.conf
Environment variable
JEMALLOC_OPTIONS
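Both channels carry the same single-character option string. Note that the /etc/jemalloc.conf options are read from the name the symlink points to (via readlink), not from a file's contents. A hedged sketch:

```shell
# System-wide: the option string is the symlink *target*, not file contents.
# 'P' prints allocator statistics at exit.
ln -s 'P' /etc/jemalloc.conf

# Per process, via the environment (same option syntax):
JEMALLOC_OPTIONS='P' ./malloc_test
```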
A test program, malloc_test.c:
#include <stdio.h>
#include <stdlib.h>
#include <jemalloc.h>

int main() {
    char *ptr = (char *) malloc(111);
    printf("ptr@%p\n", ptr);
    free(ptr);
    return 0;
}
Compile:
# gcc -I/usr/local/jemalloc-1.0.0/include/jemalloc -L/usr/local/jemalloc-1.0.0/lib malloc_test.c -o malloc_test -ljemalloc
Run:
# ./malloc_test
./malloc_test: error while loading shared libraries: libjemalloc.so.0: cannot open shared object file: No such file or directory
The loader cannot find libjemalloc.so.0; ldd confirms the missing dependency:
# ldd malloc_test
linux-gate.so.1 => (0x00670000)
libjemalloc.so.0 => not found
libc.so.6 => /lib/libc.so.6 (0x00ae3000)
/lib/ld-linux.so.2 (0x80039000)
Pass the --rpath linker option via -Wl so the runtime linker can locate the library:
# gcc -Wl,--rpath=/usr/local/jemalloc-1.0.0/lib -I/usr/local/jemalloc-1.0.0/include/jemalloc -L/usr/local/jemalloc-1.0.0/lib malloc_test.c -o malloc_test -ljemalloc
# ./malloc_test
ptr@0xb6804040
jemalloc source code analysis
The jemalloc source tree does not ship a ready-made configure script or Makefile. It provides an autogen.sh script that configures the tree and generates a Makefile, but that path offers no way to choose a custom install directory; everything lands under /usr/local (bin, include, lib, share, share/man).
Running autoconf ourselves generates configure, which lets us pass a custom install prefix.
The installed headers are generated by configure from the templates jemalloc.h.in and jemalloc_defs.h.in. The generated jemalloc_defs.h looks like:
/* include/jemalloc/jemalloc_defs.h.  Generated from jemalloc_defs.h.in by configure. */
#ifndef JEMALLOC_DEFS_H_
#define JEMALLOC_DEFS_H_

/*
 * If JEMALLOC_PREFIX is defined, it will cause all public APIs to be prefixed.
 * This makes it possible, with some care, to use multiple allocators
 * simultaneously.
 *
 * In many cases it is more convenient to manually prefix allocator function
 * calls than to let macros do it automatically, particularly when using
 * multiple allocators simultaneously.  Define JEMALLOC_MANGLE before
 * #include'ing jemalloc.h in order to cause name mangling that corresponds to
 * the API prefixing.
 */
/* #undef JEMALLOC_PREFIX */
#if (defined(JEMALLOC_PREFIX) && defined(JEMALLOC_MANGLE))
/* #undef JEMALLOC_P */
#endif

/*
 * Hyper-threaded CPUs may need a special instruction inside spin loops in
 * order to yield to another virtual CPU.
 */
#define CPU_SPINWAIT __asm__ volatile("pause")

/* Defined if __attribute__((...)) syntax is supported. */
#define JEMALLOC_HAVE_ATTR
#ifdef JEMALLOC_HAVE_ATTR
#  define JEMALLOC_ATTR(s) __attribute__((s))
#else
#  define JEMALLOC_ATTR(s)
#endif

/*
 * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables
 * inline functions.
 */
/* #undef JEMALLOC_DEBUG */

/* JEMALLOC_STATS enables statistics calculation. */
/* #undef JEMALLOC_STATS */

/* JEMALLOC_PROF enables allocation profiling. */
/* #undef JEMALLOC_PROF */

/* Use libunwind for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_LIBUNWIND */

/* Use libgcc for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_LIBGCC */

/*
 * JEMALLOC_TINY enables support for tiny objects, which are smaller than one
 * quantum.
 */
#define JEMALLOC_TINY

/*
 * JEMALLOC_TCACHE enables a thread-specific caching layer for small objects.
 * This makes it possible to allocate/deallocate objects without any locking
 * when the cache is in the steady state.
 */
#define JEMALLOC_TCACHE

/*
 * JEMALLOC_DSS enables use of sbrk(2) to allocate chunks from the data storage
 * segment (DSS).
 */
/* #undef JEMALLOC_DSS */

/* JEMALLOC_SWAP enables mmap()ed swap file support. */
/* #undef JEMALLOC_SWAP */

/* Support memory filling (junk/zero). */
/* #undef JEMALLOC_FILL */

/* Support optional abort() on OOM. */
/* #undef JEMALLOC_XMALLOC */

/* Support SYSV semantics. */
/* #undef JEMALLOC_SYSV */

/* Support lazy locking (avoid locking unless a second thread is launched). */
#define JEMALLOC_LAZY_LOCK

/* Determine page size at run time if defined. */
/* #undef DYNAMIC_PAGE_SHIFT */

/* One page is 2^STATIC_PAGE_SHIFT bytes. */
#define STATIC_PAGE_SHIFT 12

/* TLS is used to map arenas and magazine caches to threads. */
/* #undef NO_TLS */

/* sizeof(void *) == 2^LG_SIZEOF_PTR. */
#define LG_SIZEOF_PTR 2

/* sizeof(int) == 2^LG_SIZEOF_INT. */
#define LG_SIZEOF_INT 2

#endif /* JEMALLOC_DEFS_H_ */
Arenas
arena_t **arenas;
unsigned narenas;
arenas is a double pointer to arena_t, representing the set of allocation arenas: each element of arenas is an arena_t pointer referring to one arena.
narenas is the number of arenas.
Allocating memory
malloc
JEMALLOC_ATTR(malloc)
JEMALLOC_ATTR(visibility("default"))
void *
JEMALLOC_P(malloc)(size_t size)
{
    void *ret;
#ifdef JEMALLOC_PROF
    prof_thr_cnt_t *cnt;
#endif

    if (malloc_init()) {
        ret = NULL;
        goto OOM;
    }

    if (size == 0) {
#ifdef JEMALLOC_SYSV
        if (opt_sysv == false)
#endif
            size = 1;
#ifdef JEMALLOC_SYSV
        else {
#  ifdef JEMALLOC_XMALLOC
            if (opt_xmalloc) {
                malloc_write("<jemalloc>: Error in malloc(): "
                    "invalid size 0\n");
                abort();
            }
#  endif
            ret = NULL;
            goto RETURN;
        }
#endif
    }

#ifdef JEMALLOC_PROF
    if (opt_prof) {
        if ((cnt = prof_alloc_prep(size)) == NULL) {
            ret = NULL;
            goto OOM;
        }
        if (prof_promote && (uintptr_t)cnt != (uintptr_t)1U &&
            size <= small_maxclass) {
            ret = imalloc(small_maxclass+1);
            if (ret != NULL)
                arena_prof_promoted(ret, size);
        } else
            ret = imalloc(size);
    } else
#endif
        ret = imalloc(size);

OOM:
    if (ret == NULL) {
#ifdef JEMALLOC_XMALLOC
        if (opt_xmalloc) {
            malloc_write("<jemalloc>: Error in malloc(): "
                "out of memory\n");
            abort();
        }
#endif
        errno = ENOMEM;
    }

#ifdef JEMALLOC_SYSV
RETURN:
#endif
#ifdef JEMALLOC_PROF
    if (opt_prof && ret != NULL)
        prof_malloc(ret, cnt);
#endif
    return (ret);
}
malloc mainly calls two functions: malloc_init and imalloc.
malloc_init
static inline bool
malloc_init(void)
{

    if (malloc_initialized == false)
        return (malloc_init_hard());

    return (false);
}
malloc_init returns false on success and true on failure. That may look backwards, but it matches the Unix convention: 0 (false) means success, non-zero (true) means failure.
The function itself is simple. malloc_initialized starts out false; if initialization has not happened yet, malloc_init_hard performs it, and afterwards malloc_initialized is true, so subsequent calls are no-ops.
malloc_init_hard
static bool
malloc_init_hard(void)
{
    unsigned i;
    int linklen;
    char buf[PATH_MAX + 1];
    const char *opts;
    arena_t *init_arenas[1];

    malloc_mutex_lock(&init_lock);
    if (malloc_initialized || malloc_initializer == pthread_self()) {
        /*
         * Another thread initialized the allocator before this one
         * acquired init_lock, or this thread is the initializing
         * thread, and it is recursively allocating.
         */
        malloc_mutex_unlock(&init_lock);
        return (false);
    }
    if (malloc_initializer != (unsigned long)0) {
        /* Busy-wait until the initializing thread completes. */
        do {
            malloc_mutex_unlock(&init_lock);
            CPU_SPINWAIT;
            malloc_mutex_lock(&init_lock);
        } while (malloc_initialized == false);
        return (false);
    }

#ifdef DYNAMIC_PAGE_SHIFT
    /* Get page size. */
    {
        long result;

        result = sysconf(_SC_PAGESIZE);
        assert(result != -1);
        pagesize = (unsigned)result;

        /*
         * We assume that pagesize is a power of 2 when calculating
         * pagesize_mask and lg_pagesize.
         */
        assert(((result - 1) & result) == 0);
        pagesize_mask = result - 1;
        lg_pagesize = ffs((int)result) - 1;
    }
#endif

    for (i = 0; i < 3; i++) {
        unsigned j;

        /* Get runtime configuration. */
        switch (i) {
        case 0:
            if ((linklen = readlink("/etc/jemalloc.conf", buf,
                sizeof(buf) - 1)) != -1) {
                /*
                 * Use the contents of the "/etc/jemalloc.conf"
                 * symbolic link's name.
                 */
                buf[linklen] = '\0';
                opts = buf;
            } else {
                /* No configuration specified. */
                buf[0] = '\0';
                opts = buf;
            }
            break;
        case 1:
            if ((opts = getenv("JEMALLOC_OPTIONS")) != NULL) {
                /*
                 * Do nothing; opts is already initialized to
                 * the value of the JEMALLOC_OPTIONS
                 * environment variable.
                 */
            } else {
                /* No configuration specified. */
                buf[0] = '\0';
                opts = buf;
            }
            break;
        case 2:
            if (JEMALLOC_P(malloc_options) != NULL) {
                /*
                 * Use options that were compiled into the
                 * program.
                 */
                opts = JEMALLOC_P(malloc_options);
            } else {
                /* No configuration specified. */
                buf[0] = '\0';
                opts = buf;
            }
            break;
        default:
            /* NOTREACHED */
            assert(false);
            buf[0] = '\0';
            opts = buf;
        }

        for (j = 0; opts[j] != '\0'; j++) {
            unsigned k, nreps;
            bool nseen;

            /* Parse repetition count, if any. */
            for (nreps = 0, nseen = false;; j++, nseen = true) {
                switch (opts[j]) {
                case '0': case '1': case '2': case '3':
                case '4': case '5': case '6': case '7':
                case '8': case '9':
                    nreps *= 10;
                    nreps += opts[j] - '0';
                    break;
                default:
                    goto MALLOC_OUT;
                }
            }
MALLOC_OUT:
            if (nseen == false)
                nreps = 1;

            for (k = 0; k < nreps; k++) {
                switch (opts[j]) {
                case 'a': opt_abort = false; break;
                case 'A': opt_abort = true; break;
#ifdef JEMALLOC_PROF
                case 'b':
                    if (opt_lg_prof_bt_max > 0)
                        opt_lg_prof_bt_max--;
                    break;
                case 'B':
                    if (opt_lg_prof_bt_max < LG_PROF_BT_MAX)
                        opt_lg_prof_bt_max++;
                    break;
#endif
                case 'c':
                    if (opt_lg_cspace_max - 1 > opt_lg_qspace_max &&
                        opt_lg_cspace_max > LG_CACHELINE)
                        opt_lg_cspace_max--;
                    break;
                case 'C':
                    if (opt_lg_cspace_max < PAGE_SHIFT - 1)
                        opt_lg_cspace_max++;
                    break;
                case 'd':
                    if (opt_lg_dirty_mult + 1 < (sizeof(size_t) << 3))
                        opt_lg_dirty_mult++;
                    break;
                case 'D':
                    if (opt_lg_dirty_mult >= 0)
                        opt_lg_dirty_mult--;
                    break;
#ifdef JEMALLOC_PROF
                case 'e': opt_prof_active = false; break;
                case 'E': opt_prof_active = true; break;
                case 'f': opt_prof = false; break;
                case 'F': opt_prof = true; break;
#endif
#ifdef JEMALLOC_TCACHE
                case 'g':
                    if (opt_lg_tcache_gc_sweep >= 0)
                        opt_lg_tcache_gc_sweep--;
                    break;
                case 'G':
                    if (opt_lg_tcache_gc_sweep + 1 <
                        (sizeof(size_t) << 3))
                        opt_lg_tcache_gc_sweep++;
                    break;
                case 'h': opt_tcache = false; break;
                case 'H': opt_tcache = true; break;
#endif
#ifdef JEMALLOC_PROF
                case 'i':
                    if (opt_lg_prof_interval >= 0)
                        opt_lg_prof_interval--;
                    break;
                case 'I':
                    if (opt_lg_prof_interval + 1 <
                        (sizeof(uint64_t) << 3))
                        opt_lg_prof_interval++;
                    break;
#endif
#ifdef JEMALLOC_FILL
                case 'j': opt_junk = false; break;
                case 'J': opt_junk = true; break;
#endif
                case 'k':
                    /*
                     * Chunks always require at least one
                     * header page, plus one data page.
                     */
                    if ((1U << (opt_lg_chunk - 1)) >=
                        (2U << PAGE_SHIFT))
                        opt_lg_chunk--;
                    break;
                case 'K':
                    if (opt_lg_chunk + 1 < (sizeof(size_t) << 3))
                        opt_lg_chunk++;
                    break;
#ifdef JEMALLOC_PROF
                case 'l': opt_prof_leak = false; break;
                case 'L': opt_prof_leak = true; break;
#endif
#ifdef JEMALLOC_TCACHE
                case 'm':
                    if (opt_lg_tcache_maxclass >= 0)
                        opt_lg_tcache_maxclass--;
                    break;
                case 'M':
                    if (opt_lg_tcache_maxclass + 1 <
                        (sizeof(size_t) << 3))
                        opt_lg_tcache_maxclass++;
                    break;
#endif
                case 'n': opt_narenas_lshift--; break;
                case 'N': opt_narenas_lshift++; break;
#ifdef JEMALLOC_SWAP
                case 'o': opt_overcommit = false; break;
                case 'O': opt_overcommit = true; break;
#endif
                case 'p': opt_stats_print = false; break;
                case 'P': opt_stats_print = true; break;
                case 'q':
                    if (opt_lg_qspace_max > LG_QUANTUM)
                        opt_lg_qspace_max--;
                    break;
                case 'Q':
                    if (opt_lg_qspace_max + 1 < opt_lg_cspace_max)
                        opt_lg_qspace_max++;
                    break;
#ifdef JEMALLOC_PROF
                case 's':
                    if (opt_lg_prof_sample > 0)
                        opt_lg_prof_sample--;
                    break;
                case 'S':
                    if (opt_lg_prof_sample + 1 <
                        (sizeof(uint64_t) << 3))
                        opt_lg_prof_sample++;
                    break;
                case 'u': opt_prof_udump = false; break;
                case 'U': opt_prof_udump = true; break;
#endif
#ifdef JEMALLOC_SYSV
                case 'v': opt_sysv = false; break;
                case 'V': opt_sysv = true; break;
#endif
#ifdef JEMALLOC_XMALLOC
                case 'x': opt_xmalloc = false; break;
                case 'X': opt_xmalloc = true; break;
#endif
#ifdef JEMALLOC_FILL
                case 'z': opt_zero = false; break;
                case 'Z': opt_zero = true; break;
#endif
                default: {
                    char cbuf[2];

                    cbuf[0] = opts[j];
                    cbuf[1] = '\0';
                    malloc_write("<jemalloc>: Unsupported character "
                        "in malloc options: '");
                    malloc_write(cbuf);
                    malloc_write("'\n");
                }
                }
            }
        }
    }

    /* Register fork handlers. */
    if (pthread_atfork(jemalloc_prefork, jemalloc_postfork,
        jemalloc_postfork) != 0) {
        malloc_write("<jemalloc>: Error in pthread_atfork()\n");
        if (opt_abort)
            abort();
    }

    if (ctl_boot()) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }

    if (opt_stats_print) {
        /* Print statistics at exit. */
        if (atexit(stats_print_atexit) != 0) {
            malloc_write("<jemalloc>: Error in atexit()\n");
            if (opt_abort)
                abort();
        }
    }

    if (chunk_boot()) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }

    if (base_boot()) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }

#ifdef JEMALLOC_PROF
    prof_boot0();
#endif

    if (arena_boot()) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }

#ifdef JEMALLOC_TCACHE
    tcache_boot();
#endif

    if (huge_boot()) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }

    /*
     * Create enough scaffolding to allow recursive allocation in
     * malloc_ncpus().
     */
    narenas = 1;
    arenas = init_arenas;
    memset(arenas, 0, sizeof(arena_t *) * narenas);

    /*
     * Initialize one arena here.  The rest are lazily created in
     * choose_arena_hard().
     */
    arenas_extend(0);
    if (arenas[0] == NULL) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }

#ifndef NO_TLS
    /*
     * Assign the initial arena to the initial thread, in order to avoid
     * spurious creation of an extra arena if the application switches to
     * threaded mode.
     */
    arenas_map = arenas[0];
#endif

    malloc_mutex_init(&arenas_lock);

#ifdef JEMALLOC_PROF
    if (prof_boot1()) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }
#endif

    /* Get number of CPUs. */
    malloc_initializer = pthread_self();
    malloc_mutex_unlock(&init_lock);
    ncpus = malloc_ncpus();
    malloc_mutex_lock(&init_lock);

    if (ncpus > 1) {
        /*
         * For SMP systems, create more than one arena per CPU by
         * default.
         */
        opt_narenas_lshift += 2;
    }

    /* Determine how many arenas to use. */
    narenas = ncpus;
    if (opt_narenas_lshift > 0) {
        if ((narenas << opt_narenas_lshift) > narenas)
            narenas <<= opt_narenas_lshift;
        /*
         * Make sure not to exceed the limits of what base_alloc() can
         * handle.
         */
        if (narenas * sizeof(arena_t *) > chunksize)
            narenas = chunksize / sizeof(arena_t *);
    } else if (opt_narenas_lshift < 0) {
        if ((narenas >> -opt_narenas_lshift) < narenas)
            narenas >>= -opt_narenas_lshift;

        /* Make sure there is at least one arena. */
        if (narenas == 0)
            narenas = 1;
    }

#ifdef NO_TLS
    if (narenas > 1) {
        static const unsigned primes[] = {1, 3, 5, 7, 11, 13, 17, 19,
            23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79,
            83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139,
            149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197,
            199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263};
        unsigned nprimes, parenas;

        /*
         * Pick a prime number of hash arenas that is more than narenas
         * so that direct hashing of pthread_self() pointers tends to
         * spread allocations evenly among the arenas.
         */
        assert((narenas & 1) == 0); /* narenas must be even. */
        nprimes = (sizeof(primes) >> LG_SIZEOF_INT);
        parenas = primes[nprimes - 1]; /* In case not enough primes. */
        for (i = 1; i < nprimes; i++) {
            if (primes[i] > narenas) {
                parenas = primes[i];
                break;
            }
        }
        narenas = parenas;
    }
#endif

#ifndef NO_TLS
    next_arena = 0;
#endif

    /* Allocate and initialize arenas. */
    arenas = (arena_t **)base_alloc(sizeof(arena_t *) * narenas);
    if (arenas == NULL) {
        malloc_mutex_unlock(&init_lock);
        return (true);
    }
    /*
     * Zero the array.  In practice, this should always be pre-zeroed,
     * since it was just mmap()ed, but let's be sure.
     */
    memset(arenas, 0, sizeof(arena_t *) * narenas);
    /* Copy the pointer to the one arena that was already initialized. */
    arenas[0] = init_arenas[0];

    malloc_initialized = true;
    malloc_mutex_unlock(&init_lock);
    return (false);
}
malloc_init_hard sizes the arenas array from the CPU count: one arena_t pointer slot per arena, but only one arena is actually created up front; the rest are created lazily.
The boot sequence is roughly:
Read the runtime configuration (the /etc/jemalloc.conf symlink, the JEMALLOC_OPTIONS environment variable, and compiled-in options) and parse the option characters.
Register fork handlers (jemalloc_prefork, jemalloc_postfork) with pthread_atfork for the current process.
Call ctl_boot to initialize the ctl mutex.
Call chunk_boot to derive the chunk size, chunk size mask, and pages per chunk from opt_lg_chunk; with stats or prof enabled, initialize a chunks mutex; with swap enabled, chunk_swap_boot initializes a swap mutex and swap state; with dss enabled, chunk_dss_boot initializes a dss mutex and dss state.
Call base_boot to initialize the base mutex.
With prof enabled, call prof_boot0 to initialize profiling.
Call arena_boot to initialize arena bookkeeping; with tcache enabled, tcache_boot initializes the thread-cache layer; huge_boot initializes huge-allocation state.
Create one arena via arenas_extend(0). With TLS available, bind that initial arena to the initial thread.
Initialize the arenas mutex; with prof enabled, call prof_boot1.
Set malloc_initializer to the current thread and query the number of CPUs.
Then base_alloc allocates the real arenas array, which is zeroed and seeded with the already-created arena.
Finally malloc_initialized is set to true to mark initialization complete.
imalloc
JEMALLOC_INLINE void *
imalloc(size_t size)
{

    assert(size != 0);

    if (size <= arena_maxclass)
        return (arena_malloc(size, false));
    else
        return (huge_malloc(size, false));
}
imalloc dispatches on size: requests no larger than arena_maxclass are served by an arena, anything bigger by huge_malloc. arena_malloc in turn splits small from large requests, preferring the thread cache when one is available:
void *
arena_malloc(size_t size, bool zero)
{

    assert(size != 0);
    assert(QUANTUM_CEILING(size) <= arena_maxclass);

    if (size <= small_maxclass) {
#ifdef JEMALLOC_TCACHE
        tcache_t *tcache;

        if ((tcache = tcache_get()) != NULL)
            return (tcache_alloc_small(tcache, size, zero));
        else
#endif
            return (arena_malloc_small(choose_arena(), size, zero));
    } else {
#ifdef JEMALLOC_TCACHE
        if (size <= tcache_maxclass) {
            tcache_t *tcache;

            if ((tcache = tcache_get()) != NULL)
                return (tcache_alloc_large(tcache, size, zero));
            else {
                return (arena_malloc_large(choose_arena(),
                    size, zero));
            }
        } else
#endif
            return (arena_malloc_large(choose_arena(), size, zero));
    }
}
void *
huge_malloc(size_t size, bool zero)
{
    void *ret;
    size_t csize;
    extent_node_t *node;

    /* Allocate one or more contiguous chunks for this request. */
    csize = CHUNK_CEILING(size);
    if (csize == 0) {
        /* size is large enough to cause size_t wrap-around. */
        return (NULL);
    }

    /* Allocate an extent node with which to track the chunk. */
    node = base_node_alloc();
    if (node == NULL)
        return (NULL);

    ret = chunk_alloc(csize, &zero);
    if (ret == NULL) {
        base_node_dealloc(node);
        return (NULL);
    }

    /* Insert node into huge. */
    node->addr = ret;
    node->size = csize;

    malloc_mutex_lock(&huge_mtx);
    extent_tree_ad_insert(&huge, node);
#ifdef JEMALLOC_STATS
    huge_nmalloc++;
    huge_allocated += csize;
#endif
    malloc_mutex_unlock(&huge_mtx);

#ifdef JEMALLOC_FILL
    if (zero == false) {
        if (opt_junk)
            memset(ret, 0xa5, csize);
        else if (opt_zero)
            memset(ret, 0, csize);
    }
#endif

    return (ret);
}
calloc
JEMALLOC_ATTR(malloc)
JEMALLOC_ATTR(visibility("default"))
void *
JEMALLOC_P(calloc)(size_t num, size_t size)
{
    void *ret;
    size_t num_size;
#ifdef JEMALLOC_PROF
    prof_thr_cnt_t *cnt;
#endif

    if (malloc_init()) {
        num_size = 0;
        ret = NULL;
        goto RETURN;
    }

    num_size = num * size;
    if (num_size == 0) {
#ifdef JEMALLOC_SYSV
        if ((opt_sysv == false) && ((num == 0) || (size == 0)))
#endif
            num_size = 1;
#ifdef JEMALLOC_SYSV
        else {
            ret = NULL;
            goto RETURN;
        }
#endif
    /*
     * Try to avoid division here.  We know that it isn't possible to
     * overflow during multiplication if neither operand uses any of the
     * most significant half of the bits in a size_t.
     */
    } else if (((num | size) & (SIZE_T_MAX << (sizeof(size_t) << 2)))
        && (num_size / size != num)) {
        /* size_t overflow. */
        ret = NULL;
        goto RETURN;
    }

#ifdef JEMALLOC_PROF
    if (opt_prof) {
        if ((cnt = prof_alloc_prep(num_size)) == NULL) {
            ret = NULL;
            goto RETURN;
        }
        if (prof_promote && (uintptr_t)cnt != (uintptr_t)1U &&
            num_size <= small_maxclass) {
            ret = icalloc(small_maxclass+1);
            if (ret != NULL)
                arena_prof_promoted(ret, num_size);
        } else
            ret = icalloc(num_size);
    } else
#endif
        ret = icalloc(num_size);

RETURN:
    if (ret == NULL) {
#ifdef JEMALLOC_XMALLOC
        if (opt_xmalloc) {
            malloc_write("<jemalloc>: Error in calloc(): out of "
                "memory\n");
            abort();
        }
#endif
        errno = ENOMEM;
    }

#ifdef JEMALLOC_PROF
    if (opt_prof && ret != NULL)
        prof_malloc(ret, cnt);
#endif
    return (ret);
}
realloc
JEMALLOC_ATTR(visibility("default"))
void *
JEMALLOC_P(realloc)(void *ptr, size_t size)
{
    void *ret;
#ifdef JEMALLOC_PROF
    size_t old_size;
    prof_thr_cnt_t *cnt, *old_cnt;
#endif

    if (size == 0) {
#ifdef JEMALLOC_SYSV
        if (opt_sysv == false)
#endif
            size = 1;
#ifdef JEMALLOC_SYSV
        else {
            if (ptr != NULL) {
#ifdef JEMALLOC_PROF
                if (opt_prof) {
                    old_size = isalloc(ptr);
                    old_cnt = prof_cnt_get(ptr);
                    cnt = NULL;
                }
#endif
                idalloc(ptr);
            }
#ifdef JEMALLOC_PROF
            else if (opt_prof) {
                old_size = 0;
                old_cnt = NULL;
                cnt = NULL;
            }
#endif
            ret = NULL;
            goto RETURN;
        }
#endif
    }

    if (ptr != NULL) {
        assert(malloc_initialized || malloc_initializer ==
            pthread_self());

#ifdef JEMALLOC_PROF
        if (opt_prof) {
            old_size = isalloc(ptr);
            old_cnt = prof_cnt_get(ptr);
            if ((cnt = prof_alloc_prep(size)) == NULL) {
                ret = NULL;
                goto OOM;
            }
            if (prof_promote && (uintptr_t)cnt != (uintptr_t)1U &&
                size <= small_maxclass) {
                ret = iralloc(ptr, small_maxclass+1);
                if (ret != NULL)
                    arena_prof_promoted(ret, size);
            } else
                ret = iralloc(ptr, size);
        } else
#endif
            ret = iralloc(ptr, size);

#ifdef JEMALLOC_PROF
OOM:
#endif
        if (ret == NULL) {
#ifdef JEMALLOC_XMALLOC
            if (opt_xmalloc) {
                malloc_write("<jemalloc>: Error in realloc(): "
                    "out of memory\n");
                abort();
            }
#endif
            errno = ENOMEM;
        }
    } else {
#ifdef JEMALLOC_PROF
        if (opt_prof) {
            old_size = 0;
            old_cnt = NULL;
        }
#endif
        if (malloc_init()) {
#ifdef JEMALLOC_PROF
            if (opt_prof)
                cnt = NULL;
#endif
            ret = NULL;
        } else {
#ifdef JEMALLOC_PROF
            if (opt_prof) {
                if ((cnt = prof_alloc_prep(size)) == NULL)
                    ret = NULL;
                else {
                    if (prof_promote && (uintptr_t)cnt !=
                        (uintptr_t)1U && size <=
                        small_maxclass) {
                        ret = imalloc(small_maxclass+1);
                        if (ret != NULL) {
                            arena_prof_promoted(ret,
                                size);
                        }
                    } else
                        ret = imalloc(size);
                }
            } else
#endif
                ret = imalloc(size);
        }

        if (ret == NULL) {
#ifdef JEMALLOC_XMALLOC
            if (opt_xmalloc) {
                malloc_write("<jemalloc>: Error in realloc(): "
                    "out of memory\n");
                abort();
            }
#endif
            errno = ENOMEM;
        }
    }

#ifdef JEMALLOC_SYSV
RETURN:
#endif
#ifdef JEMALLOC_PROF
    if (opt_prof)
        prof_realloc(ret, cnt, ptr, old_size, old_cnt);
#endif
    return (ret);
}
Freeing memory
JEMALLOC_ATTR(visibility("default"))
void
JEMALLOC_P(free)(void *ptr)
{

    if (ptr != NULL) {
        assert(malloc_initialized || malloc_initializer ==
            pthread_self());

#ifdef JEMALLOC_PROF
        if (opt_prof)
            prof_free(ptr);
#endif
        idalloc(ptr);
    }
}