glibc 2.30 Source Reading Notes

Preface

Once you've mastered the Nine Yang Manual, the Heaven and Earth Great Shift comes naturally. And this, this right here, is the root of all evil.

Read it. Just brute-force read it.

Having finished, I found it doesn't differ much from 2.29; then again, I may have missed something.

Overview

It helps to keep some overall principles and invariants in mind; whenever the code below confuses you, scroll back up to these:

  1. Small chunks get cache-style handling for speed, so they are LIFO; large chunks are managed for efficient reuse, so they are FIFO.
  2. A piece of heap space is called a chunk, and the user's view and memory's view of it start at different offsets: the user only cares about reading and writing data, while memory cares about how chunks relate to one another.
  3. The malloc(1) you asked for actually hands you more than 1 byte. For efficiency the system performs alignment: the request is rounded up to a multiple of 2*SIZE_SZ (16 bytes on 64-bit), where SIZE_SZ is the machine word size, the same word behind the "32-bit/64-bit" label on your program.
  4. A chunk you malloc'd and then free'd is not returned to the real system right away; if it were, the system couldn't keep up. Freed chunks are first parked in a place called bins. A garbage bin, what a vivid name. Note that fastbins and tcache are not part of this bins array.
  5. As long as a chunk is on some bin's list, its fd and bk always point at something meaningful: either another chunk that was put on the list, or the head of the bin itself.
  6. fd and bk encode a temporal relation. On the same list of the smallbins or fastbins, if chunk1 went in first and chunk2 second, then chunk2's fd points to chunk1 (the singly-linked fastbins only use fd) and chunk1's bk points to chunk2; their addresses are irrelevant. This rule does not fully carry over to largebin lists.
  7. fd/bk_nextsize encode a size relation. In smallbins and the like, every list holds one fixed size, but a largebin list may contain several chunk sizes; chunks of the same size form a sub-list, the heads of the sub-lists are connected by fd/bk_nextsize to express size order, and fd/bk express position.
  8. The bk of a largebin list head always points to the smallest chunk on the list, and its fd to the largest. A head's fd_nextsize points to the next smaller sub-list and bk_nextsize to the next larger one, and the relation is circular: the smallest sub-list's fd_nextsize points back to the largest.

Data structures

fd/bk_nextsize encode a size relation. Smallbins and the like keep one fixed size per list, but a largebin list may contain chunks of several sizes, and these two extra pointers maintain that ordering. Chunks of the same size form a sub-list; each sub-list head carries fd/bk_nextsize, with fd_nextsize pointing to the head of the next smaller sub-list and bk_nextsize pointing to the head of the next larger one. The largest size's bk_nextsize points to the smallest sub-list head, and likewise the smallest's fd_nextsize points to the largest.

malloc_chunk

/*
  This struct declaration is misleading (but accurate and necessary).
  It declares a "view" into memory allowing access to necessary
  fields at known offsets from a given base. See explanation below.
*/
struct malloc_chunk {

  INTERNAL_SIZE_T      mchunk_prev_size;  /* Size of previous chunk (if free).  */
  INTERNAL_SIZE_T      mchunk_size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;

  /* Only used for large blocks: pointer to next larger size.  */
  struct malloc_chunk* fd_nextsize; /* double links -- used only if free. */
  struct malloc_chunk* bk_nextsize;
};

The official comment itself tells you the struct is somewhat misleading, so treat it as a rough sketch. When memory is being managed, the useful fields really do look like this, but the contiguous space after them is also used; otherwise, when you malloc(500), where would the rest of the memory live?

malloc_state

arena, the eternal GOAT of heap management; mstate is just another name for it.

struct malloc_state
{
  /* Serialize access.  */
  __libc_lock_define (, mutex);

  /* Flags (formerly in max_fast).  */
  int flags;

  /* Set if the fastbin chunks contain recently inserted free blocks.  */
  /* Note this is a bool but not all targets support atomics on booleans.  */
  int have_fastchunks;

  /* Fastbins */
  mfastbinptr fastbinsY[NFASTBINS];

  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr top;

  /* The remainder from the most recent split of a small request */
  mchunkptr last_remainder;

  /* Normal bins packed as described above */
  mchunkptr bins[NBINS * 2 - 2];

  /* Bitmap of bins */
  unsigned int binmap[BINMAPSIZE];

  /* Linked list */
  struct malloc_state *next;

  /* Linked list for free arenas.  Access to this field is serialized
     by free_list_lock in arena.c.  */
  struct malloc_state *next_free;

  /* Number of threads attached to this arena.  0 if the arena is on
     the free list.  Access to this field is serialized by
     free_list_lock in arena.c.  */
  INTERNAL_SIZE_T attached_threads;

  /* Memory allocated from the system in this arena.  */
  INTERNAL_SIZE_T system_mem;
  INTERNAL_SIZE_T max_system_mem;
};

tcache_entry

This is what a tcache_entry looks like. Think of it as something like a fastbin node: next plays the same role as fd, while key points back to the owning tcache_perthread_struct and exists to detect double frees.

/* We overlay this structure on the user-data portion of a chunk when
   the chunk is stored in the per-thread cache.  */
typedef struct tcache_entry
{
  struct tcache_entry *next;
  /* This field exists to detect double frees.  */
  struct tcache_perthread_struct *key;
} tcache_entry;

tcache_perthread_struct

The management structure for the whole tcache, one per thread, a bit like malloc_state. By default there are 64 tcache_entry lists, and counts records how many chunks sit on each list. I'll just call it the tcache from now on.

Below it are two thread-local variables with their default values.

/* There is one of these for each thread, which contains the
   per-thread cache (hence "tcache_perthread_struct").  Keeping
   overall size low is mildly important.  Note that COUNTS and ENTRIES
   are redundant (we could have just counted the linked list each
   time), this is for performance reasons.  */
typedef struct tcache_perthread_struct
{
  uint16_t counts[TCACHE_MAX_BINS];
  tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;
static __thread bool tcache_shutting_down = false;
static __thread tcache_perthread_struct *tcache = NULL;

malloc_par

Its role isn't fully clear to me yet. For now I only know that a single mp_ instance is created, recording some important tcache parameters among other things.

struct malloc_par
{
  /* Tunable parameters */
  unsigned long trim_threshold;
  INTERNAL_SIZE_T top_pad;
  INTERNAL_SIZE_T mmap_threshold;
  INTERNAL_SIZE_T arena_test;
  INTERNAL_SIZE_T arena_max;

  /* Memory map support */
  int n_mmaps;
  int n_mmaps_max;
  int max_n_mmaps;
  /* the mmap_threshold is dynamic, until the user sets
     it manually, at which point we need to disable any
     dynamic behavior. */
  int no_dyn_threshold;

  /* Statistics */
  INTERNAL_SIZE_T mmapped_mem;
  INTERNAL_SIZE_T max_mmapped_mem;

  /* First address handed out by MORECORE/sbrk.  */
  char *sbrk_base;

#if USE_TCACHE
  /* Maximum number of buckets to use.  */
  size_t tcache_bins;
  size_t tcache_max_bytes;
  /* Maximum number of chunks in each bucket.  */
  size_t tcache_count;
  /* Maximum number of chunks to remove from the unsorted list, which
     aren't used to prefill the cache.  */
  size_t tcache_unsorted_limit;
#endif
};

Some macros

Macros for handling sizes

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((void*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* The smallest possible chunk */
#define MIN_CHUNK_SIZE        (offsetof(struct malloc_chunk, fd_nextsize))

/* The smallest size we can malloc is an aligned minimal chunk */

#define MINSIZE  \
  (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))

/* Check if m has acceptable alignment */

#define aligned_OK(m)  (((unsigned long)(m) & MALLOC_ALIGN_MASK) == 0)

#define misaligned_chunk(p) \
  ((uintptr_t)(MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p)) \
   & MALLOC_ALIGN_MASK)

/* pad request bytes into a usable size -- internal version */

#define request2size(req)                                         \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
   MINSIZE :                                                      \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

chunk2mem(p): convert the memory view to the user view.

mem2chunk(mem): convert the user view to the memory view.

MIN_CHUNK_SIZE: the theoretically smallest chunk size.

MINSIZE: the smallest chunk a user request can actually carve out.

misaligned_chunk(p): used as a check in _int_free.

request2size(req): convert the size the user asked for into the size memory actually operates on.

Macros for handling chunks

/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
#define PREV_INUSE 0x1

/* extract inuse bit of previous chunk */
#define prev_inuse(p)       ((p)->mchunk_size & PREV_INUSE)


/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
#define IS_MMAPPED 0x2

/* check for mmap()'ed chunk */
#define chunk_is_mmapped(p) ((p)->mchunk_size & IS_MMAPPED)


/* size field is or'ed with NON_MAIN_ARENA if the chunk was obtained
   from a non-main arena.  This is only set immediately before handing
   the chunk to the user, if necessary.  */
#define NON_MAIN_ARENA 0x4

/* Check for chunk from main arena.  */
#define chunk_main_arena(p) (((p)->mchunk_size & NON_MAIN_ARENA) == 0)

/* Mark a chunk as not being on the main arena.  */
#define set_non_main_arena(p) ((p)->mchunk_size |= NON_MAIN_ARENA)


/*
   Bits to mask off when extracting size

   Note: IS_MMAPPED is intentionally not masked off from size field in
   macros for which mmapped chunks should never be seen. This should
   cause helpful core dumps to occur if it is tried by accident by
   people extending or adapting this malloc.
 */
#define SIZE_BITS (PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)

/* Get size, ignoring use bits */
#define chunksize(p) (chunksize_nomask (p) & ~(SIZE_BITS))

/* Like chunksize, but do not mask SIZE_BITS.  */
#define chunksize_nomask(p)         ((p)->mchunk_size)

/* Ptr to next physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr) (((char *) (p)) + chunksize (p)))

/* Size of the chunk below P.  Only valid if !prev_inuse (P).  */
#define prev_size(p) ((p)->mchunk_prev_size)

/* Set the size of the chunk below P.  Only valid if !prev_inuse (P).  */
#define set_prev_size(p, sz) ((p)->mchunk_prev_size = (sz))

/* Ptr to previous physical malloc_chunk.  Only valid if !prev_inuse (P).  */
#define prev_chunk(p) ((mchunkptr) (((char *) (p)) - prev_size (p)))

/* Treat space at ptr + offset as a chunk */
#define chunk_at_offset(p, s)  ((mchunkptr) (((char *) (p)) + (s)))

/* extract p's inuse bit */
#define inuse(p)                                  \
  ((((mchunkptr) (((char *) (p)) + chunksize (p)))->mchunk_size) & PREV_INUSE)

/* set/clear chunk as being inuse without otherwise disturbing */
#define set_inuse(p)                                  \
  ((mchunkptr) (((char *) (p)) + chunksize (p)))->mchunk_size |= PREV_INUSE

#define clear_inuse(p)                                  \
  ((mchunkptr) (((char *) (p)) + chunksize (p)))->mchunk_size &= ~(PREV_INUSE)


/* check/set/clear inuse bits in known places */
#define inuse_bit_at_offset(p, s)                          \
  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)                          \
  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)                          \
  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size &= ~(PREV_INUSE))


/* Set size at head, without disturbing its use bit */
#define set_head_size(p, s)  ((p)->mchunk_size = (((p)->mchunk_size & SIZE_BITS) | (s)))

/* Set size/use field */
#define set_head(p, s)       ((p)->mchunk_size = (s))

/* Set size at footer (only when chunk is not in use) */
#define set_foot(p, s)       (((mchunkptr) ((char *) (p) + (s)))->mchunk_prev_size = (s))

prev_inuse(p)/chunk_is_mmapped(p)/chunk_main_arena(p): extract the chunk's three flag bits; from here on I'll just call them P/M/A.

SIZE_BITS: the three flag bits OR'ed together, 0b111 in binary. It shows up later; if you've studied networking you can treat it as a kind of mask, and the source itself does call it a MASK.

chunksize(p), chunksize_nomask(p): extract the chunk's size, one with the MASK bits stripped, one without.

next_chunk(p): use chunksize to find the header of the next chunk (adjacent, higher address).

prev_size(p): extract the chunk's prev_size.

set_prev_size(p, sz): set the chunk's prev_size to sz.

prev_chunk(p): use prev_size to find the header of the previous chunk (adjacent, lower address).

chunk_at_offset(p, s): treat the memory at offset s from p as a chunk.

inuse(p): if the next chunk's P bit is 1, the current chunk is considered in use by the user.

set_inuse(p)/clear_inuse(p): set/clear the next chunk's P bit.

inuse_bit_at_offset(p, s)/set_inuse_bit_at_offset(p, s)/clear_inuse_bit_at_offset(p, s): the offset versions of checking/setting/clearing the P bit.

set_head_size(p, s): build a new size from the s you pass in while keeping the three flags.

set_head(p, s): write the size without preserving flags.

set_foot(p, s): set the next chunk's prev_size.

Macros for handling fastbins

/*
   Fastbins

    An array of lists holding recently freed small chunks.  Fastbins
    are not doubly linked.  It is faster to single-link them, and
    since chunks are never removed from the middles of these lists,
    double linking is not necessary. Also, unlike regular bins, they
    are not even processed in FIFO order (they use faster LIFO) since
    ordering doesn't much matter in the transient contexts in which
    fastbins are normally used.

    Chunks in fastbins keep their inuse bit set, so they cannot
    be consolidated with other free chunks. malloc_consolidate
    releases all chunks in fastbins and consolidates them with
    other free chunks.
 */

typedef struct malloc_chunk *mfastbinptr;
#define fastbin(ar_ptr, idx) ((ar_ptr)->fastbinsY[idx])

/* offset 2 to use otherwise unindexable first 2 bins */
#define fastbin_index(sz) \
  ((((unsigned int) (sz)) >> (SIZE_SZ == 8 ? 4 : 3)) - 2)

fastbin(ar_ptr, idx): get the matching fastbin list by idx

fastbin_index(sz): get the matching fastbin idx from a size

#define REMOVE_FB(fb, victim, pp)            \
  do                            \
    {                            \
      victim = pp;                    \
      if (victim == NULL)                \
    break;                        \
    }                            \
  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
     != victim);                    \

This macro lives inside _int_malloc; I've pulled it out and put it here.

Macros for handling bins

bins is a one-dimensional array holding the head nodes of all the doubly-linked lists.

typedef struct malloc_chunk *mbinptr;

/* addressing -- note that bin_at(0) does not exist */
#define bin_at(m, i) \
  (mbinptr) (((char *) &((m)->bins[((i) - 1) * 2]))                  \
             - offsetof (struct malloc_chunk, fd))

/* analog of ++bin */
#define next_bin(b)  ((mbinptr) ((char *) (b) + (sizeof (mchunkptr) << 1)))

/* Reminders about list directionality within bins */
#define first(b)     ((b)->fd)
#define last(b)      ((b)->bk)
/*
    ...
    Bin 1 is the unordered list
    ...
*/
#define NBINS             128
#define NSMALLBINS         64
#define SMALLBIN_WIDTH    MALLOC_ALIGNMENT
#define SMALLBIN_CORRECTION (MALLOC_ALIGNMENT > 2 * SIZE_SZ)
#define MIN_LARGE_SIZE    ((NSMALLBINS - SMALLBIN_CORRECTION) * SMALLBIN_WIDTH)

#define in_smallbin_range(sz)  \
  ((unsigned long) (sz) < (unsigned long) MIN_LARGE_SIZE)

#define smallbin_index(sz) \
  ((SMALLBIN_WIDTH == 16 ? (((unsigned) (sz)) >> 4) : (((unsigned) (sz)) >> 3))\
   + SMALLBIN_CORRECTION)

#define largebin_index_32(sz)                                                \
  (((((unsigned long) (sz)) >> 6) <= 38) ?  56 + (((unsigned long) (sz)) >> 6) :\
   ((((unsigned long) (sz)) >> 9) <= 20) ?  91 + (((unsigned long) (sz)) >> 9) :\
   ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
   ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
   ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
   126)

#define largebin_index_32_big(sz)                                            \
  (((((unsigned long) (sz)) >> 6) <= 45) ?  49 + (((unsigned long) (sz)) >> 6) :\
   ((((unsigned long) (sz)) >> 9) <= 20) ?  91 + (((unsigned long) (sz)) >> 9) :\
   ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
   ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
   ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
   126)

// XXX It remains to be seen whether it is good to keep the widths of
// XXX the buckets the same or whether it should be scaled by a factor
// XXX of two as well.
#define largebin_index_64(sz)                                                \
  (((((unsigned long) (sz)) >> 6) <= 48) ?  48 + (((unsigned long) (sz)) >> 6) :\
   ((((unsigned long) (sz)) >> 9) <= 20) ?  91 + (((unsigned long) (sz)) >> 9) :\
   ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
   ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
   ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
   126)

#define largebin_index(sz) \
  (SIZE_SZ == 8 ? largebin_index_64 (sz)                                     \
   : MALLOC_ALIGNMENT == 16 ? largebin_index_32_big (sz)                     \
   : largebin_index_32 (sz))

#define bin_index(sz) \
  ((in_smallbin_range (sz)) ? smallbin_index (sz) : largebin_index (sz))

bin_at(m, i): m here is the malloc_state; use i as an index to find the address of the corresponding bin.

next_bin(b): find the address of the next bin.

first(b)/last(b): take the chunk's fd/bk.

in_smallbin_range(sz): check whether a size falls in smallbin range.

smallbin_index(sz): find the matching bin idx from a size.

largebin_index(sz): all those messy macros above exist to serve this one; it finds the matching largebin idx. I can't be bothered to work through the arithmetic.

bin_index(sz): find the idx for a size, merging the two cases above.

Macros for handling the binmap


/* Conservatively use 32 bits per map word, even if on 64bit system */
#define BINMAPSHIFT      5
#define BITSPERMAP       (1U << BINMAPSHIFT)
#define BINMAPSIZE       (NBINS / BITSPERMAP)

#define idx2block(i)     ((i) >> BINMAPSHIFT)
#define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT) - 1))))

#define mark_bin(m, i)    ((m)->binmap[idx2block (i)] |= idx2bit (i))
#define unmark_bin(m, i)  ((m)->binmap[idx2block (i)] &= ~(idx2bit (i)))
#define get_binmap(m, i)  ((m)->binmap[idx2block (i)] & idx2bit (i))

BINMAPSIZE: 128 bins / 32 bits per word = 4.

idx2block(i): find the binmap word an idx falls in

idx2bit(i): the bit value for an idx

Macros for handling the tcache

/* We want 64 entries.  This is an arbitrary limit, which tunables can reduce.  */
# define TCACHE_MAX_BINS        64
# define MAX_TCACHE_SIZE    tidx2usize (TCACHE_MAX_BINS-1)

/* Only used to pre-fill the tunables.  */
# define tidx2usize(idx)    (((size_t) idx) * MALLOC_ALIGNMENT + MINSIZE - SIZE_SZ)

/* When "x" is from chunksize().  */
# define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
/* When "x" is a user-provided size.  */
# define usize2tidx(x) csize2tidx (request2size (x))

/* With rounding and alignment, the bins are...
   idx 0   bytes 0..24 (64-bit) or 0..12 (32-bit)
   idx 1   bytes 25..40 or 13..20
   idx 2   bytes 41..56 or 21..28
   etc.  */

/* This is another arbitrary limit, which tunables can change.  Each
   tcache bin will hold at most this number of chunks.  */
# define TCACHE_FILL_COUNT 7

/* Maximum chunks in tcache bins for tunables.  This value must fit the range
   of tcache->counts[] entries, else they may overflow.  */
# define MAX_TCACHE_COUNT UINT16_MAX

tidx2usize(idx): tcache idx to size.

csize2tidx(x): tcache size to idx, memory view.

usize2tidx(x): same, user view.

TCACHE_FILL_COUNT: the cap on how many chunks each entry list may hold.

Helper functions

checked_request2size

/* Check if REQ overflows when padded and aligned and if the resulting value
   is less than PTRDIFF_T.  Returns TRUE and the requested size or MINSIZE in
   case the value is less than MINSIZE on SZ or false if any of the previous
   check fail.  */
static inline bool
checked_request2size (size_t req, size_t *sz) __nonnull (1)
{
  if (__glibc_unlikely (req > PTRDIFF_MAX))
    return false;
  *sz = request2size (req);
  return true;
}

Besides the check, it converts req, via the macro, into the size memory actually has to handle.

alloc_perturb/free_perturb

static int perturb_byte;

static void
alloc_perturb (char *p, size_t n)
{
  if (__glibc_unlikely (perturb_byte))
    memset (p, perturb_byte ^ 0xff, n);
}

static void
free_perturb (char *p, size_t n)
{
  if (__glibc_unlikely (perturb_byte))
    memset (p, perturb_byte, n);
}

Both fill memory with a byte pattern, though the alloc one rarely does anything in practice.

The free one usually earns its keep: it wipes the freed data.

unlink_chunk

Back when it had no status, it could only toil away as a lowly macro. Then the pwn crowd broke it one time too many, and it got promoted into a function. See, macros do have their limits: "I quit being a macro."

unlink's job is to take a chunk off whichever bin list it sits on. Although we say "take off", the emphasis is on correctly relinking that bin list afterwards, not on handing you back a pointer to the removed chunk.

The various checks are security mitigations against tricks that have long since been abused.

/* Take a chunk off a bin list.  */
static void
unlink_chunk (mstate av, mchunkptr p)
{
  // Check that this chunk's size equals the next chunk's prev_size
  if (chunksize (p) != prev_size (next_chunk (p)))
    malloc_printerr ("corrupted size vs. prev_size");

  // Grab two temporaries, fd and bk
  mchunkptr fd = p->fd;
  mchunkptr bk = p->bk;

  // Does my fd's bk point back to me? Does my bk's fd point back to me?
  if (__builtin_expect (fd->bk != p || bk->fd != p, 0))
    malloc_printerr ("corrupted double-linked list");

  // Take the chunk off the list and relink the list around it.
  fd->bk = bk;
  bk->fd = fd;

  // This part handles largebins; come back to it once the double loop in malloc makes sense
  if (!in_smallbin_range (chunksize_nomask (p)) && p->fd_nextsize != NULL)
    {
      if (p->fd_nextsize->bk_nextsize != p
      || p->bk_nextsize->fd_nextsize != p)
    malloc_printerr ("corrupted double-linked list (not small)");

      if (fd->fd_nextsize == NULL)
    {
      if (p->fd_nextsize == p)
        fd->fd_nextsize = fd->bk_nextsize = fd;
      else
        {
          fd->fd_nextsize = p->fd_nextsize;
          fd->bk_nextsize = p->bk_nextsize;
          p->fd_nextsize->bk_nextsize = fd;
          p->bk_nextsize->fd_nextsize = fd;
        }
    }
      else
      {
          p->fd_nextsize->bk_nextsize = p->bk_nextsize;
          p->bk_nextsize->fd_nextsize = p->fd_nextsize;
      }
    }
}

malloc

__libc_malloc

__libc_malloc, the one and only designated entry point when you call malloc.

void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  _Static_assert (PTRDIFF_MAX <= SIZE_MAX / 2,
                  "PTRDIFF_MAX is not more than half of SIZE_MAX");

  // Atomically force-read __malloc_hook
  void *(*hook) (size_t, const void *)
    = atomic_forced_read (__malloc_hook);
  // If it's non-NULL, call it as a function.
  if (__builtin_expect (hook != NULL, 0))
    return (*hook)(bytes, RETURN_ADDRESS (0));

#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes;
  if (!checked_request2size (bytes, &tbytes))
    {
      __set_errno (ENOMEM);
      return NULL;
    }
  // Compute the tcache idx
  size_t tc_idx = csize2tidx (tbytes);

  // Initialize the tcache if there isn't one yet
  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  // If idx is in range, the tcache exists, and that entry's count is > 0, call tcache_get
  if (tc_idx < mp_.tcache_bins
      && tcache
      && tcache->counts[tc_idx] > 0)
    {
      return tcache_get (tc_idx);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif

  if (SINGLE_THREAD_P)
    {
      // Single-threaded: call _int_malloc, the core logic of malloc.
      victim = _int_malloc (&main_arena, bytes);
      assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }

  // A non-main thread first grabs an mstate.
  arena_get (ar_ptr, bytes);

  // Non-main threads also obtain memory via _int_malloc.
  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
libc_hidden_def (__libc_malloc)

Initial processing

static void *
_int_malloc (mstate av, size_t bytes)
{
    //...
    if (!checked_request2size (bytes, &nb))
    {
        __set_errno (ENOMEM);
        return NULL;
    }

    /* There are no usable arenas.  Fall back to sysmalloc to get a chunk from
     mmap.  */
    if (__glibc_unlikely (av == NULL))
    {
        void *p = sysmalloc (nb, av);
        if (p != NULL)
            alloc_perturb (p, bytes);
        return p;
    }
    //...
}

Converts bytes into nb, the size memory actually needs to handle.

If no usable arena was obtained, call sysmalloc and let the system hand over a block.

The fastbins part

static void *
_int_malloc (mstate av, size_t bytes)
{
    //...
    #define REMOVE_FB(fb, victim, pp)            \
    //...
    // Within fastbin range.
    if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
    {
        // Get the idx from the requested size
        idx = fastbin_index (nb);
        // Find the matching fastbin list
        mfastbinptr *fb = &fastbin (av, idx);
        mchunkptr pp;
        // Take the head of the list
        victim = *fb;

        if (victim != NULL)
        {
            // Single-threaded case: unlink
            if (SINGLE_THREAD_P)
                *fb = victim->fd;
            else
                REMOVE_FB (fb, pp, victim);

            // Run a size check on the chunk we took: its size must map back to this bin's idx.
            if (__glibc_likely (victim != NULL))
            {
                size_t victim_idx = fastbin_index (chunksize (victim));
                if (__builtin_expect (victim_idx != idx, 0))
                    malloc_printerr ("malloc(): memory corruption (fast)");
                check_remalloced_chunk (av, victim, nb);
                #if USE_TCACHE
                /* While we're here, if we see other chunks of the same size,
         stash them in the tcache.  */
                // Get the idx of the tcache entry
                size_t tc_idx = csize2tidx (nb);
                if (tcache && tc_idx < mp_.tcache_bins)
                {
                    mchunkptr tc_victim;
                    // Keep taking chunks from the fastbin list we just unlinked from
                    // Stop when either the entry is full or the bin list is empty.
                    /* While bin not empty and tcache not full, copy chunks.  */
                    while (tcache->counts[tc_idx] < mp_.tcache_count
                           && (tc_victim = *fb) != NULL)
                    {
                        // Single-threaded: unlink from the list head again.
                        if (SINGLE_THREAD_P)
                            *fb = tc_victim->fd;
                        else
                        {
                            REMOVE_FB (fb, pp, tc_victim);
                            if (__glibc_unlikely (tc_victim == NULL))
                                break;
                        }
                        // Put the chunk we just got into the designated tcache entry.
                        tcache_put (tc_victim, tc_idx);
                    }
                }
                #endif
                // Switch to the user view, return the pointer, done
                void *p = chunk2mem (victim);
                alloc_perturb (p, bytes);
                return p;
            }
        }
    }
    //...
}

Notice that when allocating from a fastbin, the rest of that bin's list gets moved into the matching tcache entry, until the entry fills up or the bin runs dry.

The smallbins part

static void *
_int_malloc (mstate av, size_t bytes)
{
    //...
    /*
     If a small request, check regular bin.  Since these "smallbins"
     hold one size each, no searching within bins is necessary.
     (For a large request, we need to wait until unsorted chunks are
     processed to find best fit. But for small ones, fits are exact
     anyway, so we can check now, which is faster.)
   */

    // Size fits the smallbins
    if (in_smallbin_range (nb))
    {   
        // Find the idx and the matching bin
        idx = smallbin_index (nb);
        bin = bin_at (av, idx);

        // Check the bin list is not empty
        if ((victim = last (bin)) != bin)
        {
            // Take a chunk from the list's bk
            bck = victim->bk;
            // The fd of the taken chunk's bk must point back to it
            if (__glibc_unlikely (bck->fd != victim))
                malloc_printerr ("malloc(): smallbin double linked list corrupted");
            // Using nb, set the P bit of the chunk adjacent to victim's space
            set_inuse_bit_at_offset (victim, nb);
            // Unlink victim from the bin list.
            bin->bk = bck;
            bck->fd = bin;

            if (av != &main_arena)
                set_non_main_arena (victim);
            check_malloced_chunk (av, victim, nb);

            #if USE_TCACHE
            /* While we're here, if we see other chunks of the same size,
         stash them in the tcache.  */
            size_t tc_idx = csize2tidx (nb);
            if (tcache && tc_idx < mp_.tcache_bins)
            {
                mchunkptr tc_victim;

                /* While bin not empty and tcache not full, copy chunks over.  */
                // Take from the list's bk and put into the matching tcache entry, same treatment as the fastbins.
                // Again stop when either the entry is full or the bin list is empty.
                while (tcache->counts[tc_idx] < mp_.tcache_count
                       && (tc_victim = last (bin)) != bin)
                {
                    if (tc_victim != 0)
                    {
                        bck = tc_victim->bk;
                        set_inuse_bit_at_offset (tc_victim, nb);
                        if (av != &main_arena)
                            set_non_main_arena (tc_victim);
                        bin->bk = bck;
                        bck->fd = bin;

                        tcache_put (tc_victim, tc_idx);
                    }
                }
            }
            #endif
            void *p = chunk2mem (victim);
            alloc_perturb (p, bytes);
            return p;
        }
    }
    //...
}    

Behaves much like the fastbins; the difference is that fastbins are LIFO while smallbins are FIFO, which is why smallbins are traversed starting from the list's bk.

Transition

/*
     If this is a large request, consolidate fastbins before continuing.
     While it might look excessive to kill all fastbins before
     even seeing if there is space available, this avoids
     fragmentation problems normally associated with fastbins.
     Also, in practice, programs tend to have runs of either small or
     large requests, but less often mixtures, so consolidation is not
     invoked all that often in most programs. And the programs that
     it is called frequently in otherwise tend to fragment.
   */
  // For a largebin-sized request, if fastbin chunks exist, clean them up first via malloc_consolidate
  else
    {
      idx = largebin_index (nb);
      if (atomic_load_relaxed (&av->have_fastchunks))
        malloc_consolidate (av);
    }

  /*
     Process recently freed or remaindered chunks, taking one only if
     it is exact fit, or, if this a small request, the chunk is remainder from
     the most recent non-exact fit.  Place other traversed chunks in
     bins.  Note that this step is the only place in any routine where
     chunks are placed in bins.

     The outer loop here is needed because we might not realize until
     near the end of malloc that we should have consolidated, so must
     do so and retry. This happens at most once, and only when we would
     otherwise need to expand memory to service a "small" request.
   */

// If the tcache exists and idx is within bounds, record tcache_nb = nb
#if USE_TCACHE
  INTERNAL_SIZE_T tcache_nb = 0;
  size_t tc_idx = csize2tidx (nb);
  if (tcache && tc_idx < mp_.tcache_bins)
    tcache_nb = nb;
  int return_cached = 0;

  tcache_unsorted_count = 0;
#endif

The long comment in the middle there is about the double loop below; it's all solid material.

Note: this double loop is, across all routines, the one and only place where chunks get placed into bins.

The outer loop is written as an infinite loop, but it exits after at most two passes.

The double loop (1)

This part is too long, so I'll take it one piece at a time.

The inner loop walks into the unsortedbin.

for (;; )
{
    int iters = 0;
    // victim takes unsortedbin's bk and must not equal the list head; bins are always walked via bk, FIFO after all
    while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
    {  
        // Grab victim's bk, victim's size, and the next adjacent chunk
        bck = victim->bk;
        size = chunksize (victim);
        mchunkptr next = chunk_at_offset (victim, size);

        // A bunch of checks; they've clearly been burned before
        // victim's and the next chunk's sizes must be neither too small nor too large
        // the next chunk's prev_size must equal victim's size
        // victim's bk's fd must be victim
        // the next chunk's P bit must be 0 (victim is free, after all)
        if (__glibc_unlikely (size <= 2 * SIZE_SZ)
            || __glibc_unlikely (size > av->system_mem))
            malloc_printerr ("malloc(): invalid size (unsorted)");
        if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
            || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
            malloc_printerr ("malloc(): invalid next size (unsorted)");
        if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
            malloc_printerr ("malloc(): mismatching next->prev_size (unsorted)");
        if (__glibc_unlikely (bck->fd != victim)
            || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
            malloc_printerr ("malloc(): unsorted double linked list corrupted");
        if (__glibc_unlikely (prev_inuse (next)))
            malloc_printerr ("malloc(): invalid next->prev_inuse (unsorted)");
//...

The double loop (2)

// ...
/*
        If a small request, try to use last remainder if it is the
        only chunk in unsorted bin.  This helps promote locality for
        runs of consecutive small requests. This is the only
        exception to best-fit, and applies only when there is
        no exact fit for a small chunk.
*/

// Only when nb is in smallbin range, bck is the unsortedbin head, victim is last_remainder, and victim's size is big enough for nb do we enter the logic below.
if (in_smallbin_range (nb) &&
    bck == unsorted_chunks (av) &&
    victim == av->last_remainder &&
    (unsigned long) (size) > (unsigned long) (nb + MINSIZE))
{
    /* split and reattach remainder */
    // Carve a piece off last_remainder
    // The remainder's size shrinks, its pointer moves down by nb, and unsortedbin's fd and bk are updated to match
    remainder_size = size - nb;
    remainder = chunk_at_offset (victim, nb);
    unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
    // Update the arena
    av->last_remainder = remainder;
    // Point the remainder's fd and bk at the unsortedbin head
    remainder->bk = remainder->fd = unsorted_chunks (av);
    // If the remainder's size is no longer in smallbin range, treat it as a largebin chunk and clear the two extra pointers.
    if (!in_smallbin_range (remainder_size))
    {
        remainder->fd_nextsize = NULL;
        remainder->bk_nextsize = NULL;
    }
    // Fix up victim's header, the remainder's header, and the remainder's next chunk
    set_head (victim, nb | PREV_INUSE |
              (av != &main_arena ? NON_MAIN_ARENA : 0));
    set_head (remainder, remainder_size | PREV_INUSE);
    set_foot (remainder, remainder_size);

    check_malloced_chunk (av, victim, nb);

    // Switch from memory view to user view and return the pointer
    void *p = chunk2mem (victim);
    alloc_perturb (p, bytes);
    return p;
}
/* remove from unsorted list */
// Why is this being checked yet again
if (__glibc_unlikely (bck->fd != victim))
    malloc_printerr ("malloc(): corrupted unsorted chunks 3");
// Unlink victim from the unsortedbin
unsorted_chunks (av)->bk = bck;
bck->fd = unsorted_chunks (av);

// If victim's size exactly equals the needed nb
if (size == nb)
{
    // Set the next chunk's P bit to 1
    set_inuse_bit_at_offset (victim, size);
    if (av != &main_arena)
        set_non_main_arena (victim);
    #if USE_TCACHE
    /* Fill cache first, return to user only if cache fills.
         We may return one of these chunks later.  */
    // tcache_nb was recorded in the transition part; here we check that the matching entry isn't full
    if (tcache_nb
        && tcache->counts[tc_idx] < mp_.tcache_count)
    {
        // Put victim into the matching tcache entry, then start the next while iteration
        tcache_put (victim, tc_idx);
        return_cached = 1;
        continue;
    }
    // The matching tcache entry is full
    else
    {
        #endif
        // Return victim's user-view pointer
        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
        #if USE_TCACHE
    }
    #endif
}
// ...

Don't forget: nb is the value produced at the very start by processing malloc's argument.

The if with four conditions means: the unsortedbin holds exactly one chunk, that chunk happens to be last_remainder, and it is big enough.

The tcache handling here actually corresponds to the case where tcache is empty: even a chunk in the unsortedbin that fits nb exactly is not handed to the user, but put into the entry first.

Double loop (3)

// ...
/* place chunk in bin */

if (in_smallbin_range (size))
{
    // victim's size is in smallbin range: get the idx and the matching bin head bck; fwd is what the head's fd points to
    victim_index = smallbin_index (size);
    bck = bin_at (av, victim_index);
    fwd = bck->fd;
}

else
{  // size is in largebin range: likewise get the idx, the bin head bck, and the head's fd as fwd.
    victim_index = largebin_index (size);
    bck = bin_at (av, victim_index);
    fwd = bck->fd;

    // The bin list is not empty
    /* maintain large bins in sorted order */
    if (fwd != bck)
    {   
        /* Or with inuse bit to speed comparisons */
        size |= PREV_INUSE;
        /* if smaller than smallest, bypass loop below */
        assert (chunk_main_arena (bck->bk));

        // size is smaller than the size of the chunk bk points to (the smallest in the bin)
        if ((unsigned long) (size)
            < (unsigned long) chunksize_nomask (bck->bk))
        {
            // fwd becomes the bin head, bck becomes what bk pointed to
            fwd = bck;
            bck = bck->bk;

            victim->fd_nextsize = fwd->fd;
            victim->bk_nextsize = fwd->fd->bk_nextsize;
            fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
        }
        else
        {
            assert (chunk_main_arena (fwd));
            while ((unsigned long) size < chunksize_nomask (fwd))
            {
                fwd = fwd->fd_nextsize;
                assert (chunk_main_arena (fwd));
            }

            if ((unsigned long) size
                == (unsigned long) chunksize_nomask (fwd))
                /* Always insert in the second position.  */
                fwd = fwd->fd;
            else
            {
                victim->fd_nextsize = fwd;
                victim->bk_nextsize = fwd->bk_nextsize;
                if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
                    malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");
                fwd->bk_nextsize = victim;
                victim->bk_nextsize->fd_nextsize = victim;
            }
            bck = fwd->bk;
            if (bck->fd != fwd)
                malloc_printerr ("malloc(): largebin double linked list corrupted (bk)");
        }
    }
    // The bin list is empty: just insert victim directly
    else
        victim->fd_nextsize = victim->bk_nextsize = victim;
}

mark_bin (av, victim_index);
victim->bk = bck;
victim->fd = fwd;
fwd->bk = victim;
bck->fd = victim;

#if USE_TCACHE
/* If we've processed as many chunks as we're allowed while
     filling the cache, return one of the cached ones.  */
++tcache_unsorted_count;
if (return_cached
      && mp_.tcache_unsorted_limit > 0
    && tcache_unsorted_count > mp_.tcache_unsorted_limit)
{
    return tcache_get (tc_idx);
}
#endif

#define MAX_ITERS       10000
if (++iters >= MAX_ITERS)
    break;
//...
}

#if USE_TCACHE
/* If all the small chunks we found ended up cached, return one now.  */
if (return_cached)
{
    return tcache_get (tc_idx);
}
#endif

For the largebin part I'm not writing comments inside the code, or it would turn into a mess. First, find the largebin list matching the size; then there are several cases:

  1. The list is empty: insert victim directly, with fd_nextsize and bk_nextsize pointing at itself.
  2. The list is not empty, and size is smaller than the size bk points to, i.e. the minimum: victim becomes the smallest in the list and the first one on the bk side. By the wrap-around rule, the chunk "smaller" than the smallest is actually the largest, so victim's fd_nextsize points at the bin's fd. Likewise, "larger than the largest" really means the smallest, so victim's bk_nextsize takes over what the old largest chunk's bk_nextsize pointed at. Then the corresponding reverse pointers are set.
  3. The list is not empty and size is at least the list's minimum: walk from fd, the largest, along the fd_nextsize chain, i.e. keep decreasing in size until size is no smaller than the sublist's size; if size exactly matches a sublist, fwd takes that sublist's fd (always insert in the second position).
  4. The list is not empty, size is at least the list's minimum, and it does not equal the size of the sublist found: victim is now larger than fwd and smaller than fwd->bk_nextsize, so fd/bk_nextsize are arranged by that rule, followed by the same reverse-pointer setup.

The next few lines set bk and fd.

Then comes the tcache part. return_cached here comes from double loop (2). I haven't found where mp_.tcache_unsorted_limit can be changed, so I'm still thinking about how this path gets triggered; the comment about having processed as many chunks as allowed isn't very clear either.

After that we are already out of the while loop: if chunks were routed through tcache earlier but we still haven't obtained one by this point, take one out of tcache here.

Largebin request

/*
         If a large request, scan through the chunks of current bin in
         sorted order to find smallest that fits.  Use the skip list for this.
       */

if (!in_smallbin_range (nb))
{
    // Get the largebin list
    bin = bin_at (av, idx);

    /* skip scan if empty or largest chunk is too small */
    // Don't enter if the list is empty or even the largest chunk, pointed to by fd, cannot satisfy nb
    if ((victim = first (bin)) != bin
        && (unsigned long) chunksize_nomask (victim)
        >= (unsigned long) (nb))
    {
        // Get the head of the smallest sublist
        victim = victim->bk_nextsize;
        // Keep traversing via bk_nextsize until a sublist's size can satisfy nb, i.e. is at least nb
        while (((unsigned long) (size = chunksize (victim)) <
                (unsigned long) (nb)))
            victim = victim->bk_nextsize;

        /* Avoid removing the first entry for a size so that the skip
                 list does not have to be rerouted.  */
        // If victim is not the smallest chunk in the bin and its size equals its fd's, take the fd instead.
        if (victim != last (bin)
            && chunksize_nomask (victim)
            == chunksize_nomask (victim->fd))
            victim = victim->fd;

        // Record the remaining size and unlink victim.
        remainder_size = size - nb;
        unlink_chunk (av, victim);

        /* Exhaust */
        // The remaining size is too small; just do the simple settings.
        if (remainder_size < MINSIZE)
        {
            set_inuse_bit_at_offset (victim, size);
            if (av != &main_arena)
                set_non_main_arena (victim);
        }
        /* Split */
        else
        {
            // The remainder is not that small, so insert it at the unsortedbin's fd; since we can't be sure the unsortedbin is empty, a complete insertion is performed here.
            remainder = chunk_at_offset (victim, nb);
            /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
            bck = unsorted_chunks (av);
            fwd = bck->fd;
            if (__glibc_unlikely (fwd->bk != bck))
                malloc_printerr ("malloc(): corrupted unsorted chunks");
            remainder->bk = bck;
            remainder->fd = fwd;
            bck->fd = remainder;
            fwd->bk = remainder;
            if (!in_smallbin_range (remainder_size))
            {
                remainder->fd_nextsize = NULL;
                remainder->bk_nextsize = NULL;
            }
            set_head (victim, nb | PREV_INUSE |
                      (av != &main_arena ? NON_MAIN_ARENA : 0));
            set_head (remainder, remainder_size | PREV_INUSE);
            set_foot (remainder, remainder_size);
        }
        // Return victim
        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
    }
}

The process of taking things out of the largebin is surprisingly uncomplicated; still, I'll have to actually run the code to settle the many doubts in my mind.

The binmap part

/*
         Search for a chunk by scanning bins, starting with next largest
         bin. This search is strictly by best-fit; i.e., the smallest
         (with ties going to approximately the least recently used) chunk
         that fits is selected.

         The bitmap avoids needing to check that most blocks are nonempty.
         The particular case of skipping all bins during warm-up phases
         when no chunks have been returned yet is faster than it might look.
*/

// The matching largebin list had nothing suitable, so move on to the next largebin.
++idx;
bin = bin_at (av, idx);
block = idx2block (idx);
map = av->binmap[block];
bit = idx2bit (idx);

for (;; )
{
    /* Skip rest of block if there are no more set bits in this block.  */
    // No set bit at or above this one remains in the block (bit > map), or bit overflowed to 0
    if (bit > map || bit == 0)
    {
        // Keep advancing until the next block's map is non-empty
        do
        {
            // block out of range: jump to the top chunk part
            if (++block >= BINMAPSIZE) /* out of bins */
                goto use_top;
        }
        while ((map = av->binmap[block]) == 0);

        // Move to the first bin of the found block and reset bit to 1
        bin = bin_at (av, (block << BINMAPSHIFT));
        bit = 1;
    }

    /* Advance to bin with set bit. There must be one. */
    // Locate the matching bit position and the corresponding bin head
    while ((bit & map) == 0)
    {
        bin = next_bin (bin);
        bit <<= 1;
        assert (bit != 0);
    }

    // Take the bin's bk, i.e. the oldest (and, for largebins, smallest) chunk
    /* Inspect the bin. It is likely to be non-empty */
    victim = last (bin);

    /*  If a false alarm (empty bin), clear the bit. */
    // The bin is empty after all: clear its binmap bit and go look at the next bin
    if (victim == bin)
    {
        av->binmap[block] = map &= ~bit; /* Write through */
        bin = next_bin (bin);
        bit <<= 1;
    }

    // The bin is not empty
    else
    {
        size = chunksize (victim);

        /*  We know the first chunk in this bin is big enough to use. */
        // Why is even the smallest chunk here guaranteed to fit? We are already past the bin that nb maps to, so every chunk in this bin is at least as large as nb.
        assert ((unsigned long) (size) >= (unsigned long) (nb));

        remainder_size = size - nb;

        /* unlink */
        // Unlink victim
        unlink_chunk (av, victim);

        /* Exhaust */
        // The remainder is too small; simple handling
        if (remainder_size < MINSIZE)
        {
            set_inuse_bit_at_offset (victim, size);
            if (av != &main_arena)
                set_non_main_arena (victim);
        }

        /* Split */
        // The remainder is big enough; insert it at the unsortedbin's fd
        else
        {
            remainder = chunk_at_offset (victim, nb);

            /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
            bck = unsorted_chunks (av);
            fwd = bck->fd;
            if (__glibc_unlikely (fwd->bk != bck))
                malloc_printerr ("malloc(): corrupted unsorted chunks 2");
            remainder->bk = bck;
            remainder->fd = fwd;
            bck->fd = remainder;
            fwd->bk = remainder;

            /* advertise as last remainder */
            // This part is different: if nb is in smallbin range, the remainder is recorded as last_remainder
            if (in_smallbin_range (nb))
                av->last_remainder = remainder;
            // Some basic settings for the remainder
            if (!in_smallbin_range (remainder_size))
            {
                remainder->fd_nextsize = NULL;
                remainder->bk_nextsize = NULL;
            }
            set_head (victim, nb | PREV_INUSE |
                      (av != &main_arena ? NON_MAIN_ARENA : 0));
            set_head (remainder, remainder_size | PREV_INUSE);
            set_foot (remainder, remainder_size);
        }
        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
    }
}

The binmap structure records whether the bin at each position is non-empty, to speed up the largebin search. binmap has 4 elements of 4 bytes (32 bits) each, so it can record the emptiness of 128 bins; map splits the 128 bins into four blocks, and bit marks a bin's position within a block.

This part is honestly a bit chaotic and I haven't fully got it down yet, but remember that the first last_remainder is produced right here.

The top chunk part

use_top:
/*
         If large enough, split off the chunk bordering the end of memory
         (held in av->top). Note that this is in accord with the best-fit
         search rule.  In effect, av->top is treated as larger (and thus
         less well fitting) than any other available chunk since it can
         be extended to be as large as necessary (up to system
         limitations).

         We require that av->top always exists (i.e., has size >=
         MINSIZE) after initialization, so if it would otherwise be
         exhausted by current request, it is replenished. (The main
         reason for ensuring it exists is that we may need MINSIZE space
         to put in fenceposts in sysmalloc.)
       */

// Get the top chunk and its size
victim = av->top;
size = chunksize (victim);

// The top chunk's size cannot exceed system_mem
if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");

// The size satisfies the nb request
if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
{
    // Easy to understand: carve a piece out of the top chunk and hand it back to the user.
    remainder_size = size - nb;
    remainder = chunk_at_offset (victim, nb);
    av->top = remainder;
    set_head (victim, nb | PREV_INUSE |
              (av != &main_arena ? NON_MAIN_ARENA : 0));
    set_head (remainder, remainder_size | PREV_INUSE);

    check_malloced_chunk (av, victim, nb);
    void *p = chunk2mem (victim);
    alloc_perturb (p, bytes);
    return p;
}

/* When we are using atomic ops to free fast chunks we can get
         here for all block sizes.  */
// The request cannot be satisfied, but fastbins exist
else if (atomic_load_relaxed (&av->have_fastchunks))
{
    // The big fastbin consolidation; this is what can actually trigger a second pass of the loop.
    malloc_consolidate (av);
    /* restore original bin index */
    if (in_smallbin_range (nb))
        idx = smallbin_index (nb);
    else
        idx = largebin_index (nb);
}

/*
         Otherwise, relay to handle system-dependent cases
       */
// The top chunk is too small and there are no fastbins either, so ask the system for a block via sysmalloc and return it
else
{
    void *p = sysmalloc (nb, av);
    if (p != NULL)
        alloc_perturb (p, bytes);
    return p;
}

free

__libc_free

The entry function for free

void
__libc_free (void *mem)
{
  mstate ar_ptr;
  mchunkptr p;                          /* chunk corresponding to mem */

  // The same trick as with malloc_hook
  void (*hook) (void *, const void *)
    = atomic_forced_read (__free_hook);
  if (__builtin_expect (hook != NULL, 0))
    {
      (*hook)(mem, RETURN_ADDRESS (0));
      return;
    }

  // Do nothing if the argument is 0
  if (mem == 0)                              /* free(0) has no effect */
    return;

  // Switch the pointer from user view to memory view
  p = mem2chunk (mem);

  // Extra handling for chunks with the M bit set
  if (chunk_is_mmapped (p))                       /* release mmapped memory. */
    {
      /* See if the dynamic brk/mmap threshold needs adjusting.
     Dumped fake mmapped chunks do not affect the threshold.  */
      if (!mp_.no_dyn_threshold
          && chunksize_nomask (p) > mp_.mmap_threshold
          && chunksize_nomask (p) <= DEFAULT_MMAP_THRESHOLD_MAX
      && !DUMPED_MAIN_ARENA_CHUNK (p))
        {
          mp_.mmap_threshold = chunksize (p);
          mp_.trim_threshold = 2 * mp_.mmap_threshold;
          LIBC_PROBE (memory_mallopt_free_dyn_thresholds, 2,
                      mp_.mmap_threshold, mp_.trim_threshold);
        }
      munmap_chunk (p);
      return;
    }

  // Initialize tcache if there is none yet
  MAYBE_INIT_TCACHE ();
  // Get the arena (mstate) this chunk belongs to
  ar_ptr = arena_for_chunk (p);
  // Call _int_free, the core logic of free.
  _int_free (ar_ptr, p, 0);
}
libc_hidden_def (__libc_free)

Initial handling

static void
_int_free (mstate av, mchunkptr p, int have_lock)
{
    //...
    // Get the size with the mask bits stripped
    size = chunksize (p);

    /* Little security check which won't hurt performance: the
     allocator never wrapps around at the end of the address space.
     Therefore we can exclude some size values which might appear
     here by accident or by "design" from some intruder.  */
    if (__builtin_expect ((uintptr_t) p > (uintptr_t) -size, 0)
        || __builtin_expect (misaligned_chunk (p), 0))
        malloc_printerr ("free(): invalid pointer");

    /* We know that each chunk is at least MINSIZE bytes in size or a
     multiple of MALLOC_ALIGNMENT.  */
    if (__glibc_unlikely (size < MINSIZE || !aligned_OK (size)))
        malloc_printerr ("free(): invalid size");

    check_inuse_chunk(av, p);
    //...
}

A very simple security check, though at first I couldn't see what it means: since the allocator never wraps around at the end of the address space, p + size must not overflow, which rules out some accidental or crafted size values.

The tcache part

static void
_int_free (mstate av, mchunkptr p, int have_lock)
{
    //...
    #if USE_TCACHE
    {   
        // Get the idx for this size
        size_t tc_idx = csize2tidx (size);
        // tcache exists and idx is within range
        if (tcache != NULL && tc_idx < mp_.tcache_bins)
        {   
            // Treat the chunk's user-view pointer as a tcache_entry
            /* Check to see if it's already in the tcache.  */
            tcache_entry *e = (tcache_entry *) chunk2mem (p);

            /* This test succeeds on double free.  However, we don't 100%
       trust it (it also matches random payload data at a 1 in
       2^<size_t> chance), so verify it's not an unlikely
       coincidence before aborting.  */
            // Check whether the entry's key equals tcache
            if (__glibc_unlikely (e->key == tcache))
            {
                tcache_entry *tmp;
                LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
                // Traverse every chunk in the entry list for this idx
                for (tmp = tcache->entries[tc_idx];
                     tmp;
                     tmp = tmp->next)
                    // If one equals the pointer being freed, treat it as a double free.
                    if (tmp == e)
                        malloc_printerr ("free(): double free detected in tcache 2");
                /* If we get here, it was a coincidence.  We've wasted a
           few cycles, but don't abort.  */
            }

            // The entry is not full; call put
            if (tcache->counts[tc_idx] < mp_.tcache_count)
            {  
                tcache_put (p, tc_idx);
                return;
            }
        }
    }
    #endif
    //...
}

The check is based on the fact that every tcache_put sets the chunk's key to point at tcache; key sits roughly where bk would be. So on free, if the pointer's key equals tcache, the matching entry list is traversed to see whether the same chunk is already in there, which would mean a double free.

The reason it doesn't abort right away is presumably that the user's data at the key position could just happen to equal tcache. Honestly though, if it has to be this convoluted, maybe we should just stop using tcache altogether.

The fastbins part

static void 
_int_free (mstate av, mchunkptr p, int have_lock) 
{    
    //...
    /*
        If eligible, place chunk on a fastbin so it can be found
        and used quickly in malloc.
      */
    // The size is within fastbin range
    if ((unsigned long)(size) <= (unsigned long)(get_max_fast ())
        // Check whether it borders the top chunk
        #if TRIM_FASTBINS
        /*
        If TRIM_FASTBINS set, don't place chunks
        bordering top into fastbins
          */
        && (chunk_at_offset(p, size) != av->top)
        #endif

       ) {
        // Small sanity checks: the next chunk can be neither too small nor too large
        if (__builtin_expect (chunksize_nomask (chunk_at_offset (p, size))
                              <= 2 * SIZE_SZ, 0)
            || __builtin_expect (chunksize (chunk_at_offset (p, size))
                                 >= av->system_mem, 0))
        {
            bool fail = true;
            /* We might not have a lock at this point and concurrent modifications
           of system_mem might result in a false positive.  Redo the test after
           getting the lock.  */
            // Concurrency handling
            if (!have_lock)
            {
                __libc_lock_lock (av->mutex);
                fail = (chunksize_nomask (chunk_at_offset (p, size)) <= 2 * SIZE_SZ
                        || chunksize (chunk_at_offset (p, size)) >= av->system_mem);
                __libc_lock_unlock (av->mutex);
            }

            if (fail)
                malloc_printerr ("free(): invalid next size (fast)");
        }

        free_perturb (chunk2mem(p), size - 2 * SIZE_SZ);

        // Set the arena's have_fastchunks flag
        atomic_store_relaxed (&av->have_fastchunks, true);
        // Get the idx and the bin list from the size.
        unsigned int idx = fastbin_index(size);
        fb = &fastbin (av, idx);

        /* Atomically link P to its fastbin: P->FD = *FB; *FB = P;  */
        // Odd-looking declaration, but old is definitely the first element of the bin list
        mchunkptr old = *fb, old2;

        if (SINGLE_THREAD_P)
        {
            /* Check that the top of the bin is not the record we are going to
           add (i.e., double free).  */
            // Double-free check: old must not be the same as the pointer being freed
            if (__builtin_expect (old == p, 0))
                malloc_printerr ("double free or corruption (fasttop)");
            // After passing the check, insert the pointer into the bin as the new head
            p->fd = old;
            *fb = p;
        }
        else
            do
            {
                /* Check that the top of the bin is not the record we are going to
             add (i.e., double free).  */
                if (__builtin_expect (old == p, 0))
                    malloc_printerr ("double free or corruption (fasttop)");
                p->fd = old2 = old;
            }
        while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
               != old2);

        /* Check that size of fastbin chunk at the top is the same as
           size of the chunk that we are adding.  We can dereference OLD
           only if we have the lock, otherwise it might have already been
           allocated again.  */
        // This one still confuses me.
        if (have_lock && old != NULL
            && __builtin_expect (fastbin_index (chunksize (old)) != idx, 0))
            malloc_printerr ("invalid fastbin entry (free)");
    }
    //...
}

Whenever fastbins are involved there are always some odd multithreading operations; a few open questions remain here, let's see if I can resolve them later.

The consolidation part

// Didn't go into fastbins, and not an M-bit (mmapped) chunk
else if (!chunk_is_mmapped(p)) {

    /* If we're single-threaded, don't lock the arena.  */
    if (SINGLE_THREAD_P)
      have_lock = true;

    if (!have_lock)
      __libc_lock_lock (av->mutex);
    // Find the next chunk via size
    nextchunk = chunk_at_offset(p, size);

    /* Lightweight tests: check whether the block is already the
       top block.  */
    // The pointer must not be the top chunk
    if (__glibc_unlikely (p == av->top))
      malloc_printerr ("double free or corruption (top)");
    /* Or whether the next chunk is beyond the boundaries of the arena.  */
    // nextchunk is way out of line, beyond even the top chunk's range, so the free reaches too far.
    if (__builtin_expect (contiguous (av)
              && (char *) nextchunk
              >= ((char *) av->top + chunksize(av->top)), 0))
    malloc_printerr ("double free or corruption (out)");
    /* Or whether the block is actually not marked used.  */
    // nextchunk's P bit is 0, proving the pointer is not in an in-use state.
    if (__glibc_unlikely (!prev_inuse(nextchunk)))
      malloc_printerr ("double free or corruption (!prev)");

    // nextchunk's size
    nextsize = chunksize(nextchunk);
    // Similar to the fastbin check earlier: the size can be neither too small nor too large.
    if (__builtin_expect (chunksize_nomask (nextchunk) <= 2 * SIZE_SZ, 0)
    || __builtin_expect (nextsize >= av->system_mem, 0))
      malloc_printerr ("free(): invalid next size (normal)");

    free_perturb (chunk2mem(p), size - 2 * SIZE_SZ);

    /* consolidate backward */
    // Check the chunk at the lower address: if the pointer's P bit is 0, the previous chunk is also free; move the pointer back to that chunk and recompute the size.
    if (!prev_inuse(p)) {
      prevsize = prev_size (p);
      size += prevsize;
      p = chunk_at_offset(p, -((long) prevsize));
      if (__glibc_unlikely (chunksize(p) != prevsize))
        malloc_printerr ("corrupted size vs. prev_size while consolidating");
      // The adjusted pointer is then unlinked
      unlink_chunk (av, p);
    }

    // nextchunk is not the top chunk
    if (nextchunk != av->top) {
      /* get and clear inuse bit */
      // Read the P bit of the chunk after nextchunk to decide whether nextchunk is in use.
      nextinuse = inuse_bit_at_offset(nextchunk, nextsize);

      /* consolidate forward */
      // If nextchunk is not in use, unlink it and recompute the size; the pointer itself stays put
      if (!nextinuse) {
    unlink_chunk (av, nextchunk);
    size += nextsize;
      } else
    // nextchunk is in use: clear its P bit,
    clear_inuse_bit_at_offset(nextchunk, 0);
          /*
    Place the chunk in unsorted chunk list. Chunks are
    not placed into regular bins until after they have
    been given one chance to be used in malloc.
      */
      // bck and fwd get the unsortedbin head and the chunk at its fd
      bck = unsorted_chunks(av);
      fwd = bck->fd;
      // fwd's bk must point back at the bin head
      if (__glibc_unlikely (fwd->bk != bck))
    malloc_printerr ("free(): corrupted unsorted chunks");
      // Insert p into the unsortedbin list; build the pointers inside p
      p->fd = fwd;
      p->bk = bck;
      // If the size is in largebin range, also clear the two extra pointers
      if (!in_smallbin_range(size))
    {
      p->fd_nextsize = NULL;
      p->bk_nextsize = NULL;
    }
      // Still the insertion of p into the unsortedbin: now build the other pointers in the list.
      bck->fd = p;
      fwd->bk = p;

      // Set the merged pointer's size, and set the prev_size of p's next chunk
      set_head(p, size | PREV_INUSE);
      set_foot(p, size);

      check_free_chunk(av, p);
    }

    /*
      If the chunk borders the current high end of memory,
      consolidate into top
    */

    // If nextchunk is exactly the top chunk: adjust the pointer's size and the arena; p becomes the new top chunk
    else {
      size += nextsize;
      set_head(p, size | PREV_INUSE);
      av->top = p;
      check_chunk(av, p);
    }
}

From this we can see that merged chunks are always inserted into the unsortedbin first, and the ordering rule is the same as for fastbins: a chunk freed later has its fd pointing at the chunk freed earlier, and the fd of the unsortedbin head always points at the most recently inserted chunk.

We can also see that merging and inserting affects the prev_size of the pointer's next chunk.

The remaining part

/*
      If freeing a large space, consolidate possibly-surrounding
      chunks. Then, if the total unused topmost memory exceeds trim
      threshold, ask malloc_trim to reduce top.

      Unless max_fast is 0, we don't know if there are fastbins
      bordering top, so we cannot tell for sure whether threshold
      has been reached unless fastbins are consolidated.  But we
      don't want to consolidate on each free.  As a compromise,
      consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
      is reached.
    */
    // The malloc_consolidate operation is triggered only when size reaches the threshold
    if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
      if (atomic_load_relaxed (&av->have_fastchunks))
    // The fastbins are emptied completely; their chunks are merged where possible and all inserted into the unsortedbin
    malloc_consolidate(av);

      // Under the main arena, if the top chunk's size reaches the trim threshold, shrink it with systrim
      if (av == &main_arena) {
#ifndef MORECORE_CANNOT_TRIM
    if ((unsigned long)(chunksize(av->top)) >=
        (unsigned long)(mp_.trim_threshold))
      systrim(mp_.top_pad, av);
#endif
      // Not the main arena: shrink via heap_trim using the heap_info structure obtained
      } else {
    /* Always try heap_trim(), even if the top chunk is not
       large, because the corresponding heap might go away.  */
    heap_info *heap = heap_for_ptr(top(av));

    assert(heap->ar_ptr == av);
    heap_trim(heap, mp_.top_pad);
      }
    }

    if (!have_lock)
      __libc_lock_unlock (av->mutex);
  }
  /*
    If the chunk was allocated via mmap, release via munmap().
  */

  else {
    // mmapped space is handled with munmap
    munmap_chunk (p);
  }
}

Three more rabbit holes dug here: systrim, heap_info, heap_trim.

Note, though, that this part does not trigger at all when size is below the threshold.

tcache

MAYBE_INIT_TCACHE

Initialize only when there is no tcache yet; well, that's stating the obvious.

# define MAYBE_INIT_TCACHE() \
  if (__glibc_unlikely (tcache == NULL)) \
    tcache_init();

tcache_init

Used to initialize the tcache management structure.

static void
tcache_init(void)
{
  mstate ar_ptr;
  void *victim = 0;
  const size_t bytes = sizeof (tcache_perthread_struct);

  if (tcache_shutting_down)
    return;

  arena_get (ar_ptr, bytes);
  // Allocate a space to hold the tcache.
  victim = _int_malloc (ar_ptr, bytes);
  if (!victim && ar_ptr != NULL)
    {
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  // Unlock; a multithreading operation.
  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  /* In a low memory situation, we may not be able to allocate memory
     - in which case, we just keep trying later.  However, we
     typically do this very early, so either there is sufficient
     memory, or there isn't enough memory to do non-trivial
     allocations anyway.  */
  // Get the tcache structure and zero its contents first.
  if (victim)
    {
      tcache = (tcache_perthread_struct *) victim;
      memset (tcache, 0, sizeof (tcache_perthread_struct));
    }

}

tcache_put/tcache_get

Two functions with opposite effects: put stores a chunk into an entry, get takes one out. tcache is operated directly through these two functions, which makes it basically a fastbin running naked, and it gets beaten black and blue for it.

/* Caller must ensure that we know tc_idx is valid and there's room
   for more chunks.  */
static __always_inline void
tcache_put (mchunkptr chunk, size_t tc_idx)
{
  // Switch to the user view
  tcache_entry *e = (tcache_entry *) chunk2mem (chunk);

  /* Mark this chunk as "in the tcache" so the test in _int_free will
     detect a double free.  */
  // key points at this thread's tcache
  e->key = tcache;

  // Insert at the head of the entry list for this idx.
  e->next = tcache->entries[tc_idx];
  tcache->entries[tc_idx] = e;
  // counts goes up by 1: one more chunk stored
  ++(tcache->counts[tc_idx]);
}

/* Caller must ensure that we know tc_idx is valid and there's
   available chunks to remove.  */
static __always_inline void *
tcache_get (size_t tc_idx)
{
  // Get the head chunk of the entry list for this idx
  tcache_entry *e = tcache->entries[tc_idx];
  tcache->entries[tc_idx] = e->next;
  // counts goes down by 1: one chunk handed out
  --(tcache->counts[tc_idx]);
  // key is reset to NULL
  e->key = NULL;
  return (void *) e;
}

Note that tc_idx gets no check whatsoever here; it can go out of bounds.

tcache_thread_shutdown

static void
tcache_thread_shutdown (void)
{
  int i;
  tcache_perthread_struct *tcache_tmp = tcache;

  if (!tcache)
    return;

  /* Disable the tcache and prevent it from being reinitialized.  */
  tcache = NULL;
  tcache_shutting_down = true;

  /* Free all of the entries and the tcache itself back to the arena
     heap for coalescing.  */
  for (i = 0; i < TCACHE_MAX_BINS; ++i)
    {
      while (tcache_tmp->entries[i])
    {
      tcache_entry *e = tcache_tmp->entries[i];
      tcache_tmp->entries[i] = e->next;
      __libc_free (e);
    }
    }

  __libc_free (tcache_tmp);
}

malloc_consolidate

/* 
------------------------- malloc_consolidate -------------------------

  malloc_consolidate is a specialized version of free() that tears
  down chunks held in fastbins.  Free itself cannot be used for this
  purpose since, among other things, it might place chunks back onto
  fastbins.  So, instead, we need to use a minor variant of the same
  code.
*/

static void malloc_consolidate(mstate av)
{
  mfastbinptr*    fb;                 /* current fastbin being consolidated */
  mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
  mchunkptr       p;                  /* current chunk being consolidated */
  mchunkptr       nextp;              /* next chunk to consolidate */
  mchunkptr       unsorted_bin;       /* bin header */
  mchunkptr       first_unsorted;     /* chunk to link to */

  /* These have same use as in free() */
  mchunkptr       nextchunk;
  INTERNAL_SIZE_T size;
  INTERNAL_SIZE_T nextsize;
  INTERNAL_SIZE_T prevsize;
  int             nextinuse;
  // Clear the arena's fastbins flag
  atomic_store_relaxed (&av->have_fastchunks, false);

  // Get the unsortedbin
  unsorted_bin = unsorted_chunks(av);

  /*
    Remove each chunk from fast bin and consolidate it, placing it
    then in unsorted bin. Among other reasons for doing this,
    placing in unsorted bin avoids needing to calculate actual bins
    until malloc is sure that chunks aren't immediately going to be
    reused anyway.
  */
  // fb and maxfb are used in the do-while loop below to walk every fastbin list
  maxfb = &fastbin (av, NFASTBINS - 1);
  fb = &fastbin (av, 0);

  do {
    p = atomic_exchange_acq (fb, NULL);
    // The inner do-while loop traverses every chunk of one bin list
    if (p != 0) {
      do {
    {
      // After taking the pointer, check that its size matches the idx
      unsigned int idx = fastbin_index (chunksize (p));
      if ((&fastbin (av, idx)) != fb)
        malloc_printerr ("malloc_consolidate(): invalid chunk size");
    }

    check_inuse_chunk(av, p);
    // Follow fd to get the next chunk's pointer
    nextp = p->fd;

    /* Slightly streamlined version of consolidation code in free() */
    // As the comment says, a streamlined version of the consolidation code in free().
    size = chunksize (p);
    nextchunk = chunk_at_offset(p, size);
    nextsize = chunksize(nextchunk);

    if (!prev_inuse(p)) {
      prevsize = prev_size (p);
      size += prevsize;
      p = chunk_at_offset(p, -((long) prevsize));
      if (__glibc_unlikely (chunksize(p) != prevsize))
        malloc_printerr ("corrupted size vs. prev_size in fastbins");
      unlink_chunk (av, p);
    }

    if (nextchunk != av->top) {
      nextinuse = inuse_bit_at_offset(nextchunk, nextsize);

      if (!nextinuse) {
        size += nextsize;
        unlink_chunk (av, nextchunk);
      } else
        clear_inuse_bit_at_offset(nextchunk, 0);

      first_unsorted = unsorted_bin->fd;
      unsorted_bin->fd = p;
      first_unsorted->bk = p;

      if (!in_smallbin_range (size)) {
        p->fd_nextsize = NULL;
        p->bk_nextsize = NULL;
      }

      set_head(p, size | PREV_INUSE);
      p->bk = unsorted_bin;
      p->fd = first_unsorted;
      set_foot(p, size);
    }

    else {
      size += nextsize;
      set_head(p, size | PREV_INUSE);
      av->top = p;
    }

      } while ( (p = nextp) != 0);

    }
  } while (fb++ != maxfb);
}

A free()-style pass over everything in the fastbins: for every chunk of every bin list, merge what can be merged, and throw it all into the unsortedbin.


Contact: 1037178204@qq.com

Title: glibc 2.30 code reading notes

Author: t1an5t

Published: 2020-02-14, 20:26:20

Last updated: 2020-02-29, 21:44:12

Original link: http://yoursite.com/glibc2.30%E4%BB%A3%E7%A0%81%E9%98%85%E8%AF%BB/

License: CC BY-NC-SA 4.0 ("Attribution-NonCommercial-ShareAlike"); please keep the original link and author when reposting.
