Wednesday, July 11, 2012

check cross-page block-linking

TranslationBlock has four members related to physical pages:
  1. struct TranslationBlock *phys_hash_next;  /* next matching tb for physical address */
    1. tb_phys_hash[] is the so-called global TB mapping table, indexed by bits [2:17] of the physical page address.
    2. phys_hash_next: the next TB in the linked list hanging off that hash entry.
  2. struct TranslationBlock *page_next[2]; /* first and second physical page containing code. The low bit of the pointer tells the index in page_next[] */
    1. Set up in tb_alloc_page(), which is called from tb_link_page(), which in turn is called from tb_gen_code().
    2. page_next links together the TranslationBlocks that live in the SAME page.
    3. PageDesc *p->first_tb points to the LAST TB in this page (the most recently encountered one).
    4. Why are there two page_next[] entries? Because a TB can span two pages, so we need two links: one for the first page and one for the second.
    5. Question: can more than one TB in the same page use both page_next entries? YES, when translation blocks overlap.

  3. tb_page_addr_t page_addr[2];
    1. The addresses of the two pages. Naive question: can one TB really span two different physical pages? YES: even though the virtual pages are consecutive, the physical pages backing them need not be.

    /* Circular List of TBs jumping to this one. This is a circular list using
       the two least significant bits of the pointers to tell what is
       the next pointer: 0 = jmp_next[0], 1 = jmp_next[1], 2 =
       jmp_first (tb itself)*/
  4. struct TranslationBlock *jmp_next[2]; and struct TranslationBlock *jmp_first;
    1. Initialized in tb_link_page(), which is called from tb_gen_code(). Initially jmp_first points to the TB itself, with the low bits set to 2.
    2. tb_add_jump(tb, n, tb_next) sets up a jump from exit n of tb to tb_next: tb[n] ---> tb_next.
    3. It sets tb->jmp_next[n] to tb_next->jmp_first.
    4. Then it sets tb_next->jmp_first to tb, with the low bits set to n.
    5. Illustration:


=====================================================================

tb_remove(ptb, tb, next_offset)

  • Removes tb from the physical-hash linked list.
  • Called from tb_phys_invalidate().

tb_page_remove(ptb, tb)

  • Removes tb from the per-page TB linked list.
  • Called from tb_phys_invalidate().

tb_jmp_remove(tb, n)

  • Removes tb from the jump circular list of the TB it jumps to.
  • Called from tb_phys_invalidate().

tb_phys_invalidate(tb, page_addr)

  • called from
    • tb_invalidate_phys_page_range(start, end, is_cpu_write_access)
    • tb_invalidate_phys_page(addr, pc, puc)
      • Only in USER MODE.
    • check_watchpoint(offset, len_mask, flags)
    • cpu_io_recompile(env, retaddr)

tb_invalidate_phys_page_range(start, end, is_cpu_write_access)

  • called from 
    • tb_invalidate_phys_range(start, end, is_cpu_write_access)
      • only called in linux-user/mmap.c
    • tb_invalidate_phys_page_fast(start, len)
      • Uses code_bitmap to quickly check whether any TB overlaps this range.
      • called from notdirty_mem_write(opaque, ram_addr, val, size)
    • tb_invalidate_phys_addr(addr)
      • only if TARGET_HAS_ICE
    • cpu_physical_memory_rw(addr, buf, len, is_write)
    • cpu_physical_memory_unmap(buffer, len, is_write, access_len)
    • stl_phys_notdirty(addr, val)
      • tb_invalidate_phys_page_range is called if in_migration
    • stl_phys_internal(addr, val, endian)
    • stw_phys_internal(addr, val, endian)
      • called from stw_phys(), stw_le_phys() /* little endian */, and stw_be_phys()
===============================================================