Friday, January 25, 2013

Some interesting slides/papers about trace optimization in the JVM

http://researcher.watson.ibm.com/researcher/files/us-pengwu/challeng-potential-trace-compilation.pdf

http://researcher.watson.ibm.com/researcher/files/us-pengwu/oopsla111-wu.pdf

http://researcher.watson.ibm.com/researcher/files/us-pengwu/UIUC-Seminar-Scripting-Languages-05-03.pdf

SINOF: A dynamic-static combined framework for dynamic binary translation
http://dl.acm.org/citation.cfm?id=2350593&CFID=174278273&CFTOKEN=47796794

Similar to a persistent code cache, previously compiled blocks are saved and reloaded on future runs.
The saved blocks are analyzed and optimized using runtime profiling information (a rough sketch of the idea follows below).
1. What kinds of analysis and optimization do they use?
2. What information do they collect at runtime?
3. What is the benefit?
First, they use their own IR and explain why they don't use the LLVM or UQBT IR.
Second, in their evaluation both the guest ISA and the host ISA are IA-32! Yet they report on average 1.38x normalized to native execution time.
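To make the persistent-code-cache idea concrete, here is a minimal C++ sketch of a cross-run code cache that keeps per-block execution counters, reloads them on the next run, and flags hot blocks for reoptimization. This is only my reading of the abstract, not SINOF's actual design: the file format, the hotness threshold, and all names (PersistentCodeCache, CachedBlock, kHotThreshold) are invented for illustration.

// Sketch of a persistent (cross-run) code cache with profile counters.
// NOT the SINOF implementation: file format, threshold, and names are made up.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct CachedBlock {
    std::vector<uint8_t> code;   // translated host code for one guest block
    uint64_t exec_count = 0;     // profile data accumulated across runs
};

class PersistentCodeCache {
public:
    // Load blocks and profile counters saved by a previous run (if any).
    void load(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        if (!in) return;                      // first run: nothing to load
        uint64_t n = 0;
        in.read(reinterpret_cast<char*>(&n), sizeof n);
        for (uint64_t i = 0; i < n; ++i) {
            uint64_t pc = 0, count = 0, size = 0;
            in.read(reinterpret_cast<char*>(&pc), sizeof pc);
            in.read(reinterpret_cast<char*>(&count), sizeof count);
            in.read(reinterpret_cast<char*>(&size), sizeof size);
            CachedBlock b;
            b.exec_count = count;
            b.code.resize(size);
            in.read(reinterpret_cast<char*>(b.code.data()),
                    static_cast<std::streamsize>(size));
            blocks_[pc] = std::move(b);
        }
    }

    // Persist everything so the next run can start from a warm state.
    void save(const std::string& path) const {
        std::ofstream out(path, std::ios::binary);
        uint64_t n = blocks_.size();
        out.write(reinterpret_cast<const char*>(&n), sizeof n);
        for (const auto& [pc, b] : blocks_) {
            uint64_t count = b.exec_count, size = b.code.size();
            out.write(reinterpret_cast<const char*>(&pc), sizeof pc);
            out.write(reinterpret_cast<const char*>(&count), sizeof count);
            out.write(reinterpret_cast<const char*>(&size), sizeof size);
            out.write(reinterpret_cast<const char*>(b.code.data()),
                      static_cast<std::streamsize>(size));
        }
    }

    // Called on every guest-block dispatch: count executions and decide
    // whether the block deserves heavier optimization.
    CachedBlock& lookup(uint64_t guest_pc) {
        CachedBlock& b = blocks_[guest_pc];
        if (++b.exec_count == kHotThreshold)
            std::cout << "block 0x" << std::hex << guest_pc << std::dec
                      << " is hot; schedule reoptimization\n";
        return b;
    }

private:
    static constexpr uint64_t kHotThreshold = 1000;  // invented number
    std::unordered_map<uint64_t, CachedBlock> blocks_;
};

int main() {
    PersistentCodeCache cache;
    cache.load("code_cache.bin");   // reuse blocks from the last run
    for (int i = 0; i < 1500; ++i)
        cache.lookup(0x400000);     // pretend we keep dispatching one block
    cache.save("code_cache.bin");   // hand the warm cache to the next run
}

The point of the sketch is only the save/load-across-runs structure; the interesting part of the paper, which analyses and optimizations are driven by the accumulated counters, is exactly what questions 1 and 2 above ask.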


A low-overhead dynamic optimization framework for multicores
http://dl.acm.org/citation.cfm?id=2370899&CFID=174278273&CFTOKEN=47796794
I can't tell what they do from the abstract.
It is a very short paper (2 pages), but I still have no idea what they did.
