New Benchmark Suite for Java, XML and Web Applications
Steve Blackburn gave a preview of the talk he is giving at OOPSLA in the USA next week. It presents a new set of benchmarks, written in Java, for assessing modern computer architectures and applications; this matters for the XML applications typically used on the web. He argues that tests dating from the days of Fortran are no longer good enough. The benchmarks are available as free open source.
One problem I can see is that this is not just an academic exercise: benchmarks can make a particular product look good or bad. That will upset some people, so some process for handling complaints and making decisions is needed. The benchmarks will also need someone to promote them like a product, which costs money.
Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks.
We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran which stress (4) much less, and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.
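The point about dynamic compilation and garbage collection has a practical consequence for anyone timing Java code: the first iterations of a workload include JIT compilation and class loading, so a single cold run says little about steady-state behaviour. The sketch below is my own illustration of that idea, not the DaCapo harness itself; the Workload interface, the warm-up and measurement counts, and the sorting workload are all hypothetical, chosen only to show warm-up iterations followed by per-iteration timing.

// Minimal sketch of a steady-state timing loop for a Java workload.
// This is NOT the DaCapo harness; the Workload interface and the
// warm-up/measurement counts are assumptions made for illustration.
public class SteadyStateTimer {

    /** A stand-in for whatever benchmark body is being measured. */
    public interface Workload {
        void iterate();
    }

    public static void measure(Workload w, int warmups, int timedRuns) {
        // Warm-up iterations: let the JIT compile hot methods and let
        // class loading and heap sizing settle down.
        for (int i = 0; i < warmups; i++) {
            w.iterate();
        }
        // Request a collection between warm-up and measurement so garbage
        // from warm-up is less likely to be collected mid-measurement.
        // (System.gc() is only a hint; the JVM may ignore it.)
        System.gc();

        // Timed iterations: report each one so run-to-run variation is
        // visible, rather than quoting a single number.
        for (int i = 0; i < timedRuns; i++) {
            long start = System.nanoTime();
            w.iterate();
            long elapsed = System.nanoTime() - start;
            System.out.printf("iteration %d: %.1f ms%n", i + 1, elapsed / 1e6);
        }
    }

    public static void main(String[] args) {
        // Hypothetical workload: sort a freshly generated array each iteration.
        java.util.Random rnd = new java.util.Random(42);
        measure(() -> {
            int[] data = rnd.ints(1_000_000).toArray();
            java.util.Arrays.sort(data);
        }, 5, 10);
    }
}

Reporting every iteration rather than a single score is the point that connects back to the abstract: because of the JIT and the collector, variation across iterations is itself part of the result, which is why the paper argues for time-series and statistical metrics instead of one headline number.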
BIO: Steve Blackburn is a Research Fellow at ANU. His interests lie at the intersection of modern object-oriented languages and modern architectures. He designed and maintains the MMTk memory management toolkit with Perry Cheng and Kathryn McKinley, and is on the steering committee and core team of the Jikes RVM research JVM. In addition to active involvement in the academic research community, he maintains a strong pragmatic focus through collaborations with IBM Research and Microsoft Research.
From: The DaCapo Benchmarks: Java Benchmarking Development and Analysis, Steve Blackburn, DCS SEMINAR SERIES, ANU, 2006-10-20