A number of techniques parallelize Java source code or bytecodes to improve the execution-time performance of the application. The parallelization is typically achieved through Java language-level support for multithreading, so these techniques preserve the portability of the transformed parallel Java programs. Most of these techniques exploit implicit parallelism in Java programs for parallel execution on shared-memory multiprocessor systems using Java multithreading and synchronization primitives. Some approaches instead extend the Java language itself to support parallel and distributed Java programs. The performance improvement obtained with these techniques depends on the amount of parallelism that can be exploited in the application program.

The High Performance Java project [Bik and Gannon 1997; Bik and Gannon 1998] exploits implicit parallelism in loops and multiway recursive methods to generate parallel code using the standard Java multithreading mechanism. The JAVAR [Bik and Gannon 1997] tool, a source-to-source restructuring compiler, relies on explicit annotations in a sequential Java program to transform the sequential source code into corresponding parallel code. The transformed program can be compiled into bytecodes using any standard Java compiler. The JAVAB [Bik and Gannon 1998] tool, on the other hand, works directly on Java bytecodes to automatically detect and exploit implicit loop parallelism. Since the parallelism is expressed in Java itself using Java's thread libraries and synchronization primitives, the parallelized bytecodes can be executed on any platform with a JVM implementation that supports native threads.

The Java Speculative Multithreading (JavaSpMT) parallelization technique [Kazi and Lilja 2000] uses a speculative thread-pipelining execution model to exploit implicit loop-level parallelism on shared-memory multiprocessors for general-purpose Java application programs.
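The kind of output such restructuring tools produce can be sketched in plain Java: a loop whose iterations are independent is split into disjoint index ranges, each executed by a worker thread created with the standard threading primitives. This is a simplified illustration, not the actual code generated by JAVAR or JAVAB; the class and method names are hypothetical.

```java
// Hypothetical sketch of a JAVAR/JAVAB-style loop transformation:
// iterations of an independent loop are divided into contiguous chunks,
// each run by a standard Java thread; join() serves as the final barrier.
public class ParallelLoop {
    static double[] parallelSqrt(int n, int threads) throws InterruptedException {
        double[] a = new double[n];
        Thread[] workers = new Thread[threads];
        int chunk = (n + threads - 1) / threads;  // ceiling division
        for (int t = 0; t < threads; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(lo + chunk, n);
            workers[t] = new Thread(() -> {
                // Each thread executes a disjoint range of iterations,
                // so no synchronization is needed inside the loop body.
                for (int i = lo; i < hi; i++) {
                    a[i] = Math.sqrt(i);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();  // barrier: wait for all iterations to complete
        }
        return a;
    }

    public static void main(String[] args) throws InterruptedException {
        double[] a = parallelSqrt(16, 4);
        System.out.println(a[9]);  // prints 3.0
    }
}
```

Because the sketch uses only `java.lang.Thread`, the compiled bytecodes remain portable across JVM implementations, which is exactly the property these tools rely on.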
Its support of control speculation, combined with its run-time data-dependence checking, allows JavaSpMT to parallelize a wide variety of loop constructs, including do–while loops. JavaSpMT is implemented using the standard Java multithreading mechanism, with the parallelism expressed through a Java source-to-source transformation.

The Do! project [Launay and Pazat 1997] provides a parallel framework embedded in Java to ease parallel and distributed programming in Java. The framework supports both data parallelism and control (or task) parallelism, and it supplies a model for parallel programming together with a library of generic classes. Relevant library classes can be extended for a particular application, or new framework classes can be defined for better tuning of the application.

Tiny Data-Parallel Java [Ichisugi and Roudier 1997] is a Java language extension for data-parallel programming. The language defines data-parallel classes whose methods are executed on a large number of virtual processors. An extensible Java preprocessor, EPP, translates the data-parallel code into standard Java code using the Java thread libraries and synchronization primitives. The preprocessor can also produce Java code for multiprocessor and distributed systems. However, the Tiny Data-Parallel Java language does not yet have sufficient language features to support high-performance parallel programs.

DPJ [Ivannikov et al. 1997] defines a parallel framework through a Java class library for the development of data-parallel programs.
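The data-parallel model targeted by systems such as Tiny Data-Parallel Java and DPJ can be approximated in standard Java: the same method is applied to every element of a collection, conceptually one virtual processor per element, with the virtual processors multiplexed onto a fixed set of real threads. The sketch below is an illustration of that model under these assumptions; its class and interface names are invented and are not part of either system's API.

```java
import java.util.Arrays;

// Illustrative sketch of the data-parallel execution model (hypothetical
// API, not Tiny Data-Parallel Java or DPJ): one "virtual processor" per
// element, multiplexed cyclically onto a fixed pool of Java threads.
public class DataParallel {
    interface ElementOp { int apply(int x); }

    static int[] applyAll(int[] data, ElementOp op, int threads)
            throws InterruptedException {
        int[] out = new int[data.length];
        Thread[] pool = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool[t] = new Thread(() -> {
                // Cyclic distribution: thread id handles elements
                // id, id + threads, id + 2*threads, ...
                for (int i = id; i < data.length; i += threads) {
                    out[i] = op.apply(data[i]);
                }
            });
            pool[t].start();
        }
        for (Thread w : pool) w.join();  // wait for all virtual processors
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] doubled = applyAll(new int[]{1, 2, 3, 4}, x -> 2 * x, 2);
        System.out.println(Arrays.toString(doubled));  // prints [2, 4, 6, 8]
    }
}
```

A preprocessor such as EPP would generate code of roughly this shape from a data-parallel class declaration, which is why the translated programs run on any standard JVM.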