Latest News

April 22, 2017
Alan Kaminsky published a paper in the Cryptology ePrint Archive.

Parallel Computing Expertise


Parallel Java 2 Library

Dissatisfied with existing parallel programming libraries like MPI and OpenMP, I developed the Parallel Java Library and its successor, the Parallel Java 2 Library. Parallel Java 2 is an API and middleware for parallel programming in 100% Java on multicore parallel computers, cluster parallel computers, hybrid multicore cluster parallel computers, and GPU accelerated parallel computers. Parallel Java 2 is the only Java library that integrates the multithreaded, message passing, and GPU accelerated styles of parallel programming into a single unified API. The library also includes Parallel Java Map Reduce, a map-reduce framework for big data parallel computing. The Parallel Java 2 Library is free software, licensed under the GNU General Public License.
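To give a flavor of the multithreaded style the library unifies, here is a minimal sketch in plain Java using `java.util.stream` rather than the PJ2 API itself; the class and method names below are my own illustration, not part of the library:

```java
import java.util.stream.LongStream;

// A minimal sketch of the multithreaded parallel-loop style in plain Java.
// (Illustrative only: the Parallel Java 2 API provides its own task and
// loop classes; the names here are hypothetical.)
public class ParallelSumSketch {

    // Sum the integers 1..n, splitting the range across all available cores.
    static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()  // run loop iterations on worker threads
                         .sum();      // reduce the per-thread partial sums
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(1_000_000)); // 500000500000
    }
}
```

The same decompose-then-reduce pattern is what a multicore parallel loop expresses, whatever the API.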

I use Parallel Java 2 in my cryptography research and in my research on combinatorial optimization problems. Others have used Parallel Java for page rank calculations, ocean ecosystem modeling, salmon population modeling and analysis, medication scheduling for patients in long-term care facilities, three-dimensional complex-valued fast Fourier transforms for electronic structure analysis and X-ray crystallography, and Monte Carlo simulation of electricity and gas markets. Parallel Java was also incorporated into the IQM open source Java image processing application.
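The map-reduce style mentioned above can likewise be sketched in plain Java streams; this word-count example is illustrative only, and its names are hypothetical rather than the Parallel Java Map Reduce API:

```java
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// A minimal sketch of the map-reduce style in plain Java streams.
// (Illustrative only: Parallel Java Map Reduce defines its own mapper
// and reducer classes; the names below are hypothetical.)
public class WordCountSketch {

    // Map each line to its words, then reduce to per-word counts.
    static Map<String, Long> wordCount(Stream<String> lines) {
        return lines.parallel()                                     // mappers run in parallel
                    .flatMap(line -> Stream.of(line.split("\\s+"))) // map: line -> words
                    .filter(w -> !w.isEmpty())
                    .collect(Collectors.groupingByConcurrent(       // reduce: word -> count
                        Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCount(Stream.of("to be or not to be")));
    }
}
```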



I wrote the textbook Building Parallel Programs: SMPs, Clusters, and Java (Cengage Course Technology, 2010) based on Parallel Java. I wrote the textbook BIG CPU, BIG DATA: Solving the World's Toughest Computational Problems with Parallel Computing (CreateSpace, 2016) based on Parallel Java 2. I have been teaching parallel programming in Java since 2005, and that material is now available in textbook form. Both books include numerous complete example parallel programs.


I am familiar with the Message Passing Interface (MPI) for cluster parallel programming in Fortran, C, and C++; with OpenMP for multicore parallel programming in Fortran, C, and C++; and with NVIDIA Corporation’s CUDA for parallel programming in C and C++ on graphics processing units (GPUs).