Published on Apr 02, 2024
The explosion of Java over the last year has been driven largely by its role in bringing a new generation of interactive web pages to the World Wide Web. Undoubtedly various features of the language (compactness, byte-code portability, security, and so on) make it particularly attractive as an implementation language for applets embedded in web pages.
But it is clear that the ambitions of the Java development team go well beyond enhancing the functionality of HTML documents.
"Java is designed to meet the challenges of application development in the context of heterogeneous, network-wide distributed environments. Paramount among these challenges is secure delivery of applications that consume the minimum of system resources, can run on any hardware and software platform, and can be extended dynamically."
Several of these concerns have been mirrored in developments in the High Performance Computing world over a number of years. A decade ago the focus of interest in the parallel computing community was on parallel hardware. A parallel computer was typically built from specialized processors connected through a proprietary high-performance communication switch. If the machine also had to be programmed in a proprietary language, that was an acceptable price for the benefits of using a supercomputer. This attitude was not sustainable as one parallel architecture gave way to another and the cost of porting software became exorbitant. For several years now, portability across platforms has been a central concern in parallel computing.
HPJava is a programming language extended from Java to support parallel programming, especially (but not exclusively) data-parallel programming on message-passing and distributed-memory systems, from multi-processor machines to workstation clusters.
Although it has a close relationship with HPF, the design of HPJava does not inherit the HPF programming model. Instead the language introduces a high-level structured SPMD programming style: the HPspmd model. A program written in this kind of language explicitly coordinates well-defined process groups.
These cooperate in a loosely synchronous manner, sharing logical threads of control. As in a conventional distributed-memory SPMD program, only a process owning a data item such as an array element is allowed to access the item directly. The language provides special constructs that allow programmers to meet this constraint conveniently.
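To give a flavour of this style, the fragment below is a minimal sketch in HPJava notation, loosely following examples from the HPJava literature; the process grid class Procs2 and the on construct are features of the language as described there, while the grid shape chosen here is arbitrary, and the surrounding class and HPJava runtime imports are omitted:

    // Illustrative sketch of the HPspmd style (HPJava notation).
    Procs2 p = new Procs2(2, 2);   // a 2 x 2 grid of cooperating processes

    on(p) {
        // Every process in the group executes this block in SPMD fashion,
        // but each may directly access only the data it owns.
        // (Distributed data and loops are sketched below.)
    }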
Besides the normal variables of the sequential base language, the language model introduces classes of global variables that are stored collectively across process groups. Primarily, these are distributed arrays. They provide a global name space in the form of globally subscripted arrays with assorted distribution patterns. This helps relieve programmers of error-prone activities such as the local-to-global and global-to-local subscript translations that occur in data-parallel applications.
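As a hedged illustration, the declarations below create a two-dimensional distributed array in HPJava notation, in the style of published HPJava examples; Range, BlockRange and the [[-,-]] array type are the language features just described, while the size N and the block distribution are arbitrary choices for this sketch. The code would sit inside an on(p) block like the one sketched earlier:

    int N = 8;                                // arbitrary global array size

    // Two distributed ranges of N global indices each, block-distributed
    // over the two dimensions of the process grid p.
    Range x = new BlockRange(N, p.dim(0));
    Range y = new BlockRange(N, p.dim(1));

    // A distributed ("global") array: each process stores only its own
    // block, but the program addresses it through global subscripts.
    float [[-,-]] a = new float [[x, y]];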
In addition to special data types, the language provides special constructs to facilitate both data-parallel and task-parallel programming. Through these constructs, different processors can either work simultaneously on globally addressed data or independently execute complex procedures.
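The characteristic data-parallel construct is the overall loop, which scans a distributed index over the locally held part of a range. The sketch below continues the fragment above, again following the notation used in the HPJava papers; the postfix backquote, as in i`, reads the global index value of a distributed index:

    // Each process visits only the array elements it owns; the nested
    // overall loops together traverse the whole distributed array.
    overall(i = x for :)
        overall(j = y for :)
            a[i, j] = i` + j`;   // i`, j` are the global index values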