Wednesday, July 9, 2008

Application Virtualization

The term “application virtualization” describes the process of compiling applications into machine-independent byte code that can subsequently be executed on any system that provides the appropriate virtual machine as an execution environment. The best known example of this approach to virtualization is the byte code produced by the compilers for the Java programming language (http://java.sun.com/), although this concept was actually pioneered by the UCSD P-System in the late 1970s (www.threedee.com/jcm/psystem), for which the most popular compiler was the UCSD Pascal compiler. Microsoft has even adopted a similar approach in the Common Language Runtime (CLR) used by .NET applications, where code written in languages that support the CLR is transformed, at compile time, into CIL (Common Intermediate Language, formerly known as MSIL, Microsoft Intermediate Language). Like any byte code, CIL provides a platform-independent instruction set that can be executed in any environment supporting the .NET Framework.

Application virtualization is a valid use of the term “virtualization” because applications compiled into byte code become logical entities that can be executed on different physical systems with different characteristics, operating systems, and even processor architectures.
Taken from : William Von Hagen "Professional Xen Virtualization" 2008
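The core idea is easy to see in miniature: a compiler emits instructions for an abstract machine rather than for a particular CPU, and a small interpreter (the virtual machine) executes those instructions on whatever platform it has been built for. The toy stack-based interpreter below, written in C, is only an illustrative sketch of that idea; the opcodes and the sample program are invented for this example and have nothing to do with the real Java or CIL instruction sets.

#include <stdio.h>

/* Toy byte-code instruction set -- invented purely for illustration.
 * A real virtual machine adds types, a verifier, garbage collection,
 * and much more. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;                      /* stack pointer */
    for (int pc = 0; ; ) {           /* program counter */
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];          break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* "Machine-independent" program: (2 + 3) * 4 = 20.
     * The same array of opcodes runs unchanged on any platform
     * for which this interpreter has been compiled. */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}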

What Is Virtualization?

Virtualization is simply the logical separation of the request for some service from the physical resources that actually provide that service. In practical terms, virtualization provides the ability to run applications, operating systems, or system services in a logically distinct system environment that is independent of a specific physical computer system. Obviously, all of these have to be running on some physical computer system at any given time, but virtualization provides a level of logical abstraction that liberates applications, system services, and even the operating system that supports them from being tied to a specific piece of hardware. Virtualization’s focus on logical operating environments rather than physical ones makes applications, services, and instances of an operating system portable across different physical computer systems.

The classic example of virtualization that most people are already familiar with is virtual memory, which enables a computer system to appear to have more memory than is physically installed on that system. Virtual memory is a memory-management technique that enables an operating system to see and use noncontiguous segments of memory as a single, contiguous memory space. Virtual memory is traditionally implemented in an operating system by paging, which enables the operating system to use a file or dedicated portion of some storage device to save pages of memory that are not actively in use.

Known as a “paging file” or “swap space,” this area lets the system quickly transfer pages of memory to and from disk as the operating system or running applications require access to the contents of those pages. Modern operating systems such as UNIX-like operating systems (including Linux, the *BSD operating systems, and Mac OS X) and Microsoft Windows all use some form of virtual memory to enable the operating system and applications to access more data than would fit into physical memory.
Taken from : William Von Hagen "Professional Xen Virtualization" 2008
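To make the demand-paging behavior described above concrete, the small C program below reserves a large region of virtual address space and touches only a few pages of it. This is a minimal sketch assuming a 64-bit Unix-like system whose mmap supports MAP_ANONYMOUS (as on Linux); the 1 GiB size and the one-byte-per-megabyte access pattern are arbitrary choices for the example.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Reserve 1 GiB of *virtual* address space. With demand paging
     * this typically succeeds even if far less physical RAM is free,
     * because no page is backed by real memory until it is touched. */
    size_t len = 1024UL * 1024 * 1024;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch only one byte per megabyte: the kernel faults in just
     * those pages; untouched pages consume no physical memory, and
     * pages not actively in use can later be written to swap space. */
    for (size_t off = 0; off < len; off += 1024 * 1024)
        p[off] = 1;

    printf("reserved %zu bytes of virtual memory at %p\n", len, (void *)p);
    munmap(p, len);
    return 0;
}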

Introduction to Virtualization Techniques

With server virtualization, you can create multiple virtual servers on a single physical server. Each virtual server has its own set of virtual hardware on which operating systems and applications are loaded. IBM systems with virtualization can prioritize system resources and allocate them dynamically to the virtual servers that need them most at any given time—all based on business priorities.

Virtualization was first introduced by IBM in the 1960s to allow the partitioning of large mainframe environments. IBM has continued to innovate around server virtualization and has extended it from the mainframe to the IBM Power Systems, IBM System p, and IBM System i™ product lines. In the industry-standard environment, VMware, Microsoft® Virtual Server, and Xen offerings are available for IBM System x and IBM BladeCenter systems. Today, IBM server virtualization technologies are at the forefront in helping businesses with consolidation, cost management, and business resiliency.

IBM recognized the importance of virtualization with the development of the System/360 Model 67 mainframe. The Model 67 virtualized all of the hardware interfaces through the Virtual Machine Monitor, or VMM. In the early days of computing, the operating system was called the supervisor; because this new layer made it possible to run operating systems on top of other operating systems, the term hypervisor was coined in the 1970s. Logical partitioning has been available on the mainframe since the 1980s. The Power team began taking advantage of the mainframe partitioning skills and knowledge about 10 years ago and brought forth Dynamic LPARs with POWER4™ and then Advanced POWER Virtualization with POWER5™ in 2004 (which was rebranded as PowerVM™ in 2008).

There are several types of virtualization. In this chapter, we describe them in order to position the relative strengths of each and relate them to the systems virtualization offerings from IBM and IBM Business Partners.

Source : IBM Systems Virtualization : System, Application, Software

Tuesday, June 3, 2008

Building Windows Clusters

Hardware
Before starting, you must have the following hardware and software. You need at least two computers running Windows NT SP6 or Windows 2000, networked with some sort of LAN equipment (hub, switch, etc.). Ensure during the Windows setup phase that TCP/IP and NetBEUI are installed, that the network is started, and that all the network cards are detected with the correct drivers installed. We will refer to these two computers as the Windows cluster. Now you need software that will help you develop, deploy, and execute applications over this cluster. This software is the core of what makes a Windows cluster possible.

Software
The Message Passing Interface (MPI) is an evolving de facto standard for supporting cluster computing based on message passing. There are several implementations of this standard. In this article, we will use MPICH, which is freely available for Windows clustering, along with its related documentation, from the MPICH project site. Please read Quick Start.pdf and the manual before starting the following steps.
Step 1: Download and unzip nt-mpich-1.3.0-a.zip into any folder (for example, C:\NT-MPICH) and share this folder with write permission.
Step 2: Copy all files with the .dll extension from C:\NT-MPICH\lib to the folder C:\Windows\system32.
Step 3: Install the Cluster Manager Service on each host you want to use for remote execution of MPI processes. For installation, start rcluma-install.bat (located in the subdirectory C:\NT-MPICH\bin) by double-clicking it from a local or network drive. You must have administrator rights on the hosts to install the service.
Step 4: Follow steps 1 and 2 for each node in the cluster (we will call each computer in the cluster a node).
Step 5: Now start RexecShell (from the folder C:\NT-MPICH\bin) by double-clicking it. Open the configuration dialog by pressing F2. The distribution contains a precompiled example MPI program named cpi.exe (located in NT-MPICH/bin); choose it as the program to execute. Make sure that each host can reach cpi.exe at the specified path. Choose ch_wsock as the active plug-in. Select the hosts to compute on. On the 'Account' tab, enter your username, domain, and password, which must be valid on each chosen host. Press OK to confirm your selections. The Start button (in the RexecShell window) is now enabled and can be pressed to start cpi.exe on all chosen hosts. The output will be displayed in separate windows.
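
For orientation, here is a minimal MPI program in the same spirit as the cpi.exe example: every process integrates 4/(1+x^2) over its share of the interval [0,1], and rank 0 combines the partial sums into an approximation of pi. This is only a sketch, not the actual cpi.c source shipped with MPICH, and it assumes the MPICH headers and libraries are available to your compiler.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, i;
    const int n = 100000;               /* number of intervals (arbitrary) */
    double h, sum = 0.0, mypi, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / n;
    for (i = rank; i < n; i += size) {  /* each rank takes every size-th slice */
        double x = h * (i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* Combine the partial results on rank 0. */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}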

Source : http://www.devbuilder.org/article/24