
Virtualization

 Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.

You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.

Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

In 2005, virtualization software was adopted faster than anyone imagined, including the experts. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization and server virtualization.

  • Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.
  • Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs). (A rough sketch of this pooling idea follows this list.)
  • Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.
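To make the pooling idea concrete, here is a rough sketch in Python: several physical devices are presented behind one logical volume whose combined capacity is all the user sees. The class and device names are invented for illustration and do not correspond to any vendor's interface.

```python
# Toy model of storage virtualization: several physical devices are
# pooled behind one logical volume. Names are invented for illustration.
class PhysicalDevice:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb


class VirtualVolume:
    """Presents a pool of physical devices as a single storage device."""
    def __init__(self, devices):
        self.devices = devices

    @property
    def capacity_gb(self):
        # The user sees only the combined capacity, not the physical layout.
        return sum(d.capacity_gb for d in self.devices)


pool = VirtualVolume([PhysicalDevice("san-disk-1", 500),
                      PhysicalDevice("san-disk-2", 750)])
print(pool.capacity_gb)  # 1250 GB appears as one device
```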

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.

 

How to Virtualize

The primary action in setting up a virtual server is selecting and installing the virtualization layer. Here are some of the more popular options.

      • Xen 3.0:

Xen is a lightweight open source hypervisor (less than 50,000 lines of code) that runs on Intel or AMD x86 and 64-bit processors, with or without hardware virtualization support. It supports up to 32-way SMP (symmetric multiprocessing) and requires a modification of the guest operating system, which means it will run Linux but not Windows guests. Although the original Xen hypervisor works only with Linux guests, XenSource, the company behind the Xen project, released XenEnterprise, a version that supports Windows Server and Solaris guests as well.

      • Windows Virtual Server 2005 R2:

Microsoft initially charged for its virtualization technology, and it was limited to Windows servers. With Windows Server 2003 R2, customers can run up to four operating systems on a physical server. On April 3, 2006, Microsoft announced it was making Virtual Server a free download, and it extended support to guests running nine versions of Red Hat and SUSE Linux.

      • VMware Server:

VMware (owned by EMC) is by far the largest vendor of virtualization technology for x86 platforms. In early 2006, the company released VMware Server, a replacement for GSX Server, as a free single-server virtualization platform for Linux and Windows. More than 100,000 downloads of the free product were made in the first week alone. VMware Server has all the features of GSX Server and adds support for virtual SMP, Intel Virtualization Technology and 64-bit guest operating systems.

      • VMware ESX Server:

Although its entry-level product is now free, VMware still charges for its enterprise-class ESX Server. ESX Server runs on x86-based servers and supports Linux (Red Hat and SUSE), Windows (Server and XP), Novell NetWare and FreeBSD 4.9 guests.

      • Virtual Iron:

Virtual Iron is another company offering Xen-based products. It has four products: two free single server versions, an enterprise version and one for clusters. In addition to the Xen hypervisor, Virtual Iron also includes management tools and an administrative interface.

      • IBM Virtualization Engine Platform:

This platform encompasses the entire line of IBM servers. In addition to the usual hypervisor for server partitioning, it includes virtual I/O and virtual Ethernet, a workload manager and a management console.
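Whichever layer is chosen, day-to-day work with guests goes through a management interface. The fragment below is a minimal sketch using the libvirt Python bindings, assuming the libvirt daemon is running on the host and that a guest named "guest01" has already been defined; the connection URI and guest name are assumptions, not something prescribed by the products above.

```python
# Minimal sketch: list guests and start one through libvirt, assuming
# libvirt-python is installed and a hypervisor daemon is running.
import libvirt

conn = libvirt.open("xen:///system")      # use "qemu:///system" on KVM hosts
if conn is None:
    raise RuntimeError("failed to connect to the hypervisor")

# Show which guests the hypervisor already knows about.
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "shut off")

# Boot a previously defined but powered-off guest (the name is an assumption).
guest = conn.lookupByName("guest01")
if not guest.isActive():
    guest.create()

conn.close()
```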


Clustering

What is clustering and why is it required?

Clustering is the use of multiple computers, typically PCs or UNIX workstations, multiple storage devices, and redundant interconnections, to form what appears to users as a single highly available system. Cluster computing can be used for load balancing as well as for high availability. It is also used as a relatively low-cost form of parallel processing for scientific and other applications that lend themselves to parallel operations.

Computer cluster technology puts clusters of systems together to provide better system reliability and performance. Cluster server systems connect a group of servers together in order to jointly provide processing service for the clients in the network.

Cluster operating systems divide the tasks amongst the available servers. Clusters of systems or workstations, on the other hand, connect a group of systems together to jointly share a critically demanding computational task. Theoretically, a cluster operating system should provide seamless optimization in every case.

At the present time, cluster server and workstation systems are mostly used in high-availability applications and in scientific applications such as numerical computation.
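As a toy illustration of the parallel-processing idea, the fragment below splits a numerical job into chunks and hands them to worker processes on one machine; on a real cluster the same divide-and-combine pattern is spread across nodes by message-passing or batch-scheduling tools, and the workload here is purely illustrative.

```python
# Toy divide-and-combine job of the kind a compute cluster spreads across
# nodes; here it is split across local worker processes instead.
from multiprocessing import Pool


def partial_sum(chunk):
    """Work unit handed to one worker (one 'node' in the analogy)."""
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]            # divide the job four ways
    with Pool(processes=4) as workers:
        total = sum(workers.map(partial_sum, chunks))  # combine the results
    print(total)
```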

Clusters can offer:

  • High performance
  • Large capacity
  • High availability
  • Incremental growth

Clusters are used for:

  • Scientific computing
  • Making movies
  • Commercial servers (web, database, etc.)

Requirements
The main requirements that a clustering algorithm should satisfy are:

  • scalability
  • dealing with different types of attributes
  • discovering clusters with arbitrary shape
  • minimal requirements for domain knowledge to determine input parameters
  • ability to deal with noise and outliers
  • insensitivity to order of input records
  • ability to handle high-dimensional data
  • interpretability and usability

Cloud Computing

Cloud computing is basically an Internet-based network made up of large numbers of servers, mostly based on open standards, modular and inexpensive. Clouds contain vast amounts of information and provide a variety of services to large numbers of people. The benefits often claimed for cloud computing include reduced data leakage, decreased evidence acquisition time, eliminated or reduced service downtime, improved forensic readiness, and decreased evidence transfer time. The main factor to be discussed is the security of cloud computing, which is a risk factor in all major computing fields.

  • Cloud computing is the Internet-based (“cloud”) development and use of computer technology (“computing”).

  • Cloud computing is a general term for anything that involves delivering hosted services over the Internet.        


    What is Cloud Computing?

    Cloud computing is the access to computers and their functionality via the Internet or a local area network. Users of a cloud request this access from a set of web services that manage a pool of computing resources (i.e., machines, network, storage, operating systems, application development environments, application programs). When granted, a fraction of the resources in the pool is dedicated to the requesting user until he or she releases them.

    It is called “cloud computing” because the user cannot actually see or specify the physical location and organization of the equipment hosting the resources they are ultimately allowed to use. That is, the resources are drawn from a “cloud” of resources when they are granted to a user and returned to the cloud when they are released.

    A “cloud” is a set of machines and web services that implement cloud computing.
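The request-and-release cycle described above can be sketched very simply; the class and names below are invented, and a real cloud exposes the same idea through web service APIs rather than an in-process object.

```python
# Toy model of a cloud resource pool: a user requests machines, holds them
# until finished, then releases them back to the pool. Names are invented.
class ResourcePool:
    def __init__(self, machines):
        self.free = set(machines)
        self.allocated = {}                  # machine -> user

    def request(self, user, count):
        if count > len(self.free):
            raise RuntimeError("not enough capacity in the pool")
        granted = [self.free.pop() for _ in range(count)]
        for machine in granted:
            self.allocated[machine] = user
        return granted

    def release(self, machines):
        for machine in machines:
            self.allocated.pop(machine, None)
            self.free.add(machine)


pool = ResourcePool([f"vm-{i}" for i in range(8)])
mine = pool.request("alice", 3)   # a fraction of the pool is dedicated to the user
pool.release(mine)                # ...until she releases it again
```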

    What is the Relationship Between Virtualization and Cloud Computing?

    Virtualization is the ability to run “virtual machines” on top of a “hypervisor.” A virtual machine (VM) is a software implementation of a machine (i.e., a computer) that executes programs like a physical machine. Each VM includes its own kernel, operating system, supporting libraries and applications. A hypervisor provides a uniform abstraction of the underlying physical machine.

    Multiple VMs can execute simultaneously on a single hypervisor. The decoupling of the VM from the underlying physical hardware allows the same VM to be started on different physical machines. Thus virtualization is seen as an enabler for cloud computing, allowing the cloud computing provider the necessary flexibility to move and allocate the computing resources requested by the user wherever the physical resources are available.
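A small sketch of the flexibility this decoupling gives the provider: because a VM is not tied to one physical machine, a placement routine can start it on whichever host currently has capacity. The hosts, core counts and VM names below are invented for illustration.

```python
# Toy first-fit placement: a VM is started wherever capacity is free,
# because virtualization decouples it from any particular physical host.
hosts = {"host-a": 8, "host-b": 16}          # free CPU cores per host (invented)


def place_vm(vm_name, cores_needed):
    for host, free_cores in hosts.items():
        if free_cores >= cores_needed:
            hosts[host] = free_cores - cores_needed
            return f"{vm_name} -> {host}"
    return f"{vm_name} -> no host has enough free capacity"


print(place_vm("web-01", 4))    # fits on host-a
print(place_vm("db-01", 12))    # host-a is now too small, so host-b is used
```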

    How Are Clouds Classified?

    Given the broad definition of the term “cloud,” the current taxonomy differentiates clouds both in terms of cloud service offerings and cloud types. When categorizing cloud service offerings, we often refer to clouds in terms of “service style” depending on the portion of the software stack delivered as a service.

    The most common service styles are referred to by the acronyms IaaS, PaaS, and SaaS.

    Cloud “types” (including public, private, and hybrid) refer to the nature of access and control with respect to use and provisioning of virtual and physical resources.

    What Are the Most Popular Cloud Service Styles?

    Infrastructure as a Service (IaaS)

    IaaS clouds provide access to collections of virtualized computer hardware resources, including machines, network, and storage. With IaaS, users assemble their own virtual cluster on which they are responsible for installing, maintaining, and executing their own software stack.
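As a present-day, hedged illustration of the IaaS style (not one of the products discussed elsewhere on this page), the fragment below asks Amazon EC2 for a single virtual machine using the boto3 library; it assumes valid AWS credentials are configured, and the image ID is a placeholder.

```python
# Illustrative IaaS request: renting one virtual machine from a provider.
# Assumes configured AWS credentials; the image ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder machine image
    InstanceType="t3.micro",      # the "size" of hardware being rented
    MinCount=1,
    MaxCount=1,
)
print("Instance allocated:", response["Instances"][0]["InstanceId"])

# From here on the user, not the provider, installs and maintains the
# software stack on the machine -- the defining trait of IaaS.
```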

    Platform as a Service (PaaS)

    PaaS style clouds provide access to a programming or runtime environment with scalable compute and data structures embedded in it. With PaaS, users develop and execute their own applications within an environment offered by the service provider.

    Software as a Service (SaaS)

    SaaS style clouds deliver access to collections of software application programs. SaaS providers offer users access to specific application programs controlled and executed on the provider’s infrastructure. SaaS is often referred to as “Software on Demand.”

Networking

INTRODUCTION

A computer network exists where two or more computers are linked together to share data and/or hardware or software resources. In turn, this may facilitate electronic human-to-human communications, e-business, and alternative working practices such as teleworking.

Computer networking is a large and technically very complex topic upon which a great deal of online information is widely available. The scope of this section is therefore constrained to a largely non-technical overview of the practicalities involved in connecting computers together to form a local area network (LAN) or a personal area network (PAN) using the most commonly available wired and wireless technologies. Connecting a computer to the Internet is covered separately in the Internet pages, whilst networking-related security issues receive attention in the security pages.

When they were first introduced in the late 1970s and 1980s, most personal computers were entirely stand-alone devices. These days of course this is no longer the case, with it being unusual to find a computer that is not at least occasionally connected to a LAN or the wider Internet. Indeed, due to the growth of their interconnection as communications devices, most computers could now best be described as “interpersonal” rather than merely “personal”.

CLIENT-SERVER COMPUTING

Most computer networks are based around a “client-server” model in which the majority of the computers on the network — the “clients” — have their useful capabilities extended via connection to one or more “server” computers that provide them with greater functionality. Typically a server on a local area network (LAN) may provide its clients with additional services such as private and/or sharable storage space, access to software applications that run across the network, access to shared peripherals (most commonly printers), and access to wider networks (most notably the Internet).
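To make the client-server relationship concrete, here is a minimal exchange using Python's standard socket module: the server offers a trivial "service" (upper-casing text) and a client requests it over the network. The port number is arbitrary and the whole example runs on one machine for simplicity.

```python
# Minimal client-server exchange: the server provides a trivial shared
# service (upper-casing text); a client connects and uses it.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007            # arbitrary port for the example

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen()


def serve_one_client():
    conn, _addr = server_sock.accept()     # wait for a single client
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())         # perform the "service"


threading.Thread(target=serve_one_client, daemon=True).start()

# The client side: request the service over the network.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello from a client")
    print(client.recv(1024))               # b'HELLO FROM A CLIENT'

server_sock.close()
```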

It is important to realise that the term “client-server” refers purely to the relationship between two computers on a network, and not necessarily to their hardware specification. Most network servers are fairly big, powerful computers with large storage capacities. However, this need not be the case, and indeed when first establishing many home or even small business networks it is not uncommon for older and less powerful personal computers to be pressed into service as servers.

Network clients are often categorised as being either “thick” or “thin”. A thick client refers to a computer, such as a typical modern PC, that has significant functionality (such as the ability to run complex software applications) even when not connected to the network. In contrast, a thin client refers to a networked computing device that can perform little or no useful independent action without network connectivity. A thin client may, for example, be a computer which only ever runs software-as-a-service applications accessed over the Internet.

Whilst client-server networks are the most common, it is perfectly possible to have a network with no server computer. Such usually small networks are described as “peer-to-peer” indicating the equal status of all computers on the network. Peer-to-peer networks most commonly exist for very simple purposes such as to facilitate the sharing of files, a printer or an Internet connection between two or more computers (see also the example networks pictured at the end of this section).

NETWORK BUILDING BLOCKS

The basic building blocks of any network are at least two computing devices with wired or wireless points of connection — known as “network ports” — and usually at least one “network hub” or “network switch” that permits their interconnection. Both hubs and switches are hardware devices featuring multiple network ports for connecting computers or the different “segments” that make up larger networks.

The difference between a network hub and a network switch is that a network switch has more internal “intelligence”. This allows the network switch to inspect the “packets” of data that it receives and to send them on only to the intended recipient. In contrast, network hubs simply broadcast all received data to all connected computers. On all but the smallest networks, the use of network switches rather than hubs therefore increases network performance as the network traffic is managed intelligently. Network hubs are becoming obsolete, even for small networks, as the cost of network switches continues to fall.
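The contrast can be sketched with a few lines of Python: the "hub" repeats every frame to every other port, while the "switch" learns which address was seen on which port and forwards only there. The addresses, port numbers and frame format are invented for illustration.

```python
# Toy contrast between a hub and a switch. A hub repeats every frame to all
# other ports; a switch learns source addresses and forwards selectively.
def hub_forward(frame, ports):
    # Every port except the incoming one receives the frame.
    return {p: frame for p in ports if p != frame["in_port"]}


class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                       # learned: address -> port

    def forward(self, frame):
        self.mac_table[frame["src"]] = frame["in_port"]   # learn the sender
        dst_port = self.mac_table.get(frame["dst"])
        if dst_port is None:                      # unknown destination: flood
            return hub_forward(frame, self.ports)
        return {dst_port: frame}                  # known: one recipient only


sw = Switch(ports=[1, 2, 3, 4])
print(sw.forward({"src": "aa:aa", "dst": "bb:bb", "in_port": 1}))  # flooded
print(sw.forward({"src": "bb:bb", "dst": "aa:aa", "in_port": 2}))  # port 1 only
```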

On a small wireless network, the network switch (often both wired and wireless) is often integrated into a device commonly termed a “wireless access point” or “wireless router” (see below). In this context it should be noted that the terms “hub”, “switch” and “router” are often liberally bandied around in a technically dubious manner in the popular marketing and labelling of network hardware. This is largely irrelevant in most practical contexts, but can rightly annoy anybody with more specialist technical networking knowledge.

CONNECTION TECHNOLOGIES

Wired computer networks are most commonly based on a standard known as Ethernet. This uses UTP (unshielded twisted pair) cables, and today almost all new personal computers come with a UTP Ethernet network port as standard. Many printers now also feature an Ethernet port, allowing them to be easily shared amongst the users of a network.

Most wireless computer networks are currently based around one of two technology standards known as WiFi and Bluetooth. Of these, Bluetooth is a low power, short-range wireless technology largely used to interconnect computing devices into a personal area network (PAN). As the name suggests, a personal area network exists around an individual, and typically includes devices such as a laptop, mobile phone, headset, digital camera, personal digital assistant (PDA) or other form of mobile computer.

Wireless local area networks (WLANs) almost always use WiFi. This incorporates a set of standards for wireless local area networking which are certified by an organization known as the Wi-Fi Alliance.

Wi-Fi is based on a set of wireless networking technologies known as 802.11. These include 802.11b, 802.11a, 802.11g and 802.11n (all of which are in common usage). The 802.11b standard supports data transfers at 11 Mbps (megabits per second), whilst 802.11a (which for various reasons came to market later) and 802.11g support 54 Mbps, and 802.11n a theoretical 100 Mbps.
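As a rough worked example of what these headline rates mean, the snippet below converts each nominal rate into the time needed to move a 100 MB file; real-world throughput is normally well below the theoretical figure.

```python
# Rough transfer-time arithmetic for the nominal 802.11 data rates.
# Real throughput is considerably lower than these theoretical figures.
file_size_megabits = 100 * 8          # a 100 MB file is roughly 800 megabits

for standard, rate_mbps in [("802.11b", 11), ("802.11a/g", 54), ("802.11n", 100)]:
    seconds = file_size_megabits / rate_mbps
    print(f"{standard}: about {seconds:.0f} s for a 100 MB file")
```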

At present the range of Wi-Fi network transmission is about 30-40 metres indoors and up to about 100 metres or so outdoors. This said, some building construction types can significantly impede or even totally block Wi-Fi signals. A forthcoming standard known as 802.11y is intended to boost outdoor range to up to 5000 metres.

Any two devices with a wireless network connection can establish a so-called “ad-hoc” wireless network. However, most usually a wireless network is established using a piece of hardware known as a “wireless access point” (also termed a wireless router) that facilitates the wireless communications between the different devices on the network. To communicate with the access point, all computers on the network must either have an internal wireless network card (as built into most modern laptops and mobile computers), or an external wireless adapter. The latter most usually plugs into a USB socket and is about the size of a USB memory stick.

Wireless access points commonly feature an ADSL modem to facilitate a broadband connection to the Internet for all computers on the network (and as discussed in the Internet section). Such devices are typically termed “wireless ADSL routers”, and usually also incorporate one or more Ethernet ports to allow the creation of a network of both wired and wireless devices. Where wireless access points are made available to facilitate public Internet access — either free or for a fee — they are usually termed “wireless hotspots”.

A third wireless technology is WiMax. Created by the WiMax Forum, and standing for “Worldwide Interoperability for Microwave Access”, WiMax is based on the IEEE standard known as 802.16 and facilitates high-speed wireless network links to both fixed and mobile devices. To cite the WiMax Forum, WiMax was created “to deliver non-line-of-sight (LoS) connectivity between a subscriber station and base station”, and as an alternative to broadband/DSL links (as discussed in the Internet section). The range of a WiMax wireless connection is around three to ten kilometres. To use WiMax, a mobile or desktop computer needs a WiMax network card or adapter with which to connect to a WiMax base station.

NETWORK STORAGE

Increasingly on small LANs, hardware devices known as network attached storage (NAS) drives are connected to provide network storage capacity. NAS units are effectively just large external hard disk drives (as discussed in the storage section) with one or more Ethernet ports to facilitate their connection to a network.

NAS drives are usually configured via web browser access from a computer on the network. Some NAS units can also be connected as “standard” external hard disk drives via a USB, FireWire or eSATA connector. NAS drives are particularly handy for providing network storage on small peer-to-peer networks, and on client-server networks where enough server-based storage is available they can usefully be employed as a back-up storage facility.

SUMMARY: TWO SIMPLE NETWORK EXAMPLES

Networking is a complex topic that dominates the computer industry. As an illustrative summary, the following provides two examples of how some basic network hardware could be connected to create two simple networks in a SoHo (small office home office) environment.

In the first example, four personal computers are connected with Ethernet cables to a network switch that is also connected to a network server. An ADSL modem is also connected to the resultant wired client-server network to facilitate access to the Internet (as explained in the Internet section), as is a network printer.

In the second example, a wireless ADSL router (wireless access point) is connected wirelessly to two laptops, and also by an Ethernet cable to one desktop computer, hence permitting all three network users broadband Internet access. A NAS drive is additionally connected by an Ethernet cable to this peer-to-peer network to provide network storage. This sort of network is increasingly typical in many homes, with the NAS drive often used to store audio and video files that can then be “streamed” to any connected device.