RAID Levels

RAID is an acronym first defined by David A. Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987 to describe a redundant array of inexpensive disks,[1] a technology that let computer users achieve high levels of storage reliability from low-cost, less reliable PC-class disk-drive components by arranging the devices into arrays for redundancy. Marketers representing industry RAID manufacturers later redefined the term as a redundant array of independent disks, to dissociate the "low cost" expectation from RAID technology. This article gives an overview of the different RAID levels.
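As a rough sketch of what such an array looks like in practice on Linux, the mdadm tool can combine two disks into a mirrored (RAID 1) array. The device names below are placeholders for whatever disks are present on your system:

    # Create a two-disk RAID 1 (mirror) array from two example partitions.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # Watch the array being built and check its status.
    cat /proc/mdstat
    mdadm --detail /dev/md0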




Configure DHCP

A DHCP server automates the network configuration process for clients. On Linux, a DHCP server hands out dynamic IP addresses from a configured range and can reserve a specific IP address for a client, based on the hardware address of that client's network card. It can also supply additional settings, such as the default gateway and DNS server addresses, to every system that requests a lease.
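For instance, with the widely used ISC DHCP server (dhcpd), these settings are declared in its dhcpd.conf file (commonly /etc/dhcp/dhcpd.conf). The network, addresses, and MAC address below are placeholders for an assumed 192.168.1.0/24 network:

    # Hand out dynamic leases from a range, along with gateway and DNS settings.
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        option routers 192.168.1.1;              # default gateway
        option domain-name-servers 192.168.1.1;  # DNS server
    }

    # Reserve a fixed address for one client, keyed on its network card's MAC address.
    host printserver {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.1.50;
    }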




Upstart, rc Scripts, and Services

If you haven't installed a new version of Linux lately, you might be in for a shock. There is no /etc/inittab configuration file in current Ubuntu releases. Upstart, the replacement for the System V init program, is designed to meet the demands of modern plug-and-play, hotplug environments. During the boot process, Upstart is especially helpful with filesystems mounted on portable and network devices.
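Instead of inittab entries, Upstart reads small job files from /etc/init. The following is a minimal sketch of such a job; the service name, daemon path, and event choices are illustrative assumptions, not taken from the article:

    # /etc/init/example.conf -- hypothetical Upstart job
    description "example network service"

    # Start when local filesystems are mounted and eth0 comes up;
    # stop when the system halts, powers off, or reboots.
    start on filesystem and net-device-up IFACE=eth0
    stop on runlevel [016]

    # Restart the daemon automatically if it dies.
    respawn
    exec /usr/sbin/example-daemon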




Synchronizing Servers with rsync and SSH on a Cluster

This article describes, in theory, one method of automating the copying of data and configuration files from one server to another. If you are reasonably familiar with Linux systems, deploying this scenario should take no more than a few hours.

In its simplest form, synchronizing the data on two (or more) servers is just a matter of copying files from one server to another. One server acts as the primary repository for the data, and changes can only be made on that server (in a high-availability configuration, only one server owns a resource at any given point in time). A regularly scheduled copy job then sends whatever has changed on the primary server to the backup server, so the backup is ready to take ownership of the resource if the primary server crashes.

In a cluster configuration, all nodes need to access and modify shared data (all cluster nodes offer the same services), so you will probably not use this method of data synchronization on the nodes inside the cluster. You can, however, use it on highly available server pairs to copy data and configuration files that change infrequently.
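A minimal sketch of such a scheduled copy, assuming rsync over SSH with key-based logins already configured between the two machines (the hostnames and paths below are placeholders):

    # Push changed files under /etc/ha.d/ from the primary to the backup server.
    # -a preserves permissions, ownership, and timestamps; -z compresses in transit;
    # --delete removes files on the backup that no longer exist on the primary.
    rsync -az --delete -e ssh /etc/ha.d/ backup-server:/etc/ha.d/

    # Sample crontab entry on the primary server to run the copy nightly at 02:30:
    # 30 2 * * * rsync -az --delete -e ssh /etc/ha.d/ backup-server:/etc/ha.d/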



