Monday, 23 February 2009

linux - faillog

Faillog on linux

The faillog file resides in /var/log and is a binary file, so it cannot be read directly.

To view the record for a particular user, run faillog -u username:


etst001 # faillog -u brinap
Username Failures Maximum Latest
brinap 0 0




MAN PAGE

faillog - display faillog records or set login failure limits

Synopsis faillog [options]

Description
faillog formats the contents of the failure log from the /var/log/faillog database. It can also be used to maintain failure counters and limits. Run without arguments, faillog displays only the faillog records of users who have ever had a login failure.

Options
The options which apply to the faillog command are:
-a, --all
Display faillog records for all users.
-h, --help
Display help message and exit.
-l, --lock-time SEC
Lock the account for SEC seconds after a failed login.
-m, --maximum MAX
Set the maximum number of login failures after which the account is disabled to MAX. Selecting a MAX value of 0 has the effect of not placing a limit on the number of failed logins. The maximum failure count should always be 0 for root to prevent a denial of service attack against the system.
-r, --reset
Reset the counters of login failures, or one record if used with the -u LOGIN option. Write access to /var/log/faillog is required for this option.
-t, --time DAYS
Display faillog records more recent than DAYS. The -t flag overrides the use of -u.
-u, --user LOGIN
Display the faillog record, or maintain the failure counters and limits (if used with the -l, -m or -r options), only for the user with login name LOGIN.
Caveats
faillog only prints out users with no successful login since the last failure. To print out a user who has had a successful login since their last failure, you must explicitly request the user with the -u flag, or print out all users with the -a flag.

Files
/var/log/faillog
Failure logging file.








PROBLEM: How do I deny login access/lock accounts for users who mistype their passwords more than a set number of times?

SOLUTION:
The faillog mechanism is designed to log login failures. By typing "faillog" at your shell prompt, you can obtain a list of users who have repeatedly tried to login and have failed.
The "faillog" is reset upon successful login - so once a user succeeds their failure count is set to 0.
You can limit the maximum number of login failures (system wide) by using the "faillog -m" option (specify the maximum number of failures allowed).
You can reset the fail count for a user (once a user has been "locked out" this is required to "unlock" him/her) using the "faillog -u username -r" command.

EXAMPLE:
Set the maximum number of login failures to 5:
At a terminal prompt, type faillog -m 5 and press ENTER.
Reset the login failures for the user "geeko":
At a terminal prompt, type faillog -u geeko -r and press ENTER.
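Going a bit further, here is a hedged sketch combining the per-user options from the man page above (the username "geeko" and the values are examples only, and behaviour varies slightly between faillog versions):

faillog -u geeko -m 3      (disable geeko's account after 3 login failures)
faillog -u geeko -l 600    (lock geeko out for 600 seconds after a failed login)
faillog -a                 (review the current counters and limits for all users)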

Tuesday, 17 February 2009

Restoring the root filesystem on Solaris

Restoring the root (/) File System

To restore the / (root) file system, boot from the Solaris CD-ROM and then run ufsrestore.

If / (root), /usr, or the /var file system is unusable because of some type of corruption the system will not boot.

The following procedure demonstrates how to restore the / (root) file system which is assumed to be on boot disk c0t0d0s0.

1. Insert the Solaris 8 Software CD 1, and boot the CD-ROM with the single-user mode option.

ok boot cdrom -s

2. Create the new file system structure.

# newfs /dev/rdsk/c0t0d0s0

3. Mount the file system on an empty mount point directory, /a, and change to that directory.

# mount /dev/dsk/c0t0d0s0 /a
# cd /a

4. Restore the / (root) file system from its backup tape.

# ufsrestore rf /dev/rmt/0

Note – Remember to always restore a file system starting with the level 0 backup tape and continuing with the next lowest level tape up through the highest level tape.

5. Remove the restoresymtable file.

# rm restoresymtable

6. Install the bootblk in sectors 1–15 of the boot disk. Change to the directory containing the bootblk, and run the installboot command.

# cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t0d0s0

7. Unmount the new file system.

# cd /
# umount /a

8. Use the fsck command to check the restored file system.

# fsck /dev/rdsk/c0t0d0s0

9. Reboot the system.

# init 6

10. Perform a full backup of the file system. For example:

# ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s0

Note – Always back up the newly created file system, as ufsrestore repositions the files and changes the inode allocation.

Restoring the /usr and /var File Systems

To restore the /usr and /var file systems, repeat the steps described above, except step 6, which is required only when restoring the root (/) file system.

To restore a regular file system (for example, /export/home or /opt) back to disk, repeat the steps described above, except steps 1, 6, and 9.

Example

# newfs /dev/rdsk/c#t#d#s#
# mount /dev/dsk/c#t#d#s# /mnt
# cd /mnt
# ufsrestore rf /dev/rmt/#
# rm restoresymtable
# cd /
# umount /mnt
# fsck /dev/rdsk/c#t#d#s#
# ufsdump 0uf /dev/rmt/# /dev/rdsk/c#t#d#s#

courtesy of http://www.kobhi.com/solaris/backup_recovery/solaris_backup_root_restore.htm

RAID explained (in several ways)

http://www.ahinc.com/raid.htm

RAID 0

What is it?

RAID 0 uses a method of writing to the disks called striping. Let's assume you have a server with three drives of 500 MB, 1 GB and 2 GB. Normally a server would treat each of these drives individually. By incorporating striping, the system would see all of the drives as only one drive for a total of 3.5 GB. Big deal, you say. Wait, there's more.

When the system writes data to the disk, the RAID 0 striping kicks in and automatically distributes the data across all three drives. Part of a file (chunks of data) will be written to the first drive, the next part to the second drive, the next part to the third drive and then it starts all over again until the entire contents of the file have been written.

Speed

What this does is increase the speed of the reading/writing process. If you have two drives on your server, it increases the speed by about 25%. If you have three drives, it increases the speed about 33%. When you consider that the main task a server is performing is reading and writing data, any increase in speed is highly welcome.

Disk Usage

Besides increasing speed, the other benefit is that the drives can be of different sizes.

Because RAID 0 only writes the data once, it does not achieve data redundancy. If one of the drives fails, the entire system has to be restored because all files are split or striped across all drives. Because there is no data redundancy, there is no loss of disk space.

Requirements

Adding a RAID controller and more drives.


RAID 1

What is it?

RAID 1 uses a technology called mirroring or disk shadowing. RAID 1 requires a minimum of two drives that are exactly the same size. Every time a write is executed the same data is written to both drives, i.e. a mirror image. Well, almost a mirror image. The data is not reversed in the same way as when you look in the mirror.

So what you achieve with RAID 1 is data redundancy. If one of the drives fails, the system can continue to run by just writing to one drive. If you have hot swappable drives, you could pull out the bad drive, plug in a new one and the system is back to its normal state. How efficient and easy it is to execute all of this depends on the RAID controller and/or software that is being used.

Speed

There is basically no increase or decrease in the time it takes to write or read data.

Disk Usage

The disadvantage of RAID 1 is that you lose half of your disk capacity. If you have two 4GB drives, you don't have a total of 8 GB of space, but only 4 GB. So you are losing half of the capacity of disk space that you paid for. But on the other hand disk drives are fairly inexpensive today. What has to be considered is what is the cost of downtime if a drive fails on your server? The downtime cost is probably much more than the cost of the additional drive.

Requirements

RAID 1 can be accomplished by simply adding another drive and perhaps you may need a new controller that supports RAID. It is possible to use a RAID software controller, but we don't recommend it. Windows servers and some versions of the Linux/UNIX/AIX operating systems provide the mirroring software. Configuration and installation is fairly simple. So, for a few hundred dollars you can quickly have RAID 1 up and working.


RAID 5

What is it?

RAID 5 accomplishes both techniques of RAID 0 and RAID 1. There are other benefits of RAID 5, but let's leave that discussion to the techies. You'll just have to take my word for it. RAID 5 requires a minimum of three drives and it is recommended that all drives on the system be of the same size. The more drives you have on the server, the better RAID 5 will perform. We usually recommend having five drives where one is used as a hot spare. This allows up to two drives to fail, one after the other with time to rebuild onto the spare in between, and the system can keep running.

RAID 5 is the version most often recommended. Because the price of disk drives has drastically dropped, the cost of implementing RAID 5 is now within most companies' budgets.

Speed

There is a decrease in write speed due to calculations that have to be made before data is written to the drives. If you want to increase read/write speed and have the benefits of RAID 5, you need to implement RAID 10.

Disk Usage

The loss of disk space is basically 100 divided by the number of disk drives. With 3 drives, there is a 33% loss of disk space. With 5 drives, there is a 20% loss of disk space.

Requirements

RAID 5 is more expensive to implement. You will need additional drives and a RAID controller. The cost of implementing RAID 5 could be in the range of $1000 to $5000 depending on the total number of drives, type of drives and controller required.


Is it worth it?

If you ...

value your data
can't afford any downtime
consider the cost of recovery when you have a drive failure
Yes it is!

Until they have a drive failure, most companies don't consider or view this technology as being valuable. Don't let yourself fall into that trap. Give the benefits of RAID some thought. To help prove the benefits, you might want to shut down your server for an hour and discover how unproductive your organization can become. Of course this is a very short time compared to an actual drive replacement and data recovery, which could take two or more days.


#-##-##-##-##-##-##-##-##-##-##-##-##-##-##-##-#-

http://cuddletech.com/veritas/raidtheory/x31.html


RAID: The Details

2.1. RAID Type: Concatenation

Concatenations are also known as "Simple" RAIDs. A Concatenation is a collection of disks that are "welded" together. Data in a concatenation is laid across the disks in a linear fashion from one disk to the next. So if we've got 3 9G (gig) disks that are made into a Simple RAID, we'll end up with a single 27G virtual disk (volume). When you write data to the disk you'll write to the first disk, and you'll keep writing your data to the first disk until it's full, then you'll start writing to the second disk, and so on. All this is done by the Volume Manager, which is "keeper of the RAID". Concatenation is the cornerstone of RAID.

Now, do you see the problem with this type of RAID? Because we're writing data linearly across the disks, if we only have 7G of data on our RAID we're only using the first disk! The 2 other disks are just sitting there bored and useless. This sucks. We got the big disk we wanted, but it's not any better than a normal disk drive you can buy off the shelves in terms of performance. There has got to be a better way..........

2.2. RAID Type: Striping (RAID-0)

Striping is similar to Concatenation because it will turn a bunch of little disks into a big single virtual disk (volume), but the difference here is that when we write data we write it across ALL the disks. So, when we need to read or write data we're moving really really fast, in fact faster than any one disk could move. There are 2 things to know about RAID-0: stripe width and columns. They sound scary, but they're totally sweet, let me show you. So, if we're going to read and write across multiple disks in our RAID we need an organized way to go about it. First, we'll have to agree on how much data should be written to a disk before moving to the next; we call that our "stripe width". Then we'll need a far kooler term for each disk, a term that allows us to visualize our new RAID better..... "column" sounds kool! Alright, so each disk is a "column" and the amount of data we put on each "column" before moving to the next is our "stripe width".

Let's solidify this. Say we're building a RAID-0 with 4 columns and a stripe width of 128k: picture four disks side by side, each taking 128k of data in turn.

So, when we start writing to our new RAID, we'll write the first 128k to the first column, then the next 128k to the second column, then the next 128k to the third column, then the next 128k to the fourth column, THEN the next 128k to the first column, and keep going till all the data is written. See? If we were writing a 1M file we'd wrap that one file around all 4 disks twice! Can you see now where our speed up comes from? SCSI drives can write data at about (depending on what type of drive and what type of SCSI) 20M/s. On our Striped RAID we'd be writing at 80M/s! Kool huh!?

But, now we've got ANOTHER problem. In a Simple RAID if we had, say, 3 9G disks, we'd have 27G of space. Now, if I only wrote 9G of data to that RAID and the third disk died, so what, there is no data on it. (See where I'm going with this?) We'd only be using one of our three disks in a Simple. BUT, in a Striped RAID, we could write only 10M of data to the RAID, but if even ONE disk failed, the whole thing would be trash because we wrote it on ALL of the disks. So, how do we solve this one?

2.3. RAID Type: Mirroring (RAID-1)

Mirroring isn't actually a "RAID" like the other forms, but it's a critical component of RAID, so it was honored by being given its own number. The concept is to create a separate RAID (Simple or RAID-0) that is used to duplicate an existing RAID. So, it's literally a mirror image of your RAID. This is done so that if a disk crashes in your RAID the mirror will take over. If one RAID crashes, then the other RAID takes its place. Simple, right?

There's not much to it. However, there is a new problem! This is expensive... really expensive. Let's say you wanted a 27G RAID. So you bought 3 9G drives. In order to mirror it you'll need to buy 3 more 9G drives. If you ever get depressed you'll start thinking: "You know, I just shelled out $400 for 3 more drives, and I don't even get more usable space!". Well, in this industry we all get depressed a lot so, they thought of another kool idea for a RAID......

2.4. RAID Type: Striping plus Mirroring (RAID-0+1)

When we talk about mirroring (RAID-1) we're not explicitly specifying whether we're mirroring a Simple RAID or a Striped (RAID-0) RAID. RAID-0+1 is a term used to explicitly say that we're mirroring a Striped RAID. The only thing you need to know about it is this...

A mirror is nothing more than another RAID identical to the RAID we're trying to protect. So when we build a mirror we'll need the mirror to be the same type of RAID as the original RAID. If the RAID we want to mirror is a Simple RAID, our mirror then will be a Simple RAID. If we want to mirror a Striped RAID, then we'll want another Striped RAID to mirror the first. Right? So, if you say to me, we're building a RAID-0+1, I know that we're going to mirror a Striped RAID, and the mirror itself is going to be striped as well.

You'll see this term used more often than "RAID-1" simply because a mirror, in and of itself, isn't useful. Again, it's not really a "RAID" in the sense that we mean to use the word.

2.5. RAID Type: RAID-5 (Striping with Parity)

RAID-5 is the ideal solution for maximizing disk space and disk redundancy. It's like Striping (RAID-0) in that we have columns and stripe widths, but when we write data two interesting things happen: the data is striped across the disks as usual, and a parity block is written along with every stripe.

Okey, let's break it down a bit. Let's say we build a RAID-5 out of 4 9G drives. So we'll have 4 columns, and let's say our stripe width is 128k again. The first stripe puts 128k chunks of data on disks one, two and three, and a little magic number calculated from those three chunks goes on disk four. That magic number is called the parity. The second stripe puts its data on disks two, three and four, with the parity on disk one. The third stripe uses disks three, four and one, with parity on disk two. (See, the parity rotates around the columns.) And data keeps being written like that.

Here's the beauty of it. Every chunk of our data can be rebuilt from the other chunks in its stripe plus the parity. Let's look back at our 4 disk raid. We're working normally, writing along, and then SNAP! Disk 3 fails! Are we worried? Not particularly. For any stripe, the RAID is smart enough to reconstruct the missing chunk by combining (XOR) the surviving chunks and the parity on the other disks! Then, once we replace the bad disk with a new one, the RAID "floods" the reconstructed data back onto it from the surviving disks! But, you ask, how does the RAID know it's giving you the correct data? Because of our parity. XOR the surviving data with the parity that was written alongside it, and out pops exactly what was on the dead disk. Kool huh?

Now, as you might expect, this isn't perfect either. Why? Okey, number 1, remember that parity that saves our butt and makes sure our data is good? Well, the system's CPU has to calculate it on every write, which isn't hard but we're still spending CPU cycles on the RAID, which means if the system is really loaded we may need to (eek!) wait. This is the "performance hit" you'll hear people talk about. Also, every write has to update the parity block as well as the data, which uses up extra I/O bandwidth without giving us a real boost in return.

2.6. RAID Comparison: RAID0+1 vs RAID5

There are battles fought in the storage arena, much like the old UNIX vs NT battles. We tend to fight over RAID0+1 vs RAID5. The fact is that RAID5 is advantageous because we use fewer disks in the endeavor to provide large amounts of disk space, while still having protection. All that means is that RAID5 is inexpensive compared to RAID0+1, where we'll need double the amount of disk we expect to use, because with RAID5 we'll only need a third more disks rather than twice as many. But, then, RAID5 is also slower than RAID0+1 because of that damned parity. If you really want speed, you'll need to bite the bullet and use RAID0+1, because even though you need more disks, you don't need to calculate anything, you just dump the data to the disks. In my estimates (this isn't scientific, just what I've noticed by experience) RAID0+1 is about 20%-30% faster than RAID5.

Now, in the real world, you rarely have much choice, and the way to go is clear. If you're given 10 9G disks and are told to create a 60G RAID, and you can't buy more disks, you'll need to either go RAID5, or be unprotected. However, if you've got those same disks and they only want a 36G RAID you can go RAID0+1, with the only drawback that they won't have much room to grow. It's all up to you as an admin, but always take growth into account. Look at what you've got, downtime availability to grow when needed, budget, performance needs, etc, etc, etc. Welcome to the world of capacity planning!

VxVM essentials part II - Virtual Objects

Virtual objects in VxVM

The connection between physical objects and VxVM objects is made when you place a physical disk under VxVM control. After installing VxVM on a host system, you must bring the contents of physical disks under VxVM control by collecting the VM disks into disk groups and allocating the disk group space to create logical volumes.
Bringing the contents of physical disks under VxVM control is accomplished only if VxVM takes control of the physical disks and the disk is not under control of another storage manager such as Sun Microsystems Solaris Volume Manager software. VxVM creates virtual objects and makes logical connections between the objects. The virtual objects are then used by VxVM to do storage management tasks.


VxVM objects include the following:


Disk Groups

A disk group is a collection of disks that share a common configuration, and which are managed by VxVM. A disk group configuration is a set of records with detailed information about related VxVM objects, their attributes, and their connections. A disk group name can be up to 31 characters long.
In releases prior to VxVM 4.0, the default disk group was rootdg (the root disk group). For VxVM to function, the rootdg disk group had to exist and it had to contain at least one disk. This requirement no longer exists, and VxVM can work without any disk groups configured (although you must set up at least one disk group before you can create any volumes or other VxVM objects). You can create additional disk groups when you need them. Disk groups allow you to group disks into logical collections. A disk group and its components can be moved as a unit from one host machine to another. Volumes are created within a disk group. A given volume and its plexes and subdisks must be configured from disks in the same disk group.


VM Disks

When you place a physical disk under VxVM control, a VM disk is assigned to the physical disk. A VM disk is under VxVM control and is usually in a disk group. Each VM disk corresponds to at least one physical disk or disk partition. VxVM allocates storage from a contiguous area of VxVM disk space. A VM disk typically includes a public region (allocated storage) and a small private region where VxVM internal configuration information is stored. Each VM disk has a unique disk media name (a virtual disk name). You can either define a disk name of up to 31 characters, or allow VxVM to assign a default name that takes the form diskgroup##, where diskgroup is the name of the disk group to which the disk belongs.


Sub Disks

A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk.
The default name for a VM disk is diskgroup## and the default name for a subdisk is diskgroup##-##, where diskgroup is the name of the disk group to which the disk belongs

eg -

sd disk01-01 disk01 <-- Subdisk named "disk01-01" made from "disk01"

Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks.
VxVM release 3.0 or higher supports the concept of layered volumes in which subdisks can contain volumes.


Plexes

VxVM uses subdisks to build virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks:

pl myplex striped <-- Striped Plex named "myplex"
sd disk01-01 <-- Made using subdisk "disk01-01"
sd disk02-01 <-- and subdisk "disk02-01"

You can organize data on subdisks to form a plex by using the following methods:
■ concatenation
■ striping (RAID-0)
■ mirroring (RAID-1)
■ striping with parity (RAID-5)


Volumes

A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device. A volume consists of one or more plexes, each holding a copy of the selected data in the volume. Due to its virtual nature, a volume is not restricted to a particular disk or a specific area of a disk. The configuration of a volume can be changed by using VxVM user interfaces. Configuration changes can be accomplished without causing disruption to applications or file systems that are using the volume. For example, a volume can be mirrored on separate disks or moved to use different disk storage.
Note: VxVM uses the default naming conventions of vol## for volumes and vol##-## for plexes in a volume. For ease of administration, you can choose to select more meaningful names for the volumes that you create.
A volume may be created under the following constraints:
■ Its name can contain up to 31 characters.
■ It can consist of up to 32 plexes, each of which contains one or more subdisks.
■ It must have at least one associated plex that has a complete copy of the data in the volume with at least one associated subdisk.
■ All subdisks within a volume must belong to the same disk group.



How VxVM objects fit together;

VxVM virtual objects are combined to build VOLUMES. The virtual objects contained in volumes are VM DISKS, DISK GROUPS, SUBDISKS, and PLEXES. Veritas Volume Manager objects are organized as follows (a hedged command sketch follows the list):
■ VM DISKS are grouped into DISK GROUPS
■ SUBDISKS (each representing a specific region of a disk) are combined to form PLEXES
■ VOLUMES are composed of one or more PLEXES
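To make this concrete, here is a hedged sketch of building a small striped volume from two disks. The disk group "mydg", the disk and volume names and the 8g size are example values only, and the exact command paths and options can vary between VxVM releases:

vxdisksetup -i c1t0d0 <-- place the first disk under VxVM control
vxdisksetup -i c1t1d0 <-- and the second
vxdg init mydg mydg01=c1t0d0 <-- create disk group "mydg" containing one VM disk
vxdg -g mydg adddisk mydg02=c1t1d0 <-- add the second VM disk to the group
vxassist -g mydg make myvol 8g layout=stripe ncol=2 <-- vxassist builds the subdisks, striped plex and volume in one go
vxprint -g mydg -ht <-- display the resulting dm/sd/pl/v hierarchy

vxassist hides the subdisk and plex creation from you; the vxprint output is where you will recognise the objects described above.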


#-#-#-#-##-#-#-#-##-#-#-#-##-#-#-#-##-#-#-#-##-#-#-#-#

Cuddletech explain it brilliantly - http://www.cuddletech.com/veritas/vxcrashkourse/ar01s03.html

Disk: This is a normal physical disk with a SCSI id. (c0t0d0...)

VM Disk (dm): A disk that is put under Vx control.

Sub Disk (sd): A section of VM disk used to build plexes.

Plex (pl): A mirror.

Volume (v): A virtual disk, which can contain data.

Now, let's talk about these objects a bit. A disk is nothing new or magical. When we want to use a disk in VERITAS we need to turn it over to VERITAS control. A disk turned over to VERITAS control is then given a VERITAS name (like disk01). After doing this the disk is no longer available to the system for anything outside of use inside VERITAS. Also, when we turn disks over to VERITAS control we don't turn over a partition (c0t0d0s0), but the whole disk itself (c0t0d0). Now that we've got a VM Disk we then create a Sub Disk from the VM Disk. Think of a subdisk as a VERITAS partition. We could make one big subdisk from the VM Disk and use the whole disk, or we could create a bunch of smaller subdisks. You can divide VM Disks into subdisks however you like. From subdisks (one or more) we create what is called a plex. Plexes are confusing so let's talk about these at some length.

Say the following out loud: "A Plex is a Mirror. A Plex is a Mirror. A Plex is a Mirror." Maybe you should do that a couple of times. You may feel silly, but plexes are a sort of cornerstone in VERITAS. A Volume is a container, in VERITAS. The volume is made up of one or more plexes. See the catch? A Plex is a mirror, yes, however you can make a volume with only one plex. Is a volume made with only one plex mirrored? No. We'll explain this more later, but for the time being keep it in your head. So, subdisks are grouped together into a plex. The interesting thing is that in VERITAS the plexes do all the work, so let's say you wanted to create a striped volume. You would actually create a striped plex, and then attach that striped plex to a volume. The volume doesn't care, it's just a container. See the beauty here? Let's put all this stuff together and build an imaginary volume in VERITAS.

We're going to build a striped (RAID0) volume from 2 9G disks. We'll say that the first disk is c1t0d0, and the second is c1t1d0. First, we need to put them in VERITAS control, so we create VM disks. The VM disks are then named disk01, and disk02. Next, we'll create subdisks using these two disks. Let's use the whole disks and just create 2 subdisks, one for each VM disk. We'll call disk01's subdisk "disk01-01", and disk02's subdisk "disk02-01". Now, it's plex time! We're going to build a striped plex using our two subdisks, which we'll call "myplex". (Don't worry yet about how we create the plex, just get the concepts now.) So now we've got a plex, which contains the subdisks "disk01-01" and "disk02-01". Now, we create a volume named "myvol" using "myplex". And bingo! We've got a striped 18G volume ready to create a file system on! Maybe, if we used the short names mentioned earlier (with the list of objects) and made an outline, it'd look something like this:

dm disk01 c1t0d0 <-- VM Disk named "disk01" made from "c1t0d0"
dm disk02 c1t1d0 <-- VM named "disk02" made from "c1t1d0"

sd disk01-01 disk01 <-- Subdisk named "disk01-01" made from "disk01"
sd disk02-01 disk02 <-- Subdisk named "disk02-01" made from "disk02"

pl myplex striped <-- Striped Plex named "myplex"
sd disk01-01 <-- Made using subdisk "disk01-01"
sd disk02-01 <-- and subdisk "disk02-01"

v myvol <-- Volume made from....
pl myplex striped <-- the striped plex named "myplex", made from...
sd disk01-01 <-- Subdisk "disk01-01", and...
sd disk02-01 <-- "disk02-01"
Look OK? Because if it does, take a look at this, real output from VERITAS, from a real volume I created on my test machine:

v myvol fsgen ENABLED 35356957 - ACTIVE - -
pl myplex myvol ENABLED 35357021 - ACTIVE - -
sd disk01-01 myplex ENABLED 17678493 0 - - -
sd disk02-01 myplex ENABLED 17678493 0 - - -
Does that make any sense? Any at all? I hope it does. And if it does, you're ready for VERITAS. We're going to explain more, much more, as we roll along, but at least understand that:

Volumes are made up of plexes.
Plexes are made up of subdisks.
Subdisks are made up of VM Disks.
VM Disks are made up of (real) Disks.

and,

Disks can be turned into VM Disks.
VM Disks can be turned into Subdisks.
Subdisks can be grouped into Plexes.
Plexes can be grouped into a Volume.
Good. One more note about plexes before we move on. Here's the groovy thing about plexes. Because plexes are "mirrors", we can mirror the volume we built earlier simply by building another plex identical to the first one, using two more subdisks (which means we need 2 more vmdisks, etc,etc). Once we build the second plex, we attach it to the volume (myvol) and presto chango! We're mirrored! This is really kool... mirrors are something magical, they are a part of everything. If you have only one mirror you may see yourself, but you'll need a second mirror to see yourself holding the mirror. Hmmm?
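As a hedged illustration of that last point (assuming the volume lives in a disk group called "mydg" and the group has two more VM disks with enough free space; substitute your own names):

vxassist -g mydg mirror myvol <-- builds a second plex from free space and attaches it to myvol
vxprint -g mydg -ht myvol <-- the volume now shows two plexes

vxassist does the subdisk and plex plumbing for you and then synchronises the new plex with the existing one.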

VxVM essentials part I

VxVM - essential daemons

These constantly running daemons and kernel threads are essential for VxVM to operate (a quick health-check sketch follows the list):
1) vxconfigd — The VxVM configuration daemon maintains disk and group configurations and communicates configuration changes to the kernel, and modifies configuration information stored on disks.
2) vxiod — VxVM I/O kernel threads provide extended I/O operations without blocking calling processes. By default, 16 I/O threads are started at boot time, and at least one I/O thread must continue to run at all times.
3) vxrelocd — The hot-relocation daemon monitors VxVM for events that affect redundancy, and performs hot-relocation to restore redundancy.
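A hedged sketch of checking that these are alive (output formats differ between releases):

vxdctl mode <-- should report "mode: enabled", meaning vxconfigd is running and enabled
ps -ef | egrep 'vxconfigd|vxrelocd' <-- the configuration and hot-relocation daemons should both appear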


How VxVM handles storage management

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.
1) Physical objects — physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
2) Virtual objects — When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks. Each volume records and retrieves data from one or more physical disks. Volumes are accessed by file systems, databases, or other applications in the same way that physical disks are accessed. Volumes are also composed of other virtual objects (plexes and subdisks) that are used in changing the volume configuration. Volumes and their virtual components are called virtual objects or VxVM objects.


How VxVM works with Disk Arrays

Performing I/O to disks is a relatively slow process because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read or write operations are done to individual disks, one at a time, the read-write time can become unmanageable. Performing these operations on multiple disks can help to reduce this problem.
A disk array is a collection of physical disks that VxVM can represent to the operating system as one or more virtual disks or volumes. The volumes created by VxVM look and act to the operating system like physical disks. Applications that interact with volumes should work in the same way as with physical disks.
Data can be spread across several disks within an array to distribute or balance I/O operations across the disks. Using parallel I/O across multiple disks in this way improves I/O performance by increasing data transfer speed and overall throughput for the array


Multipathed disk arrays

Some disk arrays provide multiple ports to access their disk devices. These ports, coupled with the host bus adaptor (HBA) controller and any data bus or I/O processor local to the array, make up multiple hardware paths to access the disk devices. Such disk arrays are called multipathed disk arrays. This type of disk array can be connected to host systems in many different configurations, such as multiple ports connected to different controllers on a single host, chaining of the ports through a single controller on a host, or ports connected to different hosts simultaneously.


Device discovery

Device discovery is the term used to describe the process of discovering the disks that are attached to a host. This feature is important for DMP because it needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support dynamically for new disk arrays. This operation, which uses a facility called the Device Discovery Layer (DDL), is achieved without the need for a reboot. This means that you can dynamically add a new disk array to a host, and run a command which scans the operating system’s device tree for all the attached disk devices, and reconfigures DMP with the new device database.
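A hedged sketch of what this looks like in practice (the command set assumes a reasonably recent VxVM release; check your version's man pages):

vxdctl enable <-- rebuild vxconfigd's view of the attached devices
vxdisk scandisks <-- scan the operating system's device tree and bring new disks into the DDL/DMP database
vxdisk list <-- the newly discovered devices should now be listed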


Enclosure-based naming

Enclosure-based naming provides an alternative to the disk device naming described in “Physical objects—physical disks”. This allows disk devices to be named for enclosures rather than for the controllers through which they are accessed. In a Storage Area Network (SAN) that uses Fibre Channel hubs or fabric switches, information about disk location provided by the operating system may not correctly indicate the physical location of the disks. For example, c#t#d#s# naming assigns controller-based device names to disks in separate enclosures that are connected to the same host controller. Enclosure-based naming allows VxVM to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures.
In a typical SAN environment, host controllers are connected to multiple enclosures in a daisy chain or through a Fibre Channel hub or fabric switch;

host
|
FC hub / switch
| | |
enc 1 enc 2 enc 3 - these are the disk enclosures

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in
enclosure enc0 are named enc0_0, enc0_1, and so on. The main benefit of this scheme is that it allows you to quickly determine where a disk is physically located in a large SAN configuration.

Note: In many advanced disk arrays, you can use hardware-based storage management to represent several physical disks as one logical disk device to the operating system. In such cases, VxVM also sees a single logical disk device rather than its component disks. For this reason, when reference is made to a disk within an enclosure, this disk may be either a physical or a logical device.
Another important benefit of enclosure-based naming is that it enables VxVM to avoid placing redundant copies of data in the same enclosure. This is a good thing to avoid as each enclosure can be considered to be a separate fault domain. For example, if a mirrored volume were configured only on the disks in enclosure enc1, the failure of the cable between the hub and the enclosure would make the entire volume unavailable.
If required, you can replace the default name that VxVM assigns to an enclosure with one that is more meaningful to your configuration.
In High Availability (HA) configurations, redundant-loop access to storage can be implemented by connecting independent controllers on the host to separate hubs with independent paths to the enclosures. Such a configuration protects against the failure of one of the host controllers (c1 and c2), or of the cable between the host and one of the hubs. In this example, each disk is known by the same name to VxVM for all of the paths over which it can be accessed. For example, the disk device enc0_0 represents a single disk for which two different paths are known to the operating system, such as c1t99d0 and c2t99d0.
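A hedged sketch of working with enclosure names (the enclosure name enc0 and the new name are examples; vxdmpadm syntax varies a little between releases):

vxdmpadm listenclosure all <-- list the enclosures VxVM has discovered
vxdisk -e list <-- show disks with both OS-native and enclosure-based names
vxdmpadm setattr enclosure enc0 name=sales_array <-- give enc0 a more meaningful name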

Monday, 16 February 2009

nfs - cachefs considerations

You can use CacheFS as a way to improve NFS performance over slow or inconsistent networks.

There is a great article covering this; the relevant parts are reproduced below.

The history of CacheFS
Sun didn't introduce this feature for webservers. Long ago, admins didn't want to manage dozens of operating system installations. Instead they wanted to store all this data on a central fileserver (you know ... the network is the computer). Thus netbooting Solaris and SunOS was invented. But there was a problem: swap via the network was a really bad idea in those days (it was a bad idea in 10 MBit/s times and it's still a bad idea in 10 GBit/s times). Thus the diskless systems got a disk for local swap. But there was another problem. All the users started to work at 9 o'clock ... they switched on their workstations ... and the load on the fileserver and the network got higher and higher. They had a local disk ... local installation again? No ... the central installation had its advantages. Thus the idea of CacheFS was born.

CacheFS is a really old feature of Solaris/SunOS. Its first implementation dates back to 1991. I really think you can call this feature matured.

CacheFS in theory
The mechanism of CacheFS is pretty simple. As I told you before, CacheFS is somewhat similar to a caching web proxy. The CacheFS is a proxy to the original filesystem and caches files on their way through CacheFS. The basic idea is to cache remote files locally on a hard disk, so you can deliver them without using the network when you access them the second time.

Of course the CacheFS has to handle changes to the original files. So CacheFS checks the metadata of the file before delivering the copy. If the metadata has changed, the CacheFS loads the original file from the server. When the metadata hasn't changed, it delivers the copy from the cache.

The CacheFS isn't just usable for NFS; you could use it as well for caching optical media like CD or DVD.

Okay ... using CacheFS is really easy. Let's assume you have a fileserver called theoden. We use the directory /export/files as the directory shared by NFS. The client in our example is gandalf.


Preparations

Let's create an NFS server first. This is easy; just share a directory on a Solaris server. We log in to theoden and execute the following commands with root privileges.

[root@theoden:/]# mkdir /export/files
[root@theoden:/]# share -o rw /export/files
# share
- /export/files rw ""
Okay, of course it would be nice to have some files to play around with in this directory. I will use some files from the Solaris environment.
[root@theoden:/]# cd /export/files
[root@theoden:/export/files]# cp -R /usr/share/doc/pcre/html/* .

Let's do a quick test to see if we can mount the directory:
[root@gandalf:/]# mkdir /files
[root@gandalf:/]# mount theoden:/export/files /files
[root@gandalf:/]# umount /files

Now you should be able to access the /export/files directory on theoden by accessing /files on gandalf. There should be no error messages.

Okay, first we have to create the location for our caching directories. Let's assume we want to place our cache at /var/cachefs/caches/cache1. First we create the directories above the cache directory; you don't create the last part of the directory structure manually.
[root@gandalf:/]# mkdir -p /var/cachefs/caches

This directory will be the place where we store our caches for CacheFS. After this step we have to create the cache for the CacheFS.
[root@gandalf:/files]# cfsadmin -c -o maxblocks=60,minblocks=40,threshblocks=50 /var/cachefs/caches/cache1

The directory cache1 is created automatically by the command. If the directory already exists, the command will quit without creating the cache.

In creating the cache you have also specified some basic parameters that control its behaviour. Citing the manpage of cfsadmin:
maxblocks: Maximum amount of storage space that CacheFS can use, expressed as a percentage of the total number of blocks in the front file system.
minblocks: Minimum amount of storage space, expressed as a percentage of the total number of blocks in the front file system, that CacheFS is always allowed to use without limitation by its internal control mechanisms.
threshblocks: A percentage of the total blocks in the front file system beyond which CacheFS cannot claim resources once its block usage has reached the level specified by minblocks.
All these parameters can be tuned to prevent CacheFS from eating away all the storage available in a filesystem, a behaviour that was quite common in early versions of this feature.


Mounting a filesystem via CacheFS
We have to mount the original filesystem now.
[root@gandalf:/files]# mkdir -p /var/cachefs/backpaths/files
[root@gandalf:/files]# mount -o vers=3 theoden:/export/files /var/cachefs/backpaths/files

You may have noticed the parameter that sets the NFS version to 3. This is necessary, as CacheFS isn't supported with NFSv4. Thus you can only use it with NFSv3 and below. The reason for this limitation lies in the different way NFSv4 handles inodes.

Okay, now we mount the cache filesystem at the old location:
[root@gandalf:/files]# mount -F cachefs -o backfstype=nfs,backpath=/var/cachefs/backpaths/files,cachedir=/var/cachefs/caches/cache1 theoden:/export/files /files
The options of the mount controls some basic parameters of the mount:
backfstype specifies what type of filesystem is proxied by the CacheFS filesystem
backpath specifies where this proxied filesystem is currently mounted
cachedir specifies the cache directory for this instance of the cache. Multiple CacheFS mounts can use the same cache.
From now on every access to the /files directory will be cached by CacheFS. Let's have a quick look into /etc/mnttab. There are two mounts that are important to us:
[root@gandalf:/etc]# cat mnttab
[...]
theoden:/export/files /var/cachefs/backpaths/files nfs vers=3,xattr,dev=4f80001 1219049560
/var/cachefs/backpaths/files /files cachefs backfstype=nfs,backpath=/var/cachefs/backpaths/files,cachedir=/var/cachefs/caches/cache1,dev=4fc0001 1219049688
The first mount is our back file system; it's a normal NFS mountpoint. But the second mount is a special one: it is the consequence of the mount with the -F cachefs option.


Statistics about the cache
While using it, you will see the cache structure at /var/cachefs/caches/cache1 filling up with files. But how efficient is this cache? Solaris provides a command to gather some statistics about the cache. With cachefsstat you print out data such as the hit rate, including the absolute numbers of cache hits and misses:
[root@gandalf:/files]# /usr/bin/cachefsstat

/files
cache hit rate: 60% (3 hits, 2 misses)
consistency checks: 7 (7 pass, 0 fail)
modifies: 0
garbage collection: 0
[root@gandalf:/files]#
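Two more commands that can be handy here, as a hedged aside (paths as in the example above):

[root@gandalf:/]# cfsadmin -l /var/cachefs/caches/cache1    (lists the cache parameters and the file systems using this cache)
[root@gandalf:/]# cachefsstat -z /files                     (zeroes the statistics counters so you can take a fresh measurement)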


###########################################################################

from http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/cachefs_perf_impacts.htm

CacheFS performance impacts

CacheFS will not increase the write performance to NFS file systems. However, you have some write options to choose from as parameters to the -o option of the mount command when mounting a CacheFS; they influence the subsequent read performance of the data.

The write options are as follows (a mount sketch follows the list):

write around
The write around mode is the default mode and it handles writes the same way that NFS does. The writes are made to the back file system, and the affected file is purged from the cache. This means that write around voids the cache and new data must be obtained back from the server after the write.
non-shared
You can use the non-shared mode when you are certain that no one else will be writing to the cached file system. In this mode, all writes are made to both the front and the back file system, and the file remains in the cache. This means that future read accesses can be done to the cache, rather than going to the server.
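As a hedged sketch of how the mode is selected at mount time, reusing the Solaris-style paths from the CacheFS example earlier (the option spelling may differ on other platforms):

mount -F cachefs -o backfstype=nfs,backpath=/var/cachefs/backpaths/files,cachedir=/var/cachefs/caches/cache1,non-shared theoden:/export/files /files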
Small reads might be kept in memory anyway (depending on your memory usage) so there is no benefit in also caching the data on the disk. Caching of random reads to different data blocks does not help, unless you will access the same data over and over again.

The initial read request still has to go to the server, because files are placed in the cache only when a user first accesses them in the back file system. For the initial read request you will see typical NFS speed; only for subsequent accesses to the same data will you see local JFS access performance.

The consistency of the cached data is only checked at intervals. Therefore, it is dangerous to cache data that is frequently changed. CacheFS should only be used for read-only or read-mostly data.

Write performance over a cached NFS file system differs from NFS Version 2 to NFS Version 3. Performance tests have shown the following:

Sequential writes to a new file over NFS Version 2 to a CacheFS mount point can be 25 percent slower than writes directly to the NFS Version 2 mount point.
Sequential writes to a new file over NFS Version 3 to a CacheFS mount point can be 6 times slower than writes directly to the NFS Version 3 mount point.

stop and start the automounter (solaris)

(as root)

Start the automounter service;
# /etc/init.d/autofs start

Stop the automounter service;
# /etc/init.d/autofs stop

setting up NFS server and clients

Setting up NFS:

An NFS server lets other systems access its file systems by sharing them over the NFS environment.

A shared file system is referred to as a shared resource.

You can specify which file systems are to be shared by entering information in the file /etc/dfs/dfstab. Entries in this file are shared automatically whenever you start the NFS server operation. The /etc/dfs/dfstab file lists all the file systems your NFS server shares with its NFS clients.

Configuring NFS for the resources in the following example (NFS server: sunserver at 192.168.0.100; NFS client: station1).

At the NFS server (sunserver):

1) Share the /database directory on the server by adding an entry to the /etc/dfs/dfstab file (edit the file with the vi editor):
# vi /etc/dfs/dfstab
An entry has the following format:
share [ -F fstype ] [ -o options ] [ -d "description" ] [ pathname ]

where
-F specifies the file system type, such as nfs.
-o specifies options, such as 'rw' or 'ro', that control how clients access the shared resource.
-d provides a description of the resource being shared.
pathname is the name of the file system to be shared.

For example:
share -F nfs -o ro -d "Project Database" /database
:wq! (save & quit)

2) After you edit the /etc/dfs/dfstab file, restart the NFS server either by rebooting the system or by typing:
# svcadm restart nfs/server

3) Verify the NFS share by executing one of the following commands:
# dfshares
or # exportfs



At the NFS client (station1, assumed to be running Solaris):

1) Create the mount point on which to mount the share temporarily:
# mkdir /rdatabase

2) Mount the share temporarily:
# mount -F nfs 192.168.0.100:/database /rdatabase
Now check by changing into the mounted directory, in this case /rdatabase.

3) If you would like this file system to be mounted automatically at every startup, add the following line to the /etc/vfstab file:
# vi /etc/vfstab
192.168.0.100:/database - /rdatabase nfs - yes ro
:wq!(save & quit)

4) Reboot the system, or without rebooting execute the command:
# mountall

It reads the /etc/vfstab file and mounts all the file systems listed there.
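As a hedged check that the mount worked (standard Solaris commands):

# df -k /rdatabase     (should show 192.168.0.100:/database as the mounted filesystem)
# nfsstat -m           (lists each NFS mount along with the NFS version and mount options in use)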



courtesy of http://sharetab.com/solaris-setting-up-nfs-network-file-sharing-server/

NFS crib

You need several daemons to support NFS activities. These daemons can support both NFS client and NFS server activity, NFS server activity alone, or logging of the NFS server activity. To start the NFS server daemons or to specify the number of concurrent NFS requests that can be handled by the nfsd daemon, use the /etc/rc3.d/S15nfs.server script. The main daemons that support NFS are:

mountd Handles file system mount requests from remote systems, and provides access control (server)
nfsd Handles client file system requests (both client and server)
statd Works with the lockd daemon to provide crash recovery functions for the lock manager (server)
lockd Supports record locking operations on NFS files
nfslogd Provides NFS server logging. Runs only if one or more file systems are shared with logging enabled (server)

You can detect most NFS problems from console messages or from certain symptoms that appear on a client system. Some common errors are listed below (a short troubleshooting sketch follows the list):

The rpcbind failure error: incorrect host Internet address, or server overload.

The server not responding error: the network connection or the server is down.

The NFS client fails a reboot error: a client is requesting an NFS mount using an entry in the /etc/vfstab file, specifying a foreground mount from a non-operational NFS server.

The service not responding error: an accessible server is not running the NFS server daemons.

The program not registered error: an accessible server is not running the mountd daemon.

The stale file handle error: the file was moved or recreated on the server. To resolve the stale NFS file handle condition, unmount and mount the resource again on the client.

The unknown host error: the host name of the server is missing from the client's hosts table.

The mount point error: check that the mount point exists on the client.

The no such file error: unknown file name on the server.

No such file or directory: the directory does not exist on the server.
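A few hedged first steps when chasing these errors (substitute your own server name for "nfsserver"):

ping nfsserver           (basic reachability)
rpcinfo -p nfsserver     (is rpcbind answering, and are nfsd and mountd registered?)
dfshares nfsserver       (what is the server actually sharing?)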



NFS Server Commands

share Makes a local directory on an NFS server available for mounting. Without parameters displays the contents of the
/etc/dfs/sharetab file.
unshare Makes a previously available directory unavailable for client side mount operations.
shareall Reads and executes share statements in the /etc/dfs/dfstab file.
unshareall Makes previously shared resources unavailable.
dfshares Lists available shared resources from a remote or local NFS server.
dfmounts Displays a list of NFS server directories that are currently mounted.

Tuesday, 10 February 2009

nslookup - it's your friend

Unsure of a hostname or IP address? How can you find out the domain name associated with an IP number?

use nslookup


To find the domain name associated with an IP number, use the nslookup command. At the Unix shell prompt, enter nslookup followed by the IP address, for example:

nslookup 129.79.5.100
This command will return the following information:

Server: ns.indiana.edu
Address: 129.79.1.1

Name: ns2.indiana.edu
Address: 129.79.5.100

You can also specify a hostname on the command line to find out its IP number.

You can use the nslookup command interactively. For more information, at the Unix shell prompt, enter:

man nslookup
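As a hedged example of an interactive session (hostnames taken from the example above):

nslookup
> ns2.indiana.edu       (forward lookup from the interactive prompt)
> 129.79.5.100          (reverse lookup)
> exit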

Monday, 9 February 2009

Linux and NFS - how to mount an NFS resource

Add nfs mount on a linux machine

For the full info - go here http://www.redhat.com/docs/manuals/linux/RHL-7.3-Manual/custom-guide/s1-nfs-mount.html

Vital directories;

/etc/fstab

ALSO - make sure you have created the mount point directory on the machine doing the mounting! In other words, a request to mount the resource ukpoopoop003:/apps/export/hsit as /apps/filestaging2/appsith means you need to create the directory appsith on this (client) server under /apps/filestaging2

Mounting NFS Filesystems using /etc/fstab
An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file.


The general syntax for the line in /etc/fstab is as follows:

server:/usr/local/pub /pub nfs rsize=8192,wsize=8192,timeo=14,intr

The mount point /pub must exist on the client machine. After adding this line to /etc/fstab on the client system, type the command mount /pub at a shell prompt, and the mount point /pub will be mounted from the server.

Here is an example of /etc/fstab with mounted nfs resources;

server001 $ more /etc/fstab
.......truncated for brevity........
pradltot06:/export/filestaging/cac01 /app/filestaging/pOOpOO01 nfs bg
0 0
pradltot07:/export2/filestaging/cac01 /app/filestaging2/pOOpOO1 nfs bg
0 0

Thursday, 5 February 2009

chmod chart

chmod is used to change the permissions of a file

PERMISSION COMMAND



U G W

rwx rwx rwx chmod 777 filename

rwx rwx r-x chmod 775 filename

rwx r-x r-x chmod 755 filename

rw- rw- r-- chmod 664 filename

rw- r-- r-- chmod 644 filename



U = User

G = Group

W = World



r = Readable

w = writable

x = executable

- = no permission

how to calculate the umask you want

example - you want new directories to be created with permissions 754 (user has rwx, group has r-x, and others have r--)

to calculate the umask, subtract 754 from 777, which leaves 023 (always write the leading zero rather than leaving that position blank)
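A quick hedged demonstration (note that regular files are created starting from 666 rather than 777, so they never pick up execute bits from the umask):

umask 023
mkdir demo_dir        (directories start from 777, so demo_dir ends up as 754 = rwxr-xr--)
touch demo_file       (regular files start from 666, so demo_file ends up as 644 = rw-r--r--)
ls -ld demo_dir demo_file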

umask explained

The default file permissions (umask):

Each user has a default set of permissions which apply to all files created by that user, unless the software explicitly sets something else. This is often called the 'umask', after the command used to change it. It is either inherited from the login process, or set in the .cshrc or .login file which configures an individual account, or it can be run manually.
Typically the default configuration is equivalent to typing 'umask 22' which produces permissions of:

-rw-r--r-- for regular files, or
drwxr-xr-x for directories.
In other words, user has full access, everyone else (group and other) has read access to files, lookup access to directories.
When working with group-access files and directories, it is common to use 'umask 2' which produces permissions of:

-rw-rw-r-- for regular files, or
drwxrwxr-x for directories.
For private work, use 'umask 77' which produces permissions:
-rw------- for regular files, or
drwx------ for directories.
The logic behind the number given to umask is not intuitive.
The command to change the permission flags is "chmod". Only the owner of a file can change its permissions.

The command to change the group of a file is "chgrp". Only the owner of a file can change its group, and can only change it to a group of which he is a member.

See the online manual pages for details of these commands on any particular system (e.g. "man chmod").

Examples of typical usage are given below:

chmod g+w myfile
give group write permission to "myfile", leaving all other permission flags alone

chmod g-rw myfile
remove read and write access to "myfile", leaving all other permission flags alone

chmod g+rwxs mydir
give the group full read/write/execute access to directory "mydir", also setting the set-groupID flag so that files and directories created inside it inherit the group

chmod u=rw,go= privatefile
explicitly give user read/write access, and revoke all group and other access, to file 'privatefile'

chmod -R g+rw .
give group read write access to this directory, and everything inside of it (-R = recursive)

chgrp -R medi .
change the ownership of this directory to group 'medi' and everything inside of it (-R = recursive). The person issuing this command must own all the files or it will fail.
WARNINGS:

Putting 'umask 2' into a startup file (.login or .cshrc) will make these settings apply to everything you do unless manually changed. This can lead to giving group access to files such as saved email in your home directory, which is generally not desirable.

Making a file group read/write without checking what its group is can lead to accidentally giving access to almost everyone on the system. Normally all users are members of some default group such as "users", as well as being members of specific project-oriented groups. Don't give group access to "users" when you intended some other group.

Remember that to read a file, you need execute access to the directory it is in AND read access to the file itself. To write a file, you need execute access to the directory AND write access to the file. To create new files or delete files, you need write access to the directory. You also need execute access to all parent directories back to the root. Group access will break if a parent directory is made completely private.

taken from http://www.dartmouth.edu/~rc/help/faq/permissions.html

quick way to check if an LDAP user exists

getent passwd | grep username

quick way to check if a NIS user exists

ypcat passwd | grep username

the 'free' command

A useful tool when you want to check how much physical memory, swap, and overall memory you have on the system;

free -mt (for mb)

free -kt (for kb)

and on later OS you can also get it in gb;

free -gt

Basic power path commands

With EMC's PowerPath product it may be useful to check the status of what is available as far as PowerPath is concerned.

Use the "powermt display" command for this.

EXAMPLE:

# powermt display
total emcpower devices: 94, highest device number: 93
==============================================================================
------------ Adapters ------------ ----- Device Paths ------ --- Queued ---
## Switch Name Summary Total Closed IOs Blocks
==============================================================================
0 Enabled sbus@1f/fcaw@0 Optimal 94 0 0 0
1 Enabled sbus@1f/fcaw@2 Optimal 94 0 0 0

This shows us that there are 2 paths that PowerPath is using and both of them are online and functioning at optimal performance.



It is possible to search further by using the "powermt display dev=" command.

EXAMPLE:

# powermt display dev=c1t0d93
emcpower91: state=operational, policy=symm_opt, priority=0, IOs in progress=0
Symmetrix ID=0184502576
==============================================================================
------ Adapter ------ --------- Device Path ---------- Serial Queued Errs
## Name Mode Link ID State Number IOs
==============================================================================
0 sbus@1f/fcaw@0 Active c1t0d93 sd2681 Open 7611D000 0 0
1 sbus@1f/fcaw@2 Active c2t1d93 sd2572 Open 7611D000 0 0



To see all devices use the following command :"powermt display dev=all"