Sun Solaris E-Book - System Administration


Using Veritas NetBackup To Add A Changed Robot And Drives On The Solaris Unix Command Line

Your old tape robot, along with the drives inside it, has inexplicably gone bad. You've spent hours and exhausted your support contract trying to fix it somehow, but, ultimately, you're left facing the fact that your trusty old steel-and-plastic jukebox just isn't going to come back. Ever. If you're lucky, your warranty allows for replacement of the tape robot (a Tape Loading Device, or TLD) and its internal drives (2, for now, to keep things simple - hcart2 drives, just because). Worst case, you've purchased suitable replacements that match the specifications listed in the previous sentence.

Your /dev/rmt directory is probably populated, and you may even have some other logical paths created on your Solaris system that are no longer valid. Once you've connected your "MaxTape24" TLD (which exists only in my imagination), with its two internal "FACTOTUM-TD2" drives, both working properly according to the on-board diagnostics, you should be able to verify that your system (at the very least) can recognize the TLD and, hopefully, the drives inside it. Assuming all of the equipment is good, and that it's been hooked up (however you like to daisy-chain it) properly, this shouldn't be an issue. You may choose to run:

host # devfsadm -C

before proceeding, to check for new symbolic links that need to be made in your hardware device tree (and, with the -C option, remove ones you no longer need - at the Operating System's discretion, unfortunately), although it may not be necessary.
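The rebuild-then-verify sequence can be wrapped in a small function. This is a hedged sketch, not anything from the original post: the existence checks are mine, so the function is a harmless no-op on a box that doesn't have devfsadm or a NetBackup install under the standard path.

```shell
rescan_tape_devices() {
    # Rebuild the Solaris device tree first (-C prunes stale links),
    # then ask NetBackup's sgscan what it can see.
    if command -v devfsadm >/dev/null 2>&1; then
        devfsadm -C
    else
        echo "devfsadm not found - skipping device tree rebuild"
    fi
    if [ -x /usr/openv/volmgr/bin/sgscan ]; then
        /usr/openv/volmgr/bin/sgscan
    else
        echo "sgscan not found - is NetBackup installed?"
    fi
}
rescan_tape_devices
```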

FINDING THE NEW HARDWARE WITH NETBACKUP FIRST: Now, contrary to what it seemed like I was leading up to, we're going to try to get NetBackup to do all the OS-work for us today (because, if it works, it's f'ing brilliant. Good job. Go home and relax :) Actually, you could probably look at this more as a way of giving NetBackup a good kick in the arse. The kind of kick that makes it stand up and take account of its surroundings ;) A good way to get started is to run the following at the command line (oh yes, there will be no GUI instruction in these posts. If you use the GUI - which is okay - just right-click on the type of thing you want to do something to and select whatever seems to be the most reasonable option from the drop-down menu. ...last word on that :)

host # /usr/openv/volmgr/bin/sgscan <-- I would recommend including /usr/openv/volmgr/bin, /usr/openv/netbackup/bin and /usr/openv/netbackup/bin/admincmd in your PATH variable if you spend a lot of time working with NetBackup at the command line.

/dev/sg/c0t0l0: Disk (/dev/rdsk/c0t0d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t1l0: Disk (/dev/rdsk/c0t1d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t2l0: Tape (/dev/rmt/1): "BMI FACTOTUM-TD2"
/dev/sg/c0t3l0: Cdrom: "Hyundai DV-W28E-R"
/dev/sg/c1t0l0: Changer: "TLDHAUS MaxTape24"
/dev/sg/c1t1l0: Tape (/dev/rmt/0): "BMI FACTOTUM-TD2"


Your output may differ (even if you run this command on the same box, since I faked up the output to protect the guilty ;), but, basically, this output is good news. You'll notice that sgscan has picked up a bit more than just your new TLD and its drives, but that's okay. You can see that /dev/rmt/0 and /dev/rmt/1 have been properly mapped to the TLD's internal tape drives and that the "TLDHAUS MaxTape24" TLD has been properly identified.
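If you only care about the robot and drive lines, a quick awk filter does it. In this sketch the heredoc stands in for real sgscan output (it's the faked sample from above); on a live system you'd pipe the real command through the same awk.

```shell
# Stand-in for real `sgscan` output (faked sample, matching the post):
sgscan_sample() {
cat <<'EOF'
/dev/sg/c0t0l0: Disk (/dev/rdsk/c0t0d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t2l0: Tape (/dev/rmt/1): "BMI FACTOTUM-TD2"
/dev/sg/c0t3l0: Cdrom: "Hyundai DV-W28E-R"
/dev/sg/c1t0l0: Changer: "TLDHAUS MaxTape24"
/dev/sg/c1t1l0: Tape (/dev/rmt/0): "BMI FACTOTUM-TD2"
EOF
}
# Keep only the robot and drive lines, dropping disks and the cdrom:
sgscan_sample | awk '/: (Tape|Changer)/ { print }'
```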

Other commands you could use to, basically, get the same information (or peace of mind) would include (but not be limited to) vmoprcmd, tpconfig and tpautoconf. A few examples are at the bottom of the post, with the same setup as above (some whitespace has been clipped to save the virtual trees).

And that's it for today. Tomorrow we'll look at several commands (including some we're using today, but with different options) that can be used to "find" those drives if the system doesn't discover them automatically (the first thing you can try is "devfsadm -C" as noted above, followed by another sgscan).

Until then, enjoy the output and we'll continue on tomorrow. Here are a couple of handy anchor-href's for you, so you don't have to try to figure out where the command you're interested in is hiding out amongst all the flotsam below :)

vmoprcmd
tpconfig -d
tpconfig -dl
tpautoconf -t
tpautoconf -a

How to Locate New Backup Hardware Using Veritas NetBackup On The Solaris Unix Command Line

For some reason (and this hardly ever happens ...not sure which word to emphasize to obtain the maximum sarcastic drippage), after we connected our new Tape Loading Device (TLD, or tape robot), and the two drives it contains, to our backup server, NetBackup - and, possibly, the server itself - is failing to recognize the new device(s). Again, we're going to assume that the server, TLD, drives and all other hardware are absolutely fine and that all required connections between the devices are set up properly.

NOTE: Today's post is going to assume that some tried and true methods will get you to "good." Tomorrow's post will look at some other ways to make NetBackup recognize and work with your "known good" (and compliant) setup.

If we take the same direct route to initial discovery that we did yesterday, we'd run the same sgscan (which is, as one reader noted, shorthand for "sgscan all") command initially, like so (pardon the error output. I can't afford to create the situation I want to display so I'm doing it from memory):


host # /usr/openv/volmgr/bin/sgscan
/dev/sg/c0t0l0: Disk (/dev/rdsk/c0t0d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t1l0: Disk (/dev/rdsk/c0t1d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t2l0: Tape (???): "Unknown"
/dev/sg/c0t3l0: Cdrom: "Hyundai DV-W28E-R"
/dev/sg/c1t0l0: Changer: "Unknown"
/dev/sg/c1t1l0: Tape (???): "Unknown"


Basically, every line where it says "Unknown" is where we're interested in looking. The system can't find our TLD or its drives, so now we have to try to discover them ourselves (with and/or without NetBackup) and then come back around and use NetBackup to verify that we're okay. These steps are pretty dry, but I think if you follow them in a somewhat linear order (skipping some or doing some before others, if you're comfortable) they should get you to where you want to be. Fat, happy and with a TLD your backup server recognizes. Okay, maybe not happy ;)

Note:
If you feel uncomfortable about running any of the commands below, please enlist the assistance of someone who is either able to provide guidance (since each case is unique) and/or will get in trouble instead of you if things go to Hell ;) j.k.

And, here we go. These steps won't be numbered, so I can't possibly screw that aspect up, but should be easy to follow since each command will be separated by space and begin with the "host # " prompt. Some of these commands, as the title of today's post suggests, may not exist on a flavour of Unix or Linux that isn't Solaris.

First, we'll take a look at our device tree. Do the device links listed in sgscan exist? Also, is /dev/rmt populated at all?

host # ls /dev/sg/c0t2l0 /dev/sg/c1t1l0 /dev/sg/c1t0l0 /dev/rmt
/dev/sg/c0t2l0  /dev/sg/c1t0l0  /dev/sg/c1t1l0

/dev/rmt:
0     0cb   0hb   0lb   0mb   0u    1     1cb   1hb   1lb   1mb   1u
0b    0cbn  0hbn  0lbn  0mbn  0ub   1b    1cbn  1hbn  1lbn  1mbn  1ub
0bn   0cn   0hn   0ln   0mn   0ubn  1bn   1cn   1hn   1ln   1mn   1ubn
0c    0h    0l    0m    0n    0un   1c    1h    1l    1m    1n    1un


They appear to be there, but they're probably bad. Let's try devfsadm, all on its lonesome and check sgscan again (From now on we'll just assume the output is the same as the train-wreck we witnessed above, until we get to the end. Hopefully, your journey will come to a close sooner!):

host # devfsadm

If this fails to produce results, you can try to run the same command with the "-C" option to remove stale links that no longer point to a valid physical device path:

host # devfsadm -C

Of course, if you know that you only had two tape drives before (/dev/rmt/0 and 1) and believe sgscan when it says it can't recognize the paths we listed, you can delete all of that stuff and try those two steps again. Sometimes it helps to force Solaris to recreate the dev links:

host # rm /dev/rmt/*
host # devfsadm -C


should be enough, but you can almost certainly do this, as well:

host # rm /dev/rmt/* /dev/sg/c0t2l0 /dev/sg/c1t1l0 /dev/sg/c1t0l0
host # devfsadm -C


Running the "ls /dev/sg/c0t2l0 /dev/sg/c1t1l0 /dev/sg/c1t0l0 /dev/rmt" listed above will, almost always, give you the same results once you've completed these steps.

You might also run this command if you have the drivers installed:

host # cfgadm -al

If you find a section with /dev/rmt1, /dev/rmt0 and the /dev/sg path to your changer in it, and one or more of them are showing as unconfigured (all the sections start with a controller number and a colon - in our setup the output is "c2:xxxx"), you can either configure specific entries behind the controller number, by using the entire device name your rmt and changer devices are listed beside, or you can just configure the whole shebang. Why not?:

host # cfgadm -c configure c2

Listing it again with "cfgadm -al" should show all the appropriate devices as "configured." If it doesn't, don't worry. It probably doesn't matter, but it was worth a shot.
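If you'd rather generate the per-device configure commands than configure the whole controller, a little awk over the cfgadm listing does it. This is a dry-run sketch: the heredoc is a fabricated sample (controller c2, as in the post), and the output is just the commands you would then run by hand.

```shell
# Fabricated `cfgadm -al` sample; feed the awk real output on a live box:
cfgadm_sample() {
cat <<'EOF'
Ap_Id                Type         Receptacle   Occupant     Condition
c2                   scsi-bus     connected    configured   unknown
c2::rmt/0            tape         connected    unconfigured unknown
c2::rmt/1            tape         connected    unconfigured unknown
EOF
}
# Print a configure command for every unconfigured attachment point:
cfgadm_sample | awk '$4 == "unconfigured" { print "cfgadm -c configure " $1 }'
```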

Both "tpconfig -d" and "tpconfig -dl" will give you back the same results as sgscan (although formatted differently and limited to the tape and TLD information) if the problem still hasn't been resolved. To save space and prevent carpal-knuckle syndrome, full versions of the output of these commands, as run against a working setup, are located at the bottom of yesterday's post as a series of in-page hyperlinks. The only thing that will be different in your execution of:

host # tpconfig -d

and

host # tpconfig -dl

is that the drives will usually either show up as DOWN (possibly with an identifier - for us, hcart2 - and a path like /dev/rmt/0) or you will get virtually no output at all ...yeah, I guess that's a "huge" difference :) If you notice that tpconfig returns a listing for you, this is positive, even if it shows your drives as "down." We won't go crazy yet, since we were going to run the next command regardless:

host # vmoprcmd

Now we may get results that show "HOST STATUS" as, hopefully, ACTIVE (good to go!), ACTIVE-DISK (can do local disk backups), ACTIVE-TAPE (can back up to tape but, for some reason, can't back up to local disk), or even DEACTIVATED (either it's off or NetBackup thinks it is) or OFFLINE (same as the last, except substitute offline for off ;) Your drives will also show as either non-existent, UP, UP-TLD, RESTART or DOWN (perhaps a few others, but all of them self-explanatory). As long as the tape drive type (hcart2 for us) is shown, you're on the way.

And the final thing we'll try today will be to react to the output produced for the tape drives. If your TLD is still not showing, that's something for tomorrow. If you see your drives in a DOWN state, but correctly identified as the types of drives they are, this will probably do the trick for you:

host # vmoprcmd -up 0
host # vmoprcmd -up 1


for the first (0) and second (1) instance of the drive, listed in the first "Id" column of "tpconfig -d". You can also do this, which is easier (at least for me) to remember, since you can directly map it from the vmoprcmd output without squinting ;)

host # vmoprcmd -upbyname Drive000
host # vmoprcmd -upbyname Drive001


from the vmoprcmd output in the "Drive Name" column, which also happens to be the first column in the "vmoprcmd" output.
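Both flavors of the up command can be emitted from one quick loop. This is a dry run (it echoes the commands instead of executing them), and the drive ids 0/1 and names Drive000/Drive001 are just the post's examples - substitute whatever your own tpconfig -d and vmoprcmd output actually lists.

```shell
# Dry run: print (don't execute) both ways to bring each drive UP.
for i in 0 1; do
    echo "vmoprcmd -up $i"
    echo "vmoprcmd -upbyname Drive00$i"
done
```

Once you've eyeballed the generated commands, you can pipe the loop's output to `sh` to actually run them.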

When you're done with that, or if your tape drives show as RESTART, do yourself a favor and stop and start NetBackup. You may not get a chance once you let everyone know it's fixed. If you don't have other startup scripts set up, you can use:

host # /usr/openv/netbackup/bin/goodies/netbackup stop

then run:

host # /usr/openv/netbackup/bin/bpps -a

and, if everything is gone (unless you're running the GUI - it's okay to not kill those PID's), start 'er up again, like so:

host # /usr/openv/netbackup/bin/goodies/netbackup start

and do another "bpps -a" to make sure all of the appropriate daemons are running. Then, just to make yourself feel better, and so you're absolutely sure, do one more "sgscan." All should look as it did in yesterday's post (see link-back above) and you should be all set. At least, you'll be ready to test some backups and pray that your troubles are over ;)
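The stop/verify/start cycle above can be wrapped in a small function. A hedged sketch: the path guard is my addition, so the function does nothing (beyond complaining) on a box without a standard NetBackup install.

```shell
# Sketch of the NetBackup bounce described above. The guard makes this
# a no-op anywhere the standard install path doesn't exist.
NB=/usr/openv/netbackup/bin
restart_netbackup() {
    if [ ! -x "$NB/goodies/netbackup" ]; then
        echo "NetBackup binaries not found under $NB - nothing to do"
        return 0
    fi
    "$NB/goodies/netbackup" stop
    "$NB/bpps" -a                    # daemons should be gone now
    "$NB/goodies/netbackup" start
    "$NB/bpps" -a                    # ...and back again
}
restart_netbackup
```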

DiskSuite/VolumeManager or Zpool Mirroring On Solaris: Pros and Cons

Today we're looking at the Solaris DiskSuite set of tools (meta-whathaveyou ;), which was, at one point, renamed Solaris Volume Manager. The rename introduced some feature enhancements, but not the kind I was expecting. The name "Volume Manager" has a direct connection in my brain to Veritas, and the improvements weren't about coming closer to working seamlessly with that product.

The somewhat-new way (using the zpool command) won't work - to my knowledge - on any OS prior to Solaris 10, but with Solaris 8 and 9 reaching end of life in the not-too-distant future, every Solaris Sysadmin will have some measure of choice.

With that in mind let's take a look at a simple two disk mirror. We'll look at how to create one and review it in terms of ease-of-implementation and cost (insofar as work is considered expensive if it takes a long time... which leaves one to wonder why I'm not comparing the two methods in terms of time ;)

Both setups will assume that you've already installed your operating system, and all required packages, and that the only task before you is to create a mirror of your root disk and have it available for failover (which it should be by default).

The DiskSuite/VolumeManager Way:

1. Since you just installed your OS, you wouldn't need to check if your disks were mirrored. In the event that you're picking up where someone else left off (and it isn't blatantly obvious - I mean "as usual" ;), you can check the status of your mirror using the metastat command:

host # metastat -p

You'll get errors because nothing is set up. Cool :)

2. The first thing you'll want to do is to ensure that both disks have exactly the same partition table. The same-ness has to be "exact," as in down to the cylinder. If you're off even slightly, you could be causing yourself major headaches. Luckily, it's very easy to make your second (soon to be a mirror) layout exactly the same as your base OS disk. You actually have at least two options:

a. You can run format, select the disk you have the OS installed on, type label (if format tells you the disk isn't labeled), then select your second disk, type partition, type select and pick the number of the label of your original disk. A lot of times these labels will be very generic (especially if you just typed "y" when format asked you to label the disk, or format already did it for you during install) and you may have more than one to choose from. It's simple enough to figure out which one is the right one, though (as long as you remember your partition map from the original disk and have made it sufficiently different from the default 2 or 3 partition layout). Just choose select, pick one, then choose print. If you've got the right one, then type label. Otherwise, repeat until you've gone through all of your selections. One of them has to be it, unless you never labeled your primary disk.

b. You can use two commands (prtvtoc and fmthard) and just get it over with:

host # prtvtoc /dev/rdsk/c0t0d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2
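After the copy, it's worth confirming the two VTOCs really are identical. In this sketch, temp files stand in for captured `prtvtoc /dev/rdsk/...` output (the device names in the post are examples anyway), so the check is runnable anywhere; on a live system you'd save each disk's prtvtoc output to a file and compare those.

```shell
# Compare two saved VTOC listings; they should match byte-for-byte
# after the prtvtoc | fmthard copy.
vtocs_match() {
    if cmp -s "$1" "$2"; then
        echo "VTOCs match"
    else
        echo "VTOCs differ - re-run the prtvtoc | fmthard copy"
    fi
}
# Demo with stand-in files instead of real prtvtoc output:
a=$(mktemp); b=$(mktemp)
echo "0 2 00 0 2048 2047" > "$a"
echo "0 2 00 0 2048 2047" > "$b"
vtocs_match "$a" "$b"
rm -f "$a" "$b"
```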

3. Then you'll want to mirror all of your "slices" (or partitions; whatever you want to call them). We'll assume you have 6 slices set up (s0, s1, s3, s4, s5 and s6) for use, and slice 7 (s7) partitioned with about 5 Mb of space. You can probably get away with less. You just need to set this up for DiskSuite/VolumeManager to be able to keep track of itself.

Firstly, you'll need to initialize the minimum number of "databases," set up the mirror group and add the primary disk slices as the first mirrors in the mirror-set (even though, at this point, they're not mirroring anything, nor are they mirrors of anything ;) Note that it's considered best practice not to attach the secondary mirror slices to the mirror devices yet, even though you can do it for some of your slices. You'll have to reboot to get root to work anyway, so you may as well do them all at once and be as efficient as possible:

host # metadb -a -f /dev/rdsk/c0t0d0s7
host # metadb -a /dev/rdsk/c1t0d0s7
host # metainit -f d10 1 1 c0t0d0s0
host # metainit -f d20 1 1 c1t0d0s0
host # metainit d0 -m d10
host # metainit -f d11 1 1 c0t0d0s1
host # metainit -f d21 1 1 c1t0d0s1
host # metainit d1 -m d11
host # metainit -f d13 1 1 c0t0d0s3
host # metainit -f d23 1 1 c1t0d0s3
host # metainit d3 -m d13
host # metainit -f d14 1 1 c0t0d0s4
host # metainit -f d24 1 1 c1t0d0s4
host # metainit d4 -m d14
host # metainit -f d15 1 1 c0t0d0s5
host # metainit -f d25 1 1 c1t0d0s5
host # metainit d5 -m d15
host # metainit -f d16 1 1 c0t0d0s6
host # metainit -f d26 1 1 c1t0d0s6
host # metainit d6 -m d16
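That repetitive metainit sequence lends itself to a loop. The sketch below is a dry run - it echoes the commands instead of executing them - using the example layout from this post (c0t0d0 primary, c1t0d0 mirror, slices 0, 1, 3, 4, 5 and 6):

```shell
# Dry-run generator for the metainit sequence; eyeball the output,
# then pipe it to sh if it looks right.
for s in 0 1 3 4 5 6; do
    echo "metainit -f d1$s 1 1 c0t0d0s$s"
    echo "metainit -f d2$s 1 1 c1t0d0s$s"
    echo "metainit d$s -m d1$s"
done
```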


4. Now you'll run the "metaroot" command, which will add some lines to your /etc/system file and modify your /etc/vfstab to list the metadevice for your root slice, rather than the plain old slice (/dev/dsk/c0t0d0s0, /dev/rdsk/c0t0d0s0):

host # metaroot d0

5. Then, you'll need to manually edit /etc/vfstab to replace all of the other slices' regular logical device entries with the new metadevice entries. You can use the root line (done for you) as an example. For instance, this line:


/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /users ufs 1 yes -


would need to be changed to:

/dev/md/dsk/d6 /dev/md/rdsk/d6 /users ufs 1 yes -


and, once that's done you can reboot. If you didn't make any mistakes, everything will come up normally.
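Those vfstab edits can be scripted with sed. A sketch, mapping sN to dN for the example disk c0t0d0 only (adjust the controller/target for your own system); it's demonstrated here on the /users line from above rather than a live /etc/vfstab:

```shell
# Rewrite plain slice paths to their metadevice equivalents (sN -> dN).
echo '/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /users ufs 1 yes -' |
sed -e 's|/dev/dsk/c0t0d0s\([0-9]\)|/dev/md/dsk/d\1|' \
    -e 's|/dev/rdsk/c0t0d0s\([0-9]\)|/dev/md/rdsk/d\1|'
```

On a real system you'd run the same sed against a copy of /etc/vfstab, diff it against the original, and only then move it into place.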

6. Once you're back up and logged in, you need to attach the secondary mirror slices. This is fairly simple and where the actual syncing up of the disk begins. Continuing from our example above, you'd just need to type:

host # metattach d0 d20
host # metattach d1 d21
host # metattach d3 d23
host # metattach d4 d24
host # metattach d5 d25
host # metattach d6 d26


The syncing work will go on in the background, and may take some time depending upon how large your hard drives and slices are. Note that, if you reboot during a sync, that sync will fail and it will start from 0% on reboot with the affected primary mirror slices remaining intact and the secondary mirror slices automatically resyncing. You can use the "metastat" command to check out the progress of your syncing slices.
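Like the metainit step, the attach commands can be generated with a quick dry-run loop (echoing, not executing), using the same example metadevice numbering as above:

```shell
# Dry-run generator for the metattach step:
for s in 0 1 3 4 5 6; do
    echo "metattach d$s d2$s"
done
```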

And, oh yeah... I almost forgot this part of the post:

The Zpool way:

1. First you'll want to do exactly what you did with DiskSuite/VolumeManager (since both disks have to be exactly the same). We'll assume you're insanely practical, and will just use this command to make sure your disks are both formatted exactly the same (just like above):

host # prtvtoc /dev/rdsk/c0t0d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2

2. Now we'll need to create a pool, add your disks to it (all slices as one) and mirror them:

host # zpool create mypool mirror c0t0d0 c1t0d0

3. Wait for the mirror to sync up all the slices. You can check the progress with "zpool status POOLNAME" - like:

host # zpool status mypool
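If you'd rather not re-run zpool status by hand, here's a polling sketch. The guard is mine (so the function degrades gracefully on a box without ZFS), and the 'resilver in progress' string it greps for is an assumption worth re-checking against the actual zpool status output on your Solaris release.

```shell
# Poll until zpool status stops reporting a resilver for the pool.
wait_for_resilver() {
    pool="$1"
    if ! command -v zpool >/dev/null 2>&1; then
        echo "zpool not found - nothing to wait for"
        return 0
    fi
    while zpool status "$pool" 2>/dev/null | grep -q 'resilver in progress'; do
        sleep 60
    done
    echo "$pool: no resilver in progress"
}
wait_for_resilver mypool
```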

And that's that. The choice is yours, unless you're still using Solaris 9 or older. This post isn't meant to condemn the SDS/SVM way. It works reliably and is really easy to script out (and when both of these methods are scripted out, they're just as easy to run and the only hassle the old way gets you is the forced reboot).

Basic Root Disk Mirroring With Solaris Volume Manager

Solaris Volume Manager (SVM - used to be called Solaris Disk Suite, or SDS; pretty much only the name changed) is an excellent OS-standard software RAID management tool used by a lot of IT shops. Even shops that use Veritas Volume Manager software for serious storage management still use SVM to take care of the base disks.

The reason most shops utilize SVM to manage the root disks is that it's easier to manage than Veritas for this limited purpose. Another reason is that, in coordination with its ease of use, it's integrated into Solaris. The thinking around this is generally: Why not use Solaris' tool to manage our Solaris OS root disk(s)? If you've ever had to deal with Sun support, you know another reason why using their product on their product is a good thing :)

Setting up RAID groups, and managing them, is generally very simple and doesn't require rebooting the machine, etc., when you want to add a new disk (with some exceptions). One of the exceptions to this rule is when you use SVM to set up management of your root disks.

Managing your root disks with SVM is a two-part process. Generally, it's used for mirroring the root disk for quick failover and is configured right after installation. Configuring it afterward is also easy, but I like to get my necessary reboots completed before I hand over the product to a customer.

The process is fairly simple and is accomplished like this:

1. The two disks that will be used for mirroring have to be formatted "exactly" alike. The easiest way to do this is to partition one root disk and then either use the output from "prtvtoc" to seed "fmthard" or, more simply, use Solaris' "format" command to format the initial disk and then "select" the same layout from the list of available partition setups when formatting the second disk (labeling the first makes its partition table a selectable option!)

2. Meta Databases need to be created on both disks. The first needs to be forced, and any additional do not. We're using slice 7 for our example here:

metadb -a -f /dev/dsk/c0t0d0s7
metadb -a /dev/dsk/c0t1d0s7


3. Next, the metadevices need to be set up. For each slice that we want to mirror, we need to create two stripe-concat metadevices, as such (shown for only two here, but done for every slice we want to mirror - which should be every slice on the disk except slice 2!):

metainit -f d10 1 1 c0t0d0s0
metainit -f d20 1 1 c0t1d0s0

metainit -f d11 1 1 c0t0d0s1
metainit -f d21 1 1 c0t1d0s1


4. Next, the metadevice that will be composed of the two mirrors needs to be created. You can do this by attaching one of the mirrors to the new metadevice. Note that you can normally do this all at once, but for the root disk you can only attach one mirror slice per metadevice at this point, as such:

metainit d0 -m d10
metainit d1 -m d11


5. The root partition is special in that SVM actually has a command to change its value to the metadevice value (d0) in /etc/vfstab. It also adds some information in /etc/system for you. That command is:

metaroot d0

6. Now, sadly enough you'll need to edit your /etc/vfstab to reflect the new metadevices. So for the entry for metadevice d1, you'd change:

/dev/dsk/c0t0d0s1 to /dev/md/dsk/d1
/dev/rdsk/c0t0d0s1 to /dev/md/rdsk/d1

7. Once that's completed and all commands have returned successfully (you can check that all's well by running "metastat -p") you will need to reboot. I would suggest actually shutting your machine down to init level 0 so that you can change your alternate boot device to the secondary (mirror) disk from the default of "net." This way, once you're set up, if your primary disk fails your system will automatically boot off of the mirror.

8. Now to the final step. You need to attach that second mirror slice to your mirror metadevice, as such:

metattach d0 d20
metattach d1 d21


Again, as long as you receive no errors, you should be all set. It will take SVM a little while to sync up the two disks (the actual mirroring process from the first disk to the second). You can check on the status by running "metastat" on its own. I personally prefer to run a command line loop to keep me up to date, like:

while true; do metastat -t | grep -i sync; sleep 60; done

then, when all the sync's are complete, I'll know right away.
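For reference, here's a variant of that loop that exits on its own once nothing is still resyncing. The guard and the 'resync' string it greps metastat output for are my assumptions (double-check the exact wording your metastat prints mid-sync); on a box without SVM it's simply a no-op.

```shell
# Watch the mirror sync and fall out of the loop when it's done.
watch_sync() {
    if ! command -v metastat >/dev/null 2>&1; then
        echo "metastat not found - nothing to watch"
        return 0
    fi
    while metastat 2>/dev/null | grep -qi 'resync'; do
        metastat -t | grep -i sync
        sleep 60
    done
    echo "all mirrors in sync"
}
watch_sync
```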

How to Check the State of Your DNS Setup Externally!

One of the areas that seems to require looking over most consistently is your DNS setup. New versions are released regularly, bugs are found just as regularly, and acceptable syntax can sometimes change between releases and/or RFCs.

Note for today's script - it is intended only for your benefit, and I strongly urge you to use any free DNS reporting service you can find to accomplish what we're accomplishing here. I neither work for, nor do any affiliate marketing for, www.dnsstuff.com. We only use it where I work because it's the standard. Probably, most readers already know about it. The site tools require a free registration, if you're not a paying member, but also require payment after 30 DNS Reports. I have placed a very obvious "COMMENT" in the script above the only line you'd need to change to use another service. If you "do" find something equal, or better, I would recommend that you use it (and, maybe, send me an email to let me know the URL so I can check it out and update my scripts, too :) That being said, I'm not trashing them, either. If you have to do this sort of thing a lot (for your employer, let's say), the company can probably shell out a few bucks a month for the service. It's worth the price if you need to use it regularly.

Aside from your own internal auditing, it's good to get a fair and objective third-party assessment of the state of your DNS setup. A great site to get this accomplished on the web is located at http://www.dnsstuff.com. They have a tool called "DNS Report" (which used to be its own domain - www.dnsreport.com) which can be used very effectively, even if you choose not to be a paying member of the site. A long long time ago, it was available to everyone for free, but since it's pay-for now, you really have to be careful not to deluge it with requests to check your DNS zones (all 30 of them, if you're doing this for free), or you'll get dumped on their blacklist and barred from using the service at all.

The little Perl script I've written below will help you to automate the usage of that web service for all of your DNS zones. The reports are nice and easy to understand, even for the highest of higher-ups (green = good, red = bad ;) and the service does a very good job of pointing out weaknesses, or areas that don't conform to current RFC expectations, in your DNS zones. This script does require that you have Perl installed (although it's just a script I whipped up, so it's submitted under the GPL (GNU General Public License), at best, and you can feel free to rewrite it to suit your own needs). It also makes use of curl. You can substitute lynx or wget or any other program that can download and save web pages to your Unix/Linux hard drive. And here it is, below (please read the comments regarding use of dnsstuff's dnsreport tool):


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/usr/bin/perl

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#
# Registration is now required to use this tool - It used to be free.
# When/If you register, be sure to uncheck the two opt-in mail
# checkboxes at the bottom unless you want to receive those emails.
#
# Login to dnsstuff.com before running this script or it will just return
# a page indicating that you need to log in before submitting a full request
# without that request being linked to from one of their pages
#
# Simple Error Checking here, just want to be sure that a file exists at all.
# Disregard everything else on the command line. This script will fail if the
# file named on the command line doesn't include one URL per line, anyway :)

if ( ! -f $ARGV[0] ) {
    print "Usage: $0 fileWithUrls\n";
    exit(1);
}
$dns_file = $ARGV[0];

open(URL_FILE, "< $dns_file") or die "Can't open $dns_file: $!\n";
@url_file = <URL_FILE>;
close(URL_FILE);
$counter = 120;

foreach $url (@url_file) {
    print "$url";
    $| = 1;
    if ($pid = fork) {
        print "next\n";
    } else {
        close (STDOUT);
        chomp($url);
        #
        # COMMENT <--- The Obvious One :)
        # CHANGE THE URL ON THIS LINE TO THE URL OF ANY OTHER SITE YOU FIND THAT
        # PROVIDES AN EQUAL SERVICE!
        #
        @output = `curl http://www.dnsstuff.com/tools/dnsreport.ch?domain=$url`;
        open(OUTPUT, ">$url.htm");
        print OUTPUT @output;
        close(OUTPUT);
        exit(0);
    }
    # Parent: back off a little longer between requests each time,
    # wrapping back to 120 seconds once we hit the 900-second cap.
    if ( $counter >= 900 ) {
        $counter = 120;
    } else {
        $counter = $counter + 60;
    }
    sleep $counter;
}
exit(0);


And that's that! As I mentioned before, this tool will blacklist you if you hit dnsstuff too hard. My script assumes that you can hit the site at the rate I used last time I used it. Since it's a pay-for service now, I would recommend changing the wait times to at least twice as much. To give you an idea, I ran this when the service was free and ended up getting blacklisted in under a few hours. Granted, there were certain other factors that contributed to my getting the boot; most prominently that I was checking the DNS for about 300 to 400 zones we were hosting. It could have been left at checking one and making sure all the others were the same, but, as noted above, some bosses like to see reports with lots of colors and lots of pages.

Also, just so you don't feel like I'm leaving you in the lurch: if you do happen to get blacklisted, just go to their member forums (you get access to post to these since you had to do the free registration - you can only browse them if you're not registered) at http://member.dnsstuff.com/forums/ and do a search on "banned," "blacklist" or "black list." Most folks just have to start a thread and request their access back in order to get off the blacklist.

How to find out your NIC's speed and duplex on Solaris

If there's ever an issue with networking, you need to be able to confidently say that your NIC is up at 1Gb full duplex (or whatever your network admin insists).

The way to check this has changed somewhat in Solaris 10, but the old way to check is still available, although not totally reliable.

For instance, you can use "ndd" in all flavors of Solaris (at least from 2.6 up) to get information from /dev/hme (or whatever your NIC's device driver is). Generally, you would look at the speed and duplex settings using the following commands (with slight variations depending on the NIC - e.g. 100Mb hme's don't have values for the 1000Mb queries).

The following commands are pretty useful, and non-destructive, for any device driver, even though you'll get errors for all the stuff that isn't supported:

/usr/sbin/ndd -set /dev/ce instance 0
/usr/sbin/ndd -get /dev/ce adv_1000fdx_cap
/usr/sbin/ndd -get /dev/ce adv_1000hdx_cap
/usr/sbin/ndd -get /dev/ce adv_100fdx_cap
/usr/sbin/ndd -get /dev/ce adv_100hdx_cap
/usr/sbin/ndd -get /dev/ce adv_10fdx_cap
/usr/sbin/ndd -get /dev/ce adv_10hdx_cap
/usr/sbin/ndd -get /dev/ce adv_autoneg_cap


Of course, replace the "/dev/ce" with your particular driver. The only downside to this hack-and-slash method is that you may see 1's (indicating that the parameter is set) rather than 0's (indicating that the parameter is not set) in more than one place (like in adv_1000fdx_cap, adv_100hdx_cap and adv_autoneg_cap all at once ???)

The best way to do it, in my experience is to use either "netstat -k" (In Solaris up to, and including, version 9) or "kstat -p."

In Solaris 9, assuming the same NIC driver and instance "ce0," you can do the following to find out the status of your NIC:

netstat -k ce|grep 0|egrep 'link_speed|link_dupl'

On Solaris 10, you'd do this:

kstat -p|grep ce0|egrep 'speed|dupl'

Basically, the speed should be 1000 for Gb, 100 for 100Mb, etc. Your duplex is represented numerically as either 0, 1 or 2.

0 = No Duplex (or Down)
1 = Half Duplex
2 = Full Duplex
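If you want those numeric values decoded into something readable on the spot, a small awk filter does it; this is just a sketch, and the sample lines below stand in for real `kstat -p` output from the hypothetical ce0 interface above:

```shell
#!/bin/sh
# Decode link_speed / link_duplex pairs from "kstat -p"-style output.
# The sample stands in for: kstat -p | grep ce0 | egrep 'speed|dupl'
sample=$(printf 'ce:0:ce0:link_speed\t1000\nce:0:ce0:link_duplex\t2\n')

decode_link() {
    awk '/link_speed/  { printf "speed: %s Mb\n", $2 }
         /link_duplex/ { d = ($2 == 2) ? "full" : ($2 == 1) ? "half" : "down"
                         printf "duplex: %s\n", d }'
}

echo "$sample" | decode_link
```

On a live Solaris 10 box you would pipe the real kstat output through decode_link instead of the sample variable.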

Linux Basic for Ease of Use and Management of a Hosted Website: Getting Started!

These are both links to the first installment: Getting Started.

You can go directly to them through the links below. Hopefully, you will find it informative and, at least somewhat, helpful.

Getting Started on goarticles.com

Getting Started on ezinearticles.com

ZFS Internals

Max Bruning wrote an excellent paper on how to examine the internals of a ZFS data structure. (Look for the article on the ZFS On-Disk Data Walk.) The structure is defined in ZFS On-Disk Specification.
Some key structures:
  • uberblock_t: The starting point when examining a ZFS file system. A 128 KB array of 1 KB uberblock_t structures, starting at offset 0x20000 within a vdev label. Defined in uts/common/fs/zfs/sys/uberblock_impl.h. Only one uberblock is active at a time; the active uberblock can be found with
    zdb -uuu zpool-name
  • blkptr_t: Locates, describes, and verifies blocks on a disk. Defined in uts/common/fs/zfs/sys/spa.h.
  • dnode_phys_t: Describes an object. Defined by uts/common/fs/zfs/sys/dmu.h
  • objset_phys_t: Describes a group of objects. Defined by uts/common/fs/zfs/sys/dmu_objset.h
  • ZAP Objects: Blocks containing name/value pair attributes. ZAP stands for ZFS Attribute Processor. Defined by uts/common/fs/zfs/sys/zap_leaf.h
  • Bonus Buffer Objects:
    • dsl_dir_phys_t: Contained in a DSL directory dnode_phys_t; contains object ID for a DSL dataset dnode_phys_t
    • dsl_dataset_phys_t: Contained in a DSL dataset dnode_phys_t; contains a blkptr_t pointing indirectly at a second array of dnode_phys_t for objects within a ZFS file system.
    • znode_phys_t: In the bonus buffer of dnode_phys_t structures for files and directories; contains attributes of the file or directory. Similar to a UFS inode in a ZFS context.

NIS Master Server Configurations

NIS Master Server Config

NIS maps are located in the /var/yp/domainname directory (where domainname is the name of the NIS domain). There are two files (a .pag and a .dir file) for each map in this directory, e.g.
/var/yp/training/hosts.byname.pag file
/var/yp/training/hosts.byname.dir file
/var/yp/training/hosts.byaddr.pag file
/var/yp/training/hosts.byaddr.dir file

The naming convention for the NIS map files is map.key.pag and map.key.dir

ypcat [-k] mname -- Retrieves values from an NIS name service map; mname can be either a
map name or a map nickname
# ypcat hosts
localhost 127.0.0.1 localhost
sysprint 192.168.30.70 sysprint
sys44 192.168.30.44 sys44 loghost

ypmatch [-k] value mname -- Prints values associated with one or more keys from the NIS
name service map specified by the mname argument.
# ypmatch sys44 hosts
sys44: 192.168.30.44 sys44 loghost
# ypmatch usera passwd
usera: usera:LojyTdiQev5i:3001:10::/export/home/usera:/bin/ksh
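Since ypmatch returns a standard passwd-format line, you can split its fields with awk; here is a sketch using the sample entry from the output above (on a live client you might strip the "usera: " prefix first, e.g. with cut):

```shell
#!/bin/sh
# Split a passwd-format line such as the one ypmatch returned above.
# Sample value copied from the example output; pipe in real ypmatch
# output (minus the "key: " prefix) on a live NIS client.
entry='usera:LojyTdiQev5i:3001:10::/export/home/usera:/bin/ksh'

echo "$entry" | awk -F: '{ printf "user=%s uid=%s gid=%s home=%s shell=%s\n",
                           $1, $3, $4, $6, $7 }'
```

This prints user=usera uid=3001 gid=10 home=/export/home/usera shell=/bin/ksh.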

NIS Domain Contains
One NIS Master Server
NIS Slave Servers (Optional)
NIS Clients

The NIS Master Server

Contains the original /etc/ASCII files used to build the NIS maps
Contains the NIS maps generated from the ASCII files
Provides a single point of control for the entire NIS domain

NIS Slave Servers

Do not contain the original /etc/ASCII files
Contain copies of the NIS maps from the NIS Master Server
Provide a backup repository for NIS map information
Provide redundancy in case of server failure
Provide load sharing on large networks

NIS Clients

Do not contain original /etc/ASCII files
Do not contain any NIS maps
Bind to the master server or to a Slave Server to obtain access to the administrative file information contained in that server’s NIS maps
Dynamically rebind to another server in case of server failure
Make all appropriate system calls aware of NIS

NIS Processes

The main daemons involved in the running of an NIS domain are
The ypserv daemon -- Responds to client information requests
The ypbind daemon -- Client to server binding
The rpc.yppasswd daemon -- Password change update in master server
The ypxfrd daemon -- Push the map to slave servers (sync)
The rpc.ypupdated daemon -- Updates NIS maps using the config stored in /var/yp/updates

The NIS Slave Server runs the ypserv and ypbind daemons

The NIS Clients run only the ypbind daemon

The three most common search orders are
Search files and then NIS
Search NIS and then files
Forward hosts lookup requests from NIS to DNS



Introducing NIS Security

Use the /var/yp/securenets file to restrict access to a single host or to a subnetwork, and the passwd.adjunct file to limit access to the password information across the network.

The /var/yp/securenets File

If this file exists on an NIS server, the server only answers queries or supplies maps to hosts and networks whose IP addresses exist in the file. The server must be part of a listed subnet to access itself.
# cat /var/yp/securenets
# Two methods of giving access to a system. Using the netmask followed by the IP Address
# or host keyword followed by the IP Address
host 127.0.0.1
255.255.255.0 150.10.1.0
host 13.13.14.1
host 13.13.14.2
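Because a malformed securenets line can lock clients out, a quick format check helps before restarting the daemons. This is only a sketch (not an official tool) that accepts the two documented forms, "host IP" and "netmask network":

```shell
#!/bin/sh
# Rough format check for /var/yp/securenets-style input: each
# non-comment line should be "host <ip>" or "<netmask> <network>".
check_securenets() {
    awk '/^#/ || NF == 0 { next }
         NF == 2 && ($1 == "host" || $1 ~ /^[0-9.]+$/) { ok++; next }
         { bad++ }
         END { printf "ok=%d bad=%d\n", ok + 0, bad + 0 }'
}

printf '%s\n' 'host 127.0.0.1' '255.255.255.0 150.10.1.0' 'oops' |
    check_securenets
```

The sample run reports ok=2 bad=1, flagging the bogus third line; on a real server you would feed it /var/yp/securenets itself.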

If you modify entries in the /var/yp/securenets file, you must kill and restart the ypserv and ypxfrd daemons:
# /usr/lib/netsvc/yp/ypstop (then) /usr/lib/netsvc/yp/ypstart

The passwd.adjunct File

Encrypted passwords are normally hidden from the user in the /etc/shadow file. With the default NIS configuration, however, the encrypted password string is shown as part of the passwd maps. The passwd.adjunct file prevents unauthorized users from seeing the encrypted passwords.
# ypmatch -k usera passwd
usera: usera:LojyTdiQev512:3001:10::/export/home/usera:/bin/ksh

The passwd.adjunct file contains the account name preceded by ## in the password field. Subsequent attempts to gain account info, using the ypcat or ypmatch commands, return the password entry from the passwd.adjunct file.
# ypmatch -k usera passwd
usera: usera:##usera:3001:10::/export/home/usera:/bin/ksh

Configuring NIS Domain

To locate the source files in another directory, modify the /var/yp/Makefile file:
Change the DIR=/etc line to DIR=/your-choice
Change the PWDIR=/etc line to PWDIR=/your-choice

Before you make any modification to the /var/yp/Makefile, save a copy of the original Makefile file.

The NIS configuration script /usr/sbin/ypinit and the make utility generate NIS maps. The ypinit command reads the Makefile for source file locations and converts ASCII source files into NIS maps. The /etc/defaultdomain file sets the NIS domain name during system boot.

Important files on the NIS Master (Part 1) -- hosts, passwd & shadow

Important files on the NIS Master (Part 2)
The /var/yp/domainname directory is the repository for the NIS maps created by the ypinit script.
The /var/yp/binding/domainname directory contains the ypservers file where the names of NIS Master server and NIS Slave server are stored.

Important files on the NIS Master (Part 3) -- The /usr/lib/netsvc/yp directory contains the ypstop and ypstart commands that stop and start NIS services, respectively

# /usr/sbin/ypinit -m -- This command prompts for a list of other machines to become NIS
slave servers.


Configuring the NIS Master Server

The Core, End User, and Developer software configuration clusters do not have all the necessary files in the /usr/lib/netsvc/yp directory to allow a host to function as an NIS server.

1. Copy the /etc/nsswitch.nis file to the /etc/nsswitch.conf file. If necessary, modify the file
2. Enter the domainname command to set the local NIS domain
# domainname classroom.central.sun.com
3. Create an /etc/defaultdomain file with the domain name
4. If the files do not already exist, use the touch command to create zero-length files.
/etc/ethers, /etc/bootparams, /etc/locale, /etc/timezone, /etc/netgroup and /etc/netmasks.
These files are necessary for the creation of the complete set of NIS maps.
5. Install and update Makefile file in the /var/yp directory.
6. Create or populate the /etc/locale file, and make an entry for each domain on your network
using the following format:
domainname locale, e.g. classroom.central.sun.com en_US
7. Initialize the master server by using the local /etc files
# ypinit –m -- Provide slave server names and Ctrl+D to save the details. Press n for
“Terminate it on the first fatal error”
Note: If you have to restart the ypinit program, you are prompted to destroy the
/var/yp/domainname directory. Answer y
8. # /usr/lib/netsvc/yp/ypstart
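Steps 3 and 4 above are easy to script. Here is a minimal sketch; the ROOT variable and its /tmp default are my own additions so you can rehearse in a scratch directory before touching the live /etc on a real master:

```shell
#!/bin/sh
# Sketch of steps 3-4: write the defaultdomain file and create the
# zero-length source files the NIS maps need. ROOT defaults to a
# scratch directory; set ROOT= (empty) only when you mean the real /etc.
ROOT="${ROOT:-/tmp/nis-rehearsal}"
DOMAIN="${DOMAIN:-classroom.central.sun.com}"

mkdir -p "$ROOT/etc"
echo "$DOMAIN" > "$ROOT/etc/defaultdomain"

# Zero-length files needed for the complete set of NIS maps (step 4).
for f in ethers bootparams locale timezone netgroup netmasks; do
    [ -f "$ROOT/etc/$f" ] || touch "$ROOT/etc/$f"
done
ls "$ROOT/etc"
```

Run it once with the default scratch ROOT, eyeball the result, then run it with ROOT= on the master itself.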

Testing the NIS Service

$ ypcat hosts -- Prints value from an NIS map
# ypmatch sys41 localhost hosts
192.168.30.41 sys41
127.0.0.1 localhost loghost
$ ypwhich -- To identify the master server
sys41

Configure the NIS Client

1. Copy the /etc/nsswitch.nis file to the /etc/nsswitch.conf file
2. Edit the /etc/inet/hosts file to ensure NIS master and slave servers have been defined.
3. # domainname domainname -- To set the local NIS domain
4. Create and populate the /etc/defaultdomain file with the domain name
5. # ypinit –c -- To initialize the system as an NIS client
6. Enter the names of the NIS Master and all Slave Servers
7. # /usr/lib/netsvc/yp/ypstart
8. # ypwhich –m -- To test the functionality



Configuring NIS Slave Server

Follow the client configuration steps and perform the below command
# ypinit -s master -- Command to initialize the system as an NIS slave server, where master is the name of the NIS master. Then start the service and test the functionality.

Updating the NIS Map

1. Update the text files in your source directory (typically /etc, unless it was changed in the Makefile file)
2. # cd /var/yp
3. # /usr/ccs/bin/make -- Refresh the NIS database maps using the make utility

Updating NIS Password Map

If the NIS master is running the rpc.yppasswdd daemon, any client system can update the NIS password map by using the yppasswd or passwd commands.
1. Run the rpc.yppasswdd daemon on the NIS master server
# /usr/lib/netsvc/yp/rpc.yppasswdd $PWDIR/passwd -m passwd

Updating the NIS Slave Server Map

The following steps manually update the NIS timezone map on the master server and propagate all maps to the slave servers
1. Edit the source file on the NIS Master
# vi /etc/timezone
2. Remake and push the NIS maps to slave servers
# cd /var/yp; /usr/ccs/bin/make
3. If the push fails, manually pull only the timezone map from the master server by
running the following command on the slave server:
# /usr/lib/netsvc/yp/ypxfr timezone.byname
# ypinit –s nis_master -- To pull all of the maps at once

Sometimes maps fail to propagate, and you must manually use the ypxfr command to retrieve new map information. You can use shell scripts run as cron jobs for automatic updates. The Solaris OE provides several template scripts in the /usr/lib/netsvc/yp directory that you can use and modify to meet your local site requirements.

ypxfr_1perhour script -- To sync the NIS slave servers' passwd map
ypxfr_1perday script -- To sync the NIS slave servers' maps for the group, protocols,
networks, services, and ypservers keys
ypxfr_2perday script -- To sync the NIS slave servers' maps for the hosts, ethers, netgroup keys, and mail aliases
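To have cron drive these on a slave server, root's crontab might look something like the fragment below. The schedule times are arbitrary examples of mine, not Sun's defaults; pick times that suit your site:

```
# Hypothetical root crontab entries (crontab -e) driving the template
# scripts above; adjust the schedule to your site's needs.
0 * * * *     /usr/lib/netsvc/yp/ypxfr_1perhour
30 1 * * *    /usr/lib/netsvc/yp/ypxfr_1perday
15 1,13 * * * /usr/lib/netsvc/yp/ypxfr_2perday
```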


Quick Reference

# domainname digit.com -- Create domain name

# domainname > /etc/defaultdomain -- Creating domainname file

# cp /etc/nsswitch.nis /etc/nsswitch.conf

# /var/yp/Makefile -- Config file

Makefile 4 parts

1 – Declaration
2 – Details of centralization
3 – Coding for mapping
4 – Declaration of original path

# cd /var/yp

# ypinit –m -- Initializing the master server
# ypinit –s -- Initializing the slave server
# ypinit –c -- Initializing the client
Ctrl+D -- To save the file
Is this correct? [y/n] y
Non fatal error [y/n] n

 If there is any error follow the below procedure

# cd /etc
# touch ethers bootparams netgroup netmasks timezone
# cd /var/yp
# ypinit –m
# /usr/lib/netsvc/yp/ypstart -- To start the daemons

# ypwhich -- Shows the map server details
Solaris

# ypwhich –m -- Full details of map

 A directory will be created with domain name

# cd /var/yp/digit.com -- Contains all config file with .pag & .dir extensions

# ypcat mapname -- To read a map

# ypcat –k passwd -- With arguments print keys as well as values

# ypmatch –k root passwd

Solaris Zones configuration and set up

Solaris Zones Features :-

1.Virtualization, like VMware
2.Solaris Zones can host only instances of Solaris, not other OSes
3.Limit of 8192 zones per Solaris host
4.The primary (global) zone has access to all zones
5.Non-global zones do not have access to other non-global zones
6.By default, non-global zones derive packages from the global zone
7.Program isolation, e.g. zone1 for Apache, zone2 for MySQL, zone3 for databases
8.Provides 'z' commands to manage zones: zlogin, zonecfg, zoneadm, zonename

Features of Global Zone

1.Solaris always boots (cold/warm) to the global zone
2.Knows about all hardware devices attached to the system
3.Knows about all non-global zones

Features of Non-Global Zones.

1.Installed at a location on the filesystem of the global zone, the 'zone root path',
e.g. /export/home/zones/zone1 (zone2, zone3, ...); this acts as the root directory for the zone
2.Share packages with the global zone
3.Manage distinct hostname and table files
4.Cannot communicate with other non-global zones by default; a NIC must be used, which means using the standard network API (TCP)
5.The global zone admin can delegate non-global zone administration

Zones Commands example :-

#which zonename - to check if your OS has the zonename command
/usr/bin/zonename

#zonename - by default shows the global zone name
global

#z<Tab><Tab> - shell completion lists the available 'z' commands

Zone Configuration.

#zonecfg - to configure zones

note - zonecfg can run in interactive, non-interactive, or command-file modes

Requirements for non-global zones:
1.Hostname of the zone
2.Zone root path, i.e. /export/home/zones/testzone1
3.IP address - bound to logical or physical interfaces

Zones Types:-

1.Sparse root zones - share key files with the global zone
2.Whole root zones - require more storage

#df -k - select a slice which has enough space; for example, say /export/home has 5GB free

Steps for Configuring non-global-zone:


1.mkdir /export/home/zones/testzone1
2.chmod 700 /export/home/zones/testzone1 - to restrict access by global zone users
3.ls -ltr /export/home/zones

4.#zonecfg -z testzone1
'No such zone configured; use create to begin configuring a new zone' - this message appears the first time you configure a zone
>create - to create the zone
>set zonepath=/export/home/zones/testzone1 - this is the root path for the zone
>add net
>set address=192.168.1.0 - IP address (use a valid host address for your network)
>set physical=e1000g0 - physical name of the network card; check with 'ifconfig -a'
>end - once you are done with the net parameters, type end to leave the resource scope
>info - to see what we have set
>set autoboot=true - the zone will be started automatically when the system boots
>info
>add attr - to add some extra parameters
attr>set name=comment
attr>set type=string
attr>set value=TestZone1
attr>end
>verify - verify the configuration; if there is an error, check the parameters again
>commit - commit the changes
>exit
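The same session can be replayed non-interactively: zonecfg's command-file mode takes the identical subcommands from a file. A sketch (the file name, IP address, and NIC are placeholders for your own values):

```
# testzone1.cfg - feed to zonecfg with: zonecfg -z testzone1 -f testzone1.cfg
create
set zonepath=/export/home/zones/testzone1
set autoboot=true
add net
set address=192.168.1.10
set physical=e1000g0
end
verify
commit
```

This is handy when you need to stamp out several similar zones reproducibly.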

#zoneadm list -iv - to list zones

#zoneadm -z testzone1 install

Zone testzone1 is now in the 'installed' state, which is not ready for production, so we have to get it into the ready state now

#zoneadm list -iv - you can see testzone1 still has no ID assigned, unlike the global zone, so now

#zoneadm -z testzone1 boot - boots the zone, changing its state from installed to running

Put simply, we are starting testzone1

#zoneadm list -iv - now you can see an id is assigned and status is running.

#ps -ef | grep z
zoneadmd -z testzone1 - this process is responsible for running this zone

zlogin - is used to login to zones
Note - each non-global zone maintains a console; use 'zlogin -C testzone1' to access that zone.

Note - zlogin permits login to a non-global zone via the following modes
1.Interactive - i.e. zlogin -l username zonename
2.Non-interactive - zlogin options command
3.Console mode - zlogin -C zonename
4.Safe mode - zlogin -S zonename

#zoneadm list -iv

#zlogin -C testzone1
select a language - 0 (English)
VT100 - terminal
testzone1 - press F2
Configure Kerberos - yes
name service - nis

From here the procedure is the same as a Solaris installation, so specify all the details as required,
such as DNS names, NIS service locations, and so on.

# log in with the root user and password
#zonename
testzone1

#zoneadm list -iv - shows all zones global and non - global

# once you are in testzone1, check the /etc/passwd file; you can see the system users but not the users of the main system

#netstat -anp tcp

#zoneadm -z testzone1 reboot - reboots the zone

#zlogin testzone1 shutdown - to shutdown the zone

Once zones are created, you can ssh or telnet from a remote machine to connect to that zone

How to Configure Name Service Clients

Configuring a DNS Client
The client resolver code is controlled by the following files
/etc/resolv.conf -- Contains directives that specify the scope of a query
/etc/nsswitch.conf -- Contains the reference to DNS for the hosts entry

Configuring the DNS Client During Installation

-- Select DNS -- Give Domain Name -- Enter IP Address -- Enter search Domains -- Confirm

Editing DNS Client Configuration Files

# vi /etc/resolv.conf
domain digigeeks.com
nameserver 140.40.40.152
search digigeeks.com -- List the local domain as the first argument to the search

Copying the /etc/nsswitch.dns File to the /etc/nsswitch.conf

# cp /etc/nsswitch.dns /etc/nsswitch.conf
# cat /etc/nsswitch.conf
………
hosts: files dns
……..

If you want to add DNS name resolution to a system currently running a name service, such as NIS or NIS+, you must place the dns keyword in the correct position on the hosts line, along with the other keywords.

# cat /etc/nsswitch.conf
…..
hosts: nis files dns
…..
Setting up an LDAP Client

The LDAP server cannot be a client of itself. Getting this configuration to work properly requires changes to the LDAP server and the LDAP client. The ldap_cachemgr daemon is responsible for maintaining and updating the changes to the client profile information.

Configuring LDAP Client During Installation

-- Select LDAP -- Enter Domain Name -- Enter Profile Name & Profile Server IP Address -- Confirm

Initializing the Native LDAP Client


You execute the ldapclient command on the client system once to initialize the client as a native LDAP client. The ldapclient command creates two files in the /var/ldap directory on the LDAP client. These files contain info that the LDAP client uses when binding to and accessing LDAP data.
/var/ldap/ldap_client_cred -- The proxy agent info that the client uses for LDAP authentication
/var/ldap/ldap_client_file -- The config info from the client profile in the LDAP server DB

# ldapclient init -a proxyPassword=proxy -a proxyDN=cn=proxyagent,ou=profile,dc=suned,dc=com -a domainName=suned.com 192.168.0.100

# ldapclient list

Copying the /etc/nsswitch.ldap to the /etc/nsswitch.conf

During LDAP client initialization the /etc/nsswitch.ldap file is copied over the /etc/nsswitch.conf file

# ldaplist -- To list naming info from LDAP server

# ldapclient uninit -- Unconfiguring LDAP Client

Jump start and Boot Only Server

Four Main Services - Boot Services, Identification Services, Configuration Services, Installation Services

Implementing a Basic Jumpstart Server

1. Spool the OS image
2. Edit the sysidcfg file
3. Edit the rules and profile files
4. Run the check script
5. Run the add_install_client scripts
6. Boot the client


# mkdir /export/config
# mkdir -p /export/home/sol_dump
# cd /cdrom/cdrom0/s0/Solaris_9/Misc/Jumpstart_sample/
# cp –r * /export/config/
# cd /cdrom/cdrom0/s0/Solaris_9/Tools
# ./setup_install_server /export/home/sol_dump -- Copying solaris dump to local directory
# cd /cdrom/cdrom0/Solaris_9/Tools/
# ./add_to_install_server /export/home/sol_dump -- Appending 2nd CD content
# cd /etc

# vi ethers
8:0:20:a6:aa:2b ultra5 (hostname)

# vi /etc/hosts
140.40.40.154 ultra5

# vi /etc/timezone
Asia/Calcutta ultra5

# cd /export/config/

# vi rules
hostname ultra5 - host_class finish_script

- Pre Install script
host_class -- Config details like partition
finish_script -- Post install scripts


# vi host_class
install_type initial_install
system_type standalone
partitioning explicit
cluster SUNWCXall
filesys c0t0d0s0 10000 /
filesys c0t0d0s1 550 swap
filesys c0t0d0s7 free /export/home

# vi finish_script
touch /a/noautoshutdown
rm /a/etc/defaultdomain
rm –r /a/var/yp/digit.com
cp /a/etc/nsswitch.files /a/etc/nsswitch.conf

# vi sysidcfg -- System identification & configuration. Timezone can also be given here
security_policy=none
name_service=none
network_interface=primary {netmask=255.255.0.0 protocol_ipv6=no}
timezone=Asia/Calcutta
system_locale=en_US

-- Time zone are listed in the directory structure below the /usr/share/lib/zoneinfo directory.
-- Locales are listed in the /usr/lib/locale directory

# chmod 755 finish_script
# ./check -- To check the config

# vi /etc/dfs/dfstab
share -o anon=0 /export/home/sol_dump
share -o anon=0 /export/config

# cd /var/yp
# /usr/ccs/bin/make
# cd /export/home/sol_dump/Solaris_9/Tools
# ./add_install_client -c 140.40.40.151:/export/config -p 140.40.40.151:/export/config ultra5 sun4u (ultra5 is the client hostname)
# Update the NIS maps with the make command

From Client

ok boot net - install -- Will search the network and start the installation automatically

-- Before a Jumpstart client can boot and obtain all of the NFS resources it requires, every directory listed as an argument to the add_install_client script must be shared by the server on which it resides.


Setting Up a Boot-Only Server

A boot server responds to RARP, TFTP, and bootparams requests from jumpstart clients and provides a boot image using the NFS service.
1. Run the setup_install_server script with the -b option to spool a boot image from CD-ROM or DVD
2. Run the add_install_client script with options and arguments that describe the servers and the identification, configuration, and installation services that they provide.

Executing the setup_install_server script
# mkdir /export/install
# cd /cdrom/cdrom0/s0/Solaris_9/Tools
# ./setup_install_server –b /export/install
Executing the add_install_client script
Before you run the script, update the hosts and ethers information for the jumpstart client

/etc/inet/hosts
192.10.10.4 client1

/etc/ethers
8:0:20:9c:88:5b client1

The boot server must have entry in /etc/inet/hosts file for each server you specify while you run add_install_client script.
# cd /export/install/Solaris_9/Tools
# ./add_install_client –c server1:/export/config –p server1:/export/config client1 sun4u

Name Services / Using

/etc/rc2.d/S72inetsvc script -- Starts DNS during system boot.
/etc/rc2.d/S71rpc script -- Starts NIS & NIS+ during system boot
/etc/rc2.d/S72directory script -- Starts iPlanet Server during system boot.

Name Services -- DNS, NIS, NIS+, LDAP

The name service switch file determines which name services a system uses to search for information, and in which order the name services are searched. All Solaris OE systems use the /etc/nsswitch.conf file as the name service switch file. The nsswitch.conf file is loaded with the contents of a template file during the installation of the Solaris OE, depending on the name service that is selected.

Name Service Name Service Template
Local Files /etc/nsswitch.files
DNS /etc/nsswitch.dns
NIS /etc/nsswitch.nis
NIS+ /etc/nsswitch.nisplus
LDAP /etc/nsswitch.ldap

Configuring the Name Service Cache Daemon (nscd)

The nscd daemon is a process that provides a cache for the most common name service requests. The /etc/nscd.conf file controls the behavior of the nscd daemon. The nscd daemon provides caching for passwd, group, hosts, ipnodes, exec_attr, prof_attr and user_attr databases. Each line specifies either an attribute and a value or an attribute, a cache name, and a value.

# /etc/init.d/nscd stop (or) start

The getent command provides a generic retrieval interface to search many name service databases. As a system administrator, you can query name service information sources with tools such as the following:
ypcat -- NIS namespace
nslookup -- DNS
ldaplist -- LDAP
But these tools do not consult the nsswitch.conf file, whereas the getent command searches the information sources in the order in which they are configured in the name service switch file. So any error in that file will be revealed by this command.

getent database [key]…..
database -- The name of the database to be examined. This name can be passwd, group, hosts, ipnodes, services, protocols, ethers, networks, or netmasks.

# getent passwd lp
lp:x:71:8:Line Printer Admin:/usr/spool/lp:

# getent group 10
staff::10:

# getent hosts sys44
192.168.38.44 sys44 loghost (loghost will be absent if NIS is searched first)
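Since getent also exists on most other Unix-like systems, the switch-aware behavior is easy to try anywhere. A sketch using the passwd database (root is uid 0 on any system, so both lookups hit the same entry):

```shell
#!/bin/sh
# getent honors the nsswitch.conf source order; the same entry can be
# looked up by name or by numeric key.
getent passwd root | awk -F: '{ printf "name=%s uid=%s\n", $1, $3 }'
getent passwd 0    | awk -F: '{ print $1 }'
```

The first line prints name=root uid=0; the second confirms that the key-based lookup resolves back to root.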

Replace a Disk Drive in solaris

Use this procedure to replace a failed disk drive in a running cluster.
1. Does replacing the disk drive affect any LUN's availability?
If no, proceed to Step 2.

If yes, remove the LUNs from volume management control. For more information,
see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager
documentation.

2. Replace the disk drive in the storage array.
For the procedure about how to replace a disk drive, see the Sun StorEdge
D1000 Storage Guide.

3. Run Health Check to ensure that the new disk drive is not defective.
For the procedure about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID
Manager User's Guide.
4. Does the failed drive belong to a drive group?
If no, proceed to Step 5.
If yes, reconstruction starts automatically. If reconstruction does not start automatically for any reason,
then select Reconstruct from the Manual Recovery application. Do not select Revive. When
reconstruction is complete, skip to Step 6.
5. Fail the new drive, then revive the drive to update DacStore on the drive.
For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User's Guide.
6. If you removed LUNs from volume management control in Step 1, return the LUNs to volume management
control.

How to Configure System Messaging

The syslog system messaging features track system activities and events. You can manually generate log messages by using the logger command. The syslog function, the syslogd daemon, and input from the /etc/syslog.conf file work together to facilitate system messaging for the Solaris 9 OE.

The /etc/syslog.conf file

This file consists of two tab-separated fields: selector and action. The selector field has two components, a facility and a level, written as facility.level. Facilities represent categories of system processes that can generate messages. Levels represent the severity or importance of the message. The action field determines where to send the message.

*.err /var/adm/messages -- Error messages for all facilities are sent to the /var/adm/messages

Only use tabs as white space in the /etc/syslog.conf file. The Solaris OE accesses the /usr/include/sys/syslog.h file to determine the correct facility.level sequencing order.
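Since a space where a tab belongs is the classic syslog.conf mistake, a throwaway check like this can save some head-scratching. This is just a sketch; the two sample lines stand in for the real file:

```shell
#!/bin/sh
# Flag syslog.conf-style lines that separate selector and action with
# spaces instead of the required tab. The first sample line is correct
# (tab-separated); the second uses a space and should be flagged.
conf=$(printf '*.err\t/var/adm/messages\nmail.crit /var/adm/messages\n')

echo "$conf" | awk '/^[^#]/ && NF >= 2 && $0 !~ /\t/ {
    printf "no tab on line %d: %s\n", NR, $0 }'
```

On a live system, replace the sample with: awk '...' /etc/syslog.conf.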

Selector Fields (facility) Options

kern Messages generated by the kernel
user Messages generated by user processes; this is the default facility for messages
daemon System daemon, such as the in.ftpd and the telnetd daemon
auth The authorization system, including the login, su, and ttymon commands
syslog Messages generated internally by the syslogd daemon
lpr The line printer spooling system, such as the lpr and lpc commands
news Files reserved for the USENET network news system
uucp The UNIX to UNIX copy (uucp) system does not use the syslog function
cron The cron and at facilities, including crontab, at, and cron
local0-7 Fields reserved for local use.
mark The time when the message was last saved and produced by the syslogd daemon
* All facilities, except the mark facility.

You can use the asterisk (*) to select all facilities (e.g. *.err); however, you cannot use * to select all levels of a facility (e.g. kern.*)

The levels in descending order of severity
Selector Fields (level) Options
Level Priority Description

emerg 0 Panic conditions that are normally broadcast to all users
alert 1 Conditions that should be corrected immediately
crit 2 Warnings about critical conditions, such as hard device errors
err 3 Errors other than hard device errors
warning 4 Warning messages
notice 5 Non-error conditions that might require special handling
info 6 Informational messages
debug 7 Messages that are normally used only when debugging a program
none 8 Messages are not sent from the indicated facility to the selected file

Not all levels of severity are implemented for all facilities in the same way.


Action Field -- The action field defines where to forward the message. This field can have any one of the following entries

/filename The targeted file
@host The @ sign denotes that messages must be forwarded to a remote host.
Messages are forwarded to the syslogd daemon on the remote host
user1, user2 The user1 and user2 entries receive messages if they are logged in
* All logged in users will receive messages

You must restart the syslogd daemon whenever you make any changes to /etc/syslog.conf file
# /etc/init.d/syslog stop (or) start
# pkill –HUP syslogd

 When syslogd starts, it starts the m4 macro processor, and m4 reads the /etc/syslog.conf file.


Configuring syslog Messaging

The inetd daemon uses the syslog command to record incoming network connection requests made by using TCP. You can modify the behavior of the inetd daemon to log TCP connections by using the syslogd daemon. The daemon facility and the notice message level are supported by inetd.
Use the -t option as an argument to the inetd daemon to enable tracing of TCP services. When you enable the trace option for the inetd daemon, it uses daemon.notice to log the client's IP address and TCP port number, and the name of the service. Add the -t option to the entry that activates the inetd daemon in the inetsvc script located in the /etc/init.d directory

# grep inetd /etc/init.d/inetsvc
/usr/sbin/inetd –s –t -- You must restart the inetd daemon for the new option to take effect

# grep daemon.notice /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit /var/adm/messages

Monitoring a syslog File in Real Time

The tail –f command holds the file open so that you can view messages being written to the file by the syslogd daemon.

# tail –f /var/adm/messages -- Press Ctrl+c to exit


Adding One-Line Entries to a System Log File

logger [-i] [-f file] [-p priority] [-t tag] [message] (-i logs the PID)

# logger system rebooted -- If the user.notice field is configured in the /etc/syslog.conf file, the message is logged to the file designated for the user.notice selector field

# logger –p user.err system rebooted -- Changing the priority of the messages to user.err route the messages to the /var/adm/messages file as indicated in the /etc/syslog.conf file
# logger -i -p 2 "crit"

/dev/sysmsg -- Console

How to Perform Smartcard Authentication

Smartcard Authentication

# /usr/dt/bin/sdtsmartcardadmin & -- To start smartcard console

ATR – Answer To Reset number (unique)

# smartcard –c disable -- Disabling smartcard operation
# smartcard –c admin -- Display the current client and server configuration

# /etc/smartcard/opencard.properties -- Config File

VMware ESX 3.5 Server and Client Installation


VMware ESX 3.5 server and client installation - complete presentation; click the next button and wait for the presentation to download completely.




Solaris commands / VI Editor


Inserting and Appending Text

a - Append text after the cursor
A - Appends text at the end of the line
i - Inserts text before the cursor
I - Inserts text at the beginning of the line
o - Opens a new line below the cursor
O - Opens a new line above the cursor
:r file - Inserts text from another file into the current file

Key Sequence for the VI Editor

h, left arrow, or Backspace Left one character
j or down arrow Down one line
k or up arrow Up one line
l, right arrow or spacebar Right one character
w Forward one word
b Back one word
e To the end of the current word
$ To the end of the line
0 (zero) To the beginning of the line
^ To the first non whitespace character on the line
Return Down to the beginning of the next line
G Goes to the last line of the file
1G Goes to the first line of the file
:n Goes to line n
nG Goes to line n
Ctrl F Pages forward one screen
Ctrl D Scrolls down one half screen
Ctrl B Pages back one screen
Ctrl U Scrolls up one half screen
Ctrl L Refreshes the screen

Editing files using the VI editing commands

R Overwrites or replaces characters to the right of the cursor
C Changes or overwrites characters to the end of the line
s Substitutes a string for the character at the cursor
x Deletes the character at the cursor
dw Deletes a word or part of a word to the right of the cursor
dd Deletes the line containing the cursor
D Deletes from the cursor to the end of the line
:n,nd Deletes lines n through n

Using the Text Changing Commands

u Undoes the previous command
U Undoes all changes to the current line
. Repeats the previous command

Search and Replace Commands

/string Searches forward for the string
?string Searches backward for the string
n Searches for the next occurrence of the string
N Searches for the previous occurrence of the string
:%s/old/new/g Searches for the old string and replaces it with the new string globally
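vi's :%s/old/new/g applies the substitution s/old/new/g to every line of the buffer, which is exactly what sed does to a file by default. A quick way to preview such a substitution outside vi (the file name and its contents are illustrative):

```shell
# A scratch file to substitute in (illustrative contents)
printf 'old line one\nold line two\n' > /tmp/vi-demo.txt

# The non-interactive equivalent of vi's :%s/old/new/g
sed 's/old/new/g' /tmp/vi-demo.txt > /tmp/vi-demo.new
cat /tmp/vi-demo.new
```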

Using the Text Copying and Text Pasting Commands

yy Yanks a copy of a line
p Puts yanked or deleted text under the line containing the cursor
P Puts yanked or deleted text before the line containing the cursor
:n,n co n Copies lines n through n and puts them after line n
:n,n m n Moves lines n through n to line n

Booting process in Solaris

The Solaris boot process can be divided into phases for ease of study. The first phase begins when the machine is switched on and runs at the boot PROM level: the PROM displays an identification banner listing the machine's host ID, serial number, architecture type, memory, and Ethernet address, and then runs a self-test of the machine's subsystems.

The PROM then locates the default boot device and reads the boot program from the boot block, which occupies blocks 1-15 of the boot device. The boot block contains the ufs file system reader required by the next phase of booting.

The ufs file system reader opens the boot device and loads the secondary boot program from /usr/platform/`uname -i`/ufsboot (uname -i expands to the system's platform type).

This secondary boot program loads a platform-specific kernel along with a generic Solaris kernel.

The kernel initializes itself and loads the modules required to mount the root partition and continue the boot process.
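The backquotes around uname -i mean the shell substitutes the platform name into the path before it is used, so each machine loads the ufsboot built for its own platform. A quick illustration (the platform name in the comment is one SPARC example; other machines print other names):

```shell
# The shell expands `uname -i` first, producing a path such as
# /usr/platform/SUNW,Ultra-5_10/ufsboot on a SPARC Ultra 5
echo /usr/platform/`uname -i`/ufsboot
```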

 
 
 
 
Copyright © Sun solaris admin