Sun Solaris E-Book - System Administration


Using Veritas NetBackup To Add A Changed Robot And Drive On The Solaris Unix Command Line

Your old tape robot, along with the drives inside it, has inexplicably gone bad. You've spent hours and exhausted your support contract trying to fix it, but, ultimately, you're left facing the fact that your trusty old steel-and-plastic jukebox just isn't going to come back. Ever. If you're lucky, your warranty allows for replacement of the tape robot (a Tape Loading Device, or TLD) and its internal drives (two, for now, to keep things simple - hcart2 drives, just because). Worst case, you've purchased suitable replacements that match the specifications listed in the previous sentence.

Probably, your /dev/rmt directory is populated, and you may even have some other logical paths on your Solaris system that are no longer valid. Once you've connected your "MaxTape24" TLD (which exists only in my imagination, with its two internal "FACTOTUM-TD2" drives, both working properly according to the on-board diagnostics), you should be able to verify that your system can, at the very least, recognize the TLD and, hopefully, the drives inside it. Assuming all of the equipment is good, and that it's been hooked up (however you like to daisy-chain it) properly, this shouldn't be an issue. You may choose to run:

host # devfsadm -C

before proceeding, to check for new symbolic links that need to be made in your hardware device tree (and, with the -C option, remove ones you no longer need - at the Operating System's discretion, unfortunately), although it may not be necessary.
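If you want a quick, low-tech sanity check on what devfsadm left behind before involving NetBackup at all (just plain ls, nothing product-specific, and assuming NetBackup's sg driver is already installed):

host # ls -l /dev/rmt | head <-- should show fresh symlinks pointing into /devices for the new drives
host # ls /dev/sg <-- the sg target nodes that NetBackup's sgscan reads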

FINDING THE NEW HARDWARE WITH NETBACKUP FIRST: Now, contrary to what it seemed like I was leading into, we're going to try to get NetBackup to do all the OS work for us today (because, if it works, it's f'ing brilliant. Good job. Go home and relax :) Actually, you could probably look at this more as a way of giving NetBackup a good kick in the arse. The kind of kick that makes it stand up and take account of its surroundings ;) A good way to get started is to run the following at the command line (oh yes, there will be no GUI instruction in these posts. If you use the GUI - which is okay - just right-click on the type of thing you want to do something to and select whatever seems to be the most reasonable option from the drop-down menu. ...last word on that :)

host # /usr/openv/volmgr/bin/sgscan <-- I would recommend including /usr/openv/volmgr/bin, /usr/openv/netbackup/bin and /usr/openv/netbackup/bin/admincmd in your PATH variable if you spend a lot of time working with NetBackup at the command line.

/dev/sg/c0t0l0: Disk (/dev/rdsk/c0t0d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t1l0: Disk (/dev/rdsk/c0t1d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t2l0: Tape (/dev/rmt/1): "BMI FACTOTUM-TD2"
/dev/sg/c0t3l0: Cdrom: "Hyundai DV-W28E-R"
/dev/sg/c1t0l0: Changer: "TLDHAUS MaxTape24"
/dev/sg/c1t1l0: Tape (/dev/rmt/0): "BMI FACTOTUM-TD2"


Your output may differ (even if you run this command on the same box, since I faked up the output to protect the guilty ;), but this output is positive. You'll notice that sgscan has picked up a bit more than just your new TLD and its drives, but that's okay. You can see that /dev/rmt/0 and /dev/rmt/1 have been properly mapped to the TLD's internal tape drives, and that the "TLDHAUS MaxTape24" TLD has been properly identified.

Other commands you could use to get basically the same information (or peace of mind) include (but aren't limited to) vmoprcmd, tpconfig and tpautoconf. A few examples, run against the same setup as above, are at the bottom of the post (some whitespace has been clipped to save the virtual trees).

And that's it for today. Tomorrow we'll look at several commands (including some we're using today, but with different options) that can be used to "find" those drives if the system doesn't discover them automatically (the first thing you can try is "devfsadm -C" as noted above, followed by another sgscan).

Until then, enjoy the output and we'll continue on tomorrow. Here are a couple of handy anchor-href's for you, so you don't have to try to figure out where the command you're interested in is hiding out amongst all the flotsam below :)

vmoprcmd
tpconfig -d
tpconfig -dl
tpautoconf -t
tpautoconf -a

How to Locate New Backup Hardware Using Veritas NetBackup On The Solaris Unix Command Line

For some reason (and this hardly ever happens ...not sure which word to emphasize to obtain the maximum sarcastic drippage), after we connected our new Tape Loading Device (TLD, or Tape Robot), and the two drives it contains, to our backup server, NetBackup - and, possibly, the server itself - is failing to recognize the new device(s). Again, we're going to assume that the server, TLD, drives and all other hardware are absolutely fine and that all required connections between the devices are set up properly.

NOTE: Today's post is going to assume that some tried and true methods will get you to "good." Tomorrow's post will look at some other ways to make NetBackup recognize and work with your "known good" (and compliant) setup.

If we take the same direct route to initial discovery that we did yesterday, we'd run the same sgscan (which is, as one reader noted, shorthand for "sgscan all") command initially, like so (pardon the error output. I can't afford to create the situation I want to display so I'm doing it from memory):


host # /usr/openv/volmgr/bin/sgscan
/dev/sg/c0t0l0: Disk (/dev/rdsk/c0t0d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t1l0: Disk (/dev/rdsk/c0t1d0): "SUZUKI MBB2147RCSUN146G"
/dev/sg/c0t2l0: Tape (???): "Unknown"
/dev/sg/c0t3l0: Cdrom: "Hyundai DV-W28E-R"
/dev/sg/c1t0l0: Changer: "Unknown"
/dev/sg/c1t1l0: Tape (???): "Unknown"


Basically, every line that says "Unknown" is where we're interested in looking. The system can't find our TLD or its drives, so now we have to try to discover them ourselves (with and/or without NetBackup) and then come back around and use NetBackup to verify that we're okay. These steps are pretty dry, but if you follow them in a somewhat linear order (skipping some, or doing some before others, if you're comfortable), they should get you where you want to be: fat, happy and with a TLD your backup server recognizes. Okay, maybe not happy ;)

Note:
If you feel uncomfortable about running any of the commands below, please enlist the assistance of someone who is either able to provide guidance (since each case is unique) and/or will get in trouble instead of you if things go to Hell ;) j.k.

And here we go. These steps won't be numbered, so I can't possibly screw that aspect up, but they should be easy to follow, since each command will be separated by space and begin with the "host # " prompt. Some of these commands, as the title of today's post suggests, may not exist on a flavour of Unix or Linux that isn't Solaris.

First, we'll take a look at our device tree. Do the device links listed in sgscan exist? Also, is /dev/rmt populated at all?

host # ls /dev/sg/c0t2l0 /dev/sg/c1t1l0 /dev/sg/c1t0l0 /dev/rmt
/dev/sg/c0t2l0  /dev/sg/c1t0l0  /dev/sg/c1t1l0

/dev/rmt:
0     0cb   0hb   0lb   0mb   0u    1     1cb   1hb   1lb   1mb   1u
0b    0cbn  0hbn  0lbn  0mbn  0ub   1b    1cbn  1hbn  1lbn  1mbn  1ub
0bn   0cn   0hn   0ln   0mn   0ubn  1bn   1cn   1hn   1ln   1mn   1ubn
0c    0h    0l    0m    0n    0un   1c    1h    1l    1m    1n    1un


They appear to be there, but they're probably bad. Let's try devfsadm, all on its lonesome, and check sgscan again (from now on we'll just assume the output is the same as the train wreck we witnessed above, until we get to the end. Hopefully, your journey will come to a close sooner!):

host # devfsadm

If this fails to produce results, you can try to run the same command with the "-C" option to remove stale links that no longer point to a valid physical device path:

host # devfsadm -C

Of course, if you know that you only had two tape drives before (/dev/rmt/0 and 1) and believe sgscan when it says it can't recognize the paths we listed, you can delete all of that stuff and try those two steps again. Sometimes it helps to force Solaris to recreate the dev links:

host # rm /dev/rmt/*
host # devfsadm -C


should be enough, but you can almost certainly do this, as well:

host # rm /dev/rmt/* /dev/sg/c0t2l0 /dev/sg/c1t1l0 /dev/sg/c1t0l0
host # devfsadm -C


Running the "ls /dev/sg/c0t2l0 /dev/sg/c1t1l0 /dev/sg/c1t0l0 /dev/rmt" listed above will, almost always, give you the same results once you've completed these steps.

You might also run this command if you have the drivers installed:

host # cfgadm -al

If you find a section with /dev/rmt1, /dev/rmt0 and the /dev/sg path to your changer in it, and one or more of them are showing as unconfigured (all the sections start with a controller number and a colon - in our setup the output is "c2:xxxx"), you can either specifically configure any of the entries listed behind the controller number, using the entire device name your rmt and changer devices are listed beside, or you can just configure the whole shebang. Why not?:

host # cfgadm -c configure c2

Listing it again with "cfgadm -al" should show all the appropriate devices as "configured." If it doesn't, don't worry. It probably doesn't matter, but it was worth a shot.
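If you'd rather not configure the whole controller, you can target a single attachment point instead. The Ap_Id below is purely an illustration (your own "cfgadm -al" output shows the exact names to use); a tape drive typically shows up as something like controller::rmt/N:

host # cfgadm -al | grep c2 <-- note the exact Ap_Ids listed under your controller
host # cfgadm -c configure c2::rmt/0 <-- hypothetical Ap_Id; substitute the one from your own listing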

Both "tpconfig-d" and "tpconfig -dl" will give you back the same results as sgscan (although formatted differently and limited to the tape and TLD information) if the problem still hasn't resolved. To save space and prevent carpal-knuckle syndrome, full versions of the output of these commands, as run against a working setup, are located at the bottom of yesterday's posts as a series of in-page hyperlinks. The only things that will be different in your execution of:

host # tpconfig -d

and

host # tpconfig -dl

output will be that the drives will usually either show up as DOWN (possibly with an identifier - for us, hcart2 - and a path like /dev/rmt/0) or you will get virtually no output at all ...yeah, I guess that's a "huge" difference :) If you notice that tpconfig returns a listing for you, this is positive, even if it shows your drives as "down." We won't go crazy yet, since we were going to run the next command regardless:

host # vmoprcmd

Now we may get results that show "HOST STATUS" as, hopefully, ACTIVE (good to go!), ACTIVE-DISK (can do local disk backups), ACTIVE-TAPE (can back up to tape but, for some reason, can't back up to local disk), or even DEACTIVATED (either it's off or NetBackup thinks it is) or OFFLINE (same as the last, except substitute offline for off ;) Your drives will also show as either non-existent, UP, UP-TLD, RESTART or DOWN (perhaps a few others, but all of them self-explanatory). As long as the tape drive type (hcart2 for us) is shown, you're on the way.
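If all you want to know is whether the drive type is being reported at all, a throwaway filter over the same command (nothing fancy) will do:

host # vmoprcmd | grep -i hcart2 <-- any hit means NetBackup is at least reporting the drive type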

And the final things we'll try today will be to react to the output produced for the tape drives. If your TLD is still not showing, that's something for tomorrow. If you see your drives in a DOWN state, but correctly identified as the types of drives they are, this will probably do the trick for you:

host # vmoprcmd -up 0
host # vmoprcmd -up 1


for the first (0) and second (1) instance of the drive, listed in the first "Id" column of "tpconfig -d". You can also do this, which is easier (at least for me) to remember, since you can directly map it from the vmoprcmd output without squinting ;)

host # vmoprcmd -upbyname Drive000
host # vmoprcmd -upbyname Drive001


from the vmoprcmd output in the "Drive Name" column, which also happens to be the first column in the "vmoprcmd" output.

When you're done with that, or if your tape drives show as RESTART, do yourself a favor and stop and start NetBackup. You may not get a chance once you let everyone know it's fixed. If you don't have other startup scripts set up, you can use:

host # /usr/openv/netbackup/bin/goodies/netbackup stop

then run:

host # /usr/openv/netbackup/bin/bpps -a

and, if everything is gone (unless you're running the GUI - it's okay not to kill those PIDs), start 'er up again, like so:

host # /usr/openv/netbackup/bin/goodies/netbackup start

and do another "bpps -a" to make sure all of the appropriate daemons are running. Then, just to make yourself feel better, and so you're absolutely sure, do one more "sgscan." All should look as it did in yesterday's post (see link-back above) and you should be all set. At least, you'll be ready to test some backups and pray that your troubles are over ;)
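One last optional check before you walk away: grepping bpps for the Media Manager pieces makes the "is everything back?" question quicker to answer. The daemon names below are the usual suspects (ltid is the device daemon, vmd the volume daemon); the exact list varies a bit between NetBackup versions, so treat this as a sketch:

host # /usr/openv/netbackup/bin/bpps -a | egrep 'ltid|vmd|bpdbm'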

DiskSuite/VolumeManager or Zpool Mirroring On Solaris: Pros and Cons

We'll be using the Solaris DiskSuite set of tools (meta-whathaveyou ;), which was, at one point, renamed Solaris Volume Manager (the change introduced some feature enhancements, but not the kind I was expecting. The name "Volume Manager" has a direct connection in my brain to Veritas, and the improvements weren't about coming closer to working seamlessly with that product).

The somewhat-new way (using the zpool command) won't work - to my knowledge - on any OS release prior to Solaris 10, but with Solaris 8 and 9 reaching end of life in the not-too-distant future, every Solaris sysadmin will have some measure of choice.

With that in mind let's take a look at a simple two disk mirror. We'll look at how to create one and review it in terms of ease-of-implementation and cost (insofar as work is considered expensive if it takes a long time... which leaves one to wonder why I'm not comparing the two methods in terms of time ;)

Both setups will assume that you've already installed your operating system, and all required packages, and that the only task before you is to create a mirror of your root disk and have it available for failover (which it should be by default).

The DiskSuite/VolumeManager Way:

1. Since you just installed your OS, you wouldn't need to check if your disks were mirrored. In the event that you're picking up where someone else left off (and it isn't blatantly obvious - I mean "as usual" ;), you can check the status of your mirror using the metastat command:

host # metastat -p

You'll get errors because nothing is set up. Cool :)

2. The first thing you'll want to do is to ensure that both disks have exactly the same partition table. The same-ness has to be "exact," as in down to the cylinder. If you're off even slightly, you could be causing yourself major headaches. Luckily, it's very easy to make your second (soon to be a mirror) layout exactly the same as your base OS disk. You actually have at least two options:

a. You can run format, select the disk you have the OS installed on, type label (if format tells you the disk isn't labeled), then select your second disk, type partition, type select and pick the number of the label of your original disk. A lot of times these labels will be very generic (especially if you just typed "y" when format asked you to label the disk, or format already did it for you during install) and you may have more than one to choose from. It's simple enough to figure out which one is the right one, though (as long as you remember your partition map from the original disk and have made it sufficiently different from the default 2- or 3-partition layout). Just choose select, pick one, then choose print. If you've got the right one, then type label. Otherwise, repeat until you've gone through all of your selections. One of them has to be it, unless you never labeled your primary disk.

b. You can use two commands (prtvtoc and fmthard) and just get it over with (a quick cross-check of the result follows):

host # prtvtoc /dev/rdsk/c0t0d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2
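And a quick cross-check that the copy took. The header comments in prtvtoc output name the device, so a few comment-line differences are expected; the partition lines themselves should match exactly (this assumes both disks are the same size, per our scenario):

host # prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/src.vtoc
host # prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/dst.vtoc
host # diff /tmp/src.vtoc /tmp/dst.vtoc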

3. Then you'll want to mirror all of your "slices" (or partitions; whatever you want to call them). We'll assume you have six slices set up (s0, s1, s3, s4, s5 and s6) for use, and slice 7 (s7) partitioned with about 5 MB of space. You can probably get away with less; you just need to set this aside for DiskSuite/VolumeManager to be able to keep track of itself (the state database replicas).

First, you'll need to initialize the minimum number of state "databases," set up the mirror devices and add the primary disk slices as the first submirrors in each mirror set (even though, at this point, they're not mirroring anything, nor are they mirrors of anything ;) Note that it's considered best practice not to attach the secondary submirror slices to the mirror devices yet, even though you can do it for some of your slices. You'll have to reboot to get root working anyway, so you may as well do them all at once and be as efficient as possible (a quick metastat check follows the commands):

host # metadb -a -f /dev/dsk/c0t0d0s7
host # metadb -a /dev/dsk/c1t0d0s7
host # metainit -f d10 1 1 c0t0d0s0
host # metainit -f d20 1 1 c1t0d0s0
host # metainit d0 -m d10
host # metainit -f d11 1 1 c0t0d0s1
host # metainit -f d21 1 1 c1t0d0s1
host # metainit d1 -m d11
host # metainit -f d13 1 1 c0t0d0s3
host # metainit -f d23 1 1 c1t0d0s3
host # metainit d3 -m d13
host # metainit -f d14 1 1 c0t0d0s4
host # metainit -f d24 1 1 c1t0d0s4
host # metainit d4 -m d14
host # metainit -f d15 1 1 c0t0d0s5
host # metainit -f d25 1 1 c1t0d0s5
host # metainit d5 -m d15
host # metainit -f d16 1 1 c0t0d0s6
host # metainit -f d26 1 1 c1t0d0s6
host # metainit d6 -m d16
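
A quick check at this point (minimal, but reassuring) is to re-run the metastat listing; each mirror should now show up as a one-way mirror built on its single attached submirror:

host # metastat -p <-- should list d0, d1, d3, d4, d5 and d6, each on its d1x submirror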


4. Now you'll run the "metaroot" command, which will add some lines to your /etc/system file and modify your /etc/vfstab to list the metadevice for your root slice, rather than the plain old slice (/dev/dsk/c0t0d0s0, /dev/rdsk/c0t0d0s0):

host # metaroot d0

5. Then, you'll need to manually edit /etc/vfstab to replace all of the other slices' regular logical device entries with the new metadevice entries. You can use the root line (done for you) as an example. For instance, this line:


/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /users ufs 1 yes -


would need to be changed to:

/dev/md/dsk/d6 /dev/md/rdsk/d6 /users ufs 1 yes -


and, once that's done you can reboot. If you didn't make any mistakes, everything will come up normally.

6. Once you're back up and logged in, you need to attach the secondary submirror slices. This is fairly simple, and it's where the actual syncing of the disks begins. Continuing from our example above, you'd just need to type:

host # metattach d0 d20
host # metattach d1 d21
host # metattach d3 d23
host # metattach d4 d24
host # metattach d5 d25
host # metattach d6 d26


The syncing work will go on in the background and may take some time, depending upon how large your hard drives and slices are. Note that if you reboot during a sync, that sync will fail and start again from 0% on reboot, with the affected primary submirror slices remaining intact and the secondary submirror slices automatically resyncing. You can use the "metastat" command to check the progress of your syncing slices.
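For instance, a quick way to watch the resync without wading through the full listing (the exact wording of the progress line varies slightly between releases, so treat the grep pattern as a sketch):

host # metastat | grep -i "resync in progress"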

And, oh yeah... I almost forgot this part of the post:

The Zpool way:

1. First, you'll want to do exactly what you did with DiskSuite/VolumeManager (since both disks have to be partitioned exactly the same). We'll assume you're insanely practical and will just use this command to make sure your disks are both formatted exactly the same (just like above):

host # prtvtoc /dev/rdsk/c0t0d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2

2. Now we'll need to create a pool, add your disks to it (each whole disk, rather than slice by slice) and mirror them:

host # zpool create mypool mirror c0t0d0 c1t0d0

3. There's no long initial sync to wait for here (a freshly created mirror has nothing to copy yet); you can check the health of the pool at any time with "zpool status POOLNAME" - like:

host # zpool status mypool
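Two other quick checks, if you want them (both are standard zpool subcommands; mypool is just our example name from above):

host # zpool status -x <-- prints "all pools are healthy" when there's nothing to worry about
host # zpool list mypool <-- one-line size, usage and health summary for the pool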

And that's that. The choice is yours, unless you're still using Solaris 9 or older. This post isn't meant to condemn the SDS/SVM way. It works reliably and is really easy to script (and, when both of these methods are scripted, they're just as easy to run; the only hassle the old way gives you is the forced reboot).

Basic Root Disk Mirroring With Solaris Volume Manager

Solaris Volume Manager (SVM - it used to be called Solaris DiskSuite, or SDS; pretty much only the name changed) is an excellent OS-standard software RAID management tool used by a lot of IT shops. Even shops that use Veritas Volume Manager for serious storage management still use SVM to take care of the base disks.

The reason most shops utilize SVM to manage the root disks is that it's easier to manage than Veritas for this limited purpose. Another reason is that, in coordination with its ease of use, it's integrated into Solaris. The thinking around this is generally: Why not use Solaris' tool to manage our Solaris OS root disk(s)? If you've ever had to deal with Sun support, you know another reason why using their product on their product is a good thing :)

Setting up RAID groups, and managing them, is generally very simple and doesn't require rebooting the machine when you want to add a new disk (with some exceptions). One of the exceptions to this rule is when you use SVM to set up management of your root disks.

Managing your root disks with SVM is a two-part process. Generally, it's used for mirroring the root disk for quick failover and is configured right after installation. Configuring it later is also easy, but I like to get my necessary reboots completed before I hand over the product to a customer.

The process is fairly simple and is accomplished like this:

1. The two disks that will be used for mirroring have to be formatted "exactly" alike. The easiest way to do this is to partition one root disk and then either use the output from "prtvtoc" to seed "fmthard" (a one-liner for that route follows) or, more simply, use Solaris' "format" command to format the initial disk and then "select" the same layout from the list of available partition setups when formatting the second disk (labeling the first disk makes its partition table a selectable option!).
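If you go the prtvtoc/fmthard route, it's the same one-liner we used in the earlier walk-through, just pointed at this example's second disk (c0t1d0 here; adjust for your own layout):

host # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2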

2. Meta databases need to be created on both disks. The first needs to be forced, and any additional ones do not. We're using slice 7 for our example here:

metadb -a -f /dev/dsk/c0t0d0s7
metadb -a /dev/dsk/c0t1d0s7


3. Next, the metadevices need to be set up. For each slice that we want to mirror, we need to create two concat/stripe metadevices, as such (shown for only two slices here, but this is done for every slice we want to mirror - which should be every slice on the disk except slice 2!):

metainit -f d10 1 1 c0t0d0s0
metainit -f d20 1 1 c0t1d0s0

metainit -f d11 1 1 c0t0d0s1
metainit -f d21 1 1 c0t1d0s1


4. Next, the metadevice that will be composed of the two submirrors needs to be created. You do this by attaching one of the submirrors to the new metadevice. Note that you can normally do this all at once, but for the root disk you can only attach one submirror per slice at this point, as such:

metainit d0 -m d10
metainit d1 -m d11


5. The root partition is special in that SVM actually has a command to change its entry in /etc/vfstab to the metadevice value (d0). It also adds some information to /etc/system for you. That command is:

metaroot d0

6. Now, sadly enough, you'll need to edit your /etc/vfstab to reflect the new metadevices. So, for the entry for metadevice d1, you'd change:

/dev/dsk/c0t0d0s1 to /dev/md/dsk/d1
/dev/rdsk/c0t0d0s1 to /dev/md/rdsk/d1

7. Once that's completed and all commands have returned successfully (you can check that all's well by running "metastat -p"), you will need to reboot. I would suggest actually shutting your machine down to init level 0 so that you can change your alternate boot device at the OBP to the secondary (mirror) disk from the default of "net" (a sketch of that follows). This way, once you're set up, if your primary disk fails your system will automatically boot off of the mirror.
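At the ok prompt, that boils down to giving the mirror disk a device alias and adding it to boot-device. The device path below is invented purely for illustration - run show-disks (or devalias) at the ok prompt and use the real path to your second disk - and note that nvalias and setenv both consume the rest of the line, so don't tack comments onto them:

ok show-disks
ok nvalias mirror /pci@1f,4000/scsi@3/disk@1,0
ok setenv boot-device disk mirror
ok reset-all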

8. Now for the final step. You need to attach the second submirror slice to each mirror metadevice, as such:

metattach d0 d20
metattach d1 d21


Again, as long as you receive no errors, you should be all set. It will take SVM a little while to sync up the two disks (the actual mirroring process from the first disk to the second). You can check on the status by running "metastat" on its own. I personally prefer to run a command-line loop to keep me up to date, like:

while true;do metastat -t|grep -i sync;sleep 60;done

Then, when all the syncs are complete, I'll know right away.

How to Check the State of Your DNS Setup Externally!

One of the areas that seems to require looking over more consistently is your DNS setup. New versions are released regularly, bugs are found just as regularly, and acceptable syntax can sometimes change between releases and/or RFCs.

A note for today's script - it is intended only for your benefit, and I strongly urge you to use any free DNS reporting service you can find to accomplish what we're accomplishing here. I neither work for, nor do any affiliate marketing for, www.dnsstuff.com. We only use it where I work because it's the standard, and most readers probably already know about it. The site's tools require a free registration if you're not a paying member, but also require payment after 30 DNS Reports. I have placed a very obvious "COMMENT" in the script above the only line you'd need to change to use another service. If you "do" find something equal, or better, I would recommend that you use it (and, maybe, send me an email to let me know the URL so I can check it out and update my scripts, too :) That being said, I'm not trashing them either. If you have to do this sort of thing a lot (for your employer, let's say), the company can probably shell out a few bucks a month for the service. It's worth the price if you need to use it regularly.

Aside from your own internal auditing, it's good to get a fair and objective third-party assessment of the state of your DNS setup. A great site for this is http://www.dnsstuff.com. They have a tool called "DNS Report" (which used to be its own domain - www.dnsreport.com) that can be used very effectively, even if you choose not to be a paying member of the site. A long, long time ago it was available to everyone for free, but since it's pay-for now, you really have to be careful not to deluge it with requests to check your DNS zones (all 30 of them, if you're doing this for free), or you'll get dumped on their blacklist and barred from using the service at all.

The little Perl script I've written below will help you automate the usage of that web service for all of your DNS zones. The reports are nice and easy to understand, even for the highest of higher-ups (green = good, red = bad ;) and the service does a very good job of pointing out weaknesses, or areas that don't conform to current RFC expectations, in your DNS zones. This script does require that you have Perl installed (although it's just a script I whipped up, so it's submitted under the GPL (GNU Public License), at best, and you can feel free to rewrite it to suit your own needs). It also makes use of curl. You can substitute lynx or wget or any other program that can download and save web pages to your Unix/Linux hard drive. And here it is, below (please read the comments regarding use of dnsstuff's dnsreport tool):



#!/usr/bin/perl

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#
# Registration is now required to use this tool - It used to be free.
# When/If you register, be sure to uncheck the two opt-in mail
# checkboxes at the bottom unless you want to receive those emails.
#
# Login to dnsstuff.com before running this script or it will just return
# a page indicating that you need to log in before submitting a full request
# without that request being linked to from one of their pages
#
# Simple Error Checking here, just want to be sure that a file exists at all.
# Disregard everything else on the command line. This script will fail if the
# file named on the command line doesn't include one URL per line, anyway :)

if ( ! -f $ARGV[0] ) {
print "Usage: $0 fileWithUrls\n";
exit(1);
}
$dns_file = $ARGV[0];

open(URL_FILE, "< $dns_file") or die "Cannot open $dns_file: $!\n";
@url_file = <URL_FILE>;
close(URL_FILE);
$counter = 120;

foreach $url (@url_file) {
print "$url";
$| = 1;
if ($pid = fork) {
if ( $counter == 900 ) {
$counter = 120;
} else {
$counter = $counter+60;
}
print "next\n";
} else {
close (STDOUT);
chomp($url);
#
# COMMENT <--- The Obvious One :)
# CHANGE THE URL ON THIS LINE TO THE URL OF ANY OTHER SITE YOU FIND THAT
# PROVIDES AN EQUAL SERVICE!
#
@output = `curl http://www.dnsstuff.com/tools/dnsreport.ch?domain=$url`;
open(OUTPUT, ">$url.htm");
print OUTPUT @output;
close(OUTPUT);
exit(0);
}
if ( $counter == 900 ) {
$counter = 120;
} else {
$counter = $counter + 60;
}
sleep $counter;
}
exit(0);
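Usage is just the script name plus a file of zone names, one per line (the script name and file contents below are made up for the example):

host # cat zones.txt
example.com
example.org
host # ./dnsreport.pl zones.txt

Each zone ends up in its own ZONENAME.htm file in the current directory, which you can open in any browser.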


And that's that! As I mentioned before, this tool will blacklist you if you hit dnsstuff too hard. My script assumes that you can hit the site at the rate I used the last time I used it. Since it's a pay-for service now, I would recommend increasing the wait times to at least twice as much. To give you an idea, I ran this when the service was free and ended up getting blacklisted in under a few hours. Granted, there were certain other factors that contributed to my getting the boot, most prominently that I was checking the DNS for about 300 to 400 zones we were hosting. It could have been left at checking one and making sure all the others were the same but, as noted above, some bosses like to see reports with lots of colors and lots of pages.

Also, just so you don't feel like I'm leaving you in the lurch: if you do happen to get blacklisted, just go to their member forums (you get access to post to these since you had to do the free registration - you can only browse them if you're not registered) at http://member.dnsstuff.com/forums/ and do a search on "banned," "blacklist" or "black list." Most folks just have to start a thread and request their access back in order to get off the blacklist.

How to find out your NIC's speed and duplex on Solaris

If there's ever an issue with networking, you need to be able to confidently say that your NIC is up at 1Gb full duplex (or whatever your network admin insists it should be).

The way to check this has changed somewhat in Solaris 10, but the old way to check is still available, although not totally reliable.

For instance, you can use "ndd" in all flavors of Solaris (at least from 2.6 up) to get information from /dev/hme (or whatever your NIC's device driver is). Generally, you would look at the speed and duplex settings using the following commands (with slight variations depending on the NIC - e.g. 100Mb hme interfaces don't have values for the 1000Mb queries).

The following commands are pretty useful, and non-destructive, for any device driver, even though you'll get errors for all the stuff that isn't supported:

/usr/sbin/ndd -set /dev/ce instance 0
/usr/sbin/ndd -get /dev/ce adv_1000fdx_cap
/usr/sbin/ndd -get /dev/ce adv_1000hdx_cap
/usr/sbin/ndd -get /dev/ce adv_100fdx_cap
/usr/sbin/ndd -get /dev/ce adv_100hdx_cap
/usr/sbin/ndd -get /dev/ce adv_10fdx_cap
/usr/sbin/ndd -get /dev/ce adv_10hdx_cap
/usr/sbin/ndd -get /dev/ce adv_autoneg_cap


Of course, replace "/dev/ce" with your particular driver. The only downside to this hack-and-slash method is that you may see 1's (indicating that the parameter is set) rather than 0's (indicating that the parameter is not set) in more than one place (like in adv_1000fdx_cap, adv_100hdx_cap and adv_autoneg_cap all at once ???)
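If you'd rather not type seven ndd commands by hand, a tiny Bourne shell loop over the same parameters (still using /dev/ce as the example driver, and assuming you've already run the "instance 0" set command above) prints everything in one shot:

host # for cap in adv_1000fdx_cap adv_1000hdx_cap adv_100fdx_cap adv_100hdx_cap adv_10fdx_cap adv_10hdx_cap adv_autoneg_cap
> do
>   echo "$cap:"
>   /usr/sbin/ndd -get /dev/ce $cap
> done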

The best way to do it, in my experience is to use either "netstat -k" (In Solaris up to, and including, version 9) or "kstat -p."

In Solaris 9, assuming the same NIC driver and instance "ce0," you can do the following to find out the status of your NIC:

netstat -k ce|grep 0|egrep 'link_speed|link_dupl'

On Solaris 10, you'd do this:

kstat -p|grep ce0|egrep 'speed|dupl'

Basically, the speed should be 1000 for Gb, 100 for 100Mb, etc. Your duplex is represented numerically as either 0, 1 or 2.

0 = No Duplex (or Down)
1 = Half Duplex
2 = Full Duplex
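So, on Solaris 10, a fully explicit kstat query for our example ce0 interface would look something like the line below (this assumes the ce driver exposes statistics named link_speed and link_duplex, which is what the greps above are keying on; other drivers may use slightly different names):

host # kstat -p ce:0::link_speed ce:0::link_duplex

A healthy Gb interface would report 1000 for the speed and 2 (full duplex) for the duplex, per the table above.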

Linux Basics for Ease of Use and Management of a Hosted Website: Getting Started!

These are both links to the first installment: Getting Started.

You can go directly to them through the links below. Hopefully, you will find it informative and, at least somewhat, helpful.

Getting Started on goarticles.com

Getting Started on ezinearticles.com

ZFS Internals

Max Bruning wrote an excellent paper on how to examine the internals of a ZFS data structure. (Look for the article on the ZFS On-Disk Data Walk.) The structure is defined in ZFS On-Disk Specification.
Some key structures:
  • uberblock_t: The starting point when examining a ZFS file system. A 128 KB array of 1 KB uberblock_t structures, starting at 0x20000 bytes within a vdev label. Defined in uts/common/fs/zfs/sys/uberblock_impl.h. Only one uberblock is active at a time; the active uberblock can be found with (see also the zdb sketch after this list):
    zdb -uuu zpool-name
  • blkptr_t: Locates, describes, and verifies blocks on a disk. Defined in uts/common/fs/zfs/sys/spa.h.
  • dnode_phys_t: Describes an object. Defined by uts/common/fs/zfs/sys/dmu.h.
  • objset_phys_t: Describes a group of objects. Defined by uts/common/fs/zfs/sys/dmu_objset.h.
  • ZAP Objects: Blocks containing name/value pair attributes. ZAP stands for ZFS Attribute Processor. Defined by uts/common/fs/zfs/sys/zap_leaf.h.
  • Bonus Buffer Objects:
    • dsl_dir_phys_t: Contained in a DSL directory dnode_phys_t; contains object ID for a DSL dataset dnode_phys_t
    • dsl_dataset_phys_t: Contained in a DSL dataset dnode_phys_t; contains a blkprt_t pointing indirectly at a second array of dnode_phys_t for objects within a ZFS file system.
    • znode_phys_t: In the bonus buffer of dnode_phys_t structures for files and directories; contains attributes of the file or directory. Similar to a UFS inode in a ZFS context.
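If you want to poke at these structures on a live system, zdb is the tool for the job. A minimal starting point (mypool is a placeholder pool name) is:

host # zdb -uuu mypool <-- dump the active uberblock, as mentioned above
host # zdb -dddd mypool <-- dump the dnodes/objects in the pool's datasets, bonus buffers included

From there you can follow the blkptr_t and dnode_phys_t chains much the way the paper describes.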

 
 
 
 