Updates for mdb, dtrace and diskio pages

I just sent a pile of updates to Princeton. They will probably be added to the site on the 2nd.

The index page has been re-written to include pointers to some of the major sources used for the material on the site.

An Intermittent Problems page has been added, describing how to approach problems that come and go.

The mdb, dtrace and Disk I/O pages have been rewritten and expanded. The adb and mdb pages were merged to reflect the fact that Solaris 7 is really a dead OS at this point in time. The Disk I/O page was updated to reflect current Solaris 10 information from Solaris Internals and Solaris Performance and Tools by McDougall, Mauro and Gregg, and especially to include pointers to the really cool tools on the DTrace Toolkit page.

The kstat page has been updated to provide some additional information, and the netstat page has been changed to reflect the death of netstat -k.
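
For those who still reach for netstat -k out of habit: the same raw statistics are exposed through the kstat(1M) command. A couple of illustrative invocations (module and statistic names vary from system to system):

# kstat -p -m cpu_stat
# kstat -n vminfo

The -p flag prints the statistics in a parseable name=value form, which makes the output easy to feed to scripts.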

The next major effort for the site will be an expansion of the zones page. (The current page is really not much more than a placeholder to avoid dead links on the other pages that refer to zones.)

I am also working on a root cause analysis page. I am finding that this page involves a lot of reading of business publications; the business community seems to be way ahead of us in thinking about this issue.

--Scott

New Year 2007 - The year of GNU/Linux

Today is the dawn of a new year, 2007. Every year, we wish, hope and dream that this will be the year when GNU/Linux gains critical mass appeal - not that it has failed to significantly widen its base. One of the most endearing aspects of GNU/Linux for me, over and above the ideological considerations, is its simplicity.

A couple of years back, before I was introduced to Linux, I faced many situations in which my OS (Windows 98) died on me for no apparent reason, leaving me staring at the blue screen of death. The outcome was usually a clean re-install of Windows. From those experiences, I realized that Windows was a complex beast, especially when it came to troubleshooting problems. Compared to that, troubleshooting in GNU/Linux is a walk in the park.

The inherent strength of GNU/Linux lies in the fact that all the configuration pertaining to the OS is saved in liberally commented text files which reside in a specific location. And almost all actions executed by the OS are logged in the appropriate files, which are also plain text. For example, reading the file /var/log/messages will reveal a wealth of knowledge about the actions carried out by the OS and the errors, if any, during boot-up. So once the initial learning curve is overcome, it becomes a joy to work in GNU/Linux.
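
A couple of stock commands are usually all it takes to mine that file. For instance (run as root; the exact log file name can vary from one distribution to another):

# tail -n 50 /var/log/messages
# grep -i error /var/log/messages

The first prints the 50 most recent messages; the second pulls out any line mentioning an error.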


Some time in 2007, we can hope to see KDE 4.0 released. Already, when I compare KDE 3.5 with older versions, I have found a significant increase in the speed with which applications start up. KDE 4.0 is expected to be snappier still, as it is developed using the Qt 4 library, and it will contain a lot of additional features. Of course, this year Microsoft is also officially releasing its new OS, Vista. But many reviews indicate that there are lots of shortcomings in Microsoft's latest offering, and the general opinion is that it is not worth its price tag.

I am not trying to disparage Microsoft, but when you have a fabulous choice in GNU/Linux, which comes with an unbeatable price tag (free), and you are able to do almost all your tasks in GNU/Linux, barring, say, playing some of your favorite games, why would you consider paying hundreds of dollars for another OS? Moreover, if you are an avid gaming enthusiast, you should rather be buying a Sony PlayStation, a Nintendo Wii or even an Xbox - not an OS.

There was a time when I used to boot into Windows to carry out certain tasks. But over the past many months, I have realized that I am able to do all my tasks from within GNU/Linux itself, and it has been quite a while since I last booted into Windows.

Looking back, Linux - or rather GNU/Linux, the OS - did quite well in 2006. With many popular distributions opting for a six-month release schedule, we get to try out at least two versions of many distributions each year; moreover, we get the latest software too. Beyond that, 2006 also saw the open source release of the Java code by Sun Microsystems - a great victory for Free software enthusiasts. The LinuxBIOS project also got its share of publicity, with many hardware manufacturers evincing interest in the project. So in many ways I look forward to an exciting year 2007 for GNU/Linux, Open Source and Free Software. And as always (let's hope), 2007 is going to be the Year of GNU/Linux.

On this positive note, I wish you all a very happy and prosperous New Year.

mdb and kmdb pages

I've submitted the following new pages to Princeton for inclusion: Intermittent Problems, mdb and kmdb. I also made significant improvements to the dtrace page.

A great collection of repositories for Open SuSE Linux

Whenever I try out a GNU/Linux distribution, the one thing which hounds me, at least in the initial stages, is the lack of awareness of additional repositories, particularly the ones containing software packages necessary to make working in Linux a complete experience. The first time I tried Red Hat, I had to scrounge the Net for the addresses of additional repositories, because the servers hosting the official Red Hat repositories were stretched to their limits and dead slow and, moreover, did not contain non-Free software.

When I tried Ubuntu, this travail was alleviated to some extent, partly because the switches for enabling the additional repositories containing non-Free software were made available in the distribution itself, and partly thanks to help from the strong, active community revolving around it.

Now Vichar Bhatt, a staunch supporter of SuSE Linux - more so for its robustness and superior features - has compiled a collection of repositories hosting packages for the SuSE Linux distribution, though he is quick to point out that installing software from unverified repositories carries a slight security risk. Nevertheless, his efforts are commendable. He has also provided a list of official SuSE repositories, which can be found here. Hopefully, the list will be updated as and when new repositories become available.

dtrace, methodology, SMF

I've submitted pages on general methodology, dtrace and SMF. Depending on Princeton's work schedule, they may not be up until after the holidays.

KVM Virtualization solution to be tightly integrated with Linux kernel 2.6.20

There is good news on the horizon: Linus Torvalds has merged the KVM code - the Kernel-based Virtual Machine module - into the kernel source tree leading to Linux kernel 2.6.20. This opens up a lot of avenues as far as Linux is concerned. Using KVM, it is possible to run multiple virtual machines from unmodified Linux or Windows images.

KVM is not the only virtualization technology around as far as Linux is concerned. But its advantage over other similar technologies is that it is part of Linux and uses the regular Linux scheduler and memory management, which in turn makes it much smaller and simpler to use. It uses slightly modified userland tools that come bundled with QEMU to manage virtual machines. But the similarity ends there, as QEMU inherently uses emulation, whereas KVM makes use of the processor's virtualization extensions.

A normal Linux process has two modes of execution - kernel mode and user mode. When you use KVM, Linux gains an additional guest mode, which in turn has its own kernel and user modes (see figure below).

On the down side, for KVM to function, your computer must have an Intel or AMD processor which supports virtualization technology at the hardware level.
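
An easy way to find out whether your processor qualifies is to look for the vmx (Intel) or svm (AMD) flag in /proc/cpuinfo - with the caveat that on some machines the feature also has to be enabled in the BIOS:

# egrep 'vmx|svm' /proc/cpuinfo

If the command prints nothing, the CPU lacks the required extensions and KVM will not be able to use hardware virtualization.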

Updated Load Average and ZFS Tuning

I've posted updates to the discussion of Load Averages and ZFS Tuning.

25 Shortcomings of Microsoft Vista OS - A good reason to choose GNU/Linux ...

As a continuation of the previous post, here are 25 shortcomings found by Frank J. Ohlhorst when he reviewed the yet-to-be-formally-released Microsoft Vista OS. I have added my views, enclosed in parentheses, alongside the Vista shortcomings.
  • Vista introduces a new variant of the SMB protocol - (I wonder what the future of Samba is now...)
  • Needs significant hardware upgrades
  • No anti-virus bundled with Vista
  • Many third party applications still not supported
  • Your machine better have a truckload of memory - somewhere around 2 GB. (Linux works flawlessly with just 128 MB... even less).
  • Too many Vista editions.
  • Needs product activation. (Now that is something you will never see in Linux).
  • Vista OS will take over 10 GB of hard disk space. (With Linux you have a lot of flexibility with respect to the size of the distribution.).
  • Backing up the desktop will take up a lot of space. (Not so in Linux)
  • No must have reasons to buy Vista. (The fact that Linux is Free is reason enough to opt for it)
  • Is significantly different from Windows XP and so there is a learning curve. (Switching to Linux also involves some learning curve but then it is worth it as it doesn't cost you much and in the long run, you have a lot to gain).
  • You'd better come to terms with the cost of Vista - it is exorbitant, running to over $300. (In price, Vista can't beat Linux, which is free as in beer and freedom).
  • Hardware vendors are taking their own time to provide support for Vista. (Nowadays, more and more hardware vendors are providing support for Linux).
  • Vista's backup application is more limited than Windows XP's. (Linux has a rich set of backup options and every one of them is free).
  • No VoIP or other communication applications built in. (Skype, Ekiga... the list goes on in Linux).
  • Lacks intelligence and forces users to approve the use of many native applications, such as a task scheduler or disk defragmenter. (Linux is flexible to a fault).
  • Buried controls - requiring a half a dozen mouse clicks. (Some window managers in Linux also have this problem but then here too, you have a variety of choice to suit your tastes).
  • Installation can take hours, upgrades even more. (Barring upgrades, installation of Linux will take at most 45 minutes; upgrades will take a little longer).
  • Little information support for Hybrid hard drives.
  • 50 Million lines of code - equates to countless undiscovered bugs. (True, true... It is high time you switch to Linux).
  • New volume-licensing technology limits installations or requires dedicated key-management servers to keep systems activated. (Linux users do not have this headache I believe).
  • Promises have remained just that - mere promises. A case to the point being WinFS, Virtual folders and so on. - (Clever marketing my friend, to keep you interested in their product).
  • Does not have support for IPX, Gopher, WebDAV, NetDDE and AppleTalk. (Linux has better support for many protocols which Windows does not support).
  • WordPad's ability to open .doc files has been removed. (Now that is what I call extinguishing with style. OpenOffice.org, which is bundled with most Linux distributions, can open, read and write DOC files).

SysAdmin article

An expanded and rewritten version of the Resource Management page has been tentatively accepted by SysAdmin for its April 2007 issue.

Updated pages

I've submitted some updated pages for Resource Management, ZFS and Scheduling.

I also added the beginnings of a page on Zones.

--Scott

Various ways of detecting rootkits in GNU/Linux

Consider this scenario... Your machine running GNU/Linux has been penetrated by a hacker without your knowledge, and he has swapped the passwd program you use to change user passwords with one of his own. His passwd program has the same name as the real one and works flawlessly in all respects, except that each time it is run it also gathers data residing on your machine, such as user details, and transmits it to a remote location, or opens a back door giving outsiders easy root access - and all the while, you are unaware of its true intention. This is an example of your machine getting rooted - another way of saying your machine is compromised. And the passwd program the hacker introduced into your machine is a trojaned rootkit.

A rootkit is a collection of tools a hacker installs on a victim computer after gaining initial access. It generally consists of network sniffers, log-cleaning scripts, and trojaned replacements of core system utilities such as ps, netstat, ifconfig, and killall.
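
On an RPM-based system, one quick manual sanity check is to verify such binaries against the package database, which compares the checksums, sizes and permissions of installed files with what the packages originally shipped. This is only a sketch, and it only helps if the RPM database itself has not been tampered with:

# rpm -Vf /usr/bin/passwd
# rpm -V coreutils

The first command verifies the package that owns the passwd binary; the second verifies every file in a named package. No output means no discrepancies were found.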

Hackers are not the only ones found introducing rootkits into your machine. Recently Sony - a multi-billion-dollar company - was found guilty of surreptitiously installing a rootkit when a user played one of their music CDs on the Windows platform. This was designed *supposedly* to stop copyright infringement. And following a worldwide furore, they withdrew the CDs from the market.

Detecting rootkits on your machine running GNU/Linux
I know of two programs which aid in detecting whether a rootkit has been installed on your machine. They are Rootkit Hunter and Chkrootkit.

Rootkit Hunter
This script checks for and detects around 58 known rootkits, plus a couple of sniffers and backdoors, and makes sure that your machine is not infected with them. It does this by running a series of tests which look for default files used by rootkits and wrong file permissions on binaries, check the kernel modules, and so on. Rootkit Hunter is developed by Michael Boelen and has been released under the GPL license.

Installing Rootkit Hunter is easy: download and unpack the archive from its website, then run the installer.sh script while logged in as the root user.

Fig: Rootkit Hunter checking for rootkits on a Linux machine.

Once installed, you can run rootkit hunter to check for any rootkits infecting your computer using the following command:
# rkhunter -c
The rkhunter binary is installed in the /usr/local/bin directory, and you need to be logged in as root to run it. Once executed, the program conducts a series of tests, as follows:
  • MD5 tests to check for any changes
  • Checks the binaries and system tools for any rootkits
  • Checks for trojan specific characteristics
  • Checks for any suspicious file properties of most commonly used programs
  • Carries out a couple of OS dependent tests - this is because rootkit hunter supports multiple OSes.
  • Scans for any promiscuous interfaces and checks frequently used backdoor ports.
  • Checks all the configuration files such as those in the /etc/rc.d directory, the history files, any suspicious hidden files and so on. For example, in my system, it gave a warning to check the files /dev/.udev and /etc/.pwd.lock .
  • Does a version scan of applications which listen on any ports such as the apache web server, procmail and so on.
After all this, it outputs the results of the scan and lists the possible infected files, incorrect MD5 checksums and vulnerable applications if any.

Fig: Another screenshot of rootkit hunter conducting a series of tests.

On my machine, the scan took 175 seconds. By default, rkhunter conducts a known-good check of the system. But you can also insist on a known-bad check by passing the '--scan-knownbad-files' option as follows:
# rkhunter -c --scan-knownbad-files 
As rkhunter relies on a database of rootkit names to detect the vulnerability of the system, it is important to check for updates of the database. This is also achieved from the command line as follows:
# rkhunter --update
Ideally, you would run the above command as a cron job so that, once you set it up, you can forget all about checking for updates - cron will do the task for you. For example, I entered the following line in my crontab file as the root user:
59 23 1 * * echo "Rkhunter update check in progress";/usr/local/bin/rkhunter --update
The above line will check for updates at exactly 11:59 PM on the first of every month, and the result will be mailed to my root account.

Chkrootkit
This is another very useful program, created by Nelson Murilo and Klaus Steding-Jessen, which aids in finding rootkits on your machine. Unlike the Rootkit Hunter program, chkrootkit does not come with an installer; rather, you just unpack the archive and execute the program named chkrootkit. It conducts a series of tests on a number of binary files. Just like the previous program, it checks all the important binary files, searches for telltale signs in log files left behind by an intruder, and runs many other tests. In fact, if you pass the option -l to this command, it will list all the tests it will conduct on your system.
# chkrootkit -l
And if you really want to see some interesting stuff scroll across your terminal, execute the chkrootkit tool with the following option:
# chkrootkit -x 
... which will run this tool in expert mode.

Rootkit Hunter and Chkrootkit together form a nice combination of tools in one's arsenal for detecting rootkits on a machine running Linux.

Update: One reader has kindly pointed out that Michael Boelen has handed over the Rootkit Hunter project to a group of 8 like-minded developers. The new site is located at rkhunter.sourceforge.net.

FSF starts campaign to enlighten computer users against Microsoft's Vista OS

When a multi-billion-dollar company famed for its extreme stand for all things proprietary is on the verge of releasing its much-touted next-generation OS named Vista, what does the Free Software Foundation, which shuns all things proprietary, do? That's right: it starts a campaign to enlighten computer users about the pitfalls of buying Vista and to introduce them to the Free alternatives one can have in place of Microsoft's offering.

FSF has launched a new site named badvista.org which will focus on the danger posed by Treacherous Computing in Vista.

John Sullivan, the FSF program administrator, has aptly put it thus:
Vista is an upsell masquerading as an upgrade. It is an overall regression when you look at the most important aspect of owning and using a computer: your control over what it does. Obviously MS Windows is already proprietary and very restrictive, and well worth rejecting. But the new 'features' in Vista are a Trojan Horse to smuggle in even more restrictions. We'll be focusing attention on detailing how they work, how to resist them, and why people should care.
FSF invites all Freedom loving computer users to participate in the campaign at Badvista.org.

RPM to be revitalized - courtesy of Fedora Project

The hot news right out of the oven is that RPM - the famous package manager at the base of all Red Hat-derived Linux distributions' packages - is going to get a shot in the arm. The Fedora project has decided to create an active community around RPM. A wiki for RPM has already been set up which details the project goals.

My first foray into Linux was with Red Hat, and in the course of time I learnt to use RPM to install, upgrade and uninstall packages. But once I started using it, I realized that it was not as simple as it looked. For example, if Package A depended on a library in Package B, and Package B was not installed on the machine, then RPM refused to install Package A. And if Package B was in turn dependent on a library residing in Package C, the problem repeated down the line. This came to be known, popularly, as dependency hell. I have always wondered why Red Hat was not bringing changes to RPM to make users' lives easier, given that most packages for Red Hat are RPM-based.
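
A hypothetical session shows what dependency hell looks like in practice (the package and library names here are made up for illustration):

# rpm -ivh packageA-1.0-1.i386.rpm
error: Failed dependencies:
        libB.so.2 is needed by packageA-1.0-1

Tools layered on top of RPM, such as yum and apt-rpm, work around this by resolving the whole chain of dependencies and fetching the required packages from a repository automatically.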

Perhaps the need of the hour is for all Linux distributions to support a universal package format, with all packages residing in a central repository that can be shared by all distributions alike. But this scenario looks bleak, with Debian having its own dpkg format and Red Hat-based distributions having their own RPM-based formats. At least there is going to be better interoperability between different RPM-based Linux distributions in the future, as one of the goals of this new project is to work towards a shared code base between SuSE, Mandrake, Fedora and so on. At present, a lot of the work of creating packages and maintaining repositories is being repeated over and over. Fedora's decision breathes new life into the future of RPM, and one can hope to see RPM morph into a more efficient, robust package manager with fewer bugs.

Some of the initial goals of the new project are as follows:
  • Give RPM a full technical review and work towards a shared base.
  • Make RPM a lot simpler.
  • Remove a lot of existing bugs in the RPM code base.
  • Make it more stable.
  • Enhance the RPM-Python bindings thus bringing greater interoperability between Python programs and RPM.

Sun Microsystems - doing all it can to propagate its immense software wealth

A couple of weeks back, Sun Microsystems created a buzz in the tech world when it announced its decision to release its flagship language, Java, under the GPL - albeit GPLv2. But even though it surprised and gladdened Free Software fans the world over, it is clear that this was a well-calculated, deeply thought-out decision aimed at the survival and further propagation of the Java language.

It is true that at its core, Sun is a hardware company, with the bulk of its revenue generated from selling high-end servers, workstations and storage solutions. But it has also invested heavily in developing robust software. And what is amusing is that it does not charge anything for most of the software it has developed, providing it free of cost - OpenOffice.org, NetBeans, Java and Solaris being cases in point.

At one time, Solaris was the most popular Unix operating system, enjoying a market share greater than even IBM AIX and HP-UX combined. Then Linux appeared on the horizon and slowly started chipping away at the market share of all the Unixes, including Solaris. With Linux gaining demigod status, it was inevitable that Sun take a deep look at itself. It realized that if it did not restructure its thinking, it would be reduced from its present status as an IP creator to a mere hardware company selling boxes, like Dell. And it has shown enough foresight to change with the times. Instead of fighting Linux, it started bundling Linux - more specifically Red Hat Linux - with its servers, alongside its own operating system, Solaris. And over a year ago, it released the Solaris code under an open license and named it OpenSolaris.

Now Sun is going even further, hinting that it is seriously considering releasing Solaris under the GPL. A few years back, the PCs being sold did not meet the minimum requirements for running Solaris, which made it a difficult proposition as a desktop. But with the rapid advances made in hardware, a drastic drop in hardware prices, and partly thanks to Microsoft upping the ante on minimum memory requirements for running Vista, it has suddenly become possible to look at Solaris as a viable desktop OS alternative, as it works smoothly with just 512 MB of RAM.

Fig: Get a Free DVD consisting of Solaris 10 and Sun Studio software

Taking all these events into consideration, Sun is doing everything in its power to ensure that the fruits of its hard work live on and gain in popularity. A few days back, when I visited Sun's website, I was surprised to see a link offering to send a free DVD media kit, consisting of the latest build of Solaris 10 and the Sun Studio 11 software, to the address of one's choice. I have always believed that one of the reasons Ubuntu gained so much popularity was its decision to ship free CDs of its OS. Perhaps taking a leaf from Ubuntu's book, Sun has also started shipping free DVDs of the Solaris 10 OS to anybody who wants a copy - a sure way of expanding its community.

In the long run, the logical thing for Sun to do will be to release Solaris under the GPL. By doing so, Sun would gain the immense goodwill of Free Software fans the world over and ensure a permanent place in the history of computing. Unlike GNU/Linux, which is a loose amalgamation of scores of individual software pieces around the Linux kernel, Solaris is a whole product whose tools are tightly integrated with its kernel. So even if Solaris is released under the GPL, it may not see as many distributions as Linux does. And who is better qualified to provide services and support for Solaris than Sun itself?

Travails of adding a second hard disk in a PC running Linux

Over the years, I have accumulated a couple of hard disks salvaged from my old computers: a Seagate 12 GB, a Samsung 2.1 GB and another Seagate 20 GB. They were just lying around without being put to any use, and recently I decided to add one of them to my present computer.

I opened up the case, inserted one of the hard disks in the drive bay, set up the connectors and turned on the machine, hoping to see it boot as normal. It did get past the BIOS POST, and I got the GRUB boot loader screen. But when I chose to boot the Linux distribution, it gave an error that it couldn't find the root partition. That was rather surprising, as I had not made any changes to the structure of the hard disk, either by re-installing Linux or by modifying the GRUB menu. After some head scratching, I figured that perhaps the hard disks were being detected in a different order by the computer. To verify this, I booted using a Linux live CD, and I was right: the original hard disk was being detected by Linux as /dev/hdb instead of /dev/hda, and this screwed up everything, as the /etc/fstab file and the GRUB menu both referred to /dev/hda.

The thing to remember is that hard disks - and I am talking about the IDE variety - have around 8 pins at the back which can be connected together via jumpers. Depending on the position in which you set the jumpers, the hard disks will be detected differently by the computer.

Usually, when you buy a new hard drive, it will have the jumper pins in the cable select position. This allows the drive to assume the proper role of master or slave based on the connector used on the cable. For the cable select setting to work properly, the cables you are using must support the cable select feature.

In my case, I had two hard disks connected using the same cable, and both had their jumper pins in the cable select position. This meant that when I booted the PC, it automatically selected one hard disk as the primary master and the other as the primary slave. Unfortunately, it selected the hard disk containing the Linux OSes as the primary slave, which is why Linux detected it as /dev/hdb instead of /dev/hda.

Fig: Hard disk jumper settings

Once I figured this out, the solution was simple. I re-opened the computer case, changed the jumper settings of the hard disk containing the Linux OS to the master position and those of the second hard disk to the slave position (see figure above). And I was able to boot into Linux without a problem.

One thing worth noting is that different IDE hard disks have different jumper positions for setting them as master and slave, and the positions are usually printed on top of the hard disk. So you should check the table printed on your hard disk before changing the jumper pins.
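
Incidentally, there is also a software-side way to protect against this whole class of problems: mount filesystems by label rather than by device name. A minimal sketch, assuming an ext2/ext3 root partition (your partition and label names will differ):

# e2label /dev/hda1 rootfs

and then in /etc/fstab:

LABEL=rootfs    /    ext3    defaults    1 1

With this in place, the root filesystem is found by its label no matter which device name the kernel assigns to the disk.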

Now, if you are wondering what I did with the remaining two hard disks: I could very well have added them too, but you can connect only a total of four devices this way - namely primary master, primary slave, secondary master and secondary slave - and if I had done that, there wouldn't have been a vacant slot for the internal CD writer and the DVD drive. So I use those two hard disks for backing up data.

ZFS and Resource Pools

Additional pages for ZFS and Resource Pools have been submitted. The scheduler page has been expanded.

Sources for the ZFS page include the following:

  • Solaris ZFS Administration Guide
  • Brune, Corey, "ZFS Administration," SysAdmin Magazine, January 2007

Scheduling page

I've added a new scheduling page. It is still a work in progress.

--Scott

Humor - Get your ABC's of Linux right

Recently, one of my friends shared with me this rather funny ode to Linux, which was passed on to him by a friend of his, and which I am in turn sharing with you. So without further ado, here is the rhyming ode to Linux...

A is for awk, which runs like a snail, and
B is for biff, which reads all your mail.
C is for cc, as hackers recall, while
D is for dd, the command that does all.
E is for emacs, which rebinds your keys, and
F is for fsck, which rebuilds your trees.
G is for grep, a clever detective, while
H is for halt, which may seem defective.
I is for indent, which rarely amuses, and
J is for join, which nobody uses.
K is for kill, which makes you the boss, while
L is for lex, which is missing from DOS.
M is for more, from which less was begot, and
N is for nice, which it really is not.
O is for od, which prints out things nice, while
P is for passwd, which reads in strings twice.
Q is for quota, a Berkeley-type fable, and
R is for ranlib, for sorting ar table.
S is for spell, which attempts to belittle, while
T is for true, which does very little.
U is for uniq, which is used after sort, and
V is for vi, which is hard to abort.
W is for whoami, which tells you your name, while
X is, well, X, of dubious fame.
Y is for yes, which makes an impression, and
Z is for zcat, which handles compression.

I noticed one error in the third line of the poem, though: Linux does not use the cc compiler; rather, it uses gcc. But apart from that, it is a nice compilation.

Ishikawa and Interrelationship Diagrams

I've been working on a page including information on some formal troubleshooting methods. En route, I have been looking at Cause-and-Effect (Ishikawa fishbone) diagrams and Interrelationship Diagrams.

Here are some of the noteworthy web pages I've been looking at:

Concordia: Cause and Effect Diagram and
Concordia Interrelationship Diagram provide a nice introduction to the two types of diagrams.

HCI Cause and Effect Diagram provides a slightly longer article, including some historical information about Ishikawa diagrams.

balancedscorecard.org Cause and Effect Diagram provides a much more in-depth view of Ishikawa diagrams.


questlearningskills.org Interrelationship Diagrams provides a how-to-level article about Interrelationship diagrams.


ASQ Interrelationship Diagrams
provides a slightly longer article about Interrelationship Diagrams.


Root Cause Analysis: A Framework for Tool Selection
provides a nice comparison of Ishikawa and Interrelationship diagrams, as well as Current Reality diagrams.

Trolltech's Qtopia Greenphone

We are moving towards an era where the line demarcating a computer from the rest of our electronic devices is, at best, getting hazy. Take mobile phones, for instance... Nowadays, the sheer power and number of features available in some models of mobile phones rival those found in a low-end PC. Electronic devices are fast morphing into gadgets which mean many things to different people.

Trolltech, creator of the Qt library used to develop KDE, has released a Linux mobile development device - the rest of us can call it a mobile phone. What is unique about this phone is that it is powered by Linux and, more importantly, it is aimed at developers interested in creating applications using the Greenphone SDK; the phone allows developers to test their applications on it. The current model of the Greenphone was developed in close cooperation with a Chinese device manufacturer called Yuhua Teltech. Offered as part of the Greenphone SDK, this GSM/GPRS device, Trolltech claims, provides the perfect platform for the creation, testing and demonstration of new mobile technology services.

Fig: Trolltech's greenphone powered by Linux.

Nathan Willis, who spent a couple of weeks with a review unit, reveals his thoughts about this unique product from Trolltech. And even though he finds a couple of faults with the design of the phone, he concludes that it is nevertheless a small step in the right direction. He has also made available a slide show of pictures of the phone here.

Specifications of the Qtopia Greenphone
The software that powers this phone consists of Qtopia Phone Edition 4.1.4 and Linux kernel 2.4.19.

The hardware consists of the following:
  • Touch-screen and keypad UI
  • QVGA® LCD color screen
  • Marvell® PXA270 312 MHz application processor
  • 64MB RAM & 128MB Flash
  • Mini-SD card slot
  • Broadcom® BCM2121 GSM/GPRS baseband processor
  • Bluetooth® equipped
  • Mini-USB port
Minimum system requirements for the development environment are as follows:
  • 512 MB RAM
  • 2.2 GB HDD space and
  • 1 GHz processor
It may be worth noting that there are a number of embedded devices powered by Linux, Nokia's internet tablet being a prominent one. But what makes Trolltech's Greenphone unique is the open development environment provided, with the capability to reflash application memory - thus making it truly Open Source.

Resource Management resources

I've been looking at documentation on Resource Management over the last few days. Here are some of the articles that I have found. Unfortunately, much of the information I found is based on the Solaris 9 and even Solaris 8 implementations of Resource Manager, which is only somewhat useful when looking at Solaris 10.

If you are aware of additional resources, please feel free to add them to the comments on this post.

Here are the best items I've found:

System Administration Guide: Solaris Containers-Resource Management and Solaris Zones from the Solaris 10 documentation. This is quite well-written, though organized differently than I would have done it.

The Sun BluePrints Guide to Solaris Containers by Foxwell, Lageman, Hoogeveen, Rozenfeld, Setty and Victor. The Resource Management section is also quite well-written, and I found the organization to be more helpful than the manual in the Solaris 10 docs.

Solaris Resource Management by Galvin in SysAdmin. This is a high-level introduction. Though it is specific to Solaris 9, it is still the best quick introduction to the subject I've come across.

Capping a Solaris process's memory by matty is a blog page describing the ability of Solaris 10 to use rcapd to manage memory. It is a brief but thorough discussion of the topic.
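
To give a flavor of the mechanism, here is a minimal sketch of resident-set-size capping with rcapd, assuming a project named user.myproj already exists (the project name and cap value are illustrative):

# projmod -s -K 'rcap.max-rss=512MB' user.myproj
# rcapadm -E
# rcapstat 5

The first command attaches a 512 MB RSS cap to the project, the second enables the rcap daemon, and rcapstat then reports cap activity at five-second intervals.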

FizzBall - A well designed enjoyable game for Linux

Anybody who has played games on their PC will be familiar with the classic game Breakout, where you have to bounce a ball with a paddle and smash all the bricks. While the game in its original form does not sport any special features, it has spawned a number of Breakout clones which provide additional effects, such as power-ups that give the ball more power for a short while - and which make it far more entertaining and enjoyable to play. A couple of years back, I enjoyed playing a Breakout clone called DxBall. But most of these so-called Breakout clones are developed to run exclusively on Windows, and one of the standing grouses of Linux users is the dearth of quality professional games that run on Linux.

But that is bound to change as more and more professional game developers seriously consider Linux a viable platform, alongside Windows, for releasing their games. One such professional game development company is Grubby Games - founded by Ryan Clark and Matt Parry - which has been developing games that entertain as well as educate the players.

FizzBall, one of the games developed by Grubby Games and released for Linux, bears a little similarity to the classic Breakout in that you have to bounce a bubble using a machine which serves the same function as the paddle in Breakout. But barring that, the game play is entirely different. The aim of the game is to collect all the animals in the wild by directing the bubble towards them. At the beginning of each level, the bubble is small and will bounce off animals larger than itself. So you have to collect the food - apples, coconuts, acorns and so on - littering the area; as the bubble gobbles these up, it grows in size and becomes able to collect larger animals. The level is complete once all the animals are collected inside the bubble, at which point you are taken to the next level. There are over 180 levels in this game.

Fig: You have to break the crates to get to the animals inside.

Fig: Another game level.

What I really liked about the game is that the developers have kept a sharp eye for detail. The game is gorgeously animated and illustrated. For example, the animals do not remain stationary but move around. When the bubble bounces off an animal, the animal emits a sound - if it is a cow, it moos; if it is a lion, it roars; and so on. And if the bubble, while it is still tiny, hits a skunk, the skunk will release a smell. The animals you have collected in each level are kept in an animal sanctuary. All through the game, you get lots of money and power-ups, which you collect by directing the machine to them. The money you collect helps you hop from one island to another (there are seven of them) and also feed the animals residing in the sanctuary.

Fig: Animal sanctuary

And the power-ups give the bubble additional powers - the gravity bubble, energy shield, faster bubble, wacky weather... just to name a few. There are bonus levels after every few regular levels which allow you to gain additional points and money. Each island has offbeat paths that introduce a new animal. And in some levels you come face to face with an alien which shoots at you and the animals; it is your duty to capture the alien by directing the bubble towards it.

Fig: View your trophies in the trophy room

The game has two modes - the regular mode and the kids mode. In the kids mode, you do not lose the bubble even if you miss hitting it with the machine. And each new level in the kids mode is preceded by a fun quiz. Just to give a taste, these are some of the questions I encountered in the fun quiz:
  • Which baby animal can be called a kid? Goat
  • A group of these animals can be called a Mob. - I forgot the answer ;-)
  • A group of these animals can be called a pride. Lions
  • Which baby animal can be called a gosling ? Goose
  • Which animal's baby can be called a snakelet ? snake
  • A group of these animals can be called a Parliament. Owl
It is clear that the developers behind this game had a dual purpose in mind while creating it: to educate and to entertain. For instance, there are bonus levels in the game where the player has to break the numbered objects in the right order - a good way to teach little kids how to count.

Fig: Break the numbered crates in order

The story is good, the game play is simple but entertaining, and the graphical effects are outstanding, which makes this a very good game for adults and children alike.

FizzBall game features
  • Over 180 unique levels of game play.
  • The game state is automatically saved when you exit, and you can continue where you left off the next time you start playing.
  • Multiple users can be created and each user's game is saved separately.
  • There are two modes - Regular mode and Kids mode. The kids mode does not allow you to lose the balls and includes fun quizzes between levels.
  • If you lose all your bubbles, you can still continue with the game, though all your scores will be canceled.
  • Get trophies for achieving unique feats. For example, I received a trophy for capturing an alien without getting hit by a laser :-).
Running FizzBall in GNU/Linux
The GNU/Linux version of the game is packaged as a gzipped archive. All you have to do is unpack the archive and run the script named run.sh, and the game will commence.
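
In other words, something along these lines (the archive name is only indicative; use the actual file you downloaded):

$ tar xzf fizzball-linux.tar.gz
$ cd fizzball
$ sh run.sh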

Pros of the game
  • Eye catching design and excellent graphics.
  • Is educative for little kids as well as entertaining for all ages.
  • Over 180 levels in both the regular and kids mode of the game.
Cons of the game
It is not released under the GPL; the full version of the game costs USD 19.95, though a time-limited demo version is available for trying out before buying. But having played the full game, I would say the money is well spent.

The good news is that professional game developers are seriously eyeing Linux, alongside Windows, as a viable platform for releasing their games - FizzBall being a case in point.

Introduction

This blog is designed to be a companion to my Solaris Troubleshooting web site, hosted by Princeton University.

I used to have an email link to solicit feedback on the web site. I received some outstanding feedback, but I also received an outstanding amount of spam.

I am in the process of updating the site to include more Solaris 10 specific information, especially with regards to Resource Management and dtrace. I've posted a first cut at a Resource Management page.

Thanks to everyone who contributed to the old Solaris 8 site, and a special thanks to Princeton University for continuing to host the site long after I no longer worked on their Unix team.

--Scott Cromar

Richard M Stallman talks on GPL version 3 at the 5th International GPLv3 Conference in Japan

The fifth international GPLv3 conference was held on the 22nd and 23rd of November in Akihabara, Tokyo, Japan. A couple of months back, RMS had spoken at the 4th international GPLv3 conference, held in Bangalore, India. These conferences are part of a series of events organized by the Free Software Foundation to enlighten the public about the upcoming new version of the GPL - more specifically, to make them aware of how GPLv3 will better help safeguard their freedom vis-a-vis the software they use.

In Tokyo too, RMS gave a talk concentrating on the upcoming GPLv3 and the major changes being considered for the license in its current form. fsfeurope.org is running a transcript of Mr Stallman's talk in Tokyo, which is a must-read for any GNU/Linux enthusiast.

He dwelt in depth on a variety of topics: the differences between GPLv1 and GPLv2; the changes aimed at in GPLv3, such as better support for internationalization, better license compatibility with the Apache and Eclipse licenses, preventing tivoization, and fighting software patents by carrying an explicit patent license; and a few other things.

It is really simple when you look at the logic provided by RMS. He is not concerned about any particular OS or software... rather, his number one priority is to conserve the freedoms enjoyed by the people who use Free software, in such a way that nobody will be able to hold the Free Software Movement to ransom. Today Linux is the darling of many corporations, with many of the heavyweights jumping on the Linux bandwagon. For any business, the fundamental aim is to make money. And with Linux becoming a viable platform, businesses are slowly realizing the advantages of embracing it. The only irritant standing in their way is the GPL license, which they could do without. RMS and the Free Software Foundation are working towards safeguarding the GPL by plugging all its loopholes, so that it is not possible to circumvent it and thus compromise any of the freedoms it guarantees.

Making the right decisions while buying a PC

With the speed at which advances are made on the technological front, I sometimes wonder whether buying an electronic product now is a good decision - especially since, if I choose to wait a couple more months, I could get an even better product with more features at more or less the same price as the product I intended to buy now.

This truism is especially valid when buying a PC. On the one hand, the applications being developed demand more and more processing power and memory to run at their optimal level; on the other, hardware prices are coming down at a steep rate. So if I go out to buy a PC, I have to make sure that it will meet my needs for at least the next one and a half to two years... after which it will be time either to upgrade - if I was lucky enough to have made the right decision and bought a PC designed with expansion in mind - or to discard the PC and buy a new one.

So what do you need to watch out for if you are seriously considering buying a PC now? Thomas Soderstrom has written a very informative article which throws light on the components one should select for one's PC. He touches on the cases available - full towers, ATX, mini-ATX, the shuttle form factor and so on - the best processor (CPU), the type of interface slots on the motherboard, the memory, the capacity of the hard drive and more.

The gist of his advice filters down to the following:
  • ATX tower case - capable of holding a full-size motherboard, with space for several optical drives; ideal for home users and gaming enthusiasts.
  • CPU - As of now, the Intel Core Duo provides the best power-performance-price ratio. Enough applications have been optimized for dual-core chips that these should be considered for any moderate to heavy use, especially when multitasking.
  • Always go for motherboards that have PCI Express slots over the fast-becoming-outdated ordinary PCI slots.
  • With respect to memory (RAM), your best bet is to go for at least DDR-400, though ideally DDR2-800 is recommended. And don't even think of a machine with less than 512 MB of RAM. The article strongly recommends 2 GB of memory if you can afford it, as near-future applications and OSes will demand that much.
  • On the storage front, if you are in the habit of archiving video or hoarding music on your hard disk, do consider a hard disk of at least 150 GB. The article recommends Western Digital's 150 GB Raptor drives for those on the lookout for better performance, and the 750 GB Seagate Barracuda for those after larger capacity. Both are costly, though.
  • And do go for a DVD writer over a CD-RW/DVD combo.
I remember reading an article on the best-value desktop PC in the most recent print edition of PCWorld (Indian edition). They selected the "HCL Ezeebee Z991 Core2 Duo" branded PC as the best buy from among a number of branded PCs. It sports an Intel Core 2 Duo E6300 processor, 512 MB of DDR2 RAM, an optical DVD-RW drive and a 160 GB SATA hard disk.

Something I have noticed is that in India, the PCs that are advertised sport just enough memory for current needs. In fact, sellers habitually skimp on memory. Every day, I see at least 3 to 4 advertisements for PCs with just 256 MB of memory, and in one or two cases a measly 128 MB. The rule of thumb to follow is: the more memory, the better.

A peep into how Compact Discs are manufactured

Ever wondered how a CD, aka compact disc, is manufactured? There is a whole string of tasks involved. It starts with the creation of an original master disc made of glass. During the process, the glass disc is treated with two chemicals - a primer and a photoresist coating. The photoresist coating on the glass surface is then dried in an oven for 30 minutes. Next, the data that goes on the CD is etched into the coating, and the glass is electrocoated with a thin layer of nickel and vanadium. After a few more steps, what you have is a die - a master copy. The CDs you hold in your hand are manufactured from this master; they are not made of glass but of liquid polycarbonate, which is injected into a mold to create the discs.

One thing worth noting is that there are two different kinds of CDs. One is the recordable, or blank, CD; the other is the pressed CD, in which the data is stamped directly onto the disc at the time of its creation. An example of pressed CDs are the ones you get along with IT magazines.

I found this short video of CD manufacturing quite informative. The video clip details the creation of a pressed CD.

Update (Feb 14th 2007): The Youtube video clip embedded here has been removed as I have been notified by its real owners that the video clip is copyrighted.

Ifconfig - dissected and demystified

ifconfig - the ubiquitous command bundled with any Unix/Linux OS - is used to set up any or all of the network interfaces connected to your computer, such as Ethernet, wireless, modem and so on. The ifconfig command provides a wealth of knowledge to anyone who takes the time to look at its output. Commonly, ifconfig is used for the following tasks:

1) Configuring an interface - be it an Ethernet card, a wireless card, the loopback interface or any other. For example, in its simplest form, to set the IP address of your Ethernet card, you pass the necessary options to ifconfig as follows:
# ifconfig eth0 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255 up
Here, 192.168.0.1 is the IP address of your machine (I have used a private IP address), 255.255.255.0 denotes the network mask, which decides the potential size of your network, 192.168.0.255 denotes the broadcast address and, lastly, the 'up' keyword is the flag which activates the interface, making it ready to receive and send data.

2) Gathering data about the network of which your computer is a part.
When used without any parameters, ifconfig shows details of the network interfaces that are up and running on your computer. On my machine, which has a single Ethernet card and a loopback interface, I get the following output:

eth0 Link encap:Ethernet HWaddr 00:70:40:42:8A:60
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:160889 errors:0 dropped:0 overruns:0 frame:0
TX packets:22345 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33172704 (31.6 MiB) TX bytes:2709641 (2.5 MiB)
Interrupt:9 Base address:0xfc00

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:43 errors:0 dropped:0 overruns:0 frame:0
TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3176 (3.1 KiB) TX bytes:3176 (3.1 KiB)
As you can see, it throws up a lot of data, most of it providing one detail or another. Let's look at the data spewed out by ifconfig, field by field, for the Ethernet device.
  • Link encap:Ethernet - This denotes that the interface is an ethernet related device.
  • HWaddr 00:70:40:42:8A:60 - This is the hardware address, or MAC address, which is unique to each Ethernet card manufactured. Usually, the first half of this address contains the manufacturer code, common to all Ethernet cards from the same manufacturer, and the rest denotes the device ID, which should never be the same for any two devices from that manufacturer.
  • inet addr - indicates the machine IP address
  • Bcast - denotes the broadcast address
  • Mask - is the network mask which we passed using the netmask option (see above).
  • UP - This flag indicates that the kernel modules related to the Ethernet interface have been loaded.
  • BROADCAST - Denotes that the Ethernet device supports broadcasting - a necessary characteristic for obtaining an IP address via DHCP.
  • NOTRAILERS - Indicates that trailer encapsulation is disabled. Linux usually ignores trailer encapsulation, so this value has no effect at all.
  • RUNNING - The interface is ready to accept data.
  • MULTICAST - This indicates that the Ethernet interface supports multicasting. Multicasting can best be understood by analogy with a radio station: multiple devices can capture the same signal from the station, but only if they tune to a particular frequency. Multicast allows a source to send packets to multiple machines, as long as those machines are watching out for them.
  • MTU - Short for Maximum Transmission Unit, this is the size of each packet handled by the Ethernet card. The MTU of an Ethernet device defaults to 1500, though you can change the value by passing the necessary option to ifconfig. Setting it too high risks packet fragmentation or buffer overflows. Do compare the MTU of your Ethernet device with that of the loopback device; usually, the loopback device has a larger packet length.
  • Metric - This option can take a value of 0, 1, 2, 3..., with lower values carrying more weight. The value of this property decides the priority of the device, and it has significance only when routing packets. For example, if you have two Ethernet cards and you want to force your machine to use one card over the other when sending data, you can set the Metric value of the favored card lower than that of the other. I am told that in Linux, setting this value using ifconfig has no effect, as Linux uses the metric value in its routing table to decide priority.
  • RX packets, TX packets - The next two lines show the total numbers of packets received and transmitted, respectively. As you can see in the output, the total errors are 0, no packets are dropped and there are no overruns. If you find the errors or dropped counts greater than zero, it could mean that the Ethernet device is failing or there is congestion in your network.
  • collisions - The value of this field should ideally be 0. A value greater than 0 could mean that packets are colliding while traversing your network - a sure sign of congestion.
  • txqueuelen - This denotes the length of the transmit queue of the device. You usually set it to smaller values for slower devices with high latency, such as modem links and ISDN.
  • RX bytes, TX bytes - These indicate the total amount of data that has passed through the Ethernet interface in each direction. Taking the above example, I can fairly assume that I have used up 31.6 MiB downloading and 2.5 MiB uploading, a total of about 34 MiB of bandwidth. As long as network traffic is being generated via the Ethernet device, both the RX and TX byte counts will keep increasing.
  • Interrupt - From this data, I learn that my network interface card is using interrupt number 9. This is usually set by the system.
The values of almost all the fields listed above can be modified using the requisite ifconfig options. For example, you can pass the 'trailers' option to ifconfig to enable trailer encapsulation, or change the packet size using the 'mtu' option along with a new value, and so on. But in the majority of cases, you simply accept the default values.
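
For example (eth0 is assumed to be the interface in question; run these as root):

# ifconfig eth0 mtu 1400
# ifconfig eth0 down
# ifconfig eth0 up

The first command changes the packet size to 1400 bytes; the last two take the interface down and bring it back up again.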

Learning to use the right command is only a minuscule part of a network administrator's job. The major part is analyzing the data returned by the command and arriving at the right conclusions.

LinuxBIOS - A truly GPLed Free Software BIOS

A few months back, I posted an article about the BIOS which described its functions. BIOS is an acronym for Basic Input/Output System, and it is the starting point of the boot process on your computer. But one of the disadvantages of the proprietary BIOSes embedded in most PCs is that a good amount of their code exists to support legacy operating systems such as DOS, and the end result is a longer time taken to boot up and pass control to the resident operating system.

This time can be significantly reduced if the code pertaining to legacy OSes is removed - especially if you intend to install and use a modern OS, which tends to do all the hardware probing and load its own hardware drivers anyway. So on a PC running a modern OS such as one of the BSDs, Linux or Windows, the BIOS is doing nothing but providing information, and much of the information it provides will not even be used. On such machines, all the BIOS really has to do is load the bootstrap loader, or bootloader, and pass control to the resident OS.

One project which intends to give BIOS makers such as Phoenix and Award a run for their money is the LinuxBIOS project. LinuxBIOS aims to replace the normal BIOS found on PCs, Alphas, and other machines with a Linux kernel that can boot Linux from a cold start; the trick is to use an embedded Linux kernel to load the main OS. Some of the benefits of LinuxBIOS over the more common BIOSes, as listed on their website, are as follows (and I quote):

  • 100% Free Software BIOS (GPL)
  • No royalties or license fees!
  • Fast boot times (3 seconds from power-on to Linux console)
  • Avoids the need for a slow, buggy, proprietary BIOS
  • Runs in 32-Bit protected mode almost from the start
  • Written in C, contains virtually no assembly code
  • Supports a wide variety of hardware and payloads
  • Further features: netboot, serial console, remote flashing, ...
The LinuxBIOS project has been making rapid inroads into general acceptance by many computer manufacturers. One of its major breakthroughs was being selected by the One Laptop per Child project for inclusion in its laptop meant for use by children. But the hot news fresh out is that Google - the search engine giant - has jumped into the fray by deciding to sponsor the LinuxBIOS project. As of now, LinuxBIOS supports a total of 121 motherboards from 58 vendors.

You can watch a video of LinuxBIOS booting Linux on a rev board below:


Is Free Software the future of India? Steve Ballmer CEO of Microsoft answers...

The occasion was a talk show hosted by NDTV 24x7 - a premier cable television news channel in India. The discussion centered on the topic "Bridging the digital divide between the urban rich and rural poor in India". The panel was composed of distinguished personalities including Steve Ballmer - the CEO of Microsoft, N.R. Narayana Murthy - Chairman of Infosys Technologies, Ashok Jhunjunwala - professor of Electrical Engineering at IIT Chennai - and Malvinder Mohan Singh - the chief executive and MD of Ranbaxy Laboratories. The talk was hosted by NDTV's Prannoy Roy. The very first question asked of Steve Ballmer was the following: Is Free Software the future of India?

Taking care not to use the words "Free Software", Mr Ballmer conceded that a number of revenue streams - including those from selling hardware, internet connectivity and software - are important. He went on to say, "As rich and good as bridging the digital divide may be, software companies should look forward to three or four sources of income. Many revenues for software companies will come not from any one thing but will include subscription fees, lower cost hardware, advertising and of course traditional transactions (read proprietary software)". He does agree that "prices must come down", though it was plain to see him take care not to use the word "FREE" in his answer.

Another question posed to him was "Is bridging the rural divide all about money?". Mr Ballmer answered by saying, "It is not just about money, but it is also not about short term profits". In short, Microsoft is looking for long term profits.

And when asked, "The American government spearheads democracy. Are American businesses in tune with that?", he answered as follows: "Any multi-national should behave appropriately and lawfully in any country in which it does business. But our primary aim is to have a generally more helpful participation in the world economy". He went on to say, "You can do three things ... you can stay in and do nothing, stay in and have a point of view, or stay out".

Watching the talk show, I could not help thinking that Microsoft is more or less resigned to the fact that Open Source and Free Software are here to stay, and whatever one might do, one cannot easily wish them away. If you can't beat them, join them is the new mantra at Microsoft - the recent news of Microsoft's acquisition (sic) of (um... partnership with) Novell being a case in point. But I was left with the feeling that Microsoft needs to be more honest and forthright in acknowledging the very important part that Free Software and Linux play in the overall big picture in IT. Steve Ballmer was on a three day visit to India; his itinerary included calling on the Indian Prime Minister Dr Manmohan Singh to discuss Microsoft's future plans for India.

Book Review: Ubuntu Hacks

I recently got hold of a very nice book on Ubuntu called Ubuntu Hacks, co-authored by Kyle Rankin, Jonathan Oxer and Bill Childers. This is the latest in the Hacks series of books published by O'Reilly. They made a rough cut version of the book available online ahead of schedule, which is how I got hold of it, but as of now you can also buy the book in print. Put in a nutshell, this book is a collection of around 100 tips and tricks, which the authors choose to call hacks, explaining how to accomplish various tasks in Ubuntu Linux. The so-called hacks range from the downright ordinary to the other end of the spectrum of doing specialised things.

The book is divided into 10 chapters, each containing a collection of hacks on a particular topic.

In the first chapter, titled Getting Started, the authors explain how to install Ubuntu on a Mac and a Windows PC, move data such as mail from Windows Outlook Express to Ubuntu, set up a printer and more. This chapter contains a total of 14 hacks, and my favorite is the one where the authors explain how to create a customized version of the Ubuntu Live CD containing one's favourite applications.

The second chapter dwells on topics related to customizing the Ubuntu desktop. Here the authors give tips on installing Java, customizing the Ubuntu desktop, installing additional window managers and synchronizing one's PDA and Pocket PC, just to name a few. This chapter contains around 27 tips. My favourite one here would be how to create PDF files by using the print command from any application in Ubuntu.

Ubuntu, like other mainstream GNU/Linux distributions, is encumbered by the patent restrictions related to various popular multimedia file formats. The net result is that one cannot play multimedia files like MP3, WMV or QuickTime in a default Ubuntu installation. In the chapter titled "Multimedia", one gets to know how to enable the audio and video applications bundled with Ubuntu to play these restricted media files. Topics like CD ripping, playing encrypted DVDs and playing almost any media format using the ever-popular MPlayer are also explained in simple detail. But the hack which takes the prize is the one explaining how to buy songs at the iTunes music store and download the music on Linux.

Laptop users have some advantages as well as disadvantages over people using desktops. And considering that the number of laptop users is ever increasing, there is a need to explain how to configure and take care of one's laptop running Ubuntu - prolonging the battery life, configuring the wireless card, hibernating, setting up a bluetooth connection and so on. The 4th chapter contains around 8 detailed tips dealing with these interesting laptop-related topics. I really liked the tip explaining how to make one's laptop settings roam with one's network, which could be quite useful for people who are always on the move.

Chapter five of this well structured book deals exclusively with configuring and fine tuning X11 - the X Window System. Here one gets to know how to configure one's mouse the old fashioned way by editing the requisite section in the X configuration file. As an example, the authors elaborate on the special case of configuring a seven button mouse with a tilting scroll wheel to work properly in Ubuntu. This chapter additionally contains a slew of tips for configuring difficult-to-configure hardware such as touch pads, setting up dual head displays, installing and configuring the Nvidia, ATI and Matrox proprietary graphics drivers to work in Ubuntu, and more.

The next chapter, titled "Package Management", has a collection of tips on managing packages. Over and above explaining how to install, remove and update packages using apt-get, Synaptic and Adept, this chapter also contains tips on building one's own Ubuntu package from source, caching packages locally and more. I found the hack where the authors explain how to create one's own Ubuntu package repository especially informative.

The seventh chapter dwells exclusively on security. Ubuntu for the desktop usually comes with all ports closed by default, which makes it relatively secure. But in these times of cheap high speed Internet access, when a home network is connected to the Internet at all times, it is always prudent to run a firewall on one's machine. In this chapter, the authors explain how to set up a robust firewall using iptables and Firewall Builder and then manage it from the Ubuntu desktop. But that is not all: there are tips on configuring sudo to limit the permissions of different users, where one gets to know how to do it the command line way. My favourite tip in this chapter is the one which explains how to encrypt the file system to protect important data. This chapter contains a total of six in-depth hacks, all related to enhancing the security of a machine running Ubuntu.
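To give a flavour of the kind of firewalling covered there, here is a minimal iptables sketch of my own (not taken from the book) which drops all inbound traffic except loopback, replies to outgoing connections, and SSH; run as root:

iptables -P INPUT DROP                                            # default policy: drop inbound packets
iptables -A INPUT -i lo -j ACCEPT                                 # allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to connections we initiated
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                     # allow incoming SSH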

Ubuntu developers have always persevered in providing easy to use front-ends for the most common system administration tasks - be it creating additional user accounts or managing the services running on one's machine. But at times the user is forced to do system administration the command line way. In the chapter titled "Administration", the authors explain, for instance, how to compile a kernel from source the Ubuntu way, as well as ways of installing multiple copies of one kernel version on the same machine, which can be useful for testing purposes. There are tips on taking backups as well as restoring them. I found the hack titled "Rescuing an unbootable system" really useful; it is in fact a collection of tips in which common rescue scenarios are elaborated. This chapter is full of very useful tips, as varied as synchronizing files between different machines (see the sketch below), mounting a remote filesystem, and even creating videos by capturing what is done on the desktop, which can be really handy to share with others when seeking help on a particular error.
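As an illustration of the file synchronization task mentioned above, here is a one-line sketch of my own (not the book's recipe; the hostname and paths are made up) using rsync over SSH:

rsync -avz --delete ~/documents/ user@remotebox:/home/user/documents/
# -a preserves permissions and timestamps, -v is verbose, -z compresses
# data in transit, and --delete removes remote files no longer present locally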

A virtual machine is a simulated computer-inside-another-computer, allowing one to boot an extra operating system inside the primary environment. The next chapter, titled "Virtualization and Emulation", explains the different virtualization and emulation technologies which allow one to run Windows/DOS applications and games in Ubuntu, run Ubuntu inside Windows and so on. Here the authors give in-depth, step-by-step walkthroughs of configuring and running virtualization and emulation technologies such as Xen, VMware Server and Wine, which makes this chapter particularly valuable.

The final chapter of this excellent book, the 10th, deals with setting up a small home/office server. Here one gets to know how to install and configure an Ubuntu server from scratch. Topics like setting up quotas to control disk space usage among users, setting up an SSH server, configuring the Apache web server, and building email, DHCP and DNS servers - all part and parcel of an office server setup - have been given due importance in this chapter.

All ten chapters combined, there are a total of 100 tips (oops! hacks) in this unique book, all based on the latest version of Ubuntu - Dapper Drake. What is worth noting is that one is not expected to read the book from cover to cover; rather, you can flip to the hack you are interested in and carry on from there, which makes this book a very good reference for setting up and configuring all things Ubuntu. At this point, one might wonder whether many of the solutions listed in this book aren't already available on the net in popular Linux/Ubuntu forums. True, with some searching one might be able to find what one is looking for. But if you ask me, it is always nice to have something tangible in one's hands while reading instead of having to stare at the monitor for hours on end. Moreover, each and every tip in this book has been tested by the authors on the latest version of Ubuntu (Dapper Drake) and is guaranteed to work. In writing this book, the authors have clearly put in a lot of hard work in covering all facets of configuring this popular Linux distribution, which makes it a worthwhile buy.

Book Specifications
Name : Ubuntu Hacks
Authors : Kyle Rankin, Jonathan Oxer and Bill Childers
ISBN No: 0-596-52720-9
No of pages: 447
Price : Check at Amazon.com or compare prices of the book.
Rating: Very good

Note to readers: I wrote this book review for Slashdot. You can read the original piece here.

A list of Ubuntu/Kubuntu repositories

Back when I was using Red Hat (Fedora), one of my favourite repositories was Dag Wieers', not only because the official Red Hat repository was dead slow due to excess traffic but also because it contained a number of additional RPM packages which were missing from the official repositories, such as those with support for proprietary file formats. It was the culmination of my search for additional repositories to include in my Yum configuration file.

Nowadays this is not a problem at all, especially when you are using Ubuntu, as the repositories have been demarcated into different sections such as Universe and Multiverse depending upon the type of packages available in each - for example, whether a package is released under a free license or a proprietary one. It is then only a matter of enabling the desired repository and using apt-get to install the requisite package. Still, it doesn't hurt to have a number of additional repositories apart from the ones provided officially by Ubuntu. Trevino has compiled an exhaustive collection of repositories for Ubuntu and Kubuntu which you can include in your /etc/apt/sources.list file. A word of caution is in order though: since these are unofficial repositories, it is difficult to verify the integrity of the packages. So use them at your own risk.
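For reference, each entry in /etc/apt/sources.list is a single line of the form 'deb <repository URL> <distribution> <component(s)>'. As an example, enabling the official Universe and Multiverse sections for Dapper looks like this (the unofficial repositories in the compilation follow the same pattern, each with its own URL):

deb http://archive.ubuntu.com/ubuntu dapper universe multiverse

After editing the file, run 'apt-get update' to refresh the package index.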

Learning to use netcat - The TCP/IP Swiss army knife

nc - short for netcat - is a very useful tool available on all POSIX OSes which allows one to transfer data across the network via TCP/UDP with ease. The principle is simple: there is a server mode and a client mode. You run netcat as a server listening on a particular port on the machine which sends the data, and you run netcat as a client on the other machine, connecting to that port. The basic syntax of netcat is as follows:

For the server :
nc -l <port number>
... where the -l option stands for "listen", and the client connects to the server machine as follows:
nc <server ip address> <port number>
And in what ways can you put it to use? Here are a few:
  • You can transfer files by this method between remote machines.
  • You can serve a file on a particular port on a machine and multiple remote machines can connect to that port and access the file.
  • Create a partition image and send it to the remote machine on the fly.
  • Compress critical files on the server machine and then have them pulled by a remote machine.
  • And you can do all this securely using a combination of netcat and SSH.
  • It can even be used as a port scanner, via the -z option.
To see how all the above tasks are accomplished, check out the very nice compilation by G. Notaras, who provides a number of netcat examples. Just remember, the actual command is called 'nc' and not netcat.
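To give a taste, here is a minimal sketch of two of the uses listed above - serving a file to a remote machine, and a quick port scan. The hostnames, port number and file name are made up, and note that some netcat variants expect the listening port after a -p flag:

nc -l 1234 < backup.tar.gz                   # on the serving machine: listen on port 1234 and feed it the file
nc server.example.com 1234 > backup.tar.gz   # on the receiving machine: connect and save what arrives
nc -z -v target.example.com 20-80            # scan ports 20 through 80; -z sends no data, -v reports each port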

Copyright © Sun solaris admin