...making Linux just a little more fun!

March 2009 (#160):


Mailbag

This month's answers created by:

[ Ben Okopnik, Kapil Hari Paranjape, René Pfeiffer, Neil Youngman, Rick Moen, Suramya Tomar, Thomas Adam ]
...and you, our readers!

Our Mailbag


Can you give your suggestion on this?

kathik kumar [karthikgiri1984 at gmail.com]


Thu, 26 Feb 2009 18:26:18 +0530

Hi,

I have installed ipsec-tools on my Linux server. I am observing some issues while testing.

These are the steps I did:

============================
 
   See http://ipsec-tools.sourceforge.net for installation (tar file location).
 
   a. Copy the file *.tar.gz to your Linux machine.
   b. Untar the file: tar xvzf *.tar.gz
   c. Then issue ./bootstrap
   d. Issue ./configure
   e. Go to the src directory; setkey is installed there.
   f. setkey is available in the /sbin directory, so I changed the name and
copied the setkey directory to /sbin.
   g. Then created the set-key file.
 
[root@linux scripts]$ cat set-key
#!/sbin/setkey -f
flush;
 
spdflush;
 
add 10.0.0.1 11.0.0.2 ah 500 -A hmac-md5 0xabcd1234abcd1234abcd1234abcd1234;
add 11.0.0.2 10.0.0.1 ah 600 -A hmac-md5 0xabcd1234abcd1234abcd1234abcd1234;
 
#add 10.0.0.1 11.0.0.2 ah 500 -A hmac-sha1
0xabcd1234abcd1234abcd1234abcd1234abcd1234;
#add 11.0.0.2 10.0.0.1 ah 600 -A hmac-sha1
0xabcd1234abcd1234abcd1234abcd1234abcd1234;
 
add 10.0.0.1 11.0.0.2 esp 500 -E 3des-cbc
0x1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd -A hmac-md5
0x1234abcd1234abcd1234abcd1234abcd;
add 11.0.0.2 10.0.0.1 esp 600 -E 3des-cbc
0x1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd -A hmac-md5
0x1234abcd1234abcd1234abcd1234abcd;
 
spdadd 10.0.0.1/32[any] 11.0.0.2/32[any] any -P out ipsec
ah/transport//require esp/transport//require;
spdadd 11.0.0.2/32[any] 10.0.0.1/32[any] any -P in ipsec
ah/transport//require esp/transport//require;
 
spdadd 10.0.0.1/32[any] 11.0.0.2/32[any] any -P out ipsec
ah/tunnel/10.0.0.1-11.0.0.2/require;
spdadd 11.0.0.2/32[any] 10.0.0.1/32[any] any -P in ipsec
ah/tunnel/11.0.0.2-10.0.0.1/require;
 
  h. The issue is: the file has full permissions, but it says permission
denied.
 
-rwxrwxrwx  1 root     root 1054 Feb 26 04:44 set-key
[root@linux scripts]$ ./set-key
./set-key: Permission denied.  -------------->>>>>>>>>>ISSUE
[root@linux scripts]$

Could you please let me know about this issue?

Thanks -Karthik
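A quick diagnostic sketch, for what it's worth: when a script that has execute permission still reports "Permission denied", the interpreter named on its shebang line is often the real problem - and step (f) above suggests a directory, rather than the setkey binary itself, may have ended up in /sbin. The src/setkey/setkey path below is an assumption based on a typical ipsec-tools build tree:

# Check what /sbin/setkey actually is - it must be an executable file
ls -ld /sbin/setkey
file /sbin/setkey
# The interpreter named here must match a working binary
head -1 ./set-key
# If /sbin/setkey turns out to be a directory, move it aside and install the built binary
mv /sbin/setkey /sbin/setkey.dir && cp src/setkey/setkey /sbin/ && chmod 755 /sbin/setkey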

[ Thread continues here (15 messages/15.52kB) ]


[Apertium-stuff] English->Esperanto language pair released

Jimmy O'Regan [joregan at gmail.com]


Wed, 4 Feb 2009 18:12:14 +0000

---------- Forwarded message ----------

From: Jacob Nordfalk <jacob.nordfalk@gmail.com>
Date: 2009/2/4
Subject: [Apertium-stuff] English->Esperanto language pair released
To: Apertium-stuff <apertium-stuff@lists.sourceforge.net>

(English below / pli sube en la angla)

Ni ĵus publikigis novan lingvoparon, de la angla al Esperanto, apertium-eo-en. Temas pri la 27a publikigita aŭtomata tradukilo evoluiigita surbaze de la liber-programara motoro kaj ilaro por maŝina tradukado Apertium, la 4a en la angla kaj la 3a en Esperanto.

Oni povas jam nun provi la paron ĉe: http://www.apertium.org/index.php?id=translatetext&lang=eo

La komencaj celoj por la eldono de tiu versio estis:

* Por nescianto de la angla, esti kapabla kompreni la enhavon de
novaĵo pri ĝenerala temo.
* Esti kapabla kompreni, ĉu la koncerna novaĵo estas sufiĉe interesa
por esti konvene tradukita.
* Mallongaj frazoj devus esti akcepteble tradukitaj.

Ni kredas, ke ni atingis tiujn celojn sufiĉe akcepteble kaj ke la rezulto povas esti utila.

Entute, temas pri 'komunume' disvolvita pakaĵo, t.e. evoluigita per volontula laboro.

==Estonto==

La sekvaj paŝoj estos en diversajn direktojn:

'' - Plibonigi la aktualan version. - Labori pri la tradukilo de Esperanto en la anglan. - Evoluigi novajn ilojn surbaze de la tradukilo kaj la leksikonoj. ''

==Agnoskoj==

Koran dankon al ĉiuj evoluigintoj de la multaj malferma-fontaj iloj kaj liberaj lingvo-rimedoj, kiuj permesis starigi ĉi tiun novan tradukilon. Precipe dankon al ABC Enciklopedioj S.L., IULA of the Universitat Pompeu Fabra kaj Prompsit Language Engineering, S.L por la lingvoparoj Catalan->Esperanto kaj Spanish->Esperanto kiu donis la elirpunkto por tiu ĉi paro.

Koran dankon al Mike Leon (traduku.net), kiu liveris grandegan leksikonon kun 114 082 paroj da vortoj kaj idiomaĵoj.

Koran dankon al Francis Tyer, Prompsit Language Engineering, kaj Jimmy O'Reagan pro la konsiloj kaj pro la dizajno de la kerno de la sistemo de transdon-reguloj.

==Statistikoj==

;Leksiko:

'' Esperanto-vortaro: 22 462 lemoj English-vortaro: 13 733 lemoj Dulingva vortaro: 30 773 duopoj ''

;Reguloj:

'' 1a ŝtupo (chunk): 44 2a ŝtupo (inter-chunk): 8 3a ŝtupo (post-chunk): 6 ''

==Teknikaj notoj==

Vidu http://wiki.apertium.org/wiki/English_and_Esperanto

We've just released a new language pair, for English to Esperanto, apertium-eo-en. It's the 27th released automatic translator developed with the open-source Apertium machine translation engine, the 4th from English and the 3rd to Esperanto.

The pair is already available at http://www.apertium.org/index.php?id=translatetext&lang=eo

The stated release goals for this version were:

[ ... ]

[ Thread continues here (1 message/6.11kB) ]


can we assume that 1 block = 1 kibibyte when implementing Linux quota?

Mulyadi Santosa [mulyadi.santosa at gmail.com]


Fri, 27 Feb 2009 15:13:23 +0700

Hi all

As the subject says, is that assumption true? I ask because I am confused about whether Linux quota refers to filesystem blocks or to some other kind of block.

thanks in advance...

regards,

Mulyadi.

[ Thread continues here (2 messages/0.91kB) ]


Financial Post Story

Rick Moen [rick at linuxmafia.com]


Sat, 28 Feb 2009 14:35:33 -0800

Quoting Serge G. (galabs2000@gmail.com):

>    IT IS NOT A SPAM, but if you received that message second and plus
>    time JUST CLICK DELETE button and have a nice day.

Uh-huh.

[Stuff about cloning, from a possibly mentally ill individual snipped, about which see: http://jeffvrabel.com/2008/11/24/scarlet[...]ly-troubled-e-mail-i-have-just-received/]

In the headers, one notes:

 Received: from localhost ([127.0.0.1]:48981 helo=linuxmafia.com)
          by linuxmafia.com with esmtp   (Exim 4.61 #1 (EximConfig 2.0))
          id 1LdUqI-00032b-NR   ; Sat, 28 Feb 2009 11:27:27 -0800
 Received: from sendmail.canada.com ([199.71.40.31]:32455)
         by linuxmafia.com with esmtp   (Exim 4.61 #1 (EximConfig 2.0))
         id 1LdUpa-00032O-1z
         for <tag@lists.linuxgazette.net>; Sat, 28 Feb 2009 11:27:08 -0800
 Received: from gmail.com ([10.146.8.73]) by Sendmail.Canada.Com with Microsoft
         SMTPSVC(6.0.3790.3959); Sat, 28 Feb 2009 14:27:51 -0500
 From: "Serge G." <galabs2000@gmail.com>

So, the headers claim that it came from a typical throwaway GMail account (though that might be a lie), but in any event the mail somehow ended up at Canada.com, which is a commercial outfit billed as "Canadian news, entertainment, television, newspapers, free email and more". So, in addition to blacklisting the (alleged) GMail sender, I attempted to contact "abuse@canada.com" and "postmaster@canada.com", both of which are RFC-required mail drops for any domain that does SMTP mail.

Why? Because I really don't like having to blacklist entire large mail concerns, just because one bad apple sneaked abusive mail through. Those mail drops are the canonical places to get a provider's attention to exactly such problems. So, what happened next?

  This message was created automatically by mail delivery software. 
 
  A message that you sent could not be delivered to one or more of its 
  recipients. This is a permanent error. The following address(es) failed:
 
  postmaster@canada.com                                                         
     SMTP error from remote mail server after RCPT TO:<postmaster@canada.com>:
    host mx1.canada.vsi.net [64.22.133.10]: 550 Mailbox disabled for this recipient - <postmaster@canada.com>                                            
   abuse@canada.com
   SMTP error from remote mail server after RCPT TO:<abuse@canada.com>:        
    host mx1.canada.vsi.net [64.22.133.10]: 550 Account temporarily suspended   due to inactivity. User must login to reactivate mailbox.                      

So, wow! They've decided to disable postmaster, and to ignore the abuse mailbox to the point where the software automatically disabled delivery. Well, if they want to completely disregard key RFC requirements on all SMTP senders -- requirements so key that you need to go out of your way to disable them in commodity SMTP software -- then my server doesn't need to hear from them again.
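For the curious, those required role mailboxes can be checked by hand with a plain SMTP session; a minimal sketch (the MX host is the one named in the bounce above, and the null envelope sender means nothing actually gets delivered):

$ nc mx1.canada.vsi.net 25
EHLO example.org
MAIL FROM:<>
RCPT TO:<postmaster@canada.com>
QUIT

A 550 in reply to the RCPT TO line is exactly the refusal quoted in the bounce message.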

[ ... ]

[ Thread continues here (2 messages/5.38kB) ]


How to make wget exclude a particular link when mirroring

Suramya Tomar [security at suramya.com]


Wed, 04 Feb 2009 18:36:50 +0530

Hey Everyone,

I am trying to mirror an Invision Powerboard forum locally on my system (With permission from the admin) using wget and I am having issues.

When I start downloading, wget visits each and every link and makes a local copy (like it's supposed to), but in the process it also visits the "Log out" link, which logs me out from the site; after that, I am unable to download the remaining links.

So I need to figure out how to exclude the logout link from the process. The logout link looks like www.website.com/index.php?act=Login&CODE=03 so I tried the following:

wget -X "*CODE*" --mirror --load-cookies=/var/www/cookiefile.txt 
http://www.website.com

but it didn't work.

I can't exclude the index.php itself because all the links are based off the index.php with parameters.

I have tried searching the web but didn't find anything relevant.

Any ideas on how to do it?

Thanks,

Suramya
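One possible approach, offered as a sketch rather than a tested answer: wget's -R/--reject matching is applied to the file-name portion of a URL and is unreliable with query strings, but later wget releases (1.14 and newer - i.e., newer than what was current when this question was asked) added --reject-regex, which is matched against the complete URL:

wget --mirror --load-cookies=/var/www/cookiefile.txt \
     --reject-regex 'act=Login' http://www.website.com/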

[ Thread continues here (30 messages/51.32kB) ]


Linux Groups?

Rex Backus [rabackus at sonic.net]


Sat, 28 Feb 2009 12:45:32 -0800

Hi Gang,

In SUSE Linux Unleashed (pub 2005) on p20, there is a listing for Groups of Linux Users Everywhere. The URL referenced is broken. Does this list still exist, and if so where?

[http://glue.linuxgazette.com/]

Thanks,

Rex Backus

[ Thread continues here (3 messages/2.56kB) ]


email post on LG

Ben Okopnik [ben at linuxgazette.net]


Mon, 23 Feb 2009 13:06:52 -0500

----- Forwarded message from Danny Holstein <dgholstein@embarqmail.com> -----

Date: Sun, 22 Feb 2009 12:12:56 -0500 (EST)
From: Danny Holstein <dgholstein@embarqmail.com>
To: ben@linuxgazette.net
Subject: email post on LG
Ben;

I read a post of yours on mass emailing on Linux Gazette -- I found the post since I was looking for information on mass emailing in Linux. Since we have a small gourmet foods shop in New Jersey and have compiled a customer list of nearly 1000 emails, we figured we'd use it to announce a sale and specials; nothing nefarious about our intentions. It turns out kmail and evolution aren't well-suited to the task: a significant proportion of the addresses have been changed (earthlink/mindspring -> embarqmail), were mistyped into the database, or have simply been abandoned; we got about a 15% bounce rate.

I imagine a good mass mailer would have hooks for database import (MySQL on a LAMP server) and have a means for "fixing" the database based on bounces and "UNSUBSCRIBE" requests.

Regards;

Danny Holstein

----- End forwarded message -----

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *
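For a list of this size, a dedicated list manager such as GNU Mailman is the usual answer, since it already handles bounces and unsubscribe requests. Purely as a sketch of the database-import side - assuming a MySQL database 'shopdb' with a 'customers' table and an 'email' column (names invented for the example), and the announcement in a plain-text file:

mysql -N -B -e 'SELECT email FROM customers' shopdb |
while read addr; do
    mutt -s "Spring specials" "$addr" < announcement.txt
    sleep 2    # don't hammer the outgoing mail server
done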

[ Thread continues here (5 messages/6.63kB) ]


Hey James, Looking for a MySql DBA

Rick Moen [rick at linuxmafia.com]


Thu, 26 Feb 2009 18:08:36 -0800

[[[ Original message elided from the thread. -- Kat ]]]

Quoting NeoHire (Jobs) (jobs@neohire.com):

> Hey James 
> 
> I am helping a company in Santa Barbara find a MySql DBA for their company.
> They are privately held and doing very well. 
> 
> Take a look @ the role, and let me know your thoughts. 

"tag@lists.linuxgazette.net" has never been the address of Jim Dennis. It's always been the address of The Answer Gang, a public mailing list associated with _Linux Gazette_ magazine. Jim Dennis was for a very long time a member of that mailing list, but hasn't been subscribed for some years.

You might have gotten a clue about it being a mailing list from the word "list" as the hostname, by the way.

Please make sure you excise "tag@lists.linuxgazette.net" from all of your records as a means of reaching Jim: it simply never was that, and isn't now.

What you just did is the Internet equivalent of walking into a crowded cinema during a movie and yelling out "Hi, I heard Jim Dennis might be here. I want to talk to him about a job opening."

You really don't want to be known for that sort of thing. You really don't.


Disk Errors causing machine to freeze

Ben Whyte [ben at whyte-systems.co.uk]


Sat, 07 Feb 2009 11:55:03 +0000

Hi

Every now and again, when I am writing to my disk, the machine stops responding for a period. When I look in syslog, I see messages like this:

Feb  7 11:18:02 thor kernel: [ 1515.415879] ata1.00: exception Emask 0x0
SAct 0x0 SErr 0x0 action 0x6 frozen
Feb  7 11:18:02 thor kernel: [ 1515.415896] ata1.00: cmd
35/00:e0:57:5c:8e/00:03:08:00:00/e0 tag 0 dma 507904 out
Feb  7 11:18:02 thor kernel: [ 1515.415899]          res
40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb  7 11:18:02 thor kernel: [ 1515.415905] ata1.00: status: { DRDY }
Feb  7 11:18:02 thor kernel: [ 1515.415923] ata1: soft resetting link
Feb  7 11:18:03 thor kernel: [ 1515.964911] ata1.00: configured for UDMA/133
Feb  7 11:18:03 thor kernel: [ 1515.964911] ata1: EH complete
Feb  7 11:18:03 thor kernel: [ 1515.964911] sd 0:0:0:0: [sda] 1953523055
512-byte hardware sectors (1000204 MB)
Feb  7 11:18:03 thor kernel: [ 1515.964911] sd 0:0:0:0: [sda] Write
Protect is off
Feb  7 11:18:03 thor kernel: [ 1515.964911] sd 0:0:0:0: [sda] Mode
Sense: 00 3a 00 00
Feb  7 11:18:03 thor kernel: [ 1515.964911] sd 0:0:0:0: [sda] Write
cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb  7 11:21:07 thor kernel: [ 1709.741765] ata1.00: exception Emask 0x0
SAct 0x0 SErr 0x0 action 0x6 frozen
Feb  7 11:21:07 thor kernel: [ 1709.741780] ata1.00: cmd
35/00:e8:4f:76:ad/00:01:0a:00:00/e0 tag 0 dma 249856 out
Feb  7 11:21:07 thor kernel: [ 1709.741782]          res
40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb  7 11:21:07 thor kernel: [ 1709.741785] ata1.00: status: { DRDY }
Feb  7 11:21:07 thor kernel: [ 1709.741799] ata1: soft resetting link
Feb  7 11:21:08 thor kernel: [ 1710.444091] ata1.00: configured for UDMA/133
Feb  7 11:21:08 thor kernel: [ 1710.444091] ata1: EH complete
Feb  7 11:21:08 thor kernel: [ 1710.444091] sd 0:0:0:0: [sda] 1953523055
512-byte hardware sectors (1000204 MB)
Feb  7 11:21:08 thor kernel: [ 1710.444091] sd 0:0:0:0: [sda] Write
Protect is off
Feb  7 11:21:08 thor kernel: [ 1710.444091] sd 0:0:0:0: [sda] Mode
Sense: 00 3a 00 00
Feb  7 11:21:08 thor kernel: [ 1710.459700] sd 0:0:0:0: [sda] Write
cache: enabled, read cache: enabled, doesn't support DPO or FUA

So far this appears to cause the hard drive to accumulate errors and eventually leads to filesystem corruption.

Has anyone seen this sort of thing before, and can you shed any light on what might be happening?

This has been ongoing for a while now, and I have tried different drives and different cables.

Although it may not be immediately obvious, it's using SATA.

Thanks

Ben
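A common first step with symptoms like these - offered as a sketch, assuming the smartmontools package is installed and the drive is /dev/sda as in the log:

# Show the drive's SMART health, attributes, and error log
smartctl -a /dev/sda
# Start a long (surface) self-test; review later with 'smartctl -l selftest /dev/sda'
smartctl -t long /dev/sda

Rising reallocated or pending sector counts would point at the drive itself; a clean SMART record with the same timeouts points more toward the controller, cabling, or power.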

[ Thread continues here (6 messages/6.64kB) ]


GUI for idesk ??

J.Bakshi [j.bakshi at icmail.net]


Fri, 27 Feb 2009 22:32:22 +0530

Hello All,

Hope you all are well.

Are any of you using idesk at present?

I have no idea about its current status. Is there any GUI available to configure the .lnk files and .ideskrc? On my Debian Lenny, apt shows only idesk itself.

I would be happy if there were a GUI (preferably a .deb) which could configure idesk.

Thanks. Kindly CC me.

[ Thread continues here (4 messages/3.83kB) ]


free patent resource

[james at ryley.com]


Thu, 26 Feb 2009 22:42:56 -0500

Hi,

I saw that you had some trademark info at http://www.lg.ps.pl/issue97/TWDT.html. I don't know if you also have any interest in patents, but if so I wanted to suggest that you check out http://www.freepatentsonline.com and http://www.sumobrain.com

They are by far the best free patent searches on the web, and trademark data will shortly be added to them as well.

If you have a spot, a link would be great.

Sincerely, James

[ Thread continues here (2 messages/2.18kB) ]


Problems booting into Ubuntu 8.10

Deividson Okopnik [deivid.okop at gmail.com]


Thu, 12 Feb 2009 19:21:20 -0300

Hello everyone!

A student here brought me a problem I couldn't solve, so I'm forwarding it to TAG.

After a hibernation, his laptop froze and he had to do a hard shutdown; after that, Ubuntu stopped booting, giving him this kernel panic message:

[ 2.222518] Kernel panic - not syncing: VFS: Unable to mount root fs on
unknown-block(0,0)

After a few tries, he noticed that he can boot into other kernels he's got on his machine, but they don't work properly (weird beeps, auto-mount doesn't work, Compiz problems - that kind of stuff).

Here are some messages:

Mounting something without su:
DBus error org.freedesktop.DBus.Error.NoReply: Did not receive a reply.
Possible causes include: the remote application did not send a reply,
the message bus security policy blocked the reply,
the reply timeout expired, or the network connection was broken.
 
Starting compiz:
root@ricardo-laptop:~/a# compiz
Checking for Xgl: not present.
Detected PCI ID for VGA:
Checking for texture_from_pixmap: not present.
Trying again with indirect rendering:
Checking for texture_from_pixmap: present.
Checking for non power of two support: present.
Checking for Composite extension: present.
Comparing resolution (1280x800) to maximum 3D texture size (2048): Passed.
Checking for Software Rasterizer: present.
Software rasterizer detected, aborting and using fallback:
/usr/bin/metacity

--- Then several of the above errors ---

I checked his menu.lst, and here's the entry for the problematic kernel:

title		Ubuntu 8.10, kernel 2.6.27-11-generic
uuid		1ac63e0f-dff7-48f8-9506-ce783a5dd383
kernel		/boot/vmlinuz-2.6.27-11-generic root=UUID=1ac63e0f-dff7-48f8-9506-ce783a5dd383 ro locale=pt_BR quiet splash
initrd		/boot/initrd.img-2.6.27-11-generic
quiet

I noticed that weird root=UUID= on the kernel line, and changed it to root=/dev/sda2 UUID=**** but it didn't work.

Well, this is all for now - thanks for reading. - Deividson
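That particular panic usually means the kernel never found a usable initramfs (the root=UUID= syntax itself is normal on Ubuntu). One common recovery, sketched here on the assumption that the installed root really is /dev/sda2 as in the change attempted above, run from a live CD:

sudo mount /dev/sda2 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt update-initramfs -u -k 2.6.27-11-generic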

[ Thread continues here (12 messages/21.16kB) ]


Need for a 'viral' marketing campaign to introduce Linux (especially Ubuntu, but it doesn't matter which) to Windows sufferers

Charles Norrie [cbnorrie at hotmail.com]


Mon, 2 Feb 2009 07:54:56 +0000

I am concerned that despite the Conficker affair, the message about using Ubuntu isn't getting through especially to industry commentators. So I wrote to Mark Shuttleworth.

Dear Mark Shuttleworth,

Marketing Ubuntu

I am still delighted with Ubuntu, and believe me, after using computers for 40 years, it's quite the best OS/package I've ever come across.

But the message isn't getting out there. I became aware of how bad it was when I did a single-word Google News search for 'Conficker'.

You and I know that if only we could get people to put Linux on their PCs, their malware problems would vanish. But that message is not getting through.

Here are the results:

[ ... ]

[ Thread continues here (3 messages/16.93kB) ]


Device driver program

Maneesh M P [maneesh.poyil at gmail.com]


Wed, 11 Feb 2009 00:55:35 +0530

[[[ How surprising that this was sent in .html, eh? -- Kat, rolling her eyes in irony. ]]]

Hello,

I have read your article at http://linuxgazette.net/136/anonymous.html . It sounds interesting, and it was very informative too. Thanks for sharing it.

I have a requirement: I need a device driver program which reads Ctrl+K; this key sequence will send a SIGUSR1 signal to another application program, which will just do a multiplication or sort a few numbers once it receives the SIGUSR1 signal.

We have written the application program, which is pretty easy. Now we need a device driver program which reads Ctrl+M and sends the SIGUSR1 signal.

I will be really thankful if you could help me in this.

- Regards, Maneesh (മനീഷ്)
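Not a device driver, but if the actual goal is just "Ctrl+K delivers SIGUSR1 to a running program", a user-space route under X is far simpler; a sketch using xbindkeys, where 'myapp' is a placeholder for the program that installed the SIGUSR1 handler:

# ~/.xbindkeysrc  -- then run 'xbindkeys' to activate it
"pkill -USR1 -x myapp"
    control + k

This only covers an X session; catching the key system-wide on a text console is where a kernel-side approach would genuinely be needed.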

[ Thread continues here (9 messages/10.48kB) ]



Talkback: Discuss this article with The Answer Gang

Published in Issue 160 of Linux Gazette, March 2009

Talkback

Talkback:155/moen.html

[ In reference to "Book Review: Understanding Open Source and Free Software Licensing" in LG#155 ]

Jimmy O'Regan [joregan at gmail.com]


Fri, 27 Feb 2009 00:03:59 +0000

2008/9/22 Rick Moen <rick@linuxmafia.com>:

> the licensed work.)  Is it possible to donate a work of original
> ownership directly to the public domain, despite the lack of any legal
> mechanism for doing so?  Is it desirable to have a choice-of-law

Probably not, but you can now dedicate a work to 'the Commons': http://creativecommons.org/licenses/zero/1.0/

'The person who associated a work with this document has dedicated
this work to the Commons by waiving all of his or her rights to the
work under copyright law and all related or neighboring legal rights
he or she had in the work, to the extent allowable by law.'

[ Thread continues here (2 messages/3.20kB) ]


Talkback:158/vogel.html

[ In reference to "Using Hyperestraier to search your stuff" in LG#158 ]

Karl Vogel [vogelke+unix at pobox.com]


Fri, 20 Feb 2009 20:23:02 -0500 (EST)

Greetings:

I ran into some problems while searching for portions of words in one of my indexes. If the word "workstation" was present, I wanted to be able to search for (say) "orksta" without getting 0 hits. I can use substrings like leading or trailing asterisks in a command-line search by adding the "-sf" option:

  me% estcmd search -sf -vu -max 40 $db/srch "$pattern"

I'd rather avoid the wildcard stuff when doing a browser search using estseek.cgi, and I found that adding a synonym list works really well. To build a wordlist for synonyms, run "estcmd words" on your search index:

  me% estcmd words srch | awk '{print $1}' | sort -u > /path/to/synonyms

Here are the changes to estseek.conf:

# phraseform: specifies the phrase form.  "1" is usual form, "2"
# is simplified form, "3" is rough form, "4" is union form, "5"
# is intersection form.
phraseform: 2

# candetail: specifies whether to enable detail display of a
# document.  "true" or "false".
candetail: true

# candir: specifies whether to enable directory display of a
# document.  "true" or "false".
candir: true

# If you want query expansion, enable an outer command by editing
# qxpndcmd in estseek.conf.  It specifies the absolute path of
# a command which outputs synonyms of a word specified by the
# environment variable "ESTWORD".
qxpndcmd: /usr/local/share/hyperestraier/filter/myxpnd

Here's the expansion script "myxpnd":

#!/bin/ksh
# myxpnd: list synonyms

# set variables
LANG=C ; export LANG
LC_ALL=C ; export LC_ALL
PATH="/usr/local/bin:/bin:/usr/bin" ; export PATH

# show help message
case "$1" in
    --help) echo 'List synonyms of a word'; exit 0 ;;
    *)      ;;
esac

# list synonyms
exec fgrep "$ESTWORD" /path/to/synonyms

Here's part of a search form that works:

  <form method="get" action="estseek.cgi">
    <input type="text" name="phrase" value="" size="20" />
    <input type="hidden" name="phraseform" value="2" />
    <input type="hidden" name="perpage" value="10" />
    <input type="hidden" name="qxpnd" value="1" />
    <input type="hidden" name="navi" value="0" />
    <input type="submit" value="Search" />
  </form>
-- 
Karl Vogel                      I don't speak for the USAF or my company
 
WEDDING DRESS FOR SALE:  Worn once by mistake.  Call Stephanie.
                                                    --seen in the want-ads


Talkback:159/okopnik.html

[ In reference to "Hyperestraier Redux - A User-friendly Approach" in LG#159 ]

Ben Okopnik [ben at linuxgazette.net]


Tue, 3 Feb 2009 21:59:16 -0500

[ Karl, I hope you don't mind me copying this exchange to The Answer Gang; I'd like for the error that you pointed out to be noted in our next issue, and this is the best and easiest way to do it. If you have any further replies, please CC them to 'tag@lists.linuxgazette.net'. ]

On Tue, Feb 03, 2009 at 02:59:57PM -0500, Karl Vogel wrote:

> Very cool follow-up article!

Thanks, Karl; I appreciate that. That's a very, very fun program - again, thanks for introducing me to it!

[ ... ]

[ Thread continues here (4 messages/16.55kB) ]


Talkback:144/lg_mail.html

[ In reference to "Mailbag" in LG#144 ]

Kapil Hari Paranjape [kapil at imsc.res.in]


Sat, 21 Feb 2009 11:31:46 +0530

Dear TAG-ers,

I am enclosing a query received regarding #144.

Regards,

Kapil.

P.S. (to aditya) please do not mail TAG members directly. Use the mailing list address as above instead.

----- Forwarded message from Aditya Bhiday <aditya.bhiday@gmail.com> -----

Date: Sat, 21 Feb 2009 11:18:15 +0530
Subject: Regarding Proxy Tunneling (TLDP)
From: Aditya Bhiday <aditya.bhiday@gmail.com>
To: kapil@imsc.res.in
Hi,

I came across a post at http://tldp.org/LDP/LGNET/144/misc/lg/qu[...]om_being_used_as_a_socks_proxy.html which said that

"AllowTcpForwarding Specifies whether TCP forwarding is permitted. The default is "yes". Note that disabling TCP forwarding does not improve security unless users are also denied shell access, as they can always install their own forwarders."

I was just experimenting with tunneling and with how to block it. Could you please explain how one can install their own forwarder if SSH tunneling is blocked, or give the name of such forwarding software?

Thanks,

Regards, Aditya Bhiday

----- End forwarded message -----
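As an illustration of what that sshd_config note means by users installing "their own forwarders": anyone with shell access can run a relay of their own on the server, without ssh -L at all. A sketch with socat (host and ports invented for the example):

# On the server, as an ordinary user: relay connections to port 8080
# onward to an internal web host
socat TCP-LISTEN:8080,fork,reuseaddr TCP:intranet.example.com:80

Which is why the man page says that disabling TCP forwarding buys little unless shell access is denied as well.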

[ Thread continues here (6 messages/4.93kB) ]


Talkback:159/dokopnik.html

[ In reference to "Installing VMWare Server 2 on Ubuntu Server 8.10 (Intrepid Ibex)" in LG#159 ]

Stack, David [Stack at sbmc-law.com]


Fri, 13 Feb 2009 23:51:46 -0800

Hello

Thanks for the article. I'm attempting to make this work with the 64-bit version of Ubuntu Server 8.10 and the 64-bit version of VMware Server 2.0. The install went fine, but I cannot access the VMware console on the IP of my Ubuntu server on port 8222. I know it's alive on my network, as it fetched packages from the Internet and I have a Windows server file share mounted. Any ideas what to check? Thanks

Dave Stack

-- 
   StackAble IT Solutions LLC
   For SBMC P.S.
   (509) 220-8517
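A couple of generic checks that narrow this down, sketched on the assumption that 8222/8333 are the usual HTTP/HTTPS ports for VMware Server 2's web console:

# Is anything actually listening on the console ports?
sudo netstat -tlnp | egrep '8222|8333'
# Does it answer locally? (If yes, suspect a firewall between client and server.)
curl -I http://localhost:8222/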

[ Thread continues here (3 messages/2.74kB) ]


Talkback:115/orr.html

[ In reference to "WSGI Explorations in Python" in LG#115 ]

Mike Orr [sluggoster at gmail.com]


Tue, 10 Feb 2009 14:00:01 -0800

Here's a follow-up to my 2005 article "WSGI Explorations in Python" (http://linuxgazette.net/115/orr.html). Michael Will asked me what had happened since then, so I wrote this:

There is no state-of-the-Python-web overview that I know of, but a lot has happened since I wrote that article. Pretty much all new frameworks are written for WSGI, and the older ones have been retrofitted. (CherryPy can run as a WSGI server, Plone can run as an application, parts of Zope have been extracted to independent Repoze components, and Quixote has a WSGI gateway floating around somewhere.) Django works with WSGI sort of, and has been ported to Google App Engine via WSGI.

I'm involved with Pylons, a framework that's fully WSGI and modular to the core, built on top of Paste, which is a low-level WSGI library. TurboGears 2 is being built on top of Pylons. This means that different frameworks with different goals and target users can share the same technology, and essentially makes every TG developer a Pylons developer, doubling our developer base.

There's a group of WSGI framework developers, including Pylons/TG/Repoze.BFG developers, that is designing a new framework to potentially supersede all of them, with plug-in personalities to reflect their different application styles. This is still at the idea stage but may have some alpha code by the end of the year. If so, it could point the way to the next generation of frameworks.

Another big issue is Python 3. Over the next year frameworks will either be ported to Python 3 or replaced by frameworks written for Python 3. (Though the Python 2 frameworks may continue in use for several years.) This has to be done on a dependency basis; e.g., Pylons can't upgrade until all the components it depends on have upgraded.

-- 
Mike Orr <sluggoster@gmail.com>



Talkback: Discuss this article with The Answer Gang

Published in Issue 160 of Linux Gazette, March 2009

2-Cent Tips

2-cent Tip: Reading MHT files in Linux

Ben Okopnik [ben at linuxgazette.net]


Fri, 13 Feb 2009 20:08:48 -0500

Just ran across Yet Another Proprietary Format from Micr0s0ft: .mht files. Seems that Internet Explorer saves emails and HTML as an ugly mess that somewhat resembles an email; according to Wikipedia, there's no single standard, and the state of the state is best described as 'sauve qui peut' (which translates, at least in Redmond, as "all your ass are belong to me!") Bleh.

Searching the Web shows that there are a lot of people - the just-converted-to-Linux newbies, particularly - who have loads of these things and don't know what to do with them. Some people recommend Opera (I suppose a couple of hours of Kiri Te Kanawa is good for relieving all kinds of stress...); some have had luck with various conversion utilities. I looked at it, and it looked something like a mangled email header, soooo...

I didn't go searching for more than just the one file that I had, but here's what worked fine for opening it:

# Convert line-ends to Unix format
flip -ub file.mht
# Prepend a standard 'From ' mail header to the file
sed -i '1i\'"$(echo From $USER $(date))" file.mht
# You should now be able to open it with your favorite MUA
mutt -f file.mht

It worked fine for me.

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *

[ Thread continues here (13 messages/18.98kB) ]


2-cent Tip: Quick and Free reminders on your cellphone

Deividson Okopnik [deivid.okop at gmail.com]


Mon, 16 Feb 2009 14:07:49 -0300

Not very Linux-related, but this is a nice tip anyway.

If you use a future-mail page (like http://www.futureme.org/ ) in conjunction with an email-to-SMS gateway (you can get a list here: http://www.mutube.com/projects/open-email-to-sms/gateway-list/ ), you can send reminders in a free, fast way directly to your cellphone (and futureme doesn't even need you to sign up for an account).
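As a concrete example of the second half of the tip, the SMS end is just ordinary e-mail once you know the carrier's gateway; a sketch using AT&T's gateway from the linked list (the phone number is made up):

echo "Dentist, Tuesday 3pm" | mail -s reminder 5551234567@txt.att.net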

[ Thread continues here (2 messages/2.03kB) ]


2-cent Tip: Converting fonts to images

Ben Okopnik [ben at linuxgazette.net]


Sat, 14 Feb 2009 10:00:18 -0500

On Sat, Feb 14, 2009 at 12:40:15AM -0500, Matt Giwer wrote:

> 	I don't know this qualifies as one but I have used it quite a bit and I 
> have not seen it mentioned elsewhere. Nor have I found it in the docs. 
> This uses the convert function from ImageMagick.
>
> 	Why? Because many years ago I became addicted to collecting truetype 
> files. There are only a relative few significantly different ones. For 
> the majority they are the same files with a different names. Most of the 
> rest just don't look different enough to matter. I used this to find the 
> one or two gems per thousand.
>
> 	This works to produce a jpg, or any other image format, of the font. It 
> also works on otf files.

You may not be aware of this, but TrueType files can be viewed with the 'display' program - just as if it were another type of image.

> convert font_name.ttf font_name.ttf.jpg
>
> 	Generally a filed named ttf-convert
>
> #!/bin/bash
> while [ $# -gt 0 ];
> do
> convert $1 $1.jpg

This will, of course, fail if the file is called something like 'MS Verdana.ttf' (note the space.) It's always a good idea to double-quote variables used as arguments.

> shift;
> done

Um... wrong tool for the job, I'm afraid. A "while" loop is a state-tester (i.e., checks for true/false and loops); the "for" loop is an iterator, which walks ("iterates") over a list of arguments - which is the exact requirement here.

# Loop over the list of all command-line arguments
for f
do
	/usr/bin/convert "$f" "$f.jpg"
done

...but since it's really a one-liner, I would just tend to type things like

for f in *ttf; do convert "$f" "$f.jpg"; done

right on the command line. It's not much harder to remember than yet another script name. :)

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *


2-cent Tip: Easy way to make security context of two files/directories exactly the same

Mulyadi Santosa [mulyadi.santosa at gmail.com]


Sun, 8 Feb 2009 00:23:52 +0700

If you're administering a Linux system with an SELinux policy being enforced, you'll know that much of the time you need to set the security context of certain files/directories so they can be properly accessed by daemons or users.

If the task is something like "I created a new directory, /var/apache/html, to hold HTML files for my new Apache virtual host", you should make the security context of /var/apache/html exactly like that of /var/www/html (assuming that is Apache's default DocumentRoot). This command does that for you:

# chcon --reference=/var/www/html /var/apache/html

As an additional note, chmod and chown support the same option too, e.g.:

# chmod --reference=/var/www/html /var/apache/html
makes the access permissions on both directories exactly the same.

regards,

Mulyadi.


2-cent Tip: What was that (masked) script's name?

Ben Okopnik [ben at linuxgazette.net]


Sat, 14 Feb 2009 10:56:06 -0500

Over the years, I've written hundreds of scripts to help me with various tasks. Many of them are still very useful - but whether my memory isn't what it used to be (hmm... I wonder if I can write a script to fix that...), or whether that's just too many things to keep in my head, I've often found myself searching and cursing when I couldn't find the script that I needed and knew I'd already written. Too many scripts!

To deal with this, I've started copying something that I already do elsewhere. When I teach, I give my students a group of scripts that I've made up for each class - and in each of these scripts, I insert an explanatory comment on line 3 (1 is the shebang, while 2 is a comment containing the author credit and a timestamp.) I try to come up with keywords that will be obvious even if I can't remember what I wrote originally. I also include a 'README' script which reports the third line of every file in the directory. There's even a method for ignoring files and adding your own descriptions (do try to avoid the latter, for obvious reasons.)

So, all you need to do is get in the habit of commenting your scripts in clear and obvious ways on the third line. Once you've done that, this little script will save you much time and frustration. Enjoy!

#!/bin/sh
# Created by Ben Okopnik on Sun Dec 10 08:05:39 EST 2005
# Run this script to see the descriptions of all files in this directory
 
reject='a.out regex.7 access.log'
printf "%-25s%s\n" "regex.7"    "# Henry Spencer's regular expressions"
printf "%-25s%s\n" "access.log" "# Typical Apache log file"
 
for n in *
do
    [ -f "$n" -a -z "`echo $reject|grep \"\<$n\>\"`" ] || continue
    printf "%-25s%s\n" "$n" "`/bin/sed -n '3p' \"$n\"`"
done
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *



Talkback: Discuss this article with The Answer Gang

Published in Issue 160 of Linux Gazette, March 2009

News Bytes

By Deividson Luiz Okopnik and Howard Dyckoff


Selected and Edited by Deividson Okopnik

Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.


News in General

Red Hat and Redmond to Extend Virtualization Interoperability

Red Hat announced in February that in response to strong customer demand, it has signed reciprocal agreements with Microsoft Corporation to enable increased interoperability for the companies' virtualization platforms. Each company will join the other's virtualization validation/certification program and will provide coordinated technical support for their mutual server virtualization customers. Other than for testing and certification, no funds will be exchanged.

With the efforts being undertaken today through this announcement, Red Hat and Microsoft customers will have the ability to run Microsoft Windows Server and Red Hat Enterprise Linux virtual servers on either host environment with configurations that will be tested and supported by both virtualization and operating system leaders.

The key components of the reciprocal agreements are:

The agreements establish coordinated technical support for Microsoft and Red Hat's mutual customers using server virtualization, and the activities included in these agreements do not require the sharing of IP. Therefore, the agreements do not include any patent or open source licensing rights, and additionally contain no financial clauses, other than industry-standard certification/validation testing fees.

The original Red Hat webcast should be available for replay at http://www.redhat.com/promo/webcast/216.

For additional information, please visit http://www.redhat.com/promo/svvp.

Moonlight Now on the Linux Desktop

Novell released in February the first open source implementation of Microsoft's Silverlight media framework: Moonlight 1.0.

The first and only open source project that provides Linux users access to Microsoft Silverlight content, Moonlight demonstrates Novell's commitment to making Linux a first-class platform for multimedia and Rich Internet Applications. Moonlight provides the platform Linux users need to use Silverlight and Windows Media content. In combination with Banshee, a Novell-sponsored project to produce an open source media player, Moonlight is part of a complete multimedia solution on Linux.

Available for all major Linux distributions, including openSUSE, SUSE Linux Enterprise, Fedora, Red Hat, and Ubuntu, Moonlight is in part a result of the existing technical collaboration between Microsoft and Novell that extends interoperability between Windows and Linux. Windows Media Video (.wmv), Windows Media Audio (.wma), and MP3 files are supported through the Microsoft Media Pack, a Microsoft-delivered set of media codecs that brings optimized and licensed decoders to every Linux user using Moonlight. Additionally, it allows developers to write Rich Internet Applications for multiple platforms.

Moonlight has already proven useful to tens of thousands of Linux users. A pre-release of Moonlight was delivered on January 19, 2009 to allow Linux users to stream Barack Obama's Inauguration. More than 20,000 Linux users downloaded Moonlight to watch the Silverlight broadcast.

"Moonlight brings the benefits of Silverlight's popular multimedia content to Linux viewers,"said Miguel de Icaza, Mono project founder and Developer Platform vice president at Novell. "This first release delivers on the goal of breaking down barriers to multimedia content and creating parity in the user's viewing experience regardless of whether the user is on Windows or Linux."

For more information on Moonlight, visit http://go-mono.com/moonlight.

Sun Establishes Open Source Storage Encryption Standard

This past February, Sun Microsystems released the first generic communication protocol between a key manager and an encrypting device into the open source community. This allows better management of encryption keys and can avoid licensing fees for proprietary encryption schemes. The protocol is implemented as a complete toolkit and is downloadable from the OpenSolaris website: http://opensolaris.org/os/project/kmsagenttoolkit/.

By releasing the Sun protocol as open source, Sun is taking steps towards unifying the technology. Sun continues to work with partners and with appropriate standards bodies such as the IEEE 1619.3 Working Group and OASIS to further develop and formalize the interface as an industry standard. RSA is currently developing a solution using this protocol to work with their RKM key manager. IBM's drive division is working on supporting this protocol for their IBM LTO4 drive shipped in Sun libraries. Additionally, Sun has shared this protocol with numerous other industry partners, including computer OEMs, backup application providers, and disk array and switch manufacturers.

Governments, finance, healthcare, retail, and other vertical markets need to comply with current regulatory laws that create mandates to protect sensitive stored data. To support these requirements, this protocol is available to customers using the Sun StorageTek KMS 2.0 Key Manager and Sun StorageTek Enterprise Drives and Sun StorageTek LTO4 drives shipped in Sun libraries. A number of additional partners are developing products based on this protocol, including EMC, whose RSA security division has talked about releasing it as an option on their RKM Key Manager.

"Open Storage solutions allows customers to break free from the chains of proprietary hardware and software and this new protocol extends this lifeline into the expensive and highly fragmented encryption market," said Jason Schaffer, senior director, storage product management, Sun Microsystems, Inc. "Open source equals customer value for encryption solutions and Sun now offers the only solution on the market that works across multiple vendors and suppliers."

IBM Petaflop Supercomputer Europe's Most Powerful

The German research center Forschungszentrum Juelich has selected IBM (NYSE: IBM) to develop the first supercomputer in Europe capable of one Petaflop, or one thousand trillion operations per second. IBM will partner with Juelich-based Gauss Centre for Supercomputing to install the new IBM Blue Gene/P System in the first half of this year.

This new system - financed by the German Government - will help assure Forschungszentrum Juelich remains one of Europe's most renowned research centers and continues to play an important role in the global high performance computing research community.

This new Blue Gene system is the first to include new water-cooling technology, created by IBM Research, that uses room-temperature water to cool the servers. As air moves through the server racks, heat is removed by the water-based cooling system before the air enters the next rack of servers. The result is a 91 percent reduction in the air conditioning units that would have been required to cool Forschungszentrum Juelich's data center with an air-cooled Blue Gene.

"With speeds over a Petaflop, this new Juelich-based supercomputer offers the processing ability of more than 200,000 laptop computers," explains Professor Thomas Lippert, lead scientist of the Juelich supercomputing center. "In addition to raw power, this new system will be among the most energy efficient in the world."

The new Blue Gene/P system will include nearly 295,000 POWER processors in 72 racks, along with over 144 terabytes of memory. It will significantly add to Forschungszentrum Juelich's existing supercomputing capability at the Center, which includes an additional six petabytes of hard disk drive space - equivalent to more than one million DVDs.

Inauguration and naming of the new systems will take place at an opening ceremony in mid-2009. The new supercomputer will be the first for the German Gauss Center for Supercomputing (GCS).

Cuba Shows Linux-based National OS at I.T. Convention

[based on materials from Reuters and http://www.escambray.cu, the digital newspaper of Sancti Spiritus province, Cuba.]

A local Linux variant, developed by Havana computer science students, will become the national distro of Cuba.

NOVA, a Linux-based operating system apparently based on Gentoo, supports standard office applications, music and video playback, and Web surfing. The initiative is aimed at saving dollars on software licenses and at achieving technological independence from US companies.

[ Wait, wait... this is like a repeat nightmare. Isn't there a standard story about how the Chevrolet Nova didn't sell well in Mexico because 'no va' in Spanish means 'no go'??? Only this time, it's not clueless American GM executives deciding on the name... -- Ben ]

The Cuban News Agency reported that the NOVA operating system and other free technologies are aimed at inexpert users and at those who are currently migrating from Windows to Linux-based software. According to reports from Reuters, Cuban ministries are also concerned about security for the Windows-based computers they already have, and believe US intelligence agencies have backdoor access to many Windows applications.

The NOVA OS, developed by the University of Information Technologies (UCI) in Havana, was one of the 23 Cuban products on display at the 13th I.T. Convention and Fair. The NOVA team is based at the University and offers technical support, security patches and software updates.

There are over 1 million computer users in Cuba, and currently about 20% use Linux. With the adoption of NOVA, the government there hopes to exceed 50% Linux usage within 5 years.

This year's I.T. Convention was divided into two parts. The first consisted of a scientific event with 80 lectures, mainly by delegates and representatives from international organizations and institutions. The second was an exhibition with the participation of more than 140 Cuban and foreign entities.

A YouTube posting shows the OS in operation: http://www.youtube.com/watch?v=uTXIzaxfox4.

Intrahealth Open Initiative to Fund Open Source Training for Health Workers in Africa

IntraHealth OPEN is a global initiative that engages African technology, health leaders and students to develop and sustain software applications tailored to meet urgent health care needs. The project will strengthen African health professionals' ability to identify and implement appropriate technology solutions that make health services more cost-effective and accessible.

IntraHealth International has a global leadership role in developing and implementing open systems to support accessible, high-quality health care in Africa and on introducing systems to better plan and manage the healthcare workforce in Africa. IntraHealth currently has open source initiatives under way in nine countries in Africa.

IntraHealth International is partnering with Grammy Award-winning artist Youssou N'Dour on a charity album titled "OPEN Remix" benefiting a new initiative created to address critical health issues in Africa by putting the latest open source software technologies directly in the hands of health workers.

Nas, Duncan Sheik and Peter Buck of R.E.M. are among the headliners donating remixes of N'Dour's song "Wake Up" to raise funds and visibility for the IntraHealth OPEN Initiative.

Starting February 11th and rolling through Spring, free downloads of the songs will be available with a suggested donation through distribution partners including Rhapsody, iLike, Amazon MP3, and through IntraHealth's website at http://www.intrahealth.org/open.

With growing connectivity and mobile phone use increasing at twice the global rate, Africa can take advantage of cutting edge eHealth technologies. Open source technology is ideally suited for application in low-resource environments since it is inexpensive, flexible, and collaborative.

IntraHealth has been collaborating with African governments and private institutions to design and apply open source solutions to strengthen their ability to use health information for strategic health policy and planning. Using web-based interfaces, mobile phones and PDAs, open source software systems can provide far-reaching and innovative tools to support increased efficiency, productivity, and performance of health services and information to meet the needs of families and communities across the continent.

To help shepherd its growing initiative, IntraHealth has assembled an OPEN Council made up of supporters and leaders in the fields of entertainment, technology, and public health. The OPEN Council members include recording artists and representatives from VH1, MTV Europe Foundation, ONE, Wikia, AEG Live, Rhapsody, iLike, Bowery Presents, Indaba Music, Tech Soup Global, Red Hat, inSTEDD, The Linux Foundation, GNOME Foundation, Creative Commons, Open Source Software Institute, OSU Open Source Lab, O'Reilly Media, Motorola, Novell, Brightkite, The David and Lucile Packard Foundation, D-Tree International, Ushahidi, the Center for Children and Technology, and Reseau Africain de L'Education Pour La Sante. The group is seeking funding and partners to help expand the program.

For more information on the program and the music created to support the initiative, see http://www.marketwire.com/press-release/Intrahealth-948359.html.


Conferences and Events

eComm 2009
March 3-5, SF Airport Hyatt, San Mateo, CA
http://ecommconf.com/
DrupalCons 2009
March 4-7, Washington, DC
http://dc2009.drupalcon.org/
TheServerSide Java Symposium
March 8-20, Caesars Palace, Las Vegas, NV
http://javasymposium.techtarget.com
SD West 2009
March 9-13, Santa Clara, CA
http://www.sdexpo.com/
O'Reilly Emerging Technology Conference
March 9-12, Fairmont Hotel, San Jose, CA
http://conferences.oreilly.com/etech?CMP=EMC-conf_et09_int&ATT=EM5-NL
ManageFusion 09
March 10-12, MGM Grand Hotel, Las Vegas, NV
http://www.managefusion.com/agenda.aspx
VEE 2009 Conference on Virtual Execution Environments
March 11-13, Crowne Plaza, Washington, DC
http://www.cs.purdue.edu/VEE09/Home.html
Orlando Scrum Gathering 2009
March 16-18, Gaylord Resort, Orlando, FL
http://www.scrumgathering.org
CommunityOne East 2009 plus Training Day
March 18-19, Marriott Marquis Hotel, New York, NY
http://developers.sun.com/events/communityone/2009/east/sessions.jsp
Forrester's IT Infrastructure & Operations Forum
March 18-19, San Diego, CA
http://www.forrester.com/events/eventdetail?eventID=2372
ApacheCon Europe 2009
March 23-27, Amsterdam, Netherlands
http://eu.apachecon.com/c/aceu2009
EclipseCon 2009
March 23-26, Santa Clara, CA
http://www.eclipsecon.org/2009/home
Open Source Business Conference (OSBC)
March 24-25, San Francisco, CA
http://www.infoworld.com/event/osbc/
MDM Spring Summit 2009
March 29-31, San Francisco, CA
http://www.mdm-summit.com/
TechTarget Advanced Virtualization roadshow
March-December 2009, Various Cities
http://go.techtarget.com/r/5861576/5098473
International Virtualization and Cloud Computing Conferences
March 30-April 1, Roosevelt Hotel, New York City, NY
http://www.virtualizationconference.com/
http://cloudcomputingexpo.com/
USENIX HotPar '09 Workshop on Hot Topics in Parallelism
March 30-31, Claremont Resort, Berkeley, CA
http://usenix.org/events/hotpar09/
ESC Silicon Valley 2009 / Embedded Systems
March 30-Apr 3, San Jose, CA
http://esc-sv09.techinsightsevents.com/
Web 2.0 Expo San Francisco
Co-presented by O'Reilly Media and TechWeb
March 31-April 3, San Francisco, CA
http://www.web2expo.com/sanfrancisco
Software Test & Performance Conference (STPCon Spring)
March 31-April 2, San Mateo, CA
http://www.stpcon.com/
Linux Collaboration Summit 2009
April 8-10, San Francisco, CA
http://events.linuxfoundation.org/events/collaboration-summit
Black Hat Europe 2009
April 14-17, Moevenpick City Center, Amsterdam, Netherlands
http://www.blackhat.com/html/bh-europe-09/bh-eu-09-main.html
MySQL Conference & Expo
April 20-23, Santa Clara, CA
http://www.mysqlconf.com/
RSAConference 2009
April 20-24, San Francisco, CA
http://www.rsaconference.com/2009/US/Home.aspx
USENIX/ACM LEET '09 & NSDI '09
6th Symposium on Networked Systems Design and Implementation Co-located w/ 2nd Workshop on Large-Scale Exploits and Emergent Threats
April 21-24, Park Plaza Hotel, Boston, MA
http://usenix.org/events/nsdi09/
SOA Summit 2009
May 4-5, Scottsdale, AZ
http://www.soasummit2009.com/
RailsConf 2009
http://www.railsconf.com/
STAREAST - Software Testing, Analysis & Review
May 4-8, Rosen Hotel, Orlando, FL
http://www.sqe.com/go?SE09home
EMC World 2009
May 18, Orlando, FL
http://www.emcworld.com/
Interop Las Vegas 2009
May 19-21, Las Vegas, NV
http://www.interop.com/lasvegas/
Preliminary Agenda Announced for LF's 2009 Collaboration Summit

The Linux Foundation has released a preliminary agenda for its 3rd Annual Collaboration Summit which will take place April 8-10, 2009 in San Francisco. Confirmed Keynotes and Roundtable Panels include:

In addition, confirmed sessions, mini-summits and workgroup meetings include:

The Collaboration Summit is an exclusive, invitation-only gathering of the brightest minds in Linux, including core kernel developers, distribution maintainers, ISVs, end users, system vendors and other community organizations. Use this link to request an invitation to attend this special event: http://events.linuxfoundation.org/component/registrationpro/?func=details&did=2

The 3rd Annual Collaboration Summit will be co-located with the CELF embedded Linux Conference and the Linux Storage and Filesystem Workshop.

For more information on the Linux Foundation Collaboration Summit, please visit: http://events.linuxfoundation.org/events/collaboration-summit.


Distro News

Mandriva Linux 2009 Spring Beta Out

The beta version for Mandriva Linux 2009 Spring (2009.1) is now available. This beta version provides some updates on major desktop components of the distribution, including KDE 4.2.0, GNOME 2.25.90, Xfce 4.6 RC1, X.Org server 1.5, OpenOffice.org 3.0.1, and Qt 4.5.0 RC1. It also has an early version of SpeedBoot to reduce startup time.

See more at: http://wiki.mandriva.com/en/2009.1_Beta

Debian GNU/Linux 5.0 Released

The Debian Project has released Debian GNU/Linux version 5.0 (codenamed "Lenny") after 22 months of development work. Debian GNU/Linux is based on the 2.6.26 Linux kernel and supports a total of twelve processor architectures including Intel and AMD (both 32bit and 64bit), PowerPC, HP PA-RISC, MIPS, ARM and IBM S/390 platforms.

Live CD, USB images, CD, DVD and newly added Blu-ray disc images are available to download, via FTP, HTTP or BitTorrent.

Debian 5.0 includes the KDE, GNOME, Xfce, and LXDE desktop environments. It also features compatibility with the FHS v2.3 and software developed for version 3.2 of the LSB.

Debian 5.0 "Lenny" adds support for Marvell's Orion platform which is used in many storage devices. Additionally, "Lenny" now supports several Netbooks, in particular the Eee PC by Asus. "Lenny" also contains the build tools for Emdebian which allow Debian source packages to be cross-built and shrunk to suit embedded ARM systems. Debian GNU/Linux 5.0 "Lenny" includes the new ARM EABI port, "armel".

This release includes numerous updated software packages, such as the K Desktop Environment 3.5.10 (KDE), an updated version of the GNOME desktop environment 2.22.2, the Xfce 4.4.2 desktop environment, LXDE 0.3.2.1, the GNUstep desktop 7.3, X.Org 7.3, OpenOffice.org 2.4.1, GIMP 2.4.7, Iceweasel 3.0.6 (an unbranded version of Mozilla Firefox), Icedove 2.0.0.19 (an unbranded version of Mozilla Thunderbird), PostgreSQL 8.3.6, MySQL 5.0.51a, GNU Compiler Collection 4.3.2, Linux kernel version 2.6.26, Apache 2.2.9, Samba 3.2.5, Python 2.5.2 and 2.4.6, Perl 5.10.0, PHP 5.2.6, Asterisk 1.4.21.2, Emacs 22, Inkscape 0.46, Nagios 3.06, Xen Hypervisor 3.2.1 (dom0 as well as domU support), OpenJDK 6b11, and more than 23,000 other ready-to-use software packages (built from over 12,000 source packages).

With the integration of X.Org 7.3, the X server autoconfigures itself with most hardware. Newly introduced packages allow the full support of NTFS filesystems and the use of most multimedia keys out of the box. Support for Adobe Flash format files is available via the swfdec or Gnash plugins. Overall improvements for notebooks have been introduced such as out of the box support of CPU frequency scaling.

Further improvements in system security include the installation of available security updates before the first reboot by the Debian Installer, the reduction of setuid root binaries and open ports in the standard installation, and the use of GCC hardening features in the builds of several security-critical packages.

lightning boltLightweight AntiX 8.0 is Released

The antiX team has released antiX MEPIS 8 'Intifada' - a fast and lightweight desktop OS and liveCD based on SimplyMEPIS. It is now available at MEPIS mirrors, in the released/antix directories, in full and base editions.

This release defaults to a fully customized icewm desktop (fluxbox is also installed). In addition to the SimplyMEPIS 8.0 foundation with its 2.6.27-15 kernel and Assistants, antiX has an improved antiX Control Centre, new scripts for screenshots, and phonebook. There are improved and extended themes and artwork for icewm and fluxbox.

Localization is improved in this version, and there are updated applications such as Pidgin (2.4), Iceweasel (3.0.6), AbiWord (2.6.4), Gnumeric (1.8-3), ROX-Filer, and Claws Mail (3.5); an updated ceni and wicd for wired/wireless connections; UMTSmon (a simple connection program for 3G USB modems); the FireHOL firewall; and Dillo 2. New applications include zim (wiki), Gjots (notes), Grsync, parted, and luckyBackup.

AntiX is designed to work on computers with as little as 64 MB RAM and Pentium II or equivalent AMD processors, but not K5/K6 processors. However, 128 MB RAM is the recommended minimum.

lightning boltUbuntu 8.04.02 Maintenance, 9.04 Alpha 3 releases out

Ubuntu 8.04.2 LTS, the second maintenance update to Ubuntu's 8.04 LTS release, is now available. This release includes updated server, desktop, and alternate installation CDs for the i386 and amd64 architectures.

In all, over 200 updates have been integrated, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining compatibility with Ubuntu 8.04 LTS.

This is the second maintenance release of Ubuntu 8.04 LTS, which will be supported with maintenance updates and security fixes until April 2011 on desktops and April 2013 on servers.

To get Ubuntu 8.04.2 LTS, visit: http://www.ubuntu.com/getubuntu/download.

The release notes, which document caveats and workarounds for known issues, are available at: http://www.ubuntu.com/getubuntu/releasenotes/804.

See http://www.ubuntu.com/usn for a full list of Ubuntu security updates. As of 2009-01-20, all updates affecting packages on the CD have been applied.

A complete list of post-release updates can also be found at: https://wiki.ubuntu.com/HardyReleaseNotes/ChangeSummary/8.04.2.


Software and Product News

lightning boltSun Gives GlassFish Portfolio Integrated LAMP Stack

Sun announced in February its Sun GlassFish Portfolio, an open source, high-performance Web platform, based on its GlassFish application server.

Built on leading open source projects - including Apache Tomcat, Ruby, PHP, Liferay Portal and GlassFish - the Sun GlassFish Portfolio packages these components into a complete, pre-integrated and fully-tested open source platform. This includes a packaged version of what was the WebSynergy project - Sun's portal software combined with Liferay's portal. It is now available as Web Space portal, a partnered product. Web Space portal allows users to create their own spaces for collaboration and social networking, and is viewed as a FOSS alternative to products like SharePoint. The FOSS version of Web Space is available at https://portal.dev.java.net.

"Companies are under huge pressure to do more with less, and more and more frequently they are turning to scalable, supported open source technologies," said Mark Herring, vice president, Software Infrastructure, Sun Microsystems. "With GlassFish Portfolio, Sun is providing customers with the highest-scaling, highest performance and lowest cost, open source platform. Companies developing Web applications with the Sun GlassFish Portfolio can expect to deploy quickly, see a seven-fold improvement in application price/performance at only 10 percent of the cost - over proprietary offerings."

The Sun GlassFish Portfolio offers a range of support and services options for customers, from basic patch services to 24x7 mission-critical support. The Sun GlassFish Portfolio is available immediately via a flexible subscription-based pricing model starting at $999 per server (USD), making it a cost-effective solution. Also included is unlimited-server pricing, which can save 80-90% of costs compared to competitors with fixed, per-server licensing.

To purchase the Sun GlassFish Portfolio, visit: http://www.sun.com/glassfish.

According to Sun, this combines the most downloaded open source application server, GlassFish, with the world's most popular open source database, MySQL, and allows customers to develop a unified open source strategy with simplicity of use for developers and administrators. GlassFish Portfolio and MySQL Enterprise are both available from Sun with a consistent pricing and subscription support model that mirrors support at silver/gold/platinum levels.

lightning boltJBoss RESTEasy 1.0 Framework Goes GA

The GA (General Availability) release of JBoss RESTEasy occurred in late January. JBoss RESTEasy is a framework that allows writing RESTFul Web Services in Java. It offers a fully portable implementation of the JAX-RS specification for RESTful Web Services over HTTP.

RESTEasy can run in any servlet container, but tight integration with the JBoss Application Server is also available. While JAX-RS is only a server-side specification, RESTEasy brings JAX-RS to the client through the RESTEasy JAX-RS Client Framework. This client-side framework maps outgoing HTTP requests to remote servers using JAX-RS annotations and interface proxies.

Features include:

  • Fully certified JAX-RS implementation;
  • Portable to any app-server/Tomcat that runs on JDK 5 or higher;
  • Embeddable server implementation for JUnit testing;
  • Rich set of providers for: XML, JSON, YAML, Fastinfoset, Atom, etc;
  • JAXB marshalling into XML, JSON, Fastinfoset, and Atom as well as wrappers for arrays, lists, and sets of JAXB Objects;
  • Asynchronous HTTP (Comet) abstractions for JBoss Web, Tomcat 6, and Servlet 3.0;
  • EJB, Spring, and Spring MVC integration;
  • Client framework that leverages JAX-RS annotations so that you can write HTTP clients easily (JAX-RS only defines server bindings).

    All documentation and download links are available at RESTEasy's JBoss.org project page.

    lightning boltLeftHand Networks Delivers Entry-level SAN for Virtualized Environments

    LeftHand Networks, an HP company, now has an entry-level storage area network (SAN) solution that improves performance and reduces storage costs within virtualized server environments for midsize businesses.

    The LeftHand SAS Starter SAN helps customers efficiently deploy their first virtualization projects by simplifying the process of creating, accessing and managing shared storage resources.

    The SAS Starter SAN includes management tools that provide replication, thin provisioning and snapshot capabilities. These capabilities help ensure data availability even as the environment becomes more complex. In addition, customers can purchase only the capacity they need, easily increasing both the capacity and performance of their system without experiencing any disruption. The SAS Starter SAN can scale to more than 80 network ports and 320 disk drives to accommodate business growth.

    LeftHand also introduced a higher-capacity SATA Starter SAN that offers a 25 percent reduction in the cost per gigabyte (GB) of disk drive capacity compared to the previous generation. The solution's storage capacity has increased 33 percent, to 12 terabytes (TB), while maintaining its current list price. The SATA Starter SAN is ideal for customers who want to consolidate storage for Microsoft Windows or Linux servers.

    Pricing starts at $35,000 for the 4.8 TB SAS Starter SAN and $30,000 for the 12 TB SATA Starter SAN. More information on LeftHand Networks SAN solutions is available at: http://www.lefthandnetworks.com.

    lightning boltBitDefender Updates Its Free AV Scanner for Unices and Linux

    BitDefender launched a new version of Antivirus Scanner for Unices, an on-demand antivirus and antispyware scanner for Linux and FreeBSD. BitDefender Antivirus Scanner for Unices is free for personal use.

    The new and improved features of BitDefender Antivirus Scanner for Unices increase user protection. The main features of the BitDefender scanner include:

    • On-demand antivirus and antispyware protection;
    • Script and extension-based integration with various applications and services;
    • Classic command line scanner complete with a graphical user interface for better integration with desktop environments;
    • Automatic addition of the scanner's GUI to the system menu;
    • Open source plugins for three popular file managers: Konqueror (KDE), Nautilus (GNOME) and Thunar (Xfce);
    • Action setting based on scan result type.

    lightning boltsyslog-ng, Open Source Edition, Version 3.0 Released

    Following the release of the commercial version of syslog-ng last fall, its Open Source 3.0 Edition is now available. The syslog-ng application is a mature system logger that is the default logging solution of the SUSE distributions and is estimated to run on hundreds of thousands of Linux and Unix computers.

    Version 3.0 contains several new features that extend syslog-ng's functionality, including support for the new syslog standard protocol and message format developed by the Internet Engineering Task Force (IETF). The application allows for filtering and segmenting incoming log messages based on their parameters and content. The new version can even separate the content of the messages into name-value pairs, making it easy to process structured messages. It can also rewrite and modify messages when needed.
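
    To give a rough sense of that kind of parameter-based filtering, a minimal syslog-ng 3.0 configuration might look something like the sketch below; the source, filter, and destination names are arbitrary examples, and the socket and file paths vary from system to system:

    @version: 3.0
    # collect messages from the local syslog socket, plus syslog-ng's own messages
    source s_local { unix-dgram("/dev/log"); internal(); };
    # keep only authentication-related messages
    filter f_auth { facility(auth, authpriv); };
    # write the matching messages to a file of their own
    destination d_auth { file("/var/log/auth-only.log"); };
    log { source(s_local); filter(f_auth); destination(d_auth); };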

    The syslog-ng application can directly send the log messages to SQL databases and log analyzing applications. Currently the MySQL, Microsoft SQL (MSSQL), Oracle, PostgreSQL, and SQLite databases are supported.

    The syslog-ng application is optimized for performance and can process up to 75,000 messages per second in real time - that is, over 24 GB of raw logs per hour - on entry-level server hardware.

    The application natively supports the reliable TCP and the encrypted TLS protocols.

    The commercial syslog-ng Premium Edition adds buffering of messages on the hard disk, storing messages in encrypted log files, reading messages from arbitrary files, and support for Microsoft Windows operating systems.

    Further information: http://www.balabit.com/network-security/syslog-ng/.
    Download: http://www.balabit.com/network-security/syslog-ng/opensource-logging-system/upgrades/


  • Talkback: Discuss this article with The Answer Gang


    [BIO]

    Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.

    Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance.



    Bio picture

    Howard Dyckoff is a long term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine and before that used to edit SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.

    Howard maintains the Technology-Events blog at blogspot.com from which he contributes the Events listing for Linux Gazette. Visit the blog to preview some of the next month's NewsBytes Events.


    Copyright © 2009, Deividson Luiz Okopnik and Howard Dyckoff. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    Away Mission - 2008 in Review - part 2

    By Howard Dyckoff

    March is conference madness month in the San Francisco Bay Area with week-to-week events that sometimes overlap. For the last few years, Open Source developers and entrepreneurs have had EclipseCon, Dr. Dobb's Software Development Conference (SD West), and the Open Source Business Conference to choose from. Starting in 2008, if that audience was interested in FOSS on mobile platforms, there was also the Emerging Communications Conference (eComm), which was reviewed in part 1 last month.

    EclipseCon-08

    This conference actually went up a little in value in 2008, even with a higher price, because the tutorials were open to all conference attendees.

    While EclipseCon is primarily about the Eclipse platform and the tools for its myriad projects, it is also about community and the larger Open Source ecosystem. There are sessions on creating and nurturing communities, on the future directions of Java, and on the new scripting languages that are supported on the Eclipse platform. There are presentations and panels on software deployment, modeling languages, and even SQL. In short, it's a really big tent. The tracks were well defined and, taken all together, were very comprehensive. Among the track topics were Eclipse as a Platform, OSGi, SOA, Modeling, Performance, Web Tools, and Emerging Technologies. However, some topics have sessions that are scattered across different rooms. The EclipseCon color codes help with this to a degree, but with 8-10 simultaneous sessions, you could be in for a little exercise.

    IBM held its own track on Tuesday, featuring its OpenUP Agile toolset and the Agile-oriented Jazz collaboration suite, both heavily based on Eclipse. In Jazz, and in Rational Team Concert, the team process is selected from process templates including Scrum, OpenUP, and the Eclipse Way (or EPF, the Eclipse Process Framework). You can also mix elements from these processes in custom configurations reflecting the actual work process at your own company or organization. Jazz uses Eclipse technology on both the client and the server, and Team Concert provides agile teams with an integrated tool set including source control, work item tracking and build management with continuous integration. The EclipseCon sessions are summarized here.

    This is a very good set of collaboration tools, having been tested within Eclipse Projects for more than a year and also getting the benefit of IBM's own experience with the Rational RUP methodology, now available as OpenUP. Some of this effort is occurring via the Project Zero website at http://projectzero.org.

    The presentations from the IBM Track are not in the EclipseCon archive for 2008, but here's a link to a representative Jazz web site at IBM Rational.

    I also attended a great late session on Groovy, a dynamic scripting language that runs on JVMs. Groovy reduces the usual Java scaffolding for developing code and simplifies Java syntax. Semicolons are optional and it doesn't require getters and setters for each property; Groovy just does the right thing. Groovy also simplifies testing right out of the box, working directly with JUnit. Groovy objects are called POGOs, which are like Java POJOs on steroids.

    POGOs write their own property accessors and provide a default constructor with named parameters. And like JRuby, Groovy compiles down to bytecode. Check out the PPT-formatted presentation here: Introduction to Groovy.

    EclipseCon offers lots of BOFs at night and several evening receptions with food and drinks. I especially recommend the annual Poster Session where new projects are presented by their developers with a pleasant dinner buffet. The Higgins credential framework, for example, was just such a poster presentation four years ago and is now widely used.

    Each morning had Euro breakfasts: brioche, bagels, and pastry with coffee and tea. Sit-down lunches were the pattern again, with each table sporting signage for different Eclipse topics and projects (EMF, RCP, AJAX, etc.). There was plenty of choice here, and lots of special meal requirements were accommodated. And besides the networking opportunities, the food was pretty good. The conference bag for 2008 was unfortunately just a canvas tote, but it did have a zipper closure. Previous years have had distinctive backpacks.

    EclipseCon generally gives attendees USB thumb drives with about half of the conference presentations. The other presentations are available a day or two after their sessions and can be added at update stations. Some sessions show up weeks later. I personally think that is a problem for attendees planning their day with so many sessions overlapping. Moreover, many of the sessions on the USB drive get revised right before being presented. Consequently, the update stations are very busy and are a bit of a bottleneck.

    I also have to report a flaw in what is normally a very well organized mid-sized conference. The check-in process on the first tutorial day backed up badly, and many attendees had to wait in line for 20-35 minutes. Clearly, they didn't have enough stations and personnel to check IDs and hand out badges and bags. But it was also asymmetric! They had 6 stations set up, each handling 4 letters of the alphabet, with the last station also handling Y and Z. Last year, the A-D group was twice as long as any other and took the longer 35-minute wait. They should have gone through the list of registrants and set the letter groups up for roughly equal numbers of attendees. So plan on showing up at least 30 minutes before your tutorial if you want to attend EclipseCon this year.

    Last year, EclipseCon was held March 17-20, the week before the Open Source Business Conference (OSBC), but for 2009, EclipseCon is March 24-27, and overlaps OSBC.

    EclipseCon 2008 materials can be found here: http://www.eclipsecon.org/2008/.

    The Business of Open Source - OSBC

    OSBC started a few years ago by discussing issues in OSS and their legal ramifications. In prior years, a lot of energy went into software patents and the then-new GPLv3. But in 2008, with few dragons left to slay, it was mostly about successful adoption of OSS at the enterprise level. There were several presentations on best practices and several case studies. For example, Weather.com and Kaplan.com, the learning center company, pulled back the curtain a bit on their setups.

    There are four tracks that run consistently over the two days of the conference: one for CTOs, one for CFOs, one for lawyers or those interested in legal topics, and a product and service track which also doubles as a track for emerging trends. Each track has a dedicated room in close proximity to the other sessions. There is little exercise in changing tracks here.

    The agenda page lists the four parallel sessions and also has links to those presentations that are available online. Unfortunately, some of the better presentations are not included, such as the opening keynote.

    A more concrete presentation on the merits of going Open Source showed how two developers moved the back end at the New York Times to FOSS. That led to indexing 11 million PDF files, over 1.5 TB of data, in 24 hours with Amazon Web Services and Hadoop for only $240. Now those files can be served with the existing infrastructure and no new hardware. That presentation is here: http://akamai.infoworld.com/event/osbc/08/docs/CIO-CTO-Gottfrid-Harris.pdf

    OSBC takes place at the Palace Hotel on Market Street in San Francisco, near an exit of the Montgomery Street BART station and near several major bus lines. This is a great location and allows attendees to use public transportation easily. It is also a nice venue with a touch of Victorian ambiance. However, power for laptops is an issue. Some rooms have power strips near the front, but this is up to the organizers and was not true of all session rooms. There are no power strips in the keynote room. The moral here is to bring your own extension cord and maybe a multi-outlet adapter to encourage sharing. We can all get along, right?

    OSBC has usually provided a full breakfast buffet in the mornings, which is taken away just before the keynotes start. So come fairly early to enjoy it and to get a good seat.

    The Open Source Business Conference will be held March 24-25, 2009. Last year, OSBC followed EclipseCon, but in 2009, they will overlap.

    For 2009, OSBC will have keynotes from Ron Hovsepian, CEO of Novell; Jim Whitehurst, CEO of Red Hat; Jonathan Schwartz, CEO of Sun Microsystems; and a mystery speaker from Microsoft (yes, Redmond has a presence at OSBC.) There will also be a keynote panel on the topic of "End Users and Linux: Do We Have a Participation Problem?" led by Jim Zemlin, Executive Director of the Linux Foundation.

    SDWEST-08

    Software Development West, or SD West, usually occurs in early March. Like EclipseCon, SD West is held at the Santa Clara Convention Center. But the perks and grub are more limited. The dates for 2009 are March 9-13. This year, SD West is co-located with the Mobile Connect conference.

    The tracks here are expansive and cover most of the software development lifecycle. The SD conference organizers claim to provide a global view of the software industry and technological developments. There are separate tracks for Java, C++, .NET, Web services, Agile processes, and security, among others. New this year are a topical Cloud Computing track and a track on parallelism in software.

    Another nice feature is the range of vendor-sponsored free technical sessions. This year some of those sessions will be conducted by Coverity and Open Make.

    Unfortunately for our readers, SD conference archives and presentations are only available to conference attendees. The parent company for SD and Dr. Dobbs - CMP - resells the content. So you have to go to SD West to get it.

    I attended a lot of sessions on Agile process, modeling, and data architecture. Many of these sessions were led by Scott Ambler or other Agile leaders. I was generally satisfied with the sessions. I also attended sessions on Web services, XML servers, and distributed Ruby applications, which discussed SOAP4R, DRb, and Rinda (a Ruby version of Linda, an early coordination paradigm for parallel processing).

    An informative session on the developments surrounding HTML 5 was led by Elliotte Rusty Harold. Work around this emerging standard began accelerating in 2008, because Microsoft finally decided to support the new standards as the default behavior in IE 8. If they hadn't, it would have been a big stumbling block.

    Another somewhat tongue-in-cheek session discussed "Anti-Patterns in Software Development".

    The presenter, Rob Daigneau, was Director of Architecture at Monster.com and currently hosts www.designpatternsfor.net (he will appear again at this year's SD West, speaking on "Design Patterns for Domain Services"). Fortunately, he posts his presentations on that Web site, and a 2007 version of the slides is available there as well.

    Anti-patterns look like a good idea but backfire badly when applied. Here, the anti-patterns referred to are ones of human behavior in SW projects, based on roles and on personality types. Many of these behaviors impede the success of a SW project. Among the personalities discussed were Code Controllers, Perfectionists, Workaholics, and other behavior patterns for managers and leaders.

    The underlying anti-patterns include a lack of trust in team mates, or optimizing too soon and too often, or - for Workaholics - too readily accepting a culture of burnout at the companies we work for.

    Daigneau spoke about larger organizational factors: a Fight Club mentality that no one talks about, with conflicts between groups, between tech leads and architects, and even between top executives. Daigneau noted that "fish rot from the head down", meaning that real and sustained changes require acceptance and support from top executives, who ultimately have responsibility for lingering organizational problems.

    What if the boss is a repeat offender? Daigneau says there are three choices: adapt and adopt your boss's point of view, get your boss or bosses to change their practices, or move on. That's the way the project crumbles!

    SD West has a lot of traditions and one of those is its campy Developer Bowl, where teams from leading ISVs compete in a contest of technical and industry trivia. Over the years the questions and the answers have gotten pretty broad. Two years back, when Microsoft fielded a team, they appeared in costume: Darth Vader and two Imperial Storm Troopers.

    There are two matched competitions and then the winners of each face off. The Developer Bowl in 2008 featured the return of Team Google, last year's winner. But first up was CodeGear vs. IBM, last year's second place winner. IBM passed them with ease to win the first set.

    Second up was Google and Intel. Tim Matson of Intel introduced himself as a kayaking instructor after his more senior team mates had introduced themselves with variously 12 to 22 years of experience at Intel. The Googlers responded with a wry ploy of each claiming 42 years of work at Google, an indirect tribute to Douglas Adams of "The Hitchhiker's Guide to the Galaxy" fame. Intel, initially behind, pulled ahead on the last question after a long tie.

    In what was a close match with a tie score, IBM pulled ahead of Intel during overtime. Several of the last computer trivia questions stumped both teams, e.g.: "How are Dr. Dobbs and Kurt Cobain related?" (Answer, via Courtney Love who was a relation of Dr. Dobbs). IBM prevailed in the end after receiving the question about the number of "pockets" on an old punch card reader - easy trivia for old IBMers. (The answer was a non-metric '12', something typical of original IBM equipment. And for readers not old enough to remember, the size of an IBM punch card matched the size of a 1950's dollar since every card reader represented cash for IBM.)

    [ Unfortunately, this is incorrect: the IBM card of the 1950s - actually designed in 1928 - was 7.375" by 3.25" in size, while the US dollar bill, the current size of which was set in 1929, is 6.14" by 2.61". The origin of this myth is relatively obvious: the original Hollerith card of 1887 was indeed the same size as the (large) US paper dollar of that time. Suggestions for the choice of size were that the inventor, H. Hollerith, felt that people would treat it with respect due to the similarity; that boxes for currency were cheaply available; and that equipment for handling media of that size was already being made for the US Treasury. However, there is no actual evidence or support for any of these, and they must remain urban legends. -- Ben ]

    The link to find out more about SD West is http://www.sdexpo.com, which lists early-bird discounts and also has an embedded video player with highlights from 2008 at the bottom of the Web page. Join the March Madness for OSS if you dare!


    Talkback: Discuss this article with The Answer Gang


    Bio picture

    Howard Dyckoff is a long term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine and before that used to edit SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.

    Howard maintains the Technology-Events blog at blogspot.com from which he contributes the Events listing for Linux Gazette. Visit the blog to preview some of the next month's NewsBytes Events.


    Copyright © 2009, Howard Dyckoff. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    The Unbearable Lightness of Desktops: IceWM and idesk

    By Ben Okopnik

    For the last couple of years, I've been running Ubuntu on my HP Pavilion zv6000 laptop (AMD Athlon™ 64 processor, 374MB RAM, 40GB HD) - and have often found myself grousing at the grindingly slow performance, particularly when I wanted to run any heavy-duty applications. Until recently, I took it for granted that running anything like GIMP or even Firefox meant essentially constant swapping, huge latency, stuttering in my audio programs, skipped frames in video apps... in short, behavior that I would have expected out of some ancient, decrepit piece of hardware rather than the 64-bit powerhouse that this laptop is supposed to be. Even though I got more or less used to it, there was always that bit of buried outrage that one of the friendliest Linux distros out there had consigned me to a life of crawling while everyone else was sprinting. If a Windows user ever asked me "so, is Linux faster than Windows?", I'd have to point them to other people's machines - since my own was anything but an effective demonstrator. It was time, and past time, to put some effort into resolving the situation.

    Sometimes, it takes a particular event to push us out of a rut; in my case, the trigger happened when I recently attended the Ubuntu Bug Jam at SCaLE (the Southern California Linux Expo). The local Ubuntu team was a very nice group of helpful folks who helped me out with an odd question that I had regarding font viewers under Ubuntu, and who got me thinking about other problems that I wanted to address - including this one. My questions about speeding up my machine didn't produce any definitive answers, but the process did get me thinking in this direction - which was enough of a start. Thanks, guys!

    Back to the Future, or Forward to the Past?

    The more I thought about it, the more I recalled my old (previous to Ubuntu) setup: I was running Debian, with IceWM as the window manager and 'idesk' as an icon manager. The combination was highly flexible, very powerful, and gave me everything that I wanted in a GUI (interestingly enough, I found both GNOME and KDE lacking in several respects - even though they had much more functionality in other ways.)

    Installing both of the above under Ubuntu was trivially simple:

    sudo apt-get install icewm idesk
    

    ...which, of course, produced no visible result. Unsurprising, since GNOME was still running; fortunately, the GNOME display manager (GDM) provides us with a nice mechanism for changing WMs. All you have to do is restart X - whether by logging out or by hitting the magic 'Ctrl-Alt-Backspace' key combo which kills the X server - and select 'Options/Select session'. This allows you to start X with a different window manager, and to make that change temporary or permanent. Initially, I used the "Just for this session" version, but once I had everything configured the way I wanted it, I made the change permanent.

    Configuring IceWM

    Like any properly-written application, IceWM will run just fine without any special configuration - but configuring it "kicks it up to the next level". From a very basic, boring desktop with minimal functionality, it can quickly become a sleek, beautiful, responsive environment for any kind of GUI operations that you can envision for a GUI manager. Now, I'll be honest: I've looked at the IceWM GUI configurator (icemc) and... was not impressed. No insult to the people who put it together, but I find that a plain text editor works better, at least for me. The syntax of the various toolbar and key configuration files is so simple and intuitive, especially if you copy and edit the sample files that come with IceWM, that there's no reason to add another layer of complexity between you and the results you want.

    Some of the most basic things - e.g., choosing a theme or a focus method - are built right into the IceWM menus. As an example, in this image, I've replaced IceWM's basic greenish-gray background with a snappy-looking "Area51" theme just by choosing it from the menu (available either from the button in the lower left corner or right-clicking on the desktop):

    IceWM: Area51 theme
    IceWM: Area51 theme

    I actually recommend choosing a theme for yourself before you start hacking the rest of the settings - this will create a ~/.icewm directory and place a 'theme' file in it containing the theme you've chosen. Since that's what we need to do anyway, this saves us a bit of work.

    By default, IceWM will read the configuration files in /etc/X11/icewm. If you're going to tweak a lot of things, I'd suggest copying the files from there as well as from /usr/share/icewm and modifying them: the format is rather obvious in all cases, and it's a lot easier to change what already exists than to try to remember something new and unfamiliar. In practice, at least for me, this comes down to modifying only three files: 'preferences', 'toolbar', and 'keys'. The first one modifies the default preferences for how IceWM itself acts and runs, the second determines the mini-icons on the toolbar, and the third - in some ways the most useful - allows you to define key combinations which will execute the programs that you set for them. This last was, in fact, the functionality that I found either missing or badly restricted in other WMs. Again, since the default files are heavily commented, and the default state for each setting is shown, tweaking these is very easy. Let's start with ~/.icewm/preferences; please note that I've deleted all but the prefs that I want to be different from the default setting.

    #  Support mouse wheel
    UseMouseWheel=1 # 0/1
    
    #  Clock Time format (strftime format string)
    TimeFormat="%T"
    
    #  Show APM/ACPI/Battery/Power status monitor on task bar
    TaskBarShowAPMStatus=1 # 0/1
    
    #  Beep when new mail arrives
    TaskBarMailboxStatusBeepOnNewMail=1 # 0/1
    
    #  Count messages in mailbox
    TaskBarMailboxStatusCountMessages=1 # 0/1
    
    #  Execute taskbar applet commands (like MailCommand, ClockCommand, ...) on single click
    TaskBarLaunchOnSingleClick=0 # 0/1
    
    #  Icon search path (colon separated)
    IconPath="/home/ben/Pics/icons:/usr/share/icons:/usr/share/pixmaps"
    
    #  Command to run on mailbox
    MailCommand="/usr/local/bin/rmutt"
    
    #  Command to shutdown the system
    ShutdownCommand="/usr/bin/gksudo /sbin/halt"
    
    #  Command to reboot the system
    RebootCommand="/usr/bin/gksudo /sbin/reboot"
    
    #  Network device to show status for
    NetworkStatusDevice="ppp0 eth0 wlan0"
    
    # Workspace names
    WorkspaceNames=" 1 ", " 2 ", " 3 ", " 4 "
    
    #  Show workspace switching buttons on task bar
    TaskBarShowWorkspaces=0 # 0/1
    

    Hopefully, all of the above is clear enough; the names of the settings are pretty self-explanatory, and the comments make the rest of it obvious.

    One of the things I decided not to set here, although the capability exists, is the desktop background. Instead, as you will see later in this article, I've left that piece to be handled by idesk, because I prefer the way that it handles backgrounds - and it's nice to have that kind of choice.

    Next, I set up the toolbar file. By default, it shows only two mini-icons: xterm and Mozilla. Since the toolbar is always visible no matter what windows may cover the desktop, this is a good place to put your "must-have" apps - and that's what I did, reserving the desktop for things that are less immediate.

    prog "BackFlip" "kmultiple_16x16.xpm" /usr/local/bin/bkgr
    prog "Lock screen" "decrypted_16x16.xpm" /usr/bin/xlock -random
    prog "Mix" "music_32ndnote_16x16.xpm" /usr/bin/tkmixer
    prog "Firefox" "mozilla_16x16.xpm" /usr/bin/firefox-2
    prog "JPilot" "pda_black_16x16.xpm" /usr/bin/jpilot
    prog "Xcal" "spreadsheet_16x16.xpm" /usr/local/bin/Xcal
    prog "Snap" "image_16x16.xpm" /usr/local/bin/snap
    prog "Cell-up" "presence_online_16x16.xpm" /usr/local/bin/aircard-up
    

    Again, we have a simple layout: the popup label for the icon, the icon image, and the program to run. Here, I made use of a number of scripts that I've written over the years as well as standard Linux programs: "bkgr" for changing the backgrounds at will, "Xcal" to display a convenient and simple year-long calendar that I got from a 2-cent tip here in LG, and so on. The icons that I used are located in my ~/Pics/icons directory - but IceWM knows about that because I set the "IconPath" variable in the 'preferences' file, above. No problem at all - except that I had to consult my wife, the resident graphics designer, about the choice of icons. Oh well - we all have our strengths...

    Last, I set up the 'keys' file:

    key "Alt+Ctrl+m" /usr/local/bin/rmutt
    key "Alt+Ctrl+l" /usr/bin/xterm -e lynx
    key "Alt+Ctrl+x" /usr/bin/xterm
    key "Alt+Ctrl+g" /usr/local/bin/urlgrab
    key "Alt+Ctrl+u" /usr/local/bin/urlsee
    key "Alt+Ctrl+k" /usr/bin/xkill
    key "Alt+Ctrl+a" /usr/local/bin/voladj -u
    key "Alt+Ctrl+z" /usr/local/bin/voladj -d
    

    Again, obvious syntax and lots of scripts - 'rmutt' to set up and fire off my Mutt email reader, 'urlgrab' to run "wget" on a currently-highlighted URL (via 'xclip'), 'urlsee' to browse a URL, and 'voladj' to adjust the audio volume on the system. Few other window managers make it this easy!

    One of the really nice bits of running IceWM is the ability to test its configuration quickly. Most of the changes - as far as I can tell, everything except 'preferences' - happen immediately, as soon as you save the relevant file; testing the changes in 'preferences' simply requires selecting 'Window Managers/IceWM' on the menu, which restarts IceWM (a matter of two seconds or so.) Instant feedback is great - especially if you're used to having to restart GNOME or KDE, which takes a long time by comparison.

    Configuring idesk

    Now that IceWM is up and running, you may decide that you want desktop icons as well. Frankly, I see a number of arguments against doing so: given the combination of the menu button (as well as right-clicking anywhere on the desktop, which brings up the same menu), the toolbar icons, and the fact that you can run programs by clicking the mail icon, the network monitors, the CPU monitor, and the clock, and considering that the desktop itself will be covered 99% of the time, it seems like gilding the lily. However, if that's what you want to do, then I suggest sticking with an equally lightweight option: idesk. This is a simple but capable icon manager; its only weak point, in my opinion, is the lack of easy configuration for its icon files - so, as is often the case, I've made up for the lack by writing a bit of code. This configurator is slightly clumsy with regard to setup, but works pretty well.

    There are two parts to setting up idesk: you need to create a ~/.ideskrc file and the individual .lnk files in a directory called ~/.idesktop. Sample files for these can be copied from '/usr/share/idesk/default.lnk', but I'll make it even simpler than that (keep reading, and you'll see!) Everything that goes into those is well documented in /usr/share/idesk/README, but you don't actually need to go in there unless you run into problems.
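
    For a rough idea of what one of those icon definitions looks like, here is a minimal .lnk sketch based on the fields documented in the idesk README; the caption, command, icon path, and coordinates below are only made-up examples, so check default.lnk on your system for the authoritative format:

    table Icon
      Caption: Terminal
      Command: /usr/bin/xterm
      Icon: /usr/share/icons/gnome/24x24/apps/gnome-terminal.png
      X: 60
      Y: 60
    end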

    To make all of the above easy, I used a combination of the 'thttpd' web server, a bit of CGI, and the Mozilla browser - along with some canned links. Here's all you need to do to make it work.

    Setting up

    First, make sure to install the 'thttpd' web server and the 'gnome-icon-theme' package. Next, copy the default .ideskrc file and create your .idesktop directory with a 'builder' directory under it:

    sudo apt-get install thttpd gnome-icon-theme
    cp /usr/share/idesk/dot.ideskrc ~/.ideskrc
    mkdir -p ~/.idesktop/builder
    

    Now, place the files from this tarball into the '~/.idesktop' directory. This will also create a symlink in '~/.idesktop/builder' called '24x24' that points to /usr/share/icons/gnome/24x24:

    tar xvzf builder.tgz -C ~/.idesktop
    

    At this point, you have the minimal setup necessary to run idesk. Start it from a terminal, watching for errors:

    idesk
    

    You'll probably see warnings when you do:

    Idesk starting in :0.0
    [idesk] Background's file not found.
    [idesk] Background's source not found.
    

    These are not critical, but they make a good reminder: you may want to add a background image to ~/.ideskrc at this point (all of the following should be added at the end of the 'table Config' section.) You could specify either a Background.File: /absolute/path/to/image/file.ext or a Background.Source: /directory/with/images for a randomly-chosen background that changes every so often (Background.Delay: 5 would set it to every 5 minutes; the default is 0 - i.e., never changes - which would just give you a random image once per session.) You might also consider setting Background.Mode: MODE, which will modify how the images are displayed; the valid modes are Stretch, Scale, Center, Fit, and Mirror.
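
    For example, to rotate through a directory of images every five minutes, the lines added at the end of the 'table Config' section might look like this (the directory path here is just an illustration - point it at wherever you keep your wallpaper):

    Background.Delay: 5
    Background.Source: /home/ben/Pics/backgrounds
    Background.Mode: Stretch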

    Once idesk is up and running, double-click the 'Desktop' icon that appears on your desktop; this will start 'thttpd', bring up a Mozilla browser, and show you the following interface:

    the Builder interface
    The Builder interface

    Simply enter the caption (i.e., the label which will show under the icon), the name of the program - preferably including its path - and click on an icon. Builder will create the .lnk file based on your choices and show you the contents; you can then return to Builder to create another icon. When you're done, just close the browser; this will also kill the web server. To see your new icons, you'll need to restart idesk; this simply requires double-clicking any of its icons with both mouse buttons. To edit a link file manually, you need to double-right-click its icon; running the program, as always, takes a double-left-click.
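
    If you prefer doing the restart from a terminal - handy while you're still testing your icon files - something along these lines should also do the trick:

    killall idesk && idesk &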

    The Wrap-up

    In this article, I've tried to present a cohesive setup for anyone interested in converting their default Ubuntu setup to a lighter, faster, sleeker configuration. Obviously, I can't guarantee any specific gains if you do this, but on my machine, I've gone from a CPU that was always showing a load of 1 or even slightly more while completely idle to one that acts as a Linux machine should:

    'top' output: load averages with IceWM
    'top' output: load averages with IceWM

    No longer is running Mozilla a concern, nor do I see much (if any) swapping when I run GIMP, etc. In fact, for the first time ever on this machine, I've been able to install and run 'openarena' (a video-heavy first-person shooter) and have it actually work - no lag! - instead of chugging slowly one frame at a time. Applications open with unbelievable speed, and even flipping open an xterm feels like neck-snapping acceleration. For this happy camper, IceWM and idesk provide a comfortable, competent, and above all fast graphical environment for Linux. Give it a try!


    Talkback: Discuss this article with The Answer Gang


    picture

    Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.

    Ben was born in Moscow, Russia in 1962. He became interested in electricity at the tender age of six, promptly demonstrated it by sticking a fork into a socket and starting a fire, and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory (the recurring nightmares have almost faded, actually.)

    His subsequent experiences include creating software in more than two dozen languages, network and database maintenance during the approach of a hurricane, writing articles for publications ranging from sailing magazines to technological journals, and teaching on a variety of topics ranging from Soviet weaponry and IBM hardware repair to Solaris and Linux administration, engineering, and programming. He also has the distinction of setting up the first Linux-based public access network in St. Georges, Bermuda as well as one of the first large-scale Linux-based mail servers in St. Thomas, USVI.

    After a seven-year Atlantic/Caribbean cruise under sail and passages up and down the East coast of the US, he is currently anchored in northern Florida. His consulting business presents him with a variety of challenges such as teaching professional advancement courses for Sun Microsystems and providing Open Source solutions for local companies.

    His current set of hobbies includes flying, yoga, martial arts, motorcycles, writing, Roman history, and mangling playing with his Ubuntu-based home network, in which he is ably assisted by his wife and son; his Palm Pilot is crammed full of alarms, many of which contain exclamation points.

    He has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.


    Copyright © 2009, Ben Okopnik. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    Joey's Notes: Bash shell basics

    By Joey Prestia

    Joey's Notes image

    It is often said that the real power of Linux lies at the command line - and this is true. Unfortunately, many new Linux users start out trying Linux as a desktop operating system and get frustrated when they discover the frequent necessity of using the command line - an unfamiliar and thus uncomfortable environment. This usually happens when they need to install from a tarball (compilable source code in a tar.gz file) and discover that they have to type commands or navigate to a specific directory. To make matters worse, the directory-separator slash (/) leans the 'wrong' way if they're used to DOS.

    Or perhaps they need to get video playback to work right - something that can still be quite challenging under Linux. These types of things can be quite difficult for new users because of numerous differences in the operating systems. My purpose here is to try to make the transition easier by explaining some basics that should help, whether you are trying to learn Linux formally or just starting out as a desktop user trying to break away from Microsoft.

    In reality, Linux will become a whole lot easier if you get familiar with just a few key concepts. Among those things is the directory structure: for example, there's no more C:\ drive. Instead, Linux has the root of the file system located at '/', which is equivalent to C:\ in Windows. No matter how many drives there are in a Linux system, there will only be one filesystem tree - more drives just increase the available space. Linux does not use the concept of one directory tree for each drive like Windows does.
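
    To see that single tree for yourself, just list the root of the filesystem. The exact set of top-level directories varies a bit from distribution to distribution, but the result will look roughly like this:

    [tom@station1 ~]$ ls /
    bin   dev  home  lost+found  misc  opt   root  selinux  sys  usr
    boot  etc  lib   media       mnt   proc  sbin  srv      tmp  var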

    Learning the structure of this directory tree is one of the biggest hurdles for the new user. As an example, the part of the system containing your personal files is referred to as your home directory; its name is the same as your username, and it is located under the '/home' directory. There is a link to a great little tutorial at the end of this article that may take a couple of hours to go through, but it will definitely help you to get familiar with Linux.

    Most commands in Linux are some short form of what the command does; e.g., the ls command stands for 'list', and cd is short for 'change directory'. Just by learning a few simple commands, you will be able to navigate your system effectively and perform many tasks - but be patient; it takes time. Linux has around 3,000 commands.

    [ Actually, that statement is both true and misleading. Every program in Linux is a command, but you only need to know a tiny percentage of these to work on the command line effectively. In general, the standard Unix toolkit (e.g., this list of commands) is all that's necessary - and even experts are rarely completely familiar with all of these. In my opinion, knowing about 40 of these commands will suffice the average user 99.9% of the time. -- Ben ]

    All of these have similarities in their syntax, and there are some basic concepts that are uniform across all commands. Perhaps the most important thing to remember is the man command; whenever you need to use some unfamiliar command, just type man command. This will pop up the manual page for that command, giving you information that you may need before you execute it.

    Syntax

    The syntax of a command refers to the grammar of the sequence that we enter on the command line. To verify the syntax for a specific command, a quick glimpse at the appropriate man page will usually clear things up; this information is found under the SYNOPSIS heading. For most commands, the syntax is 'command options argument'.

    Command
    The name of a program like ls or cat or grep.
    Option
    A letter or a word preceded by a dash for "short options" or double dashes for the "long form" of an option. -a is a short option and --all is a long option; both mean the same thing.
    Argument
    An argument may be a file name, a directory, a device specification like /dev/cdrom. It is usually tied to the option (e.g., '--file doc.txt') and is, in a sense, the object of the command.

    Most people have problems deciding where to put a space in the command line. The best guideline is that there should be a space after the command name and between all options (each preceded by its dash). Arguments may or may not require preceding spaces. Again, the man page is often the best guide.
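
    As a quick illustration of the 'command options argument' pattern:

    [tom@station1 ~]$ ls -l /tmp

    Here, 'ls' is the command, '-l' is a short option asking for the long listing format, and '/tmp' is the argument - the directory being listed.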

    The Prompt

    The first thing you'll see when working in the command shell is the shell prompt. Looking at it will normally tell you what user you are, your system hostname, and your location on the system.

    [ This is not necessarily true on many other Unix systems, or even in some of the smaller Linux distros. Often, the prompt consists of nothing more than '$', '%', or '#' to show your user level and shell type; for more information, you may need to either configure the prompt or use some of the toolkit informational commands. -- Ben ]

    Looking at the sample below, we can see that our username is 'tom'. This is followed by the @ symbol, which separates the username from the system hostname; if you never named your system, it may have 'localhost' in this position. Following this, we have the '~' character, which indicates that we are in this user's home directory. Finally, we have the '$' character; this tells us, in a general fashion, the level of permissions that we have as a user. This varies across different shells; for example, the C shells (csh, tcsh) will use '%' for a standard user, while the Bourne shell and its derivatives (sh, ksh, bash, zsh, and a number of others) will usually use '$'. The root user (the system's superuser) gets a special prompt of '#'; this lets the user know to be very, very cautious and precise in the commands they issue, since the root user can easily destroy or damage the system.

    Regular User:

    [tom@station1 ~]$            
    

    Root User:

    [root@station1 ~]#
    

    By typing the command echo $SHELL, tom finds out that he is using Bash (a.k.a. the "Bourne Again Shell".) The 'echo' command in this example prints the value of the $SHELL variable to the screen. The Bash shell is normally the default shell on most modern Linux systems.

    [tom@station1 ~]$ echo $SHELL
    /bin/bash
    

    If the current directory is not shown in the prompt, the first important command you should learn is pwd, which is short for "print working directory". This will reveal the full path, as we can see below.

     
    [tom@station1 ~]$ pwd
    /home/tom
    

    Commands

    Commands are either shell built-ins, compiled executable programs, or executable scripts located somewhere on your system. The shell is a command interpreter which reads certain environment variables (you can see these by typing 'env' at the command line) when you open a command shell. When you type a command, the command interpreter will search the directories listed in the PATH variable to find the program you've asked it to execute. If it is not a shell built-in, an alias to a command, a defined function, or a program in the defined path, the command interpreter returns the 'command not found' error.

    [tom@station1 ~]$ print help me 
    bash: print: command not found
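
    To see where the shell is looking, you can print the search path and ask Bash what a given name resolves to. The output will differ from system to system - and 'ls' is often set up as an alias - but it will look something like this:

    [tom@station1 ~]$ echo $PATH
    /usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
    [tom@station1 ~]$ type cd
    cd is a shell builtin
    [tom@station1 ~]$ type ls
    ls is /bin/ls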
    

    There are many commands - maybe too many. To see how many you have available, just hit the 'tab' key twice in a terminal; this will show you a number and ask if you actually want to see all of them. Don't worry: you'll never need to know most of them (many of these are invoked by other commands or via GUI menus.) No one can remember them all - there wouldn't be any point to it in the first place - and learning just a handful will help you out greatly on any Linux system.

    Commands are case sensitive and are normally lower case. Another important thing to remember is that many commands (e.g., copying, moving, attaching a device, etc.) work in a "what to where" fashion:

    cp /my/source/file /destination/file/or/directory
    mv /from/here /to/there
    mount /dev/mydevice /directory/to/mount/in
    

    Switches and Options

    Switches and options are used to modify the action of the command. They are usually preceded immediately by either a single or double dash, and are in the form of letters or full words. Some commands will require options and some will not. Here are some samples:

    Simple command:

    [tom@station1 ~]$ ls
    

    Short option example:

    [tom@station1 ~]$ ls -a
    

    Multiple short options:

    [tom@station1 ~]$ ls  -a -h -i
    

    Multiple short options can be grouped:

    [tom@station1 ~]$ ls -ahi
    

    Long option example:

    [tom@station1 ~]$ ls --all 
    

    Multiple long options:

    [tom@station1 ~]$ ls --all --human-readable --inode
    

    Normally, short options can be grouped together if more than one is required. In addition, some short options have equivalent long options which perform the same action.

    Arguments

    An argument is usually an object that the command operates on; whether a command takes an argument at all depends on the command. For example, the command ls can take an argument, but if none is supplied the command assumes the current working directory is the argument and lists the current directory. An example of a command that takes no arguments is 'true': it takes only options. Again, arguments vary depending on the command and a great many commands have a default argument.
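
    For example, 'ls' by itself lists the current directory, while 'ls /var/log' lists that directory instead - same command, different argument (the directory contents shown here are only representative):

    [tom@station1 ~]$ ls
    Desktop  docs  music
    [tom@station1 ~]$ ls /var/log
    boot.log  cron  dmesg  maillog  messages  secure  yum.log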

    Getting Help Using the Man Pages

    When using the command line, it is advantageous to get in the habit of using Linux's built-in manual page system, also called the "man" pages. At first glance, the man pages can look intimidating and very cryptic. Be assured, though - they are very helpful and necessary. By using them you will greatly enhance the speed at which you are learning Linux. To see the man pages for a command, merely type 'man command' and a screen with information will pop up. To navigate a man page, you use the arrow keys and page-up/page-down keys to scroll, and the letter "q" to quit. There are a number of other navigation keys for searching, jumping to the top or the bottom, and so on; these depend on the pager program (usually 'more' or 'less', with the latter being most common.) You can read about these by hitting the 'h' key (help) while browsing the man page.

    When you look at any man page, you will notice several section headings:

    NAME:
    The name of the command and often similar commands that may be of use.
    SYNOPSIS:
    This section will give you the detailed syntax for running the command correctly. You may have to review the OPTIONS section, below, to get the precise details. There are many commands that must have options supplied to work at all.
    DESCRIPTION:
    Usually a brief to detailed technical description of what the command does.
    OPTIONS:
    Under this heading, there will be a listing of the long and short options that the command accepts. You will almost always need to use some sort of option with a command, so this section is quite important. This section can be fairly long. Since most commands use different meanings for their options, be aware that whatever options you used on one command may mean something completely different when used with some other command.
    EXAMPLES:
    When using unfamiliar commands, I almost always look for this section so I don't foul something up. This section, if it exists, may further clarify the syntax of the command being researched.

    There may be more sections for some commands and fewer for others; it depends on the command.

    Learning Linux can be a long process but you will most definitely find it fun and rewarding as you discover the awesome power that it contains.

    Additional Help

    The Linux system was originally designed for the command line user, and if you know how to find the help that you need, you will have access to far more power than the average desktop user can even imagine. Another helpful source of information, if it is installed on your machine, is the 'info' command; this is the Texinfo documentation system, which contains the official documentation of the GNU Project. Some systems have both 'man' and 'info' available. It is a good idea to check both and see whether one offers information that is more to your liking. Generally, info pages tend to be easier to understand than the man pages, although in some instances the information is identical. To use the info system, type 'info command', or just 'info' to get familiar with it.
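
    For example, to read the info documentation for 'ls' (assuming the relevant info pages are installed on your system):

    [tom@station1 ~]$ info ls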

    Tab Completion

    Another helpful feature of the Linux command line is tab completion for commands and paths. If you are not sure of the complete spelling of a command or a file name, just type the first couple of letters and hit the 'tab' key to see what the Bash shell comes up with. If what you've typed so far identifies a unique command or file, the shell will fill in the rest of the spelling for you; if not, it will present you with a list of possible choices.

    Assume, for example, that you need a command to help you configure your sound card, but you only remember that it starts with "syst". Just type

     
    [tom@station1 ~]$ syst
    

    Then hit tab (you may need to hit it twice, depending on your configuration), which should reward you with the following output:

     
    system-cdinstall-helper          system-config-rootpassword
    system-config-authentication     system-config-samba
    system-config-date               system-config-securitylevel
    system-config-display            system-config-securitylevel-tui
    system-config-kdump              system-config-selinux
    system-config-keyboard           system-config-services
    system-config-language           system-config-soundcard
    system-config-lvm                system-config-time
    system-config-network            system-config-users
    system-config-network-cmd        system-control-network
    system-config-packages           system-install-packages
    system-config-printer            systool
    

    From this, you can see the name of the command you need: system-config-soundcard. This also works with paths to files. To get a feel for how it works, start typing a directory path - but after a couple of characters, hit the tab key and see if it auto-completes for you. If it does not, it's because there are not enough characters to differentiate your input from the other possible choices; add another character or two and hit the tab key again, and it will complete the path for you. This can actually save you a lot of typing: 'cat /usr/include/c++/4.0.4/i486-linux-gnu/bits/messages_members.h' becomes 'cat /u⇒i⇒c+⇒⇒0.⇒i⇒b⇒m⇒' - 22 characters instead of 66!

    Command History

    Using the arrow keys can also make things easier. In Bash, these are mapped to what is called the command history: hitting the up arrow once takes you to the previously executed command, and hitting it twice takes you back two commands. The down arrow does the opposite and moves you forward in the command history until no more commands are left. To see everything in the command history, type 'history' on the command line; a previous command anywhere in the list can be recalled with 'Ctrl-R' followed by typing part of that command. A command can also be recalled by noting its number in the 'history' output and typing an exclamation point followed by that number. There are many more shortcuts available; you can read about them in the 'HISTORY' section of 'man bash'.
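
    A hypothetical session (the history numbers and commands shown here are just an illustration) might look like this:

    [tom@station1 ~]$ history | tail -3
      42  cd /tmp
      43  ls -l
      44  history | tail -3
    [tom@station1 ~]$ !43

    Here, '!43' re-runs the 'ls -l' command recorded earlier in the history.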

    Some basic commands and common switches

    pwd - Print working directory
            --help                        Display help information
    man - Display manual page
            [1-8]                         Man page section number
            -a                            All relevant information
            -k                            Keyword search
    cd - Change directory
            ..                            Up one level
            /                             Root of the filesystem
            ~ (or nothing)                Current user's home directory
    ls - List files
            -a, --all                     All files, even hidden ones
            -l                            Long listing
            -h, --human-readable          Human-readable sizes
    cp - Copy files
            -i, --interactive             Prompt before overwrite
            -r, --recursive               Include underlying directories
            -a, --archive                 Archive mode (all files)
    mv - Move files
            -i, --interactive             Prompt before overwrite
            -u, --update                  Move only if the file is newer
            -f, --force                   Don't prompt before overwriting
    rm - Remove files
            -r, --recursive               Include underlying directories
            -f, --force                   Don't prompt before removal
            -i, --interactive             Prompt before removal
    mkdir - Make directory
            -p, --parents                 Make parent directories
            -m, --mode=MODE               Set file mode
            -v, --verbose                 Print confirmation
    rmdir - Remove directory
            -p, --parents                 Remove directory and its parents
            -v, --verbose                 Print confirmation
            --ignore-fail-on-non-empty    Ignore failure if files exist
    cat - Concatenate files
            -b, --number-nonblank         Number the non-blank lines
            -n, --number                  Number all lines
            -s, --squeeze-blank           Show only one blank line
    locate - Locate files
            -c, --count                   Count matches instead of listing them
            -i, --ignore-case             Be case-insensitive
            -S, --statistics              Show locate database statistics
    less - Show one page at a time
            -N, --LINE-NUMBERS            Print line numbers
            -s, --squeeze-blank-lines     Display only one blank line
            -E, --QUIT-AT-EOF             Make less exit at the last page
    grep - Print lines matching a pattern
            -i, --ignore-case             Ignore case distinctions
            -R, -r, --recursive           Read all files under each directory
            -v, --invert-match            Select non-matching lines
    mount - Mount a device
            -o                            Specify additional options
            -t                            Manually specify the filesystem type
            -a                            Mount all filesystems in fstab
    umount - Unmount a device
            -t                            Specify the filesystem type
            -v                            Verbose mode
            -r                            On failure, remount read-only

    Resources


    Talkback: Discuss this article with The Answer Gang


    [BIO]

    Joey was born in Phoenix and started programming at the age of fourteen on a Timex Sinclair 1000, driven by the hope that he might be able to do something with this early model computer. He soon became proficient in BASIC and assembly language. Joey became a programmer in 1990 and added COBOL, Fortran, and Pascal to his repertoire of programming languages. Since then, he has become obsessed with just about every aspect of computer science. He became enlightened and discovered RedHat Linux in 2002, when someone gave him RedHat version six; this started off a new passion centered around Linux. Currently, Joey is completing his degree in Linux Networking and working on campus for the college's RedHat Academy in Arizona. He is also on the staff of the Linux Gazette as the Mirror Coordinator.


    Copyright © 2009, Joey Prestia. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    SCaLE 7 Speed-through

    By Kat Tanaka Okopnik

    It's hard to believe that the Southern California Linux Expo (SCaLE), now in its seventh year, started off as a grouping of Southern California LUGfests. Regular readers of LG may remember that Ben and I first attended two years ago, as recounted in a series of writeups in issue 136. Our now-18-month-old son, Michael, attended for the second time as well, while his sibling-to-come upheld family tradition by attending his or her first Linux convention in utero. Some of the big names in Open Source attend this convention, and the organizers have really applied every lesson they've learned along the way to make this a smoothly-run success.

    In contrast to the long lines reported at SCaLE 5, the organizers' switch to self-check-in cut delays substantially. The check-in printout included a raffle ticket for the Saturday evening reception and a personalized Web address for speaker surveys. The convention schedule was well-designed and easy to read. Signs made what could have been "hard to find" rooms merely a long walk.

    There were signs of economic slowdown everywhere. Freebies were barely in evidence, and most of those were inexpensive - distro CDs, pens, and stickers. Some of the very few memorable ones were the ubiquitous flashing FreeBSD felt devil horns on a headband, O'Reilly's selection of buttons, and the ZenOSS squishy zebra. Michael's favorite was the collapsible frisbee in a pouch from Bytware; his Mommy and Daddy really appreciated that, too, as one of the best child-safe and attention-holding gimmes ever! The convention itself gave away eco-friendly lightweight canvas totes instead of T-shirts - a smart move away from the usual plastic or paper bags.

    The free WiFi available in many parts of the convention was maddeningly slow. (Another unfortunate victim of budgetary constraints - truly adequate bandwidth would have been prohibitively expensive, and efforts at more creative solutions were stymied by a lack of line-of-sight to appropriate relay stations.) The Westin itself is a T-Mobile hotspot, so anyone inclined to pay was probably well served.

    Friday started the weekend off with a full schedule:

    What was once the "medical track" at SCaLE has become its own one-day conference (Demonstrating Open-Source Healthcare Solutions - DOHCS). Ben and I didn't sit in on any seminars, but it looked like it was well-attended and well-regarded by the professionals it was designed for.

    Fedora and ZenOSS had free-form tracks on Friday, respectively as "Activity Day" and "Community Day", while Subversion opted for a series of organized panels for their Community Day track. Sun had Solaris workstations set up in the room that became the "Try It Lab" for the rest of the weekend; this was a hands-on area with multiple workstations and a facilitator for trying out everything from Ruby to Moodle to OpenOffice - a very nice opportunity for anyone who wanted to get a little actual experience.

    I stopped by one of the last panels in the Women in Open Source track on Friday, after ignoring the room with the mysterious label "WIOS". I wish they had opted for the complete expanded title instead. The OSSIE (Open Source in Education) track was also very well attended - they managed to fill an auditorium. This is the track that Ben regrets missing the most, since education has always been near and dear to his heart; perhaps we'll catch it in a future SCaLE.

    Other multi-slot programming included BSD certification exams, the Ubuntu Bug Jam, and the League of Professional System Administrators (LOPSA) running four half-day classes for sysadmins as SCaLE University. The convention had its usual excellent spread of classes for all experience levels from utter beginner to pro, with some big name speakers at all levels. Ben and I didn't get to as many panels as we'd have liked, but we had a great time over all. We did a lot of tag-team babyminding, and some fellow attendees (and some staffers) entertained Michael as well. Many of the presenters shared their information at the SCaLE Web site, and I will definitely be following up on the ones I wish I'd seen. I wish I'd gotten to participate in "Growing up Free", and hope that SCaLE expands that concept along with the ubiquitous OLPC machines to a parent/child track (especially if they incorporated childcare or child-friendly panels).

    The "Weakest Geek Theatre" (actually, a panel-style game show with contestants voting each other off for bad answers) was a lot of fun. Topics ranged from the obvious to the obscure, and Randal Schwartz was on hand to act as arbiter for the Perl questions. Contestants started to use "Al Gore!" (who, as we all know, didn't invent the Internet), as their favorite way of saying, "bleeped if I know the answer to that!", and I'm highly tempted take that on, myself. Midway through, someone announced the availability of FREE! beer in the back. Cue stampede for free beer.

    As for the venue itself, the Westin LAX's elevators were horribly slow, but that, of course, is out of SCaLE's control - and there were signs promising faster service (after renovation) coming soon. The front desk staff was extremely gracious and helpful.

    We met some LG alums - sorry to say I don't remember your names, having been slightly distracted by the energetic explorations of Michael throughout the weekend. Please do remind me! We learned about some great open source software - a few games we had overlooked until now, and some projects we hope to showcase in the coming issues.

    For anyone at any level of Linux expertise, SCaLE is a great "bang for your buck" gathering and one we look forward to attending again.


    Talkback: Discuss this article with The Answer Gang


    Bio picture

    Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.

    Her transition away from other OSes started with the design of a massively multilingual wedding invitation.

    When away from the keyboard, her hands have been found wielding knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler. More often these days, she's occupied with managing her latest project.


    Copyright © 2009, Kat Tanaka Okopnik. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    Development Builds Layered on Top of a Stable System by Means of Unionfs

    By Dirk Wallenstein

    Intro

    Unionfs is a filesystem type by which you can merge multiple directories into one mountpoint. The source directories (branches) are stacked onto each other, with filesystem entries in the top layer hiding/overriding entries in the layers below. In this article, I'd like to show you how to set up a layered /usr directory, which allows you to push and pop a development directory onto/from your distribution's /usr directory. That way, you never modify your stable system, yet can still test your work in a real context. Further, I'll show you how to put boot options in place to choose between the stable system and the development system at bootup; that way, you can even replace core system applications that are started by init.

    Disclaimer

    The steps shown here will deeply interfere with your system's filesystem hierarchy and boot process. You might render your system unusable or even lose all your data. I cannot be held responsible for that. Reader beware; back up your system to a safe location.

    Getting unionfs support

    Alas, no filesystem unification implementation has made it into the vanilla kernels, so you have to patch and compile a custom kernel to get the support. If you haven't done that before, there is a distribution-specific kernel compilation series on howtoforge.com; the articles run under the title "How To Compile A Kernel - The XXX Way", where XXX is Fedora, Ubuntu, etc.

    You can get the patches for unionfs and further documentation at the unionfs project. Once you have applied the patch, there is very focused documentation below Documentation/filesystems/unionfs/ in the patched kernel tree. I think concepts.txt and usage.txt in particular give you all the information you need to understand and operate a unionfs, and they aren't too long. You'll need to activate unionfs support in the kernel configuration by checking "File systems -> Layered filesystems -> Union file system" (UNION_FS), and optionally "Unionfs extended attribute" (UNION_FS_XATTR) next to it. Now, compile and install the new kernel.
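
    As a rough sketch of the whole sequence (the patch file name and kernel version here are hypothetical - use the ones matching your kernel, and substitute your distribution's preferred install procedure for the last step):

            $ cd /usr/src/linux
            $ patch -p1 < ../unionfs-2.5_for-2.6.27.diff    # hypothetical patch file name
            $ make menuconfig                               # enable UNION_FS as described above
            $ make
            $ make modules_install
            $ make install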

    A New Filesystem Structure

    Unionfs hides everything below the mountpoint (like every other filesystem type), so we have to move the stable branch (the /usr dir of the distribution) to another directory and hide that fact by using a bind mount back onto /usr. As you can guess, this will rumble a bit (i.e., you might miss some commands on the way), and is thus best done from a live distribution of your choice. In this example, I'll use the following branches:

            /usr                    # the mountpoint for your unionfs
            /usr_stable             # your distro's stable branch (formerly known as /usr)
            /mnt/devel/usr          # your development branch
    

    So, let's assume you are running a live distro and your normal root filesystem is mounted on /mnt/hda1.

    1. Rename the stable branch:

                $ mv /mnt/hda1/usr /mnt/hda1/usr_stable
    

    2. Create the mountpoint. You can create fallback links within the mountpoint in case either of the unionfs or the bind mount fails, and a sentinel file which you can check for existence in scripts:

                $ mkdir /mnt/hda1/usr
                $ cd /mnt/hda1/usr
                $ ln -sn -t . ../usr_stable/*
                $ touch YOU_SHOULD_NEVER_SEE_THIS_FILE
    

    3. Then create a bind mount for the stable system.

                $ echo "/usr_stable  /usr none  bind" >> /mnt/hda1/etc/fstab
    

    Now you should be able to boot into your stable system, and everything should work as before.
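
    A quick sanity check after rebooting (with the bind mount in place, the sentinel file should be hidden):

            $ mount | grep ' /usr '
            $ ls /usr/YOU_SHOULD_NEVER_SEE_THIS_FILE    # should report 'No such file or directory'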

    Boot Script

    The next step is to create a script that checks for a chosen identification string on the kernel command line, and if given, releases the bind mount again and puts the union mount into its place. A call to this script must be inserted into the boot process right after the corresponding init scripts have processed /etc/fstab and mounted all the given filesystems. Where and how to insert this script varies among the different distributions; if you don't know, you'll have to consult the documentation of your distribution for this. To find the corresponding scripts, you can grep for 'mount' or 'fstab' in the /etc/init.d/ scripts.

                $ grep -Hw -e mount -e fstab /etc/init.d/*
    
    Good candidates are localmount, bootmisc, or similar; look for the '-a' flag to mount. In the case of Gentoo, the right place to insert the call is here (in this example, the script is called unionmount-usr):
        --- /etc/init.d/localmount.orgy 2009-02-15 10:26:22.000000000 +0100
        +++ /etc/init.d/localmount      2009-02-15 10:33:02.000000000 +0100
        @@ -23,6 +23,9 @@ start()
                mount -at "${types}"
                eend $? "Some local filesystem failed to mount"
    
        +       # conditionally mount development branch through unionfs
        +       /etc/init.d/unionmount-usr
        +
                # Always return 0 - some local mounts may not be critical for boot
                return 0
         }
    

    Note that for the approach given here to work (unmounting the default bind mount), you can't have another bind mount on /usr_stable (or indirectly on /usr) unless you unmount and remount them too.

    Now, the script - this is just a very basic version to illustrate the necessary steps. You might, for example, remount the bind mount if the union mount fails. In general, I like to have a consistency-checker script at the end of the boot process, where you can check for various vital things you don't want to miss. Besides checking iptables rules and such, you could also check there for the existence of the sentinel file (a trivial sketch of such a check follows after the boot script below).

        #!/bin/bash
        # Check the kernel command line (/proc/cmdline) for the id-string; if it is present,
        # release the bind mount on /usr and put the union mount in its place.
        UNION_WANTED_FLAG="UniteWithDevel"      # this is what would be given on the kernel cmdline
        MOUNTPOINT="/usr"
        STABLE_BRANCH="/usr_stable"
        DEVEL_BRANCH="/mnt/devel/usr"
        ERROR_LOG_FILE="/var/log/develUnionFailed.log"
    
        errorOut() {
            MSG="ERROR ${0} $(date) : ${@}"
            echo "${MSG}" >&2
            echo "${MSG}" >> "${ERROR_LOG_FILE}" 
            exit 88
        }
    
        wantUnionMount() {
            grep "${UNION_WANTED_FLAG}" /proc/cmdline &>/dev/null
        }
    
        wantUnionMount && {
            echo "trying to unite..."
            umount "${MOUNTPOINT}" || errorOut "umounting failed"
            mount -t unionfs -o "dirs=${DEVEL_BRANCH}=rw:${STABLE_BRANCH}=rw" none ${MOUNTPOINT} || errorOut "mounting unionfs failed"
        }
    

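    As for the consistency checker mentioned above, a trivial version that uses the sentinel file created earlier might look like this (where exactly you hook it into the boot process is up to you):

        #!/bin/bash
        # Warn if the fallback symlinks in the raw /usr mountpoint are showing through,
        # i.e., neither the bind mount nor the union mount is in place.
        if [ -e /usr/YOU_SHOULD_NEVER_SEE_THIS_FILE ]; then
            echo "WARNING: /usr is not mounted - running on the fallback links!" >&2
        fi
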
    Final Step

    Now, the final step is to create a boot menu entry which contains the ${UNION_WANTED_FLAG} from above. You can put almost any string on the kernel command line (at least any alphanumeric ASCII string); if the kernel doesn't recognize it, it seems to be silently ignored, but it still appears in /proc/cmdline (and in `dmesg | grep 'Kernel command line'`). So, in the case of the example above, just create a boot menu entry with "UniteWithDevel" on the kernel command line, and you can boot right into your development work. This gives you the possibility of replacing core system applications and daemons that get started by the init process (e.g., ntp, iptables, sysklogd). For example:

            kernel /boot/unionkern-2.6.27.10/unionkern-2.6.27.10 root=/dev/hdc9 vga=0x31B UniteWithDevel 
    
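    To confirm after booting that the flag made it through and that the union is actually in place, something like this will do:

            $ grep -o UniteWithDevel /proc/cmdline
            $ grep unionfs /proc/mounts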

    Some Unionfs notes

    Union Mount

    There's also an alternative implementation called union mount. For that, see "Unifying filesystems with union mounts". As far as I know, the corresponding patches are not yet considered stable. It seems unionfs will never be merged into the mainline kernel, but the union mount patches eventually will be; see "Unionfs and related patches pre-merge review". Nevertheless, unionfs is widely used (see here), and I didn't have any problems with it (using the xserver and xlib). So, until the union mount patches go mainline, unionfs seems to be a good option. Obviously, union mount would make filesystem entries below the mountpoint accessible, which would obviate some of the steps above and make a test run much easier (e.g., you wouldn't have to move /usr).

    I have used the term "union mount" in various places in this text and in the scripts because it describes what happens; it does not refer to the implementation that goes by that name. Everything except this paragraph is based on unionfs, not "union mount".

    Populate

    If you have a binary (i.e., pre-compiled) distribution, there are certainly a lot of points to consider. Depending on how deeply your application reaches into the system, you have to build it with the right flags in case other applications make use of those features (--enable-this, --enable-that, and friends, if it is based on autotools). And of course, you have to use the same paths that the installed package uses, in order to hide all the corresponding files in the stable branch. Actually, you would probably only need to hide the entry points used by other applications - binaries, headers, libraries, and such. If there are individual differences left over, you can try to hide them by using symlinks in the topmost layer.
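
    As a contrived sketch (all paths here are hypothetical): to override a single library from the stable branch with your own build, you would place it at the same path inside the development branch:

            $ mkdir -p /mnt/devel/usr/lib
            $ cp /home/me/build/libfoo.so.1 /mnt/devel/usr/lib/    # hypothetical library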

    So, with binary distributions, it is not that easy - but it is very easy with Gentoo, as Gentoo is about building apps from source in the first place. For example: to make little modifications to the xserver and try it out, it would only require something similar to this:

            $ ebuild /usr/portage/x11-base/xorg-server/xorg-server-1.5.3-r2.ebuild compile
                #... now modify the sources like so:
                $ cd /var/tmp/portage/x11-base/xorg-server-1.5.3-r2/work/xorg-server-1.5.3
                $ sed -i "s/\"(II)\"/\"(I am Bob and I almost completely rewrote the xserver)\"/g" os/log.c
                #... and restart the build process
                $ make
            $ ebuild /usr/portage/x11-base/xorg-server/xorg-server-1.5.3-r2.ebuild install
            $ cp -a /var/tmp/portage/x11-base/xorg-server-1.5.3-r2/image/usr/* /mnt/devel/usr/
    

    The result would be exactly the same xserver as the one in the stable branch (if that was version 1.5.3-r2), but it would show condign respect to your accomplishments in the server's logfile.

    Another, more useful, example: as a non-native English speaker, I sometimes miss words in English movies. I like using "kaffeine" to watch movies, but the smallest backward skip is 20 seconds, which is far too long in response to "Come again?". So, the steps from the previous example, adapted to kaffeine, plus a bit of source-code browsing resulted in the following patch, which allows rewinding by 5 seconds:

        --- kaffeine/src/player-parts/xine-part/xine_part.cpp.orgy      2008-11-10 19:24:18.000000000 +0100
        +++ kaffeine/src/player-parts/xine-part/xine_part.cpp   2008-11-10 19:24:23.000000000 +0100
        @@ -511,7 +511,7 @@ void XinePart::slotPosPlusSmall()
    
         void XinePart::slotPosMinusSmall()
         {
        -       slotJumpIncrement( -20 );
        +       slotJumpIncrement( -5 );
         }
    
         void XinePart::slotPosPlusMedium()
    

    The point is that you can easily modify sources and test the results without ever messing up your stable branch.

    Your fully functional stable system is just one reboot away.

    As a side note: You can, of course, also use this mechanism to try out different versions of an application. Just put a read-write mounted blank branch on top of your read-only mounted stable-branch, and use your distribution's package management system to install another, possibly unstable, version of that application. If, in the course of that, you want to manage different top level branches, you might also consider using "/mnt/devel/usr" as a symlink, and let the unionmount-usr script above resolve that symlink to the branch you really want, by adding a line like:

        DEVEL_BRANCH=$(readlink -f "${DEVEL_BRANCH}") 
    

    That way you can choose which branch is mounted at bootup by simply redirecting the symlink.
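
    For example (the branch names here are made up), pointing the symlink at a different top-level branch before a reboot could look like this:

        $ ln -sfn /mnt/devel/branches/usr-xorg-test /mnt/devel/usr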

    Sources

    Now, you've probably guessed that I'm quite satisfied with Gentoo - once it's combined with a unification filesystem. Once you have set up and configured a Gentoo system, and know how to use portage (Gentoo's package manager), updating applications is a no-brainer most of the time. (Big kudos to all the Gentoo developers who make this possible.)

    If you are interested, you can also get an initial binary installation of Gentoo and then still use the source-based package manager on top of the binary packages afterwards. See Gentoo Linux and wikipedia:Gentoo.

    Then there is Sabayon, which is based on Gentoo and also includes a binary package manager, but only supports the x86 and x86-64 architectures. See Sabayon Linux and wikipedia:sabayon.

    Can it get any better?

    Initially, I wanted to share my musings about whether it would be possible to bring the demonstrated "Populate" approach to binary distributions as a distro-independent application. While writing the last paragraph, I took the opportunity to finally have a more thorough look at Sabayon Linux. The impression I got is that you can have a more-or-less semi-annually updated binary base system while still having the opportunity to use all the versatility that Gentoo offers (trying the latest features of bleeding-edge apps, always having the newest versions of exposed and endangered apps, and of course easily modifying selected apps). So, if you don't want to go "all source", you can still try Sabayon Linux as a compromise.

    If you have any interest in browsing, modifying, and testing open source programs, I strongly recommend giving Gentoo or Sabayon a try, getting accustomed to the Gentoo build system, and optionally trying a unified /usr-directory.

    Again, I am very pleased to have the opportunity to develop and test a program while having a stable system just a reboot away. This makes development very pleasant.

    And all the difficulties about dependencies and different build systems are covered by portage. This makes tweaking selected apps very convenient.


    Talkback: Discuss this article with The Answer Gang


    [BIO]

    I've been a Linux user since 2003, and after taking my first Linux steps with SuSE, I very soon chose source-based distributions (LFS and later Gentoo), as they offer a convenient way to see and alter any bit of the system that's actually running.


    Copyright © 2009, Dirk Wallenstein. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    XKCD

    By Randall Munroe

    More XKCD cartoons can be found here.

    Talkback: Discuss this article with The Answer Gang


    [BIO]

    I'm just this guy, you know? I'm a CNU graduate with a degree in physics. Before starting xkcd, I worked on robots at NASA's Langley Research Center in Virginia. As of June 2007 I live in Massachusetts. In my spare time I climb things, open strange doors, and go to goth clubs dressed as a frat guy so I can stand around and look terribly uncomfortable. At frat parties I do the same thing, but the other way around.


    Copyright © 2009, Randall Munroe. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

    Published in Issue 160 of Linux Gazette, March 2009

    The Linux Launderette

    Very short 419 scam

    Mike Orr [sluggoster at gmail.com]


    Wed, 4 Feb 2009 12:04:02 -0800

    ---------- Forwarded message ----------

    From: Jubouri Omar <solveig.skold2@comhem.se>
    Date: Wed, Feb 4, 2009 at 6:43 AM
    
    Subject:
    To: undisclosed-recipients
    

    I want to move my family out of Iraq due to violence and I want to transfer $12,500,000 to you.reply

    --- End forward ---

    How nice and concise and to the point. I wish all scams were like that. But that last word is rude. Just "reply". Not "Please reply" or "Hoping to God I will hear from you soon", but just an order: "Reply!"

    -- 
    Mike Orr <sluggoster@gmail.com>
    

    [ Thread continues here (2 messages/1.72kB) ]


    In honor of Jimmy. . .

    Kat Tanaka Okopnik [kat at linuxgazette.net]


    Thu, 19 Feb 2009 14:50:12 -0800

    http://news.bbc.co.uk/2/hi/uk_news/northern_ireland/7899171.stm

    The mystery of Ireland's worst driver

    Perhaps the Irish police need to have our Jimmy O'Regan and his translating compatriots on the payroll!

    -- 
    Kat Tanaka Okopnik
    Linux Gazette Mailbag Editor
    kat@linuxgazette.net
    

    [ Thread continues here (2 messages/1.17kB) ]



    Talkback: Discuss this article with The Answer Gang

    Published in Issue 160 of Linux Gazette, March 2009
