
"Linux Gazette...making Linux just a little more fun!"


More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


2 Cent Trick: handy dictionary

Fri, 10 Mar 2000 16:52:05 -0800
From: Bryan Henderson <bryanh@giraffe-data.com>

A writer should never forgo looking up a word in the dictionary because it's too much effort. Here's a way to have a dictionary at your fingertips if you're connected to the Internet: make a shell script 'dict' that does

lynx "http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=$*"

Now the shell command

dict quotidian

brings up a definition of "quotidian".

I use Lynx instead of wget because the dictionary page has links on it I might want to follow (such as alternate spellings and synonyms). I use Lynx instead of something graphical because it is fast.
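
If you look up multi-word entries now and then, a slightly longer sketch of the same script can encode the spaces for you (the URL and form parameter are the ones from the one-liner above):

#!/bin/sh
# dict -- look up a word or phrase at Merriam-Webster with lynx.
# Spaces are turned into '+' so multi-word lookups form a valid query.
word=$(echo "$*" | tr ' ' '+')
lynx "http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=$word"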


faq_builder.pl script

Sat, 11 Mar 2000 07:08:15 +0100 (CET)
From: Hans Zoebelein <hzo@goldfish.cube.net>

Everybody who is running a software project needs a FAQ to clarify questions about the project and to show newbies how to run the software. Writing FAQs can be a time-consuming process without much fun.

Now here comes a little Perl script which transforms simple ASCII input into HTML output that is perfect for FAQs (Frequently Asked Questions). I'm using this script on a daily basis; it is really nice and saves a lot of time. Check out http://leb.net/blinux/blinux-faq.html for results.

Attachment faq_builder.txt is the ASCII input used to produce faq_builder.html with the faq_builder.pl script.

'faq_builder.pl faq_builder.txt > faq_builder.html'

does the trick. faq_builder.html is the description of how to use faq_builder.pl.

faq_builder.pl
faq_builder.html
faq_builder.txt


winmodems

Tue, 14 Mar 2000 20:09:41 -0500
From: Les Urban <lnaa@centurytel.net>

Well, I'm a Linux newbie. I've been in DP for 25 years and wanted something better. I'm running SuSE 6.3 and having fun, and I'm really close to wiping WINDOZE from my hard disk. I've really enjoyed your site and have found answers to many of the questions that have been staring me in the face. Keep up the good work.

I read another article about winmodems being dead wood as far as Linux is concerned. I got what I thought was a great deal at Best Buy: a 56k modem for $9.95. Of course I should have known it had to be a winmodem. But I stumbled on the site linmodems.org. There are a lot of people working on this issue. In fact Lucent has provided a binary driver for their LT modems. Well, I downloaded it and I'm running great on my winmodem. There are other drivers available for other modems; some are workable and some are still in development. I believe it's worth taking a look and spreading the word. You can reach a lot more people than I can.

Once again great site and thank you for all the helpful hints. I'll continue to be a steady visitor and a Linux advocate.


Fantastic book on linux - available for free both on/offline!

Sat, 18 Mar 2000 16:15:22 GMT
From: Esben Maaløe (Acebone) <acebone@f2s.com>

Hi!

When I browse through the 2 cent tips, I see a lot of general sysadmin/bash questions that could be answered by a book called "An Introduction to Linux Systems Administration", written by David Jones and Bruce Jamieson.

You can check it out at www.infocom.cqu.edu.au/Units/aut99/85321

It's available both on-line and as a downloadable PostScript file. Perhaps it's also available in PDF.

It's a great book, and a great read!


Tips in the following section are answers to questions printed in the Mail Bag column of previous issues.


ANSWER: Re: help in printing numbered pages!

Wed, 01 Mar 2000 07:48:30 GMT
From: Anthony E. Greene <agreene@pobox.com>

I usually print a lot of documentation. One thing I would like is for my print jobs to have numbered pages, so at the bottom of each page we could see "page 1/xx", etc. I have looked for a while for info on how to set this up, but could not find any. The printtool just doesn't do it. Maybe I should create a filter, but what commands must I use to make this happen?

It depends on the software you used to create the file. If it is a plain text file, you can use "pr" to print it with page numbers in the header:

pr -f -l 55 somefile.txt | lpr
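
pr can also put text of your choosing in the page header instead of the file name; for example (the header text here is just a placeholder):

pr -f -l 55 -h "Project Notes" somefile.txt | lpr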

If the file is HTML, there is a utility html2ps that will do what you need. It's available at http://www.tdb.uu.se/~jan/html2ps.html. It converts HTML to PostScript with an option for page numbers and other things.

If you are using a word processor, it may have an option to include page numbers. Let us know what you are trying to print and we may be able to give better help.

José A. Gaeta Mendes <gaeta@ecosfera.com.br> suggests

See emacs *Tools | Print | Postscript Print Buffer* command... It does exactly what you want!

[ ]s - J. A. Gaeta Mendes Greetings from Brazil!

Michal Jaegermann <michal@ellpspace.math.ualberta.ca> suggests

'man pr'

For fancier output with various bells and whistles

'man nenscript'

(plus maybe a driver script, or two, to get your customizations).

Michal

Bob Ternosky <michal@ellpspace.math.ualberta.ca> suggests

Try the 'mpage' command.

It won't work for everything, but it adds a header and page numbers, and can also print multiple pages per sheet (which saves paper).

Example:

mpage -2 -H document | lpr

(prints 2 pages per sheet, and adds a header with the filename and page numbers)

Hope this helps.

From: Clovis Sena <csena@itautec-philco.com.br>

Thanks Bob and everybody who helped me! Now I am using mpage to print. That makes more sense, saving paper and toner.


ANSWER: Re: Inexpensive, powerful db's for Linux?

Wed, 01 Mar 2000 08:01:11 GMT
From: Anthony E. Greene <agreene@pobox.com>
To develop a distributed database application that runs on Linux, what inexpensive, powerful databases might work best?

Check the Applications section at Linux.org for databases. It lists big names such as DB2, Oracle, Informix, and Ingres, and others such as PostgreSQL, which is free, powerful, and ships with Red Hat Linux. The latest beta adds several desirable features. You may hear a lot about MySQL, but if you're building anything more complicated than a basic query system, you'll need something more powerful than MySQL.

Marius ANDREIANA <rocky@ss.pub.ro> wrote

PostgreSQL, by far: www.postgresql.org. Inexpensive (free, GPL), powerful (check the website for a list of features), great. There's also MySQL, but it is less powerful than Postgres (performance when talking about millions of records, and much more).

Distributed? PostgreSQL provides the database engine, server and clients. You start the server and you can connect to it from any other machine using TCP/IP. Connect from what? From whatever you want: C, Python, PHP, Perl, etc. (see the Postgres HOWTO). A recent great addition is Gnome-DB (www.gnome.org/gnome-db/), which gives you the power to develop cool desktop applications.
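
As a rough sketch (paths, host, and database names are placeholders), enabling TCP/IP connections and connecting from another machine looks like this; the server's pg_hba.conf must also allow the client's address:

# on the database server: start the postmaster with TCP/IP enabled
postmaster -i -D /usr/local/pgsql/data &

# on any client machine: connect with the psql client
psql -h dbserver.example.com mydb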


ANSWER: Re: linuxconf

Wed, 1 Mar 2000 06:36:17 -0700 (MST)
From: <ghaverla@freenet.edmonton.ab.ca>

I'm just guessing, but are the clocks on all these machines synchronized? That seems to be one reason why one computer would continually update another.


ANSWER: Modems

Wed, 1 Mar 2000 12:20:02 -0300
From: Juan M. Fera <jmfera@sion.com>

Hello: You can solve your problems by reading section 20 (Modems) of the Linux Hardware Compatibility HOWTO...

http://www.linuxdoc.org/HOWTO/Hardware-HOWTO-20.html

and section 30 (Appendix E: Linux-incompatible Hardware) too...

http://www.linuxdoc.org/HOWTO/Hardware-HOWTO-30.html


ANSWER: IP Masquerade Connection Problems

1 Mar 00 08:47:35 PST
From: Darrell Scott <scottie99@netscape.net>

This is a known bug with the PPP packages distributed with Red Hat 6.1. See RHEA-1999:051-01 for a full description of the problem, and pointers to updated packages.

Scottie


ANSWER: Re: make virtuald

Thu, 2 Mar 2000 22:32:07 +0100 (CET)
From: Roland Smith <rsmith@xs4all.nl>

I am trying to compile virtuald using 'make virtuald'. Here is the error I get:

Makefile:14: *** missing separator. Stop. 
I did a cut and paste of the code from http://www.linuxdoc.org/HOWTO/Virtual-Services-HOWTO-3.html in section 3.4 (Source), then used ftp to put it on the server in order to compile it.

This is a small gotcha with make. See the following makefile snippet:

# Remove all generated files.
clean:;
        rm -f $(OBJS) $(BASENAME) *~ core $(TARFILE) $(BACKUP) $(LOG)

The commands that follow a rule (`rm' in this case) should be preceded by a tab character. Probably, some tabs got converted to spaces when cutting-and-pasting.

Try editing the Makefile with vi or emacs. Both can insert literal tab characters, and emacs has a special mode for makefiles that takes care of that automagically.
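
A quick way to check whether the tabs survived is GNU cat's -A option, which displays tabs as ^I; command lines under a rule should start with ^I, not with spaces:

cat -A Makefile | head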

Darrell Scott <scottie99@netscape.net> suggests

Hi,

I think you're missing a Makefile!

The "make" command interprets a set of commands in a specifically written makefile, to compile and link source code into an executable. "gcc" is the actual compiler/linker, and is appears to be what you actually require in this case. Try "gcc (source filename goes here) -o virtuald" (it's probably wise to move your source from virtuald to virtuald.c with the command "mv virtuald virtuald.c" first).

Scottie

By the way, if you did write a makefile, a common gotcha is that each command line under a rule must begin with at least one tab. Spaces just won't cut it; you'll end up with the "missing separator" message again!


ANSWER: Linux & win98 internet connection sharing

Wed, 1 Mar 2000 22:41:07 -0500
From: DJ Busch <djbusch@mediaone.net>

Check out a program called WinRoute for Windows 98. You can download a trial copy from winroute.com. If it's set up right, you can tell your Linux box that your Win98 box is your "default gateway" and WinRoute will do the rest for you. I have two W98 boxes, one Linux box and one iMac all running on this kind of setup and I've never had a problem.

DJ Busch


ANSWER: Re: Gazette crashes Konqueror (from LinuxGazette #51/General Mail)

Thu, 2 Mar 2000 08:41:07 +0000
From: Jan-Hendrik Terstegge <helge@jhterstegge.de>

Hi!

In LinuxGazette #51 Pierre Abbat (phma@oltronics.net) wrote:

I am trying to read the Gazette with kfm 1.167 and several pages crash it, including the Mailbag and 2c Tips. Can you help me figure out what's wrong? It's happened before.

In your Subject you wrote that the Gazette crashes Konqueror, but in your text you wrote that it is kfm. Which one do you use? Is Konqueror your kfm replacement? If so, I think you are using a KDE beta version. Please try to use stable releases (I think the latest is KDE 1.1.2).


ANSWER: AS/400 Emulation

Thu, 02 Mar 2000 08:25:59 -0500
From: Vince Du Beau <vdubeau@ploverdev.com>
From: Jeffrey T. Ownby (jownby@ecsis.net)
Subject: 5250 terminal for AS400 connection

I am adding a Linux box to a network consisting of several Win9X and NT machines that use either IBM Client Access or Rumba to connect to our AS400. Is there a program similar to either one of these that can provide terminal emulation on Linux? Any info appreciated!

Later,
Jeffro

I currently use tn5250 to connect from my laptop to an AS/400 (through an NT server). This is a modified version of telnet with better key mappings. You can find it at: http://www.blarg.net/~mmadore/5250.html

IBM also has a Java-based version of Client Access which reportedly runs under Linux. I haven't tried it, since it requires part of it to run from the HTTP server on the AS/400. Their link is: http://www-4.ibm.com/software/network/hostondemand

Hope this helps.

Vince Du Beau

Jimmy O'Regan <jimregan@litsu.ie> suggests

There's a 5250 terminal emulator available at http://www.linux-sna.org/software/5250/index.html

We use an AS/400 in the college I attend, and while I haven't been able to get permission to put a Linux box on the same network as the 400 (and therefore cannot vouch for the Linux version), I have tried the WinNT port of the program, and it works very well. (Though as far as I remember, there was no way to paste.)

But as far as getting the capabilities of Client Access goes, you should try Linux-SNA (http://www.linux-sna.org/), which adds the AS/400's native protocol stack to the kernel. There are also some tools which should provide some of the other capabilities of Client Access, such as file transfer.

(If you get it working, please drop me a note, as I'd love a testimonial to use to convince my college to let me hook up a box to our AS/400.)


ANSWER: Quick tip for mounting FDs, CDs, etc...

Fri, 25 Feb 2000 15:49:17 -0800
From: <fuzzybear@pocketmail.com>

If you can't or don't want to use auto-mounting, and are tired of typing out all those 'mount' and 'umount' commands, here's a script called 'fd' that will do "the right thing at the right time" - and is easily modified for other devices:

#!/bin/bash
d="/mnt/fd0"
if [ -n "$(mount $d 2>&1)" ]; then umount $d; fi

It's a fine example of "obfuscated Bash scripting", but it works well - I use it and its relatives 'cdr', 'dvd', and 'fdl' (Linux-ext2 floppy) every day.
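
Here is a slightly less obfuscated sketch along the same lines, taking the mount point as an argument; it checks /proc/mounts instead of relying on mount's error output (the default mount point is just an example):

#!/bin/bash
# toggle-mount: mount the given mount point if it isn't mounted,
# unmount it if it is.  Defaults to /mnt/fd0.
d="${1:-/mnt/fd0}"
if grep -q " $d " /proc/mounts; then
        umount "$d"
else
        mount "$d"
fi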

Ben Okopnik


ANSWER: Re: Pentium-II Xeon and calculation speed

Sun, 27 Feb 2000 15:19:44 +0100 (CET)
From: Roland Smith <rsmith@xs4all.nl>

Hi guys, I'm trying to get a little bit of help with my computer here. I'm doing very time-expensive calculations using FORTRAN programs, compiled with g77 under Red Hat 6.1. First, on dual Pentium-II/400MHz and Pentium-III/450MHz computers I noticed that when the program size (RSS in top) gets bigger than approximately 600K, computation speed dramatically decreases by a factor of two. This slowdown agrees with the bus speed (100MHz) and L2 cache (512K, 200MHz). So,

What have you done about optimizing your program? There are several things you could try.
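
For instance, compiler options alone can make a noticeable difference. A sketch of flags worth trying with g77 (the file name is a placeholder, and the best combination depends on your code, so time the results yourself):

g77 -O2 -funroll-loops -ffast-math -malign-double -o myprog myprog.f

Keep in mind that -ffast-math trades strict IEEE floating-point behaviour for speed, so check that your results still make sense.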

I've decided that the reason lies in the cache speed/size and bought a (pretty cheap) dual Pentium-II/450MHz Xeon computer, with 2MB of L2 cache per processor that is supposed to run at 450MHz, and 512MB of SDRAM on a SuperMicro mainboard. Unfortunately I did not find any difference in performance between this computer and the still much cheaper dual Pentium-II/400MHz. Why is that?

Total performance depends on a lot of things besides CPU speed and cache size: memory bandwidth and latency, bus speed, disk I/O, and the memory access patterns of your code, to name a few.

The clock speeds between the two systems don't vary much. So if you find that your performance hasn't increased as much as you expected, the only thing that you can conclude is that the cache size probably isn't the limiting factor here.

Maybe Red Hat 6.1 somehow must be told explicitly about the cache size? But I did not find any such option...

I don't *think* so. AFAIK, the cache can't be influenced from the OS on x86 CPUs. I've only heard of that trick with Macs running on 68040s.

HTH, Roland


ANSWER: Re: Anti Virus programs for linux Red Hat 6

Sun, 27 Feb 2000 15:35:41 +0100 (CET)
From: Roland Smith <rsmith@xs4all.nl>

Is there a "definative" anti virul program for Linux? Any info appreciated!

There are several anti-virus programs available, see Freshmeat.net: http://freshmeat.net/appindex/daemons/anti-virus.html

I believe that McAfee also runs on Linux.

But these are mostly used for scanning mail destined for other systems that are more vulnerable to viruses.

Linux and other UNIX-like systems don't suffer much from viruses, because most programs do not run with root privileges, so they don't have access to the system beyond the user's home directory and processes.

So as long as you're surfing as a normal user, and not as root, any virus that you contract can at most endanger your own files and processes, and not the integrity of the system. Besides, most binary and macro viruses are targeted at DOS/Windows, so they don't even work on Linux.

Of course other attacks on your system are possible, in the form of so-called root exploits: using known defects in programs to gain access to your machine as root.

That's why you do need to keep track of your distribution's security advisories.


ANSWER: Re: neighbour table overflow

Sun, 27 Feb 2000 15:38:13 +0100
From: Baco <baco@baco.net>

Dear,

I was running with NFS for quite a long time and then transmission stopped. I get: Sep 6 00:03:20 coyote kernel: eth0: trigger_send() called with the transmitter busy. I rebooted the machine I was connected to and I get the following (part of the /var/log/messages file; not all error statements shown):
Sep  6 17:57:04 beartooth kernel: neighbour table overflow
Sep  6 17:57:04 beartooth kernel: neighbour table overflow
Sep  6 17:57:04 beartooth rpc.statd: Cannot register service: RPC:
Unable to send; errno = No buffer space available
Sep  6 17:57:04 beartooth nfs: rpc.statd startup succeeded
Sep  6 17:57:04 beartooth rpc.statd[407]: unable to register (SM_PROG,
SM_VERS, udp).l:

I had the same problem, but with all ftp daemons running under inetd.

My problem was resolved when I added "127.0.0.1 localhost" to /etc/hosts and set up the loopback (lo) interface using ifconfig and route.
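
For reference, here is roughly what setting up the loopback interface by hand looks like on a 2.0/2.2-era system (normally your init scripts do this for you):

ifconfig lo 127.0.0.1
route add -net 127.0.0.0 netmask 255.0.0.0 dev lo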

I hope this was also your problem.


ANSWER: Win95-Linux small network with null modem

Sun, 27 Feb 2000 10:36:57 -0500
From: David B Sarraf <david.sarraf@juno.com>

Dear Wagner Perlino

What you are asking is commonly done with Linux. For example, I have a small home network with three and sometimes four W95 machines and sometimes another Linux box. All are connected to a hub along with a "server" Linux box, which among other things handles dial-on-demand Internet access for the whole network.

There are more projects underway as well.

I did not set out to do all of this at once. The project began as a request to make a dial-on-demand connection. It just grew as I got better with Linux and realized the power of the O/S and machine.

Dial on demand was quite an experience to set up due to my then inexperience with Linux, ISP hookups, and networking. Now it would be quite easy.

There were some issues with the then-current versions of diald (0.16 and 0.99) and incompatibilities with newer versions of Linux (particularly Red Hat 6.1 and the Ethertap device). When I did this project about a year ago the issues were being worked through; some comprehensive HOWTOs were posted and some users were reporting success. By now the package itself has probably been fixed. My workaround was to stay with diald 0.16 and Red Hat 5.1. Setup was straightforward and the combination has worked flawlessly for about a year.

Contact me if you want more details. I'd be glad to help.


ANSWER: Subject: insmod and newbie to RH6.1

Sun, 27 Feb 2000 11:05:53 -0500
From: David B Sarraf <david.sarraf@juno.com>

Put the insmod command into /etc/rc.d/rc.local

Any commands there are run at boot time.
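
For example, a line like the following at the end of /etc/rc.d/rc.local (the module name is only a placeholder) loads the module on every boot:

# load the driver for the hypothetical "yourmodule" module at boot
/sbin/insmod yourmodule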


ANSWER: secondary ide and cdrom

Sun, 27 Feb 2000 22:32:44 -0500
From: George W. Bursha <gburs2@hotmail.com>

I've noticed a number of times that people have problems with a secondary IDE channel on a PnP sound card. Don't be alarmed by "ide3: unexpected interrupt, status=0xff, count=1". That is to say, if your isapnp.conf is correct and the card looks fine in dmesg except for this message, yet you still can't do 'mount /dev/cdrom /cd0', then go to /dev, rm cdrom, and ln -s to the correct device; /dev/cdrom is probably linked to /dev/hdb, for example. When testing your CD use 'mount -t iso9660 /dev/hdx /mnt/cdrom', where the x in hdx is the correct device letter. You will perhaps surprise yourself after many hours spent shaking your head.
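
In other words, something along these lines, assuming dmesg shows the drive as /dev/hdd (substitute whatever device it actually reports):

cd /dev
rm cdrom
ln -s hdd cdrom
mount -t iso9660 /dev/cdrom /mnt/cdrom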


ANSWER: Re: Users required to enter root-password? Red Hat 6.1

Tue, 29 Feb 2000 19:40:27 -0500
From: Srinivasa A. Shikaripura <sas@lucent.com>
I have just installed version 6.1 and set up my modem to dial out to my ISP. However, when I log on as a user and select KDE>Internet>kppp, a pop-up box opens and wants me to enter the root password! This does not seem right. Is there a way to avoid having to enter the root password when logged on as a non-root user?

You could possibly change the permissions on /dev/modem and whatever it points to (/dev/ttyS1, etc.) to allow the user to read from and write to that device.

You could also selectively allow some users to use the modem by giving group permissions, but I am really not sure how that is done (I know it is possible, though).
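
A sketch of the group-based approach; the group name varies by distribution (uucp is a common choice on Red Hat-style systems), so treat these commands as an illustration rather than a recipe:

ls -l /dev/modem          # find out which ttyS device the symlink points to
chgrp uucp /dev/ttyS1     # give that group ownership of the port
chmod 660 /dev/ttyS1      # owner and group may read and write
gpasswd -a yourlogin uucp # add the user to the group (log in again afterwards)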

cheers -Sas


ANSWER: Re: Modem problem with 5.2

Sat, 04 Mar 2000 11:33:45 -0800
From: Steve Dunbar <sad@efn.org>
... I can hear the modem clicking like it is trying to dial, but it will never dial the number....

Try running setserial to set the IRQ used by the modem, e.g. I use "setserial /dev/ttyS2 irq 5 uart 16550A" to get my modem working. This command must be run as root. You may have to mess with the jumpers on your modem card to set the IRQ.

There should be an initialization script that controls serial port configuration at boot time. On SuSE 6.3 this is /sbin/init.d/serial. It should be possible to edit this script so that your modem is set up automagically, although I haven't yet got that working on my system.

See the Modem HOWTO for more info.

-- Steve


ANSWER: Installing Linux on a laptop with a single FD/CD slot

Sun, 5 Mar 2000 17:45:26 -0800
From: <ben-fuzzybear@yahoo.com>

Saw this question in LG the other day - yep, I'm a few issues behind but catching up fast. Relatively easy answer (I just tried it with both Debian and Red Hat and it works fine):

(Assumptions: you have DOS/Windows installed, and can read from your CD.)

First, create a directory - C:\Linux is fine.

(The two examples below will cover the majority of the installs done these days, and are easily adaptable to other distros.)

Debian

I'm one of those guys who'd never heard of a distro on CD until _after_ d'loading a gig-plus worth of stuff, back when... but somewhere on the CD there should be files called "loadlin.exe", "root.bin", and "linux", ~1.5MB worth of stuff. Copy those to your new directory. Shut down your machine (I'm assuming the CD and the FD are not hot-swappable - otherwise there'd be no point to this), swap in the CDR, and turn it back on. Start DOS - *not* a DOS window under Win9x, but DOS (by pressing the F8 key, if necessary, as soon as you see the "Starting Windows..." message), type
cd Linux

at the C: prompt, then type

loadlin linux root=/dev/ram initrd=root.bin

Red Hat

This uses the 5.2 CD but I would think it's much the same for the different versions. From the "dosutils" directory, copy "loadlin.exe"; from "dosutils\autoboot" copy "vmlinuz" and "initrd.img" into your "Linux" directory. Shut down, attach CDROM, reboot into DOS, 'cd Linux', and type

loadlin vmlinuz initrd=initrd.img

...and you're on your way!

Another tip, while we're on the subject - Debian has these files available at their FTP server, and probably on the CD as well -

base2_1.tgz -  10MB
drv1440.bin -  1.4MB
resc1440.bin - 1.4MB

Stick these in your "Linux" directory, too; they'll install a base Linux system on your HD, or let you perform any sort of rescue ops necessary (by mounting your existing Linux partition as /target hanging off a ramdisk, for example - forget your root password lately? ). No CD required - that's the entire base package. One of _the_ handiest things there is when you're munging through a tricky installation - and a GOOD reference for the initial state of your /etc files (look inside base2_1.tgz). Putting those files on the DOS side is usually my first step during an installation, and it's saved my sanity more than once.

Ben Okopnik Captain S/V "Ulysses"


ANSWER: X screen captures into video animation

Mon, 6 Mar 2000 10:27:37 -0700 (MST)
From: Michael J. Hammel <mjhammel@graphics-muse.org>

Thus spoke Ferenc Tamas Gyurcsan

I just saw your problem in the LG. Did you look for something like xvidcap? (I can't give you a URL, but you will find it.) PS: If you manage to produce a good MPEG from the captured pictures, please let me know how. Ferenc

The original question was from Shawn Medero, who asked:

It captures motion on the computer desktop: basically multiple screen captures tied together to form a movie of sorts. Primarily one would use it to create training demonstrations of Linux applications, etc.

I checked, and yes, XVidCap does appear to fit this description. A quick check on Freshmeat gave this description for XVidCap:

XVidCap is an X11/Xt program, which captures specified rectangular areas of the X11 desktop. The captured frames can be saved in different formats (XWD, PNG, JPEG, PPM). Frames per second and other parameters can be defined at the command line. The saved frames could be used e.g. for an mpeg encoder or to make an animated GIF. A Step-mode is supported to get a frame on mouse click.

I tried to access the Homepage for this program, but couldn't get through, though it might be a problem on my end. I don't have time to recheck today, so I'll just pass along the URLs of interest.

Download: ftp://ftp.komm.hdk-berlin.de/pub/linux/X11/
Homepage: http://home.pages.de/~rasca/xvidcap/

Thanks to Ferenc for pointing this out. It's another application to add to my own catalog of tools on the Graphics Muse site (http://graphics-muse.com).


ANSWER: Uninstalling Linux on a Laptop

Tue, 7 Mar 2000 12:55:23 -0500
From: <Michael.Risser@Summus.com>

First, I know this is the opposite of what the intentions of Linux are, but sometimes it IS necessary. I recently had to remove Linux from my Dell Inspiron 3500 laptop so that I could reinstall Windoze; it was necessary due to work and limited hard-drive space (4GB).

Windows fdisk, PartitionMagic, and Drive Wizard would not remove the partitions I had created for Red Hat 6.1 (kudos to Linux on that one :-)). Instead you must first begin installing WinNT and allow it to remove the partition. After WinNT has removed the partition you can either let it format the drive or install Windoze as normal. So far that is the only way I have found to remove Linux partitions.

Note: This tip was actually given to me by my brother-in-law, who had to do the same thing.


ANSWER: 2 Cent Tips

Wed, 08 Mar 2000 16:13:59 -0500
From: Bolen Coogler <bcoogler@dscga.com>

How to set vi edit mode in bash for Mandrake 7.0

If, like me, you prefer vi-style command line editing in bash, here's how to get it working in Mandrake 7.0.

When I wiped out Red Hat 5.2 on my PC and installed Mandrake 7.0, I found vi command-line editing no longer worked, even after issuing the "set -o vi" command. After much hair pulling and gnashing of teeth, I finally found that the problem is with the /etc/inputrc file. I still don't know which line in this file caused the problem. If you have this same problem in Mandrake or some other distribution, my suggested fix is:

1. su to root.
2. Save a copy of the original /etc/inputrc file (you may want it back).

3. Replace the contents of /etc/inputrc with the following:

set convert-meta off
set input-meta on
set output-meta on
set keymap vi
set editing-mode vi

The next time you start a terminal session, vi editing will be functional.
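
If you would rather not touch the system-wide file, the same settings can usually go into a personal ~/.inputrc together with 'set -o vi' in ~/.bashrc; on Red Hat-derived systems /etc/profile generally prefers ~/.inputrc when it exists. A sketch:

echo 'set editing-mode vi' >> ~/.inputrc
echo 'set -o vi' >> ~/.bashrc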

--Bolen Coogler


ANSWER: Solution to mysterious dial-in hangup

Wed, 15 Mar 2000 19:19:56 +0100
From: Eric Kafé <kafe@mobilixnet.dk>

This seems to be a very common Red Hat 6.1 bug. The problem is mentioned, without a solution, in the current FAQ for the PPP daemon.

After installing Red Hat 6.1, every time I tried to dial in to my ISP, a mysterious hangup occurred on the first attempt, and the connection always succeeded the second time. I first suspected the ISP, but they had nothing to do with it.

The problem disappeared as soon as I compiled ppp from the most recent source rpm. I use ppp-2.3.11-1 with a 2.2.14 kernel. Compilation was straightforward. Now I connect at once and everything is just fine.

According to the README.linux file in the ppp documentation, there are some subtleties related to compiling ppp for different kernel versions. Perhaps the ppp package included in Red Hat 6.1 was configured for a different kernel than the one it ships with.

Best regards


ANSWER: ATI xpert cards

Sat, 18 Mar 2000 21:50:25 -0800
From: <noah@nack.org>
I'm a new user and believer in the Linux OS and I need help badly. I'm looking for a driver for an ATI Xpert@Work 8MB PCI card. Where can I get it? I'm using Red Hat 5.2 and my monitor is a Mitsubishi Diamond Scan model FA3415AT4 [...]

Configure your display with the help of 'XF86Setup' (you have to type it exactly as I do, with upper- and lower-case letters), or, if that doesn't run, with the 'xf86config' program. Try to find your ATI card, and if you can't, simply use SVGA. Most cards which are not listed are standard SVGA cards (my Matrox Millennium G200 too), and they run very well with the SVGA driver.

I have (successfully) set up a bunch of ATI cards under RedHat, and lately (>=5.0) have found that Xconfigurator seems to give better results with ATI cards.


ANSWER: Re: Tip & Tricks

Tue, 21 Mar 2000 16:09:45 -0600
From: Jesse Lang <jesse@tcmi.com>

The tip you're looking for looks something like this:

Change your DNS entry from something like this:

www	A	[Your Machine's IP Address]

to this:

@	A	[Your Machine's IP Address]
www	CNAME	@

What you're saying is that uniandes.edu.co points to your machine's IP and that www is an alias for it, so either way visitors will end up at your site. If Apache is set up with the ServerName directive set to "www.uniandes.edu.co", then the name will be fixed up as soon as the browser connects to Apache.
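
The Apache side of that is just the ServerName directive in httpd.conf (using the domain from the question; substitute your own):

ServerName www.uniandes.edu.co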


ANSWER: Re: make virtuald

Tue, 21 Mar 2000 16:50:16 -0600
From: Jesse Lang <jesse@tcmi.com>

Chances are, if you are using any platform other than Linux (Windows, DOS, Mac, etc.), the problem is the mode that ftp was in when you uploaded the file. You have to make sure you upload in ASCII mode as opposed to binary; that mode does the proper conversion of line breaks and such. Give it another shot. The best way to tell whether the file is readable on Linux is to type 'cat filename.c': you should see the line breaks in the right places. Hope this works for you. Let me know if you need more help if this doesn't roll out for you.
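
In a command-line ftp client that means switching modes before the upload; for example (the host and file names here are placeholders):

ftp myserver.example.com
ftp> ascii
ftp> put virtuald.c
ftp> quit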


ANSWER: Linux not detecting above 64meg

Tue, 21 Mar 2000 16:50:16 -0600
From: Linux Gazette <gazette@ssc.com>

Several readers took your humble Editor to task for telling a user that Linux cannot autodetect memory above 64 MB because of a BIOS limitation; instead, I said that you have to tell Linux explicitly in the LILO config file or at the LILO command line.

The readers said they have had no problems with Linux autodetecting their 128 MB of memory. So I went home and took the

append = "mem 128M"
line out of my /etc/lilo.conf file and discovered that, indeed, it was unnecessary. But I know it was a necessity last year when I put the system together. In the meantime I had switched from kernel 2.0.36 to 2.2.14--perhaps autodetection was added to the 2.2 kernels.
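
For anyone who still needs the override (older 2.0 kernels, or boards that misreport their memory), the line goes inside an image section of /etc/lilo.conf, and /sbin/lilo must be rerun afterwards. A sketch, with placeholder paths and root device:

image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    append="mem=128M"

Then run /sbin/lilo so the change takes effect at the next boot.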


ANSWER: Multiple video cards

Mon, 13 Mar 2000 12:46:32 +0100
From: François Désarménien <desar@club-internet.fr>

I have a question: have you any idea where I could find info about running multiple video cards and monitors under Linux, e.g. two SVGA cards or an SVGA and a VGA card, and how one should configure these?

XFree86 4.0 (which is out now) should solve this issue.


ANSWER: Screen "Camera" for linux

Thu, 30 Mar 2000 02:28:57 -0500
From: Paul Winkler <desar@club-internet.fr>

Shawn Medero wrote:
It captures motion on the computer desktop: basically multiple screen captures tied together to form a movie of sorts. Primarily one would use it to create training demonstrations of Linux applications, etc.

You could do this with import (part of the ImageMagick package) and a simple shell script. Try this, for example.

camera.sh

You could then combine the shots into an animated GIF with the Gimp, or use ImageMagick's animate command to view the sequence, like this:

animate shot*.gif
But that starts a loop that repeats until you stop it...
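
A minimal sketch of that kind of script: grab the root window with ImageMagick's import every couple of seconds and number the shots (the interval, count, and file names are arbitrary):

#!/bin/sh
# take 20 shots of the whole screen, two seconds apart
i=0
while [ $i -lt 20 ]; do
        import -window root shot$(printf %03d $i).gif
        sleep 2
        i=$((i + 1))
done

Something like 'convert -delay 200 shot*.gif anim.gif' would then glue the shots into an animated GIF, if you prefer that to animate's endless loop.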


ANSWER: Extracting a block of text from a file

Thu, 30 Mar 2000 09:29:34 -0800
From: Jim Dennis <djimd@linuxcare.com>
I'm trying to extract a block of text from a file using just bash and standard shell utilities (no perl, awk, sed, etc). I have a definitive pattern that can denote the start and end or I can easily get the line numbers that denote the start and end of the block of text I'm interested in (which, by the way, I don't know ahead of time. I only know where it is in the file). I can't find a utility or command that will extract everything that falls between those points. Does such a thing exist?

awk and sed are considered to be "standard shell utilities." (They are part of the POSIX specification). The sed expression is simply:

sed -n "$begin,${end}p"  ...
... if begin and end are line numbers.

For patterns it's easier to use awk:

awk "/$begin/,/$end/" ...
... Note: begin and end are regexes and should be chosen carefully!

However, since you don't want to do it the easy way, here are some alternatives:

------------------ WARNING: very long -------------------------

If it is a text file and you just want some lines out of it, try something like this:

#!/bin/sh
# shextract.sh 
#	extract part of a file between a
#	pair of globbing patterns 
[ "$#" -eq "2" ] || {
	echo "Must supply begin and end patterns" >&2
	exit 1
	}
begin=$1
end=$2
of="" ## output flag
while read a; do
	case "$a" in
		"$begin") of="true";;
		"$end") of="";;
		esac
	[ -n "$of" ] && echo $a 
	done
exit 0

... this uses no external utilities except for the test command ('[') and possibly the 'echo' command from VERY old versions of Bourne sh. It should be supported under any Bourne shell derivative. Under bash these are builtin commands.

It takes two parameters. These are "globbing" patterns NOT regular expressions. They should be quoted, especially if they contain shell wildcards (?, *, and [...] expressions).

Read any good shell programming reference (or even the rather weak 'case...esac' section of the bash man page) for details on the acceptable pattern syntax. Note because of the way I'm using this you could invoke this program (let's call it shextract, for "shell extraction") like so:

shextract "[bB]egin|[Ss]tart" "[Ee]nd|[Ss]top"

... to extract the lines between any occurrence of the term "begin" or "Begin" or "start" or "Start" and any subsequent occurrence of "end" or "End" or "stop" or "Stop."

Notice that I can use the (quoted) pipe symbol in this context to show "alternation" (similar to the egrep use of the same token).

This script could be easily modified to use regex's instead of glob patterns (though we'd either have to use 'grep' for that or rely on a much newer shell such as ksh '93 or bash v. 2.x to do so).

This particular version will extract *all* regions of the file that lie between our begin and end tokens.

To stop after the first we have to insert a "break" statement into our "$end") ...;; case. To support an "nth" occurrence of the pattern we'd have to use an additional argument. To cope with degenerate input (cases where the begin and end tokens might be out of order, nested, or overlapped) we'd have to do considerably more work.

As written this example requires exactly two arguments. It will only process input from stdin and only write to stdout. We could easily add code to handle more arguments (the first two are patterns; the rest, after a 'shift', are input file names) and some option switches (for an output file, only one extraction per file, emit errors if the end pattern is found before the start pattern, emit warnings if no begin or subsequent end pattern is found in any input file, stop processing on any error/warning, etc.).

Note: my exit 0 may seem superfluous here. However, it does prevent the shell from noting that the program "exited with non-zero return value" or warnings to that effect. That's due to my use of test ('[') on my output flag in my loop. In the normal case that will have left a non-zero return value since my of flag will be zero length for the part of the file AFTER the end pattern was found.

Note: this program is SLOW. (That's what you get for asking for it in sh.) Running it on my 38,000-line /usr/share/games/hangman-words (this laptop doesn't have /usr/dict/words) takes about 30 seconds, or roughly only 1000 lines per second, on a P166 with 16MB of RAM. A binary can do better than that under MS-DOS on a 4MHz XT!

BUG: If any lines begin with - (dashes) then your version of echo *might* try to treat the beginnings of your lines as arguments. This *might* cause the echo command to parse the rest of the line for escape sequences. If you have printf(1) available (as a built-in to your shell or as an external command) then you might want to use that instead of echo.

To do this based on line numbers rather than patterns we could use something more like:

#!/bin/sh
# lnextract.sh
#       extract part of a file between a
#       line numbers $1 and $2

function isnum () {
	case "$1" in
	*[^0-9]*)
		return 1;;
	esac
	}

[ "$#" -gt "2" ] || {
	echo "Must supply begin and end line numbers" >&2
	exit 1
	}

isnum "$1" || {
	echo "first argument (first line) must be a whole number" >&2
	exit 1
	}

isnum "$2" || {
	echo "second argument (last line) must be a whole number" >&2
	exit 1
	}

begin=$1
end=$2

[ "$begin" -le "$end" ] || {
	echo "begin must be less than or equal to end" >&2
	exit 1
	}

shift 2
for i; do
[ -r "$i" -a -f "$i" ] || {
		echo "$i should be an existing regular file" >&2
		continue
		}
ln=0
while read a ; do
	let ln+=1
	[ "$ln" -ge "$begin" ] && echo $a
	[ "$ln" -lt "$end" ]   || break
	done < "$i"
done
exit 0

This rather ugly little example does do quite a bit more checking than my previous one.

It checks that its first two arguments are numbers (your shell must support negated character-class globs for this; ksh '88 and later, bash 1.x and 2.x, and zsh all qualify), and that the first is less than or equal to the second. Then it shifts those out of the way so it can iterate over the rest of the arguments, extracting our interval of lines from each. It checks that each file is "regular" (not a directory, socket, or device node) and readable before it tries to extract a portion of it. It will follow symlinks.

It has some of the same limitations we saw before.

In addition it won't accept its input from stdin (although we could add that by putting the main loop into a shell function and invoking it one way if our arg count was exactly two, and differently (within our for loop) if $# is greater than two). I don't feel like doing that here --- this message is already way too long and that example is complicated enough.

It's also possible to use a combination of 'head' and 'tail' to do this. (That's a common exercise in shell programming classes). You just use something like:

head -$end $file | tail -$(( $end - $begin + 1 ))

... note that the 'tail' command on many versions of UNIX can't handle arbitrary offsets. It can only handle the lines that fit into a fixed block size. GNU tail is somewhat more robust (and correspondingly larger and more complicated). A classic way to work around limitations on tail was to use tac (cat a file backwards, from last line to first) and head (and tac again). This might use prodigious amounts of memory or disk space (it might use temporary files).

If you don't want line oriented output --- and your patterns are regular expressions, and you're willing to use grep and dd then here's a different approach:

start=$(grep -b "$begin" ... )
stop=$(( $( grep -b "$end" ... ) - $start ))
dd if="$file" skip=$start count=$stop bs=1

This is not a shell script, just an example. Obviously you'd have to initialize $begin, $end, and $file or use $1, $2, and $3 for them to make this into a script. Also you have to modify those grep -b commands a little bit (note my ellipses). This is because grep will be giving us too much information. It will be giving a byte offset to the beginning of each pattern match, and it will be printing the matching line, too.

We can fix this with a little work. Let's assume that we want the first occurrence of "$begin" and the last occurrence of "$end". Here are the commands that will give us just the raw numbers:

grep -b "$begin" "$file" | head -1 { 
	IFS=: 
	read b x
	echo b  
	}

grep -b "$end" "$file" | tail -1 | { 
	IFS=:
	read e x
	echo e
	}

... notice I just pipe grep through head or tail to get the first or last matching line, and I use IFS to change my field separator to a ":" (which grep uses to separate the offset value from the rest of the line). I read the line into two variables (split on the IFS character), and throw away the extraneous data by simply echoing the part I wanted (the byte offset) back out of my subshell.

Note: whenever you use or see a pipe operator in a shell command or script --- you should realize that you've created an implicit subshell to handle that.

Incidentally, if your patterns *might* have a leading - (dash) then you'll have problems passing them to grep. You can massage the pattern a little bit by wrapping the first character with square brackets. Thus "foo" becomes "[f]oo" and "-bar" becomes "[-]bar". (grep won't consider an argument starting with [ to be a command line switch, but it will try to parse -bar as one).

This is easily done with printf and sed:

printf "%s" "$pattern" | sed -e 's/./[&]/'

... note my previous warning about 'echo' --- it's pretty permissive about arguments that start with dashes that it doesn't recognize; it'll just echo those without error. But if your pattern starts with "-e" or "-n" it can affect how the rest of the string is represented.

Note that GNU grep and echo DON'T seem to take the -- option that is included with some GNU utilities. This would avoid the whole issue of leading dashes since this conventionally marks the end of all switch/option parsing for them.

Of course you said you didn't want to use sed, so you've made the job harder. Not impossible, but harder. With newer shells like ksh '93 and bash 2.x we can use something like:

[${pattern:0:1}]${pattern:1}

(read any recent good book on shell programming to learn about parameter expansion). You can use the old 'cut' utility, or 'dd', to get these substrings. Of course those are just as external to the shell as perl, awk, sed, test, expr and printf. If you really wanted to do this last sort of thing (getting a specific-size substring from a variable's value, starting from an offset in the string, using only the bash 1.x parameter expansion primitives) it could be done with a whole lot of fussing. I'd use ${#varname} to get the size, a loop to build temporary strings of ? (question mark) characters of the right length, and the ${foo#} and ${foo%} operators (stripping patterns from the left and right of a variable's value respectively) to isolate my substring.

Yuck! That really is as ugly as it sounds.

Anyway. I think I've said enough on the subject for now.

I'm sure you can do what you need to. A lot of it depends on which shell you're using (not just csh vs. Bourne, but ksh '88 vs. '93 and bash 1.14 vs. 2.x, etc.) and just how rigid you are about that constraint of "standard utilities".

All of the examples here (except for the ${foo:} parameter expansion) are compatible with bash 1.14.

(BTW: now that I'm really learning C --- y'all can either rest easy that I'll be laying off the sh syntax for awhile, or lay awake in fear of what I'll be writing about next month).

Here's a short GNU C program to print a set of lines between one number and another:

/* extract a portion of a file from some beginning line, to 
* some ending line
* this functions as a filter --- it doesn't take a list
* of file name arguments.
*/

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int 
main (int argc, char * argv[] )
{
char * linestr;
long begin, end, current=0;
size_t linelen;


linelen = 0; 
linestr=NULL;

if ( argc < 3 ) {
fprintf(stderr, "Usage: %s begin end\n", argv[0]);
exit(1);
}

begin=atol(argv[1]);
if ( begin < 1 ) {
fprintf(stderr, "Argument error: %s should be a number "
		"greater than zero\n", argv[1]);
exit(1);
}

end=atol(argv[2]);
if ( end < begin ) {
fprintf(stderr, "Argument error: %s should be a number "
		"not less than %s\n", argv[2], argv[1]);
exit(1);
}

while ( getline(&linestr, &linelen, stdin ) > -1 
  && (++current <= end ) ) {
if (current >= begin) {
	printf("%s", linestr);
	}
}

exit(0);
return 0;
}

This is about the same length as my shell version. It uses atol() rather than strtol() for the argument to number conversion. atol() (ASCII to long) is simpler, but can't convey errors back to us. However, I require values greater than zero, and GNU glibc atol() returns 0 for strings that can't be converted to longs. I also use the GNU getline() function --- which is non-standard, but much more convenient and robust than fussing with scanf(), fgets() and sscanf(), and getc() stuff.


This page written and maintained by the Editor of the Linux Gazette. Copyright © 2000, gazette@ssc.com
Published in Issue 52 of Linux Gazette, April 2000