LINUX GAZETTE
...making Linux just a little more fun!
More 2¢ Tips!
By The Readers of Linux Gazette

See also: The Answer Gang's Knowledge Base and the LG Search Engine


Backup Software: Robustness

Mon, 2 Jun 2003 08:09:29 +1000
Nick Coleman (njpc from ozemail.com.au)

This is a reply to the letter "compressed tape backups" in the Mailbag of the June 2003 issue of Linux Gazette.

Quite a while back I remember a discussion of compressed tar archives on tape and the risk they pose, i.e. that the data beyond the first damaged bit would be unrecoverable.

Now at that time I knew that bzip2, unlike gzip, uses a block-based algorithm internally, so it should be possible to recover all undamaged blocks after a damaged one.

Your correspondent may like to look into afio instead of tar for backups. I believe it recovers from errors much better. The mondo rescue tool developer uses it.

Regards,
Nick Coleman

[JimD] The problems recovering tar files are worst with GNU tar operating on gzip'd archives. star (by Joerg Schily, of cdrecord and mkisofs fame), cpio, and pax are all better at resynchronizing to the archive headers past a point of file corruption than GNU tar.
afio might very well be better than cpio. I don't know; I've neither run my own tests nor perused the code.
In general I'd suggest that added redundancy (both through ECC -- error correction coding -- and additional separate copies) is the better way to make one's backups more robust.
I've heard that BRU (Backup and Recovery Utility, http://www.tolisgroup.com --- a commercial product) adds ECC and checksum data to the archive stream as it performs backups --- and defaults to verifying the archive integrity in a second pass over the data. With cpio, afio, tar, star, dump/restore and pax you have to write your own scripts to perform the verification pass. (cpio and presumably afio do add checksums; GNU tar doesn't; I don't know about the others.) So far as I know none of the common free tools adds additional ECC redundancy to their archives.
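As a minimal sketch, such a verification pass with GNU tar can be as simple as writing the archive and then re-reading it in --compare mode (the tape device name here is illustrative):

	tar -cf /dev/st0 /home     # write the archive; /dev/st0 rewinds on close
	tar -df /dev/st0 /home     # re-read the tape and compare it against the disk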
There is an obscure little utility called 'ras' (redundancy archive system) which can be used to create a set of ECC (sum) files to go with a set of base files and allow one to recover from the loss of a subset of the base files. This is essentially a utility to manually (and crudely) perform the same sort of redundancy operations as a RAID5 subsystem.
http://www.icewalkers.com/Linux/Software/52890/ras.html
However, I should warn that I haven't used this at all, much less tried to integrate it into any sane backup/recovery scripts!
So far the best free backup tool for Linux still seems to be AMANDA (http://www.amanda.org ) though Bacula (http://www.bacula.org ) seems to have a similar and impressive feature set.
AMANDA still uses native dump and/or GNU tar to actually perform the backup. It initiates those processes on each client, aggregates their archives on a central server and manages the process of writing them out to tapes (optionally using a tape changer).
Thus, AMANDA is tape-centric and still carries the inherent risks of the underlying archiver (the vendor's dump --- e.g. dump for ext2 on Linux --- or GNU tar).
I think it would be neat if AMANDA or Bacula were integrated with ras or some redundancy library in some meaningful way.
There is an overview of these and other free backup packages for UNIX (and Linux) at:
http://www.backupcentral.com/free-backup-software2.html
Ultimately you'd want to keep multiple generations of data backups even if you knew that you had perfect ECC, redundancy, media, and drives. You need this for the same reason you need backups no matter how sophisticated and redundant your RAID array is: you may find that your software or your users corrupt your data, and you may need to fall back to earlier, known-good versions of the data, possibly days, weeks, even months after those backups were made.
(Some forms of corruption can be subtle and insidious).


can I have Linux on a ThinkPad G40? with WinXP?

Thu, 05 Jun 2003 18:35:32 PST
borejsza (borejsza from ucla.edu)

Hi,

I am about to buy a laptop and am looking for advice as to its compatibility with Linux.

I know little about computers (last time I owned one it was a Commodore 64), and less about Linux, but saw a friend use it, and would like to learn how to myself, and gradually move away from Windows. The laptop I am thinking of buying is an IBM ThinkPad G40 (http://www-132.ibm.com/webapp/wcs/stores/servlet/ProductDisplay?productId=8600909&storeId=1&langId=-1&categoryId=2580117&dualCurrId=73&catalogId=-840). I think it is a new model, and could not find it anywhere on the pages that list hardware that has already been tried out with Linux.

Can anybody confirm that I can partition that laptop between Linux and WindowsXP before I blow all my savings on it?

Thanks,
Alex

You could buy one preloaded from EmperorLinux: (http://www.emperorlinux.com/auk.html) -- Ben
Or they'll preload a dual boot, or can customize. (So this tip is good for more than that one model.) -- Heather
As far as I'm concerned, IBM-made hardware today should be a sure bet for Linux anyway: they've really thrown themselves behind Linux in a big way, and I'd be surprised to hear of a laptop they make that can't run it. Come to think of it, given the range of hardware that Linux supports these days, making a 'top that can't run Linux would be quite a trick in the first place. -- Ben
[jra] Now, that's not to say that you can easily dual-boot XP. There may be reinstallation issues, and licensing; I don't know that Partition-* or FIPS can safely resize whatever you have loaded without breaking it, and you may not have "install" media for XP -- only "recover" media, which will not let you install on a resized partition.
Missing install media for WinXP isn't relevant to its ability to coexist with Linux, but personally, if my vendor "forgot" to include the Other OS that I had paid for, I'd demand my real discs, or that they discount the box by the price of their OS. Given the number of people competing for your business in this venue, I have precious little tolerance for that kind of ripoff. -- Ben
[jra] I would google for "linux win xp dual boot howto", and see what I got. -- jra
[Kapil] Apparently, the trick is to: (1) Install Linux and resize the NTFS partition (2) Boot the recovery CD for XP (3) Interrupt (count 5 :-)) the reinstallation process and run "OS.bat". It seems XP will then "just install" on the resized partition.
This worked with the laptops bought for our Institute. YMMV.
-- Kapil.


FTP Daemons (Servers) and Alternatives: Just Say No?

Tue, 3 Jun 2003 06:03:09 -0700
Jim Dennis (the LG Answer Guy)
Question by Dinos Kouroushaklis on the BLT-newuser list (Blt-newuser from basiclinux.net)

Dear list members,

I would like to hear your suggestions for an ftp server.

I would like to replace an existing win2k ftp server with a Linux based one. What I am interested in is reliability and ease of management. The machine should need only one (maybe more) ethernet card to provide the FTP service (except during installation). If there are two ethernet cards, one can be used for management and one for the traffic.

The machine will be an Intel Celeron 400 MHz with 160 MB of RAM (128+32) and a 20 GB hard disk, with a public (static) IP address in the DMZ.

Regards

Just to be contrarian I have to suggest that you seriously consider abandoning FTP entirely. HTTP is adequate for simple, lightweight anonymous distribution of files (text or binary). scp, sftp (SSH) and rsync over ssh are inherently more secure than plain FTP can ever be. Your MS-Windows users can get Putty (and pscp, et al.) for free.
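For example, pushing files to a server over SSH instead of FTP is a one-liner (hostnames and paths here are illustrative):

	scp release.tar.gz user@server.example.com:/srv/incoming/
	rsync -avz -e ssh ./pub/ user@server.example.com:/srv/pub/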

(Plain, standard FTP will, by dint of the standards, always pass user name and password information "in the clear" across the Internet --- thus exposing these valuable, private tokens to "sniffers".) For some purposes BitTorrent can be far more efficient (for widespread, peer-assisted distribution of files to many concurrent clients, for example).

SSH, scp, and sftp:

http://www.openssh.org

Putty:

http://www.chiark.greenend.org.uk/~sgtatham/putty

rsync:

http://www.samba.org/rsync

BitTorrent:
http://bitconjurer.org/BitTorrent

If you can, just eliminate FTP and direct your users and customers to better alternatives.

In general the problem with FTP servers is that they run as root (at least during the authentication phase, if they support anything other than anonymous FTP). So FTP daemons have classically been a source of vulnerability (as bad as DNS --- BIND/named --- and MTA --- sendmail --- daemons).

With that in mind, vsftpd would probably be my first free choice. (http://vsftpd.beasts.org )

ProFTPd is popular, and has a configuration file syntax that's vaguely similar to Apache/HTML/SGML (I'll leave it to others to judge whether that's a feature or a bug). However, ProFTPd is complex and has had too many security alerts posted against it for my tastes. (http://www.proftpd.org ).

WU-FTPD (for years the default that shipped with most Linux distributions) has the worst security track record in the field. I wouldn't recommend it, and I don't care how many bugs they've patched. There comes a time to abandon the codebase and start from scratch. There also comes a time when "brand recognition" (the project's name) shifts from fame to infamy.

By contrast, Chris Evans coded vsftpd specifically to be as secure as possible. He discussed the design and every pre-release of the code extensively on the Linux security auditing mailing list (and in other fora devoted to secure programming and coding topics).

If you're willing to go with a commercial/shareware package (that's not free) I'd suggest that Mike Gleason's ncftpd has been around longer than vsftpd and still has a very good track record. (http://www.ncftpd.com ). Registration is only $200 (U.S.) per server for unlimited concurrent connections ($100 for up to 50 concurrent users) and is free for use in educational domains.

If there are no objections I'd like to cross-post this to the Linux Gazette for publication (names of querents will be sanitized) since the question comes up periodically and I like to refresh this answer and the URLs.

All of this assumes that you have no special needs of your FTP server. If you need special features (directory trees restricted by user/group info, pluggable authentication support, virtual domain support, etc) then you'll have to review these products more carefully. However, each of them offers at least some virtual domain/server functionality and a mixture of other features.

[Dan] For a comprehensive annotated list, see: http://linuxmafia.com/pub/linux/security/ftp-daemons
Everybody's got their favorite, and mine's PURE-ftpd, of which Rick Moen of Linuxmafia says on the above page:
Seems like a winner.
http://sourceforge.net/projects/pureftpd


Pause after running xterm

Fri, 30 May 2003 20:39:56 -0400
Ben Okopnik (the LG Answer Gang)
Okay, so it's a nickel's worth. So there. -- Heather

Here's a little problem you might run into: you want to run a certain program - say, as a Mozilla "Helper application" - which needs to run in an xterm. So, you set it up like so:

xterm -e myprogram -my -options

The only problem is, when it comes time to run it, all you see is a flash as the xterm appears, then immediately disappears. What happened? What error did it print out? Why (this does happen at times) does it work when you launch it 'manually' but not from Mozilla?...

Here's an easy and useful solution that will require you to hit a key in order to exit the xterm after the program has finished running. Note that it may fail on tricky command lines (subshell invocations, evals, and other shell-specific gadgetry) but should work fine with normal commands and their options.

See attached okopnik.hold.bash.txt

Invoke it like so:

xterm -e hold myprogram -my -options
[jra] Were you actually planning to answer those questions, Prof?
Or are they left as an exercise for the students? :-)
[Ben] The answer is implicit in the solution provided, and will depend on the specific program being launched. The implementation, as always, is left to the student. Giddyap, dammit. :)
[JimD]
	xterm -e /bin/sh -c 'myprogram -my -options; read x'
... in other words, have a shell execute your program, then read a dummy value from the xterm (the xterm process' console/terminal/stdin)
The command will run, output will be displayed, you'll get a pause where you can type anything you like (also allowing you to scroll through the xterm's buffer). When you hit [Enter] the xterm goes away.
Seems pretty transparent to me. More verbose:
	xterm -e /bin/sh -c 'myprogram -my -opts; echo "[Enter] when done: "; read x'
More elegantly, create a two-line script:

See attached jimd.pauseadter.sh.txt

(I'm not really sure we need the eval, but I don't think it'll hurt in any case).
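For readers without the attachment, a minimal sketch along those lines (hypothetical --- the actual attached script may differ) would be:

	#!/bin/sh
	# run the given command line, then wait for [Enter] before the xterm closes
	eval "$@"
	echo "[Enter] when done: "; read x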
Now simply:
	xterm -e pauseafter.sh myprogram -my -opts
(/me shudders at the electrons that got excited by this blatantly obvious suggestion).


Tips on PDF conversion

Thu, 12 Jun 2003 12:12:55 +0100 (BST)
Mike Martin (the LG Answer Gang)

Has anyone any ideas on converting PDFs to decent text?

To explain

I have a document which has been scanned in, with the only accurate conversion being to pdf (no images)

So I have used pdf2ps, which gives me a ps file.

However then when I use psto... anything text like, the output is exactly ^L

Any ideas/tips?

[Thomas] If you could convert the pdf to ps and then to LaTeX then you won't have a problem, since tex -> ascii is not a problem. However, going from ps to ascii might require some more thought.
I know that there is a utility called "a2ps" which takes ascii and converts it to a ps file; however, I cannot see a converse program.
I am sure that there is a perl module (hey, Ben!) that could be used to write a perl-script for such a task, however, I am going to suggest you try the following......(I haven't tested this):
strings ./the_ps_file.ps | col -b > ~/new_text_file.txt
I am shunting this through "col" since you describe having lots of "^L" characters. You might have to edit the file by hand as well, since I am sure that a lot of useless information is being processed.
[Ben] See the "pstotext" utility for that.
[Andreas] There's a utility called pdftotext; it is in the xpdf package. See the xpdf homepage, http://www.foolabs.com/xpdf
Hopefully an OCR has been performed on your scanned document before it was converted to pdf, otherwise the pdf file would just contain an image and could not directly be converted to text.
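Basic usage is (filenames illustrative):

pdftotext scanned.pdf scanned.txt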

Unfortunately, and very annoyingly, this is what seems to have happened. Seriously aggravating software - it lies.

Off to see if I can work out how to convert the image to text (it's only tables).

[Ben] Well, if it's a picture, "pstotext" won't help. Oh, and don't bother with "strings" on a .ps file: it's all text.
[Robos] Hmm, I ran into some OCR discussion lately and found these: gocr and claraocr (http://www.claraocr.org). The latter seems to be more evolved...


quotas on directories?

Tue, 3 Jun 2003 19:55:26 +0200
Emmanuel Damons (emmanuel.damons from enterpriseig.com)
Answered By Thomas Adam, Jim Dennis, Kapil Hari Paranjape

Hi

Can you help me? I need to specify the size that a folder can grow to --- almost like quotas for a folder rather than for users.

Thanks

[K.-H.] spontaneous idea, especially if this is for one folder only:
create a partition of exactly the right size and mount it at the mountpoint "folder". If creating a partition is not possible, use a file and mount it as a loop device.
[JimD] In the same vein you could use regular files with the loop mount option to create "partitions" of this sort.
Example:
		 dd if=/dev/zero of=/mnt/images/$FOLDERNAME bs=1024 count=$SIZE
		 mkfs -F /mnt/images/$FOLDERNAME
		 mount -o loop /mnt/images/$FOLDERNAME $TARGET
Where:
	FOLDERNAME is an arbitrary filename used as a "loopback image"
		(the container that the loop block device driver will treat
		as if it were a partition)
	SIZE is the desired size in kilobytes
	TARGET is the desired location of the "folder" (the mountpoint for
	   this filesystem).
You can use any of the Linux supported filesystem types (ext2, ext3, minix, XFS, JFS, ReiserFS) and you can tune various options (like the amount of reserved space on such "folders" and which UID/GID (user or group) that space is reserved for). You should be able to use quotas, ACLs and EAs (access control lists and extended attributes --- assuming you've patched your kernel for ACL/EA use and enabled it), etc.
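For instance, with ext2/ext3 the reserved space on such a "folder" can be tuned after the fact (the percentage and username here are illustrative):

		 tune2fs -m 10 /mnt/images/$FOLDERNAME
		 tune2fs -u someuser /mnt/images/$FOLDERNAME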
Obviously this approach has a couple of downsides. You need intervention by root (or some sudo or SUID helpers) to create and use these images.
[Kapil] Of course, you can use User-mode-linux to create and use these images.
[JimD] Also Linux can only support a limited number of concurrent loop mounts (8 by default). Newer kernels allow this as a module parameter (max_loop=<1-255> ... so up to 255 such folders maximum on the system). This limits the number that could be in concurrent use (though an unlimited number of these "folders" could be stored on the system, mounted and unmounted as needed).
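For example, on a system with the loop driver built as a module (the number is illustrative):

		 modprobe loop max_loop=64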
There might be other disadvantages in performance and overhead (I'm not sure).
[Kapil] That would be a downside with UML if you use the file systems with UML.
[JimD] On the plus side you could have any of these encrypted, if you're running a kernel that's had the "International crypto" patch applied to it, and you pass the appropriate additional options to the mount command(s). We won't address the key management issues inherent in this approach; suffice it to say that it almost forces us to make mounting these filesystems an interactive process.
If you wanted to have a large number of these, but didn't need them all concurrently mounted you might be able to configure autofs or amd (automounters) to dynamically mount them up and umount them as the target directories were accessed --- possibly by people logging in and out.
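A minimal autofs sketch of that idea, assuming the loopback images live under /mnt/images (map file names and paths are illustrative):

		 # /etc/auto.master
		 /folders	/etc/auto.folders	--timeout=60

		 # /etc/auto.folders
		 projects	-fstype=ext2,loop	:/mnt/images/projects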
There are probably better ways, but this seems to be the most obvious and easiest under Linux using existing tools.
[Kapil] One solution (rather complicated I admit) is to switch over to the Hurd which allows such things and more complicated things as well.
Another is to use "lufs" or other "Usermode filesystems". These put hooks in the kernel VFS that allow one to set up a "user mode" program to provide the "view" of the part of VFS that lies below a particular directory entry.
[JimD] The very notion of limiting the size of a "directory tree" (folder) is ambiguous and moot given the design of UNIX. Files don't exist "under" directories in UNIX. Files are bound to inodes, which are on filesystems. Filenames are links to inodes. However, every inode can have many links (names). Thus there's an inherent ambiguity in what it means to take up space "in a folder" (or "under a directory"). You could traverse the directory tree, adding up all files (and the sizes of all directories) thereunder (du -s). This works fine for all inodes with a link count of one, and for cases where all of the inodes are within the scope of the tree (and assuming there are no mount points thereunder). However, it's ambiguous in the general case and begs the question: just what are you trying to accomplish?
[Kapil] Excellent explanation Jim.


What is Reverse DNS?

Mon, 2 Jun 2003 20:37:46 EDT
(jimd from mars.starshine.org)
Question by TEEML914 (TEEML914 from aol.com)

I'm doing an assigment. Can you tell me in laymans terms what reverse DNS is?

[Faber] Yes, we can.

Thank you and have a great day

[Faber] You're welcome and have a spiffy night yourself..
[JimD] Faber, I think your cheerful sarcasm might be lost on him. After all, he's dense enough to take such a simple question (from his homework assignment, no less) and go to all the trouble of asking us.
Yes, we can tell you. We can answer such questions. With diligent work (as in DOING YOUR OWN HOMEWORK) you'd be able to answer questions like that, too.
For everyone else who hears this buzz phrase and wonders about it (people who aren't trying to skate through classes so they can make complete idiots of themselves when they enter a job market thoroughly unprepared by the schooling they shirked):

...............

"reverse DNS" is the process of asking the DNS (domain name system) for the name associated with a given IP address (which, of course, is numeric). Since DNS is primarily used to resolve (look up) an address given a name; this numeric to symbolic lookup is the converse operation. However, the term "converse" is somewhat obscure so the more literate and erudite among us are stuck with the phrase: "reverse DNS."
On a technical level, a reverse DNS query is a question for a PTR record in the in-addr.arpa domain. For historical reasons the in-addr (inverse address) subdomain of the "arpa" domain (named for the Advanced Research Projects Agency, whose ARPANET was the forebear of the Internet) is reserved for this purpose. For technical reasons the four components of a traditional "dotted quad" decimal representation of the address are arranged in reverse order: least significant octet first. This allows the most significant octets to be treated as "subdomains" of the in-addr.arpa domain, which allows delegation (a DNS mechanism for administrative and routing/distribution purposes) to be done on octet boundaries.
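To see one in action, the dig utility (shipped with BIND) will form the PTR query for you; the address below is from the range reserved for documentation:

	dig -x 192.0.2.1
	dig 1.2.0.192.in-addr.arpa PTR

The second form is what the first expands to behind the scenes.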
Of course any good book on DNS will provide all of the gory details, or one could simply read the RFCs (request for comments documents), which are the normal mechanism by which standards are proposed to the IETF (Internet Engineering Task Force), which marshals them through a review and vetting process, publishes them and recommends their adoption. (Since the Internet is still basically anarchic, the adoption of new standards is essentially a ratification process --- each Internet site "votes with its feet" as it were.)
In particular it looks like you'd want to read RFC3172:
http://www.faqs.org/rfcs/rfc3172.html

...............

Please have your instructor send my extra credit points c/o Linux Gazette and be sure to have him give you a failing grade in your TCP/IP or Internet/Networking Fundamentals class.
(In the unlikely event the assignment was to explore the use of sarcasm by curmudgeons in the Linux community --- then bravo!)


Subscribe to groups...........pan,Knode.......????

Wed, 25 Jun 2003 20:21:12 +0530
Vivek Ravindranath (vivek_ravindranath from softhome.net)
Answered By Dan Wilder, Karl-Heinz Herrmann, Anita Lewis, Ben Okopnik, Jason Creighton, Heather Stern

Hi Answer Gang,

Can you please tell me how to subscribe to Linux groups

[Dan] You might start by pointing your browser (konqueror, mozilla, lynx, w3m, netscape, and so on) at:
http://www.tldp.org
and browse what's there. Then look at
http://www.linuxjournal.com
http://www.linuxgazette.com
http://www.lwn.com
http://www.linuxtoday.com
http://www.slashdot.com
Then you might come back and explain in somewhat more specific terms what you're trying to do. There are lots of Linux websites, including documentation, news, online discussions; to get to any of those, you just click on links.
For e-mail discussion groups you mostly have to subscribe. How you do that depends on what group you're interested in. Once you're subscribed, any email you send to some submission address is duplicated and sent to all subscribers.
Many discussion groups have their archives open. For example, point your browser at
http://www.ssc.com/mailing-lists
for an overview of mailing lists hosted by SSC, publishers of Linux Journal.
From that page you can click on list information pages and get to list archives by following the links. The list information pages also let you apply for membership in the lists. Normally you'll get a confirming email back, and your list membership goes into effect when the list management software receives your reply.

such as yahoo groups,

[Jason] Well, "Yahoo groups" are just email lists, so you can subscribe to them and read them offline. Same deal for any mailing list.

google groups .......

[Jason] Now for newsgroups (what you call "google groups"; Google Groups is actually a web interface on top of Usenet), I use leafnode (sorry, don't have the URL, but a google for "leafnode usenet server" would probably turn up the homepage). It's an easy-to-configure (IMHO) Usenet server that only downloads messages in groups that you read.
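As a sketch, leafnode's setup mostly amounts to pointing it at your provider's news server in its configuration file and fetching periodically (the path, server name and values here are illustrative, and may vary by distribution):

	# /etc/leafnode/config
	server = news.example.com
	expire = 20

	# then, from cron or by hand:
	fetchnews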

and download all messages for offline viewing using pan or knode or any other software (please mention the name of the software and URL). I want to view the messages offline.

First of all, I don't know whether it is possible. Can you suggest any other methods to do so? By groups I mean any Linux group; please suggest any good Linux groups if possible... and please give the address that is to be entered in the address field of the viewer, and other details. I just want to get regular information regarding Linux... thanks in advance.

Vivek.

[K.-H.] for the offline reading: I'm not quite sure what "linux group" you are talking about. If you want to have a look at Linux websites as suggested, wwwoffle is very useful for caching webpages so you can view them at leisure offline. Any new link you click on will be remembered and fetched next time you're online. If you're talking about newsgroups (usenet) like comp.os.linux.*, I am using the [x]emacs newsreader "gnus", which has an offline feature called "agent". You can read the info pages on this, but if this is your first contact with news and [x]emacs then I cannot recommend this wholeheartedly -- gnus itself is rather complex and therefore powerful (or is it the other way round?). Agent is an additional layer of complexity which takes time to get used to.
pan I don't know,
It's a newsreader, whose name might offend a family publication, but which is nonetheless supposed to be very nifty. -- Heather
knode, I can only guess, is the KDE version of a newsreader. Whether it supports offline features I've no idea. There are other newsreaders --- nn, tin, ... --- but as far as I know they all lack the offline feature. netscape has a newsreader with rather limited offline capabilities, but for a first try that might be sufficient.
[Anita] Do you mean that you would subscribe to a mailing list on yahoogroups and then go there and download their archives? That is something I would like to know how to do too, because we had a list there and changed to our own server. I'd like to be able to get those old messages. Well, in truth, I would have liked to have had them, but now I think they are too obsolete. Still, I wouldn't mind having them, especially if I could get them into mbox format.
[Faber] <musing out loud>Couldn't you use something like wget in a Perl script to download the archives by links? Ben could probably write a one-liner to do it. In his sleep. :-) </musing>
[Ben] Actually, it would take some tricky negotiation, Web page downloading and parsing, etc. - it's a non-trivial task if you wanted to do it from scratch. "Yosucker" from Freshmeat is a good example of how to download their Web-only mail; it wouldn't be too hard to tweak for the above purpose (it's written in Perl.)
[Jason] You could probably just use wget, with some combination of -I and -r. The thing is an HTTP/FTP shotgun.
[Ben] Nope. Remember that you need to log in to Yahoo before you can read the stuff; after that, you get to click on the message links (20 per page or so) to read them. If it was that easy, they wouldn't be able to charge you for the "improved" access (which includes POP access to your mail and a bunch of other goodies.)
[Jason] Actually, I was thinking of download from an online mailing list archive, not logging into Yahoo.
Perhaps a little specific encoding with lynx' ability to pick up its transmission data from stdin ... -get_data. It's your login, so you'll need to guard your password in that packet from prying eyes. Like Ben says, tricky, but certainly it can be done. -- Heather


Confused about semantics of "mount -o async/sync" commands

Thu, 12 Jun 2003 21:30:21 -0700
Bombardier System Consulting (bombardiersysco from qwest.net)
Answered By Karl-Heinz Herrmann, Thomas Adam, Ben Okopnik, Jim Dennis, Jay R. Ashworth

Hello,

I am taking a local Linux certification class and seem to have offended my instructor by questioning the semantics of the "sync" and "async" options in the mount command. They seem backward to me and I don't understand what I am missing.

The following are the definitions that I found online and understand for the words:

Synchronous (pronounced SIHN-kro-nuhs, from Greek syn-, meaning "with," and chronos, meaning "time") is an adjective describing objects or events that are coordinated in time. (within the context of system activities I associate synchronous with either being timing based or requiring an acknowledgement)

Asynchronous (pronounced ay-SIHN- kro-nuhs, from Greek asyn-, meaning "not with," and chronos, meaning "time") is an adjective describing objects or events that are not coordinated in time. (within the context of system activities I associate asynchronous with being event/interrupt driven).

It has been my experience and is my understanding with disk caching that data that is released to the system to be written to disk is kept for a specific time or until the cache is full before being written to disk. Hence synchronous. It is my experience and is my understanding that data from an application which is released to the system and is directly written through to disk is done so in an asynchronous or event driven manner.

[K.-H.] synchronous -- the application's intent to write data and the actual write happen at the same time
asynchronous -- the application's intent to write and the actual write are not at the same time, as the system decides when to write the cached data
[Thomas] These options are really useful in /etc/exports if you ever need to export directories over NFS, too. Just don't specify both of them at the same time!
[Ben] Yup. The latter is more efficient, since it allows the heavy lifting to occur all at once (one way to look at it is that the "startup" and "wind-down" costs of multiple disk writes are eliminated --- you "pay" only once), but it is a little less safe in the sense of data integrity: if your computer is, say, accidentally powered down while there's data in the cache, that data evaporates, even though you "know" that you saved it.

This is evidently opposite of the way that the terms are understood and used in Linux. Please help me understand.

Thanks,

Jim Bombardier

Put simply, ... you're wrong.
"sync" in the Linux parlance (and in other disk buffering/caching contexts with which I'm familiar) means that the writes to that filesystem are "synchronized" out to the disk before the writing process is scheduled for any more time slices. In other words, upon return from a write() system call the write as occurred to the hardware device.
This usage is consistent with the traditional meaning of the 'sync' utility (part of all versions of UNIX I've used and heard of). The 'sync' utility forces the kernel to "synchronize" its buffers/caches out to the device.
"async" means that writes are happening asynchronously to the ongoing events in the process. In other words mere return from the function call doesn't indicate that the data is safely flushed to the device.
Note that use of sync is strongly discouraged by kernel luminaries (Linus Torvalds in particular). I sometimes choose to override their better judgement myself --- but I do so only after considerable mulling over the tradeoffs. In general you're better off with a UPS (uninterruptible power supply) and a journaling filesystem than you'll ever be by trying to force synchronous writes for an entire filesystem.
Of course, with open source packages you can opt for aggressive, explicit synchronization of selected file descriptors using the fsync() function. Note that this can lead to poor overall system performance in some cases. For example, MTAs (mail transport agents) and syslogd both make extensive use of fsync(). If they share the same filesystem (/var/log and /var/spool on a single volume) it can make the entire system feel sluggish under only a moderate mail-handling load (as each mail delivery logs several messages, and each of those processes runs its own fsync() calls).
[jra] You know, the way I've always interpreted this is that it describes the coupling between the application program's logical view of the disk contents and the actual physical, magnetic contents of the drive, across time:
those views are either mandated to stay in "sync" -- no buffering; if the OS says it's written, it is on the platters, or they're "async" -- the OS is permitted to "cheat" a little bit in between when it tells the app "it's written" and when it actually happens.
I guess it's really just another way of phrasing the same thing...


Linux Journal Weekly News Notes - Tech Tips

Mon, 23 Jun 2003 01:29:49 -0700
Linux Journal News Notes (lj-announce from ssc.com)


Cut Them Off At The Pass

If someone's script is going haywire and making too many connections to your system, simply do:

route add -host [hostname] reject

...to keep the offending box from shutting yours down entirely.
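When the storm has passed, the block can be removed again:

route del -host [hostname] reject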



Log A Lot Less

You can turn off syslogd's MARK lines by invoking it with -m 0. You can put this invocation in the init script that starts syslogd. This is especially useful on laptops, to keep them from spinning up the hard drive unnecessarily.
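On Debian-style systems, for example, this can be a one-line change to the variable the init script passes to syslogd (the file and variable name vary by distribution; this is only a sketch):

SYSLOGD="-m 0"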



Watch a Bit More

Using the watch command, you automatically can run the same command over and over and see what changes. With the -d option, watch highlights the differences. Try it with watch -d ifconfig.



Rooting Around with LILO

If you are working from a rescue disk with your normal root partition mounted as /mnt/root, you can reinstall the LILO boot sector from your /etc/lilo.conf file with lilo -r /mnt/root. This tells lilo(8) to chroot to the specified directory before taking action. This command is handy for when you install a new kernel, forget to run LILO, and have to boot from a rescue disk.
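A typical rescue-disk session might look like this (the device name is illustrative):

mount /dev/hda2 /mnt/root
lilo -r /mnt/root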



Removing Files Starting With Dashes

If you want to remove a file called -rf, simply type rm -- -rf. The -- tells rm that everything after -- is a filename, not an option.

The LG staff note that prefixing the offending filename with ./ (dot slash) is effective too, and works even on older versions of rm - or non-Linux systems - that may not have this handy option. -- Heather
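Side by side, using the filename from the tip above:

rm -- -rf
rm ./-rf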


Any Program Can Learn To Read

If you have a program that reads only from a file and not from standard input, no problem. The /proc filesystem contains a fake file to pass such programs their standard input. Use /proc/self/fd/0.
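For example (the program name here is hypothetical):

grep ERROR big.log | stubborn-program /proc/self/fd/0

The program opens /proc/self/fd/0 as if it were an ordinary file, but actually reads its own standard input --- in this case, the output of the grep.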


This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 92 of Linux Gazette, July 2003
