restore djgpp, eventually

merge TODO lists

add unit tests for lib/*.c

strip: add an option to specify the program used to strip binaries.
  suggestion from Karl Berry

doc/coreutils.texi: Address this comment:
  FIXME: mv's behavior in this case is system-dependent
  Better still: fix the code so it's *not* system-dependent.

implement --target-directory=DIR for install (per texinfo documentation)

ls: add --format=FORMAT option that controls how each line is printed.

cp --no-preserve=X should not attempt to preserve attribute X
  reported by Andreas Schwab

copy.c: Address the FIXME-maybe comment in copy_internal.  And once that's
  done, add an exclusion so that `cp --link' no longer incurs the overhead
  of saving src. dev/ino and dest. filename in the hash table.

See if we can be consistent about where --verbose sends its output:
  These all send --verbose output to stdout:
    head, tail, rm, cp, mv, ln, chmod, chown, chgrp, install, ln
  These send it to stderr:
    shred, mkdir, split
  readlink is different.

Write an autoconf test to work around build failure in HPUX's 64-bit mode.
  See notes in README -- and remove them once there's a work-around.

Integrate use of sendfile, suggested here:
  http://mail.gnu.org/archive/html/bug-fileutils/2003-03/msg00030.html
  I don't plan to do that, since a few tests demonstrate no significant
  benefit.

Should printf '\0123' print "\n3"?
  per report from TAKAI Kousuke on Mar 27:
  http://mail.gnu.org/archive/html/bug-coreutils/2003-03/index.html

printf: consider adapting builtins/printf.def from bash

df: add `--total' option, suggested here: http://bugs.debian.org/186007

seq: give better diagnostics for invalid formats,
  e.g. no or too many % directives

seq: consider allowing the format string to contain no %-directives

m4: rename all macros that start with AC_ to start with another prefix

resolve RH report on cp -a forwarded by Tim Waugh

Martin Michlmayr's patch to provide ls with a `--sort directory' option

tail: don't use xlseek; it *exits*.
  Instead, maybe use a macro and return nonzero.

add mktemp?  Suggested by Nelson Beebe

df: alignment problem of `Used' heading with e.g., -mP
  reported by Karl Berry

tr: support nontrivial equivalence classes, e.g. [=e=] with LC_COLLATE=fr_FR

lib/strftime.c: Since %N is the only format that we need but that glibc's
  strftime doesn't support, consider using a wrapper that would expand
  /%(-_)?\d*N/ to the desired string and then pass the resulting string to
  glibc's strftime.

sort: Compress temporary files when doing large external sort/merges.
  This improves performance when you can compress/uncompress faster than
  you can read/write, which is common in these days of fast CPUs.
  suggestion from Charles Randall on 2001-08-10

sort: Add an ordering option -R that causes 'sort' to sort according to a
  random permutation of the correct sort order.  Also, add an option
  --random-seed=SEED that causes 'sort' to use an arbitrary string SEED to
  select which permutation to use, in a deterministic manner: that is, if
  you sort a permutation of the same input file with the same
  --random-seed=SEED option twice, you'll get the same output.  The default
  SEED is chosen at random, and contains enough information to ensure that
  the output permutation is random.  (A rough sketch of one possible
  approach follows the last item in this group.)
  suggestion from Feth AREZKI, Stephan Kasal, and Paul Eggert on 2003-07-17

unexpand: [http://www.opengroup.org/onlinepubs/007908799/xcu/unexpand.html]
  printf 'x\t \t y\n'|unexpand -t 8,9 should print its input, unmodified.
  printf 'x\t \t y\n'|unexpand -t 5,8 should print "x\ty\n".

Let GNU su use the `wheel' group if appropriate.
  (there are a couple patches, already)
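
Aside on the -R/--random-seed item above: one way to get a seed-reproducible
permutation is to key each line on a hash of SEED plus the line's contents
and sort by that key.  The program below is only an illustrative sketch, not
existing or planned sort code; the FNV-1a hash and every name in it are
made up for the example.

  #define _GNU_SOURCE              /* for getline and strndup */
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/types.h>

  struct keyed_line { uint64_t key; char *text; };

  /* FNV-1a hash of SEED followed by LINE; purely illustrative.  */
  static uint64_t
  line_key (const char *seed, const char *line)
  {
    uint64_t h = 14695981039346656037ULL;
    for (const char *p = seed; *p; p++)
      h = (h ^ (unsigned char) *p) * 1099511628211ULL;
    for (const char *p = line; *p; p++)
      h = (h ^ (unsigned char) *p) * 1099511628211ULL;
    return h;
  }

  static int
  compare_keys (const void *a, const void *b)
  {
    const struct keyed_line *x = a, *y = b;
    if (x->key != y->key)
      return x->key < y->key ? -1 : 1;
    return strcmp (x->text, y->text);    /* deterministic tie-break */
  }

  int
  main (int argc, char **argv)
  {
    const char *seed = argc > 1 ? argv[1] : "default-seed";
    struct keyed_line *lines = NULL;
    size_t n = 0, alloc = 0;
    char *buf = NULL;
    size_t buflen = 0;
    ssize_t len;

    while ((len = getline (&buf, &buflen, stdin)) != -1)
      {
        if (n == alloc
            && !(lines = realloc (lines,
                                  (alloc = 2 * alloc + 16) * sizeof *lines)))
          return EXIT_FAILURE;
        if (!(lines[n].text = strndup (buf, len)))
          return EXIT_FAILURE;
        lines[n].key = line_key (seed, lines[n].text);
        n++;
      }

    qsort (lines, n, sizeof *lines, compare_keys);
    for (size_t i = 0; i < n; i++)
      fputs (lines[i].text, stdout);
    return EXIT_SUCCESS;
  }

Because the key depends only on SEED and the line contents, repeated runs
with the same SEED reproduce the same order, and identical lines stay
adjacent.  A real implementation would want a stronger keyed hash if the
permutation must be unpredictable to an adversary.
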
sort: Investigate better sorting algorithms; see Knuth vol. 3.

  We tried list merge sort, but it was about 50% slower than the recursive
  algorithm currently used by sortlines, and it used more comparisons.
  We're not sure why this was, as the theory suggests it should do fewer
  comparisons, so perhaps this should be revisited.  List merge sort was
  implemented in the style of Knuth algorithm 5.2.4L, with the optimization
  suggested by exercise 5.2.4-22.  The test case was 140,213,394 bytes,
  4,264,424 lines of text taken from the GCC 3.3 distribution; sort.c was
  compiled with GCC 2.95.4 and run on Debian 3.0r1 GNU/Linux, 2.4GHz
  Pentium 4, single pass with no temporary files and plenty of RAM.

  Since comparisons seem to be the bottleneck, perhaps the best algorithm
  to try next should be merge insertion.  See Knuth section 5.3.1, which
  credits Lester Ford, Jr. and Selmer Johnson,
  American Mathematical Monthly 66 (1959), 387-389.

cp --recursive: perform dir traversals in the source and dest hierarchies
  rather than forming full file names.  The latter (current) approach fails
  unnecessarily when the names become very long.

tail: --p is now ambiguous

Remove suspicious uses of alloca (ones that may allocate more than about 4k)

Adapt these contribution guidelines for coreutils:
  http://sources.redhat.com/automake/contribute.html

Changes expected to go in, post-5.2.1:
======================================

wc: add an option, --files0-from [as for du], to make it read NUL-delimited
  file name arguments from a file.  (A reading-loop sketch appears at the
  end of this section.)

dd patch from Olivier Delhomme

Apply Andreas Gruenbacher's ACL and xattr changes

Apply Bruno Haible's hostname changes

test/mv/*: clean up $other_partition_tmpdir in all cases

ls: when both -l and --dereference-command-line-symlink-to-dir are
  specified, consider letting the latter determine whether to dereference
  command-line symlinks to directories, since -l has an implicit
  --NO-dereference-command-line-symlink-to-dir meaning.
  Pointed out by Karl Berry.

A more efficient version of factor, and possibly one that accepts inputs
  of size 2^64 and larger.

dd: consider adding an option to suppress `bytes/block read/written' output
  to stderr.  Suggested here:
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=165045

Pending copyright papers:
------------------------

  ls --color: Ed Avis' patch to suppress escape sequences for
    non-highlighted files

  getpwnam from Bruce Korb

  pb (progress bar) from Miika Pekkarinen

------------------------------

Look into improving the performance of md5sum.  `openssl md5' is
  consistently about 30% faster than md5sum on an idle AMD 2000-XP system
  with plenty of RAM and a 261 MB input file.  openssl's md5 implementation
  is in assembly, generated by a perl script.

  On an AMD-64 system, using a 700MB file on a tmpfs file system (and
  enough RAM so that no actual disk reads were performed), GNU md5sum is
  slightly faster than `openssl md5', e.g.:
    2.38s user 0.38s system 100% cpu 2.756 total  (gnu md5sum)
    2.52s user 0.34s system 100% cpu 2.869 total  (openssl md5)
  However, `openssl sha1' is about 5% faster than GNU sha1sum:
    3.32s user 0.33s system 99% cpu 3.653 total  (openssl sha1)
    3.45s user 0.39s system 99% cpu 3.843 total  (gnu sha1sum)

  The above are using the debian-sid (amd_64 alioth) binaries from
  coreutils-5.2.1.  When I compile the latest (coreutils-cvs) with
  gcc-4.0 -O3, I get slightly (2-3%) better sha1sum performance, and a
  ~7% *decrease* in performance for md5sum.  I suspect that with the right
  compiler options you can do much better.
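
Aside on the wc --files0-from item above: the NUL-delimited name list can be
consumed with getdelim, roughly as in the sketch below.  This assumes the
option would mirror du's approach; for_each_name and process_file are
made-up names, not wc code.

  #define _GNU_SOURCE              /* getdelim is a GNU extension */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>

  /* Stand-in for the real per-file work (counting lines, words, ...).  */
  static void
  process_file (const char *name)
  {
    printf ("would count: %s\n", name);
  }

  /* Read NUL-terminated file names from LIST and process each one.  */
  static int
  for_each_name (FILE *list)
  {
    char *name = NULL;
    size_t alloc = 0;
    ssize_t len;

    while ((len = getdelim (&name, &alloc, '\0', list)) != -1)
      {
        /* getdelim stores the '\0' delimiter (when present) in the buffer
           and always NUL-terminates, so NAME is usable as-is.  Skip empty
           names, e.g. from a trailing delimiter.  */
        if (name[0] != '\0')
          process_file (name);
      }
    free (name);
    return ferror (list) ? -1 : 0;
  }

  int
  main (void)
  {
    /* Typical producer: find . -type f -print0 | ./this-sketch  */
    return for_each_name (stdin) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
  }

The only point of the sketch is the getdelim('\0') loop; everything else
would come from the existing --files0-from handling in du.
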
------------------------------

Have euidaccess.m4 check for eaccess as well as euidaccess.
  If found, then do `#define euidaccess eaccess'.

Remove long-deprecated options like tail's --allow-missing

Add a distcheck-time test to ensure that every distributed file is either
  read-only (indicating generated) or is version-controlled and up to date.

Implement Ulrich Drepper's suggestion to use getgrouplist rather than
  getugroups.  This affects only `id', but makes a big difference on
  systems with many users and/or groups, and makes id usable once again on
  systems where access restrictions make getugroups fail.  But first we'll
  need a run-test (either in an autoconf macro or at run time) to avoid the
  segfault bug in libc-2.3.2's getgrouplist.  In that case, we'd revert to
  using a new (to-be-written) getgrouplist module that does most of what
  `id' already does.  (A call sketch appears at the end of this file.)

remove `%s' notation: grep -E "\`%.{,4}s'" src/*.c

remove.c should never exit, yet may do so (see uses of EXIT_FAILURE)

remove or adjust chown's --changes option, since it can't always do what
  it currently says it does.

Adapt tools like wc, tr, fmt, etc. (most of the textutils) to be multibyte
  aware.  The problem is that I want to avoid duplicating significant
  blocks of logic, yet I also want to incur only minimal (preferably `no')
  cost when operating in single-byte mode.

Remove all uses of the `register' keyword

rm: add support for a -I option, like that from FreeBSD's rm:
  -I   Request confirmation once if more than three files are being removed
       or if a directory is being recursively removed.  This is a far less
       intrusive option than -i yet provides almost the same level of
       protection against mistakes.
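
Aside on the getgrouplist item above: the usual grow-the-buffer calling
pattern looks roughly like the sketch below.  It is illustrative only; the
default user, initial buffer size, and output format are arbitrary, and the
libc-2.3.2 segfault noted above is exactly why a run-test would have to
guard any real use.

  #define _GNU_SOURCE              /* glibc declares getgrouplist with this */
  #include <grp.h>
  #include <pwd.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (int argc, char **argv)
  {
    const char *user = argc > 1 ? argv[1] : "root";   /* arbitrary default */
    struct passwd *pw = getpwnam (user);
    if (!pw)
      {
        fprintf (stderr, "no such user: %s\n", user);
        return EXIT_FAILURE;
      }

    int ngroups = 8;                     /* initial guess */
    gid_t *groups = malloc (ngroups * sizeof *groups);
    if (!groups)
      return EXIT_FAILURE;

    /* If the buffer is too small, getgrouplist returns -1 and stores the
       required count in NGROUPS; retry once with the larger buffer.  */
    if (getgrouplist (user, pw->pw_gid, groups, &ngroups) == -1)
      {
        gid_t *bigger = realloc (groups, ngroups * sizeof *groups);
        if (!bigger)
          {
            free (groups);
            return EXIT_FAILURE;
          }
        groups = bigger;
        getgrouplist (user, pw->pw_gid, groups, &ngroups);
      }

    for (int i = 0; i < ngroups; i++)
      printf ("%lu%c", (unsigned long) groups[i],
              i + 1 < ngroups ? ' ' : '\n');
    free (groups);
    return EXIT_SUCCESS;
  }

The win on large systems comes from letting the C library (including NSS
back ends) answer the membership question directly, instead of enumerating
the whole group database the way getugroups does.
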