|
Noticed while testing TMPDIR=/noaccess sort on Illumos/OpenIndiana, which prints
a warning to stderr. For sort, temporary directories might be used for large
output.
|
|
|
|
Some of the options, like -P, are as of writing (2023) non-POSIX:
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html. However,
many systems have supported this useful extension for many years now.
Some historical context:
The xargs -0 option was added on 1996-06-11, about a year after the NetBSD
import (over 27 years ago at the time of writing):
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/xargs/xargs.c?rev=1.2&content-type=text/x-cvsweb-markup
On OpenBSD the xargs -P option was added on 2003-12-06 by syncing the FreeBSD
code:
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/xargs/xargs.c?rev=1.14&content-type=text/x-cvsweb-markup
Looking at the imported git history log of GNU findutils (which has xargs), the
very first commit already had the -0 and -P options, on Sun Feb 4 20:35:16 1996
+0000.
Tested on many systems, old and new; some notable ones:
- OpenBSD 7.4
- Void Linux
- FreeBSD 12
- NetBSD 9.3
- HaikuOS (uses GNU tools).
- Slackware 11
- OpenBSD 3.8
- NetBSD 5.1
Some shells:
- oksh
- bash
- dash
- zsh
During testing some incompatibilities were found in parsing the fields, so the
arguments are passed as one argument which is split later on by the child
program.
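A minimal sketch of this approach (the feeds() output format and the handler
are illustrative, not the actual script):

feeds() {
	# one NUL-terminated record per feed: name, a tab, the feed URL
	printf 'codemadness\thttps://www.codemadness.org/atom.xml\0'
}
# -0: NUL-delimited input, -P 4: up to 4 parallel jobs, -n 1: one record
# per invocation; the whole record arrives as a single argument ($0) and
# is split into its fields by the child program.
feeds | xargs -0 -P 4 -n 1 sh -c 'echo "record: $0"'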
|
|
Tested on NetBSD 5.1:
- mktemp -p doesn't exist there yet.
- mktemp without any arguments/template doesn't work.
- mktemp -d without any arguments/template doesn't work.
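A portable invocation passes an explicit template instead, something like
this (a sketch, not the exact line used):

mktemp -d "${TMPDIR:-/tmp}/sfeed_XXXXXXXX"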
|
|
Tested on NetBSD 5.1: evaluating directories as config files could allow
garbage, so disallow them.
Devices, FIFOs, etc. are still allowed.
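A sketch of the check (assuming the path is in $config):

if [ -d "$config" ]; then
	echo "$config: is a directory" >&2
	exit 1
fi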
|
|
- Add a die() helper function to clean up and exit (a sketch follows below).
- NOTE that with an empty $sfeedtmpdir the case rm -rf "" is fine.
- Respect $TMPDIR for creating temporary files, like many UNIX tools do.
- Fix: when creating "${sfeedtmpdir}/ok" fails for some reason, clean up the
whole temporary directory as well.
- Fix: when the feeds() function is not defined, exit with status code 1 (this
was incorrectly status code 0).
Reproduce: sfeed_update /dev/null; echo $?
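The helper could look something like this (the cleanup() body is assumed to
remove $sfeedtmpdir):

# clean up temporary files and exit with non-zero status
die() {
	cleanup
	exit 1
}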
|
|
In zsh the variables $path and $status are special.
https://zsh.sourceforge.io/Doc/Release/Parameters.html#index-path
https://zsh.sourceforge.io/Doc/Release/Parameters.html#index-status
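For example (illustrative), in zsh $path is tied to $PATH, so an
innocent-looking assignment breaks command lookup:

path="/tmp/feeds"	# in zsh this also sets PATH=/tmp/feeds
ls			# zsh: command not found: ls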
(No promises I will keep up with this insanity in the future though)
|
|
This might make it easier to set filters or ordering by pattern matching on a
group of feeds by the feed URL, for example youtube or reddit feeds.
Another way, which was already possible, is prefixing names, for example:
"reddit somename" or "yt somename".
|
|
If creating a temporary directory for the feed files failed, then $sfeedtmpdir
would be empty and it would try to:
mkdir -p "/feed"
touch "/feed/ok"
After failing it would also still try to process all the feeds.
Now just fail early.
mktemp or touch themselves will print the actual error to stderr.
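Failing early then looks something like this (the template and die() helper
are illustrative):

sfeedtmpdir="$(mktemp -d "${TMPDIR:-/tmp}/sfeed_XXXXXXXX")" || die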
|
|
... and some small rewording.
|
|
Found via codespell:
$ codespell --ignore-regex Nd
|
|
Pedantic change:
Make main() more consistent: other functions in it exit too, and main is not
supposed to return or be used like that.
|
|
separator
echo is unportable in this way, and names containing characters like an option
(-n), backslashes or escape codes (\e, \n, etc.) could be messy.
For awk, set LC_ALL=C for simple collation.
This makes sfeed_opml_export slower in some shells that don't have a printf
builtin though. For example, with about 150 feeds in a config file it is a bit
slower on OpenBSD ksh.
time ./sfeed_opml_export | wc -l
152
0m00.29s real 0m00.05s user 0m00.20s system
time sfeed_opml_export | wc -l
152
0m00.02s real 0m00.00s user 0m00.03s system
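The portable pattern is along these lines; printf treats the name purely as
data:

printf '%s\n' "$name"	# safe even if $name starts with "-" or contains backslashes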
|
|
|
|
This can be useful for scripts, for example the sfeed_update_xargs example
script in the README. This way the process can signal an error and xargs will
exit with code 123:
"One or more invocations of utility returned a nonzero exit status."
|
|
In practice this may change the meaning of the examples:
sfeed_update && pkill -SIGHUP sfeed_curses
An alternative:
sfeed_update; pkill -SIGHUP sfeed_curses
|
|
|
|
When sfeed_update was called without a parameter it used the default config
path; when this path did not exist it would incorrectly print:
Configuration file "" does not exist or is not readable.
See sfeedrc.example for an example.
Make the error message a bit shorter too.
This was a partial regression of commit df74ba274c4ea5d9b7388c33500ba601ed0c991d
|
|
This is useful so the script can be included, call main, and then have
additional post-main functionality.
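A hypothetical wrapper using this mechanism (the pkill line is just an
example of post-main work):

export SFEED_UPDATE_INCLUDE=1
. sfeed_update
main "$@"
pkill -SIGHUP sfeed_curses	# additional post-main functionality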
|
|
Follow-up from a rushed commit:
commit 58555779d123be68c0acf9ea898931d656ec6d63
Author: Hiltjo Posthuma <hiltjo@codemadness.org>
Date:   Sun Feb 28 13:33:21 2021 +0100

    sfeed_update: simplify, use feedurl directly

    This also makes it possible to use non-authoritative URLs as a baseurl,
    like "magnet:" URLs.
|
|
This also makes it possible to use non-authoritative URLs as a baseurl, like
"magnet:" URLs.
|
|
|
|
|
|
This adds a main() function. When the environment variable
$SFEED_UPDATE_INCLUDE is set, the main handler is not executed; the other
functions are included and can be reused. This is also useful for
unit-testing.
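The guard at the bottom of the script then looks something like this (a
sketch):

# run main unless the script is being included for reuse
[ "${SFEED_UPDATE_INCLUDE}" = "1" ] || main "$@"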
|
|
handler
This is useful to be able to reuse the code (together with using sfeed_update
as an included script, coming in the next commit).
|
|
basesiteurl
Move it closer to where it is used.
|
|
"(FAIL CONVERT)" -> "(FAIL PARSE)". Convert may be too similar to text encoding
conversion.
|
|
This can be useful to more cleanly make connector scripts.
This does not necessarily even have to be in the sfeed(5) format.
|
|
... and do not show stderr of readlink.
|
|
|
|
- Improve feed creation with empty results and new feed files.
Always make sure the file is created, even when it is new and there are no
items (after filtering).
- Consistency: always use the same feed file for merging.
Do not use "/dev/null" when it is a new file. This works using sort, but is
ugly when the merge() function is overridden and does something else. It should
always be the feed file (see the sketch below).
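A sketch of the intent (the variable names are hypothetical):

# create the feed file first if it is new, then always merge against it
[ -e "$sfeedfile" ] || touch "$sfeedfile"
merge "$name" "$sfeedfile" "$tmpfeedfile" > "$mergedfile"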
|
|
This adds the name as the first parameter of the convertencoding() function,
like filter, merge, order, etc.
This can be useful to make an exception rule for text decoding in a cleaner
way.
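A hypothetical sfeedrc override using the new parameter (assuming the
remaining parameters are the source and target encodings):

convertencoding() {
	case "$1" in
	somebrokenfeed)	iconv -f iso-8859-1 -t utf-8 ;;	# exception rule
	*)		iconv -f "$2" -t "$3" ;;	# default behaviour
	esac
}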
|
|
Feeds should contain absolute URLs, but if a feed does not have them, this
makes it more convenient to configure such feeds.
|
|
Noticed on DragonFlyBSD, where it prints a warning when moving the file from
/tmp.
To reproduce it:
touch /tmp/file
mv /tmp/file ~/
On other systems this would not print a warning, but it would preserve the
group permissions etc.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
... and simplify example in README.
|
|
- Better checking and verbose logging (on failure) of each stage: fetchfeed,
filter, merge, order, convertencoding. This makes sure the output is not
corrupted on out-of-memory, disk-space or other resource limits.
- This also has the added advantage that it runs fewer processes (piped) at the
same time.
- Clear the previous unneeded file to preserve space in /tmp
(/tmp is often mounted as mfs/tmpfs).
- Add a logging function (which can be overridden); use a more logical logging
format (pun intended). A sketch follows below.
- Code-style: order overridable functions in execution order.
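The logging function could be shaped like this (the format is illustrative):

# log(name, message): one timestamped line per feed and stage
log() {
	printf '[%s] %-30s %s\n' "$(date +'%H:%M:%S')" "$1" "$2"
}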
|
|
|
|
|
|
This reverts commit 8699fa2bb4c75670952fee503a58ca4a652627eb.
There is a regression in directory permissions, among other things.
|
|
+ fix wrong comment "temporary file" -> "temporary directory".
|
|
- Handle SIGTERM properly; don't leave stray processes. Kill them on both
SIGTERM and SIGINT.
- When a "batch" of feeds was interrupted, don't allow waiting again.
- Simplify and create a sighandler function (a sketch follows below).
- Now on both SIGTERM and SIGINT the cleanup() handler is called, to not leave
stray files.
Tested with ksh, dash, bash, zsh.
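A minimal sketch of such a handler (details vary per shell):

sighandler() {
	signo="$1"
	trap -- "" TERM INT	# do not handle the signals again during cleanup
	cleanup
	exit "$((128 + signo))"
}
trap 'sighandler 15' TERM
trap 'sighandler 2' INT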
|
|
When SIGINT occurs while waiting for jobs, wait returns 130 (128 + SIGINT).
Make sure to check for interruption and return immediately.
Tested with ksh, dash, bash, zsh.
Sidenote: ideally we would want to cleanup() on SIGTERM too, but this is too
inconsistent across various shells.
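A sketch of the interruption check:

# inside the function that waits on a batch of jobs:
wait || statuscode=$?
if [ "${statuscode:-0}" -eq 130 ]; then
	return 130	# 128 + SIGINT: interrupted, do not wait again
fi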
|
|
|
|
This adds a variable for the maximum number of feeds to update concurrently. A
system/user may have fork resource limits or may want to set up some job limit.
Thanks leot for the idea and feedback!
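In the config this would then look like (assuming the variable is named
maxjobs):

# update at most 4 feeds concurrently
maxjobs=4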
|