Age | Commit message | Author |
|
... and some small rewording.
|
|
found via codespell
$ codespell --ignore-regex Nd
|
|
Pedantic change:
Make main more consistent: other functions called from it exit too, and main is
not supposed to return or be used that way.
|
|
separator
echo is unportable when used this way: names starting with an option (-n) or
containing backslashes or escape codes (\e, \n, etc.) could be messy.
For awk, set LC_ALL=C for simple collation.
This makes sfeed_opml_export slower in shells that don't have printf as a
builtin, though. For example, with about 150 feeds in a config file it is a bit
slower on OpenBSD ksh.
time ./sfeed_opml_export | wc -l
152
0m00.29s real 0m00.05s user 0m00.20s system
time sfeed_opml_export | wc -l
152
0m00.02s real 0m00.00s user 0m00.03s system
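A minimal sketch of the idea (the variable and file names below are
illustrative, not the script's exact code):

# echo may treat a leading "-n" as an option or expand backslash escapes,
# depending on the shell; printf prints the data verbatim.
printf '%s\n' "$name"
# run awk with simple, predictable byte-wise collation.
LC_ALL=C awk -F '\t' '{ print $1 }' "$file"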
|
|
This can be useful for scripts, for example the sfeed_update_xargs example
script in the README. This way the process can signal an error and xargs will
exit with the code 123:
"One or more invocations of utility returned a nonzero exit status."
|
|
In practice this may change the meaning of the examples:
sfeed_update && pkill -SIGHUP sfeed_curses
An alternative:
sfeed_update; pkill -SIGHUP sfeed_curses
|
|
When sfeed_update was called without a parameter, so the default path was used,
and this path did not exist, it would incorrectly print:
Configuration file "" does not exist or is not readable.
See sfeedrc.example for an example.
Make the error message a bit shorter too.
This was a partial regression of commit df74ba274c4ea5d9b7388c33500ba601ed0c991d
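A minimal sketch of the intended behaviour (the default path and message
wording here are assumptions, not necessarily the script's exact text):

# resolve the default path first so the error can show the path that was tried.
config="${1:-$HOME/.sfeed/sfeedrc}"
if ! [ -r "${config}" ]; then
    echo "Configuration file \"${config}\" couldn't be read." >&2
    echo "See sfeedrc.example for an example." >&2
    exit 1
fi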
|
|
This is useful so the script can be included, call main and then have
additional post-main functionality.
|
|
Follow-up from a rushed commit:
commit 58555779d123be68c0acf9ea898931d656ec6d63
Author: Hiltjo Posthuma <hiltjo@codemadness.org>
Date: Sun Feb 28 13:33:21 2021 +0100
sfeed_update: simplify, use feedurl directly
This also makes it possible to use non-authoritative URLs as a baseurl, like
"magnet:" URLs.
|
|
This also makes it possible to use non-authoritative URLs as a baseurl, like
"magnet:" URLs.
|
|
This adds a main() function. When the environment variable
$SFEED_UPDATE_INCLUDE is set, it will not execute the main handler. The other
functions are included and can be reused. This is also useful for
unit-testing.
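A hedged usage sketch (the exact guard inside sfeed_update may differ):

#!/bin/sh
# include sfeed_update for its functions, without running its main handler.
SFEED_UPDATE_INCLUDE="1" . ./sfeed_update
# ... reuse the individual functions here, or call main "$@" and then do
# additional post-main work.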
|
|
handler
This is useful to be able to reuse the code (together with using sfeed_update
as an included script, coming in the next commit).
|
|
basesiteurl
Move it closer to where it is used.
|
|
"(FAIL CONVERT)" -> "(FAIL PARSE)". Convert may be too similar to text encoding
conversion.
|
|
This can be useful to more cleanly make connector scripts.
This does not necessarily even have to be in the sfeed(5) format.
|
|
... and do not show stderr of readlink.
|
|
|
- Improve feed creation with empty results and new feed files.
Always make sure the file is created, even when it is new and there are no
items (after filtering).
- Consistency: always use the same feed file for merging.
Do not use "/dev/null" when it is a new file. This works when merging with
sort, but is ugly when the merge() function is overridden and does something
else. It should always be the feed file (a sketch follows below).
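A hedged sketch of the convention (the parameter order name, oldfile, newfile
and the sort invocation are assumptions about the script, not verbatim):

merge() {
    # $1 = feed name, $2 = existing feed file (created beforehand if new),
    # $3 = file with the newly fetched items; the sort key is the first
    # TAB-separated field (the UNIX timestamp).
    sort -t "$(printf '\t')" -u -k1,1 "$2" "$3" 2>/dev/null
}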
|
|
This adds the name as the first parameter of the convertencoding() function,
like filter, merge, order, etc.
This can be useful to make an exception rule for text decoding in a cleaner
way.
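A hedged sfeedrc sketch (the parameter order is assumed to be name,
from-encoding, to-encoding; the feed name is made up):

convertencoding() {
    if [ "$1" = "some broken feed" ]; then
        # this particular feed lies about its encoding: force a fixed one.
        iconv -c -f "windows-1251" -t "$3"
    else
        iconv -c -f "$2" -t "$3"
    fi
}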
|
|
Feeds should contain absolute URLs, but if a feed does not, then this makes it
more convenient to configure such feeds.
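A hedged sfeedrc sketch (the URLs are made up; it assumes the feed() helper
takes the base site URL as its third argument, as in the sfeedrc example, to
absolutize relative item links):

feeds() {
    feed "example" "https://www.example.org/feed.atom" "https://www.example.org/"
}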
|
|
noticed on DragonFlyBSD where it prints a warning when moving the file from
/tmp.
To reproduce it:
touch /tmp/file
mv /tmp/file ~/
On other systems this would not print a warning, but it would preserve the
group permissions etc.
|
|
... and simplify example in README.
|
|
- Better checking and verbose logging (on failure) of each stage:
fetchfeed, filter, merge, order, convertencoding. This makes sure the output is
not corrupted on out-of-memory, disk-space or other resource limit errors (a
sketch follows after this list).
- This also has the added advantage that it runs fewer processes (piped) at the
same time.
- Clear previous, no longer needed files to preserve space in /tmp
(/tmp is often mounted as mfs/tmpfs).
- Add a logging function (able to be overridden), use a more logical logging
format (pun intended).
- Code-style: order the overridable functions in execution order.
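A hedged sketch of the per-stage shape inside the per-feed function (the file,
function and label names are illustrative, not the script's exact code):

# each stage writes to its own file and is checked; on failure the feed is
# logged and aborted, so a partially written result is never merged.
if ! filter "${name}" < "${tmpfetched}" > "${tmpfiltered}"; then
    log "${name}" "FAIL (FILTER)"
    return 1
fi
rm -f "${tmpfetched}"   # no longer needed: keep /tmp (often mfs/tmpfs) small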
|
|
This reverts commit 8699fa2bb4c75670952fee503a58ca4a652627eb.
There is a regression in directory permissions among other things.
|
|
+ fix wrong comment "temporary file" -> "temporary directory".
|
|
- Handle SIGTERM properly, don't leave stray processes. Kill them on both
SIGTERM and SIGINT.
- When a "batch" of feeds was interrupted, don't wait for it again.
- Simplify and create a sighandler function.
- Now on both SIGTERM and SIGINT the cleanup() handler is called, so no stray
files are left behind.
Tested with ksh, dash, bash, zsh.
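A rough, hedged sketch of the shape (not the script's exact handler; names are
assumptions and process-group handling differs per shell):

cleanup() {
    # remove the temporary files of the current run.
    rm -rf "${sfeedtmpdir}"
}
sighandler() {
    signo="$1"
    # ignore further signals while terminating the child processes.
    trap -- "" "TERM" "INT"
    kill -TERM 0 2>/dev/null
    cleanup
    exit $((signo + 128))
}
trap 'sighandler 15' "TERM"
trap 'sighandler 2' "INT"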
|
|
When SIGINT occurs while waiting for jobs, wait returns 130 (128 + SIGINT).
Make sure to check for this interrupted state and return immediately.
Tested with ksh, dash, bash, zsh.
Sidenote: ideally we want to cleanup() on SIGTERM too, but this is too
inconsistent across various shells.
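A hedged sketch of the check (the variable name is made up; this runs inside
the function that waits for the jobs):

wait
statuscode=$?
# 130 = 128 + SIGINT: the wait itself was interrupted; return right away
# instead of waiting for the remaining jobs again.
if [ "${statuscode}" -ge 128 ]; then
    return "${statuscode}"
fi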
|
|
This adds a variable for the maximum number of feeds to update concurrently. A
system/user may have fork resource limits or want to set up some job limit.
Thanks leot for the idea and feedback!
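For example, in the sfeedrc config (the variable name maxjobs is an assumption
here; the value is arbitrary):

# maximum number of feeds to update concurrently.
maxjobs=8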
|
|
Pass the name parameter to the functions and add these to the pipeline. They
can be overridden in the config.
- add the ability to change the merge logic per feed.
- add the ability to filter lines and fields per feed (a sketch follows below).
- add the ability to order lines differently per feed.
- add filter example to README.
- code-style:
  - fetchfeed consistency in parameter order.
  - change [ x"" = x"" ] to [ "" = "" ]. Simplify some if statements.
  - wrap long line in fetchfeed().
  - use signal names for trap.
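A hedged sfeedrc sketch of a per-feed filter override (the feed name and
pattern are made up; it assumes sfeed(5) TAB-separated lines with the title in
field 2):

filter() {
    # $1 is the feed name.
    case "$1" in
    "some noisy feed")
        # drop items whose title contains "sponsored".
        awk -F '\t' '$2 !~ /sponsored/'
        ;;
    *)
        cat
        ;;
    esac
}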
|
|
... this is useful to change the interrupted behaviour in some use-cases.
Thanks leot for the feedback.
|
|
Reported by "Dekedro", thanks!
|
|
Make curl fail (return a non-zero exit status) on an HTTP redirect. This makes
sure sfeed_update shows the feed as "FAILED" instead of successful with zero
data.
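The exact curl invocation is not shown in this log; as a hedged illustration
with standard curl options:

# -f/--fail returns a non-zero exit status on HTTP errors, and -L with
# --max-redirs 0 makes following any redirect fail, so the feed is reported
# as "FAILED" instead of silently stored with zero data.
curl -f -s -L --max-redirs 0 "$url"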
|
|
This makes sure the sort order of the initial feed sync works again.
|
|
This requires more work to avoid breaking the order in sfeed_html and other
tools (top to bottom: newest to oldest), vs sfeed_plain in tail mode: oldest to
newest.
There will also be improvements to the merge logic in the future to reduce the
number of writes.
|
|