... and simplify example in README.
- Better checking and verbose logging (on failure) of each stage:
fetchfeed, filter, merge, order, convertencoding. This makes sure the output
is not corrupted when hitting out-of-memory, disk-space or other resource
limits.
- This also has the added advantage that fewer processes run (piped) at the
same time.
- Clear the previous, no longer needed file to preserve space in /tmp
(/tmp is often mounted as mfs/tmpfs).
- Add a logging function (able to override, sketched below this list) and use
a more logical logging format (pun intended).
- Code-style: order the overridable functions in execution order.
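A minimal sketch of what overriding such a logging function from sfeedrc
could look like; the function name log and its name/message parameters are
assumptions for illustration, not necessarily the script's exact interface:

    # sfeedrc: override the logger; assumed arguments: $1 = feed name,
    # $2 = status message (for example "OK" or "FAILED").
    log() {
        printf '[%s] %-30s %s\n' "$(date +'%H:%M:%S')" "$1" "$2" >&2
    }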
This reverts commit 8699fa2bb4c75670952fee503a58ca4a652627eb.
There is a regression in directory permissions among other things.
Also fix a wrong comment: "temporary file" -> "temporary directory".
- Handle SIGTERM properly and don't leave stray processes: kill them on both
SIGTERM and SIGINT.
- When a "batch" of feeds was interrupted, don't wait for it again.
- Simplify the code and create a sighandler function (sketched below).
- On both SIGTERM and SIGINT the cleanup() handler is now called so no stray
files are left behind.
Tested with ksh, dash, bash, zsh.
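A rough sketch of the handler described above; the cleanup() contents and the
$sfeedtmpdir variable are illustrative, not the script's exact code:

    cleanup() {
        # remove temporary files so nothing stray is left behind.
        rm -rf "$sfeedtmpdir"
    }
    sighandler() {
        # ignore further signals for ourselves while tearing down, then
        # signal the whole process group so child fetch jobs stop too.
        trap -- "" TERM INT
        kill -s TERM -- -$$ 2>/dev/null
        cleanup
        exit 1
    }
    trap 'sighandler' TERM INT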
When SIGINT occurs while waiting for jobs, wait returns 130 (128 + SIGINT).
Make sure to check for an interrupt and return immediately.
Tested with ksh, dash, bash, zsh.
Sidenote: ideally we want to run cleanup() on SIGTERM too, but this behaves
too inconsistently across the various shells.
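In shell terms the situation is roughly the following (a sketch, with the
surrounding batch loop omitted):

    # wait for the current batch of background fetch jobs.
    wait
    status=$?
    # 130 = 128 + SIGINT: the wait itself was interrupted, so stop right
    # away instead of waiting (or starting another batch) again.
    if [ "$status" -gt 128 ]; then
        exit "$status"
    fi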
This adds a variable for the maximum number of feeds to update concurrently.
A system/user may have fork resource limits or want to set up some job limit.
Thanks leot for the idea and feedback!
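In the configuration this could look like the example below; the variable
name maxjobs is an assumption here and the value is only illustrative:

    # sfeedrc: cap the number of feeds fetched/updated concurrently,
    # for example to stay within fork/process resource limits.
    maxjobs=4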
Pass the name parameter to the functions and add them to the pipeline. They
can be overridden in the config.
- add the ability to change the merge logic per feed.
- add the ability to filter lines and fields per feed (sketched below this
list).
- add the ability to order lines differently per feed.
- add a filter example to the README.
- code-style:
  - fetchfeed: consistency in parameter order.
  - change [ x"" = x"" ] to [ "" = "" ]. Simplify some if statements.
  - wrap a long line in fetchfeed().
  - use signal names for trap.
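The filter example is along these lines (a sketch: the feed name "somefeed"
and the grep pattern are placeholders; $1 is the feed name and the feed data
is passed on stdin in the TAB-separated sfeed(5) format):

    # sfeedrc: per-feed filter, overriding the default pass-through.
    filter() {
        case "$1" in
        "somefeed")
            # drop items whose line mentions "advertisement".
            grep -iv "advertisement" ;;
        *)
            cat ;;
        esac
    }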
... this is useful to change the behaviour on interruption in some use cases.
Thanks leot for the feedback.
Reported by "Dekedro", thanks!
Make curl fail (return a non-zero exit status) on an HTTP redirect. This makes
sure sfeed_update shows the feed as "FAILED" instead of successful with zero
data.
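One way to get that behaviour out of curl is to enable redirect handling but
give it a budget of zero, so any redirect makes curl exit non-zero; the exact
flags and the $url/$tmpfile names here are illustrative, not necessarily what
the script ends up using:

    # -f fails on HTTP error responses, -L with --max-redirs 0 fails on
    # any redirect, -s keeps the output clean for the pipeline.
    curl -f -s -L --max-redirs 0 -o "$tmpfile" "$url"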
this makes sure the sort order of the initial feed sync works again.
this requires more work to avoid breaking the order in sfeed_html and other
tools (top to bottom: newest to oldest) versus sfeed_plain in tail mode
(oldest to newest).
There will also be improvements to the merge logic in the future to reduce
the amount of writes.
make the feedname sanitization less strict again.
does not sort all entries by datetime descending anymore. Now only new
entries are appended, in datetime ascending order.
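A sketch of the idea (not the script's exact merge logic), where old.tsv is
the existing feed file and new.tsv the freshly fetched data, both as
TAB-separated sfeed(5) lines with the UNIX timestamp in field 1:

    tmp=$(mktemp) || exit 1
    # sort the new data ascending by timestamp (field 1) ...
    sort -t "$(printf '\t')" -k1,1n new.tsv > "$tmp"
    # ... then append only the lines that are not already in the old file.
    awk -F "\t" 'NR == FNR { seen[$0]; next }
        !($0 in seen) { seen[$0]; print }' old.tsv "$tmp" >> old.tsv
    rm -f "$tmp"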
- clarify the code and improve line wrapping.
- translate characters in the filename (allow /).
- add the feedname as a separate feed name field.
- change the priority order in which the field is checked.
when a feed file is created for the first time, make sure to sort it and
filter duplicate items using the same logic as merge().
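A sketch of the idea, assuming a merge(name, oldfile, newfile) helper that
writes the merged result to stdout; $name, $sfeedfile and $tmpfeedfile are
illustrative names:

    if [ ! -e "$sfeedfile" ]; then
        # first sync: run the new data through the same merge logic
        # against an empty "old" file instead of copying it verbatim,
        # so it gets sorted and deduplicated the same way.
        merge "$name" "/dev/null" "$tmpfeedfile" > "$sfeedfile"
    fi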
When fetching a feed failed and its (empty, filesize 0) temporary file was
stored, it would still be merged with the (possibly existing) old file. This
updated the modification time, which would be used in the next poll
(If-Modified-Since).
The solution is to check whether the file is non-empty and only then merge.
This is still not 100% fool-proof, but much better.
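The check is essentially this (a sketch; the merge helper signature and the
file names are illustrative):

    # only merge when the fetched temporary file has content; an empty
    # file means the fetch failed, and merging it would needlessly bump
    # the modification time used for If-Modified-Since.
    if [ -s "$tmpfeedfile" ]; then
        merge "$name" "$sfeedfile" "$tmpfeedfile" > "$sfeedfile.new" &&
            mv "$sfeedfile.new" "$sfeedfile"
    fi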
It could mess up URLs in items (redirecting http to https). It is also safer.
the base temporary directory is random. The directory is cleaned up afterwards
or on SIGTERM etc., so remove this unneeded line.
A feeds file was created with touch when it didn't exist. This confuses
curl -z since it uses that file for If-Modified-Since. Now the temporary file
is just copied to the new file without merging if it doesn't exist. The curl
warning for a non-existent file is already suppressed with curl -s.
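Sketched out, the flow is roughly as follows (names are illustrative; curl -z
only sends If-Modified-Since when the given file exists, and -s silences its
warning when it does not):

    # conditional fetch: -z uses the old feed file, if any.
    curl -f -s -z "$sfeedfile" -o "$tmpfeedfile" "$url"
    if [ ! -e "$sfeedfile" ]; then
        # first sync: no old file, so nothing to merge; just copy.
        cp "$tmpfeedfile" "$sfeedfile"
    else
        merge "$name" "$sfeedfile" "$tmpfeedfile" > "$sfeedfile.new" &&
            mv "$sfeedfile.new" "$sfeedfile"
    fi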
update sfeed_update: there is now a feeds file per feed.
feed URLs sometimes change and are not important for the order.
this fixes an issue with differences in stat(1) versions (OpenBSD, GNU, sbase).
this makes it compatible with the current sbase mktemp.
Signed-off-by: Hiltjo Posthuma <hiltjo@codemadness.org>
lots of things changed, but cleanup is still a todo. A changelog and a
consistent stream of small updates will come in the future.
Signed-off-by: Hiltjo Posthuma <hiltjo@codemadness.org>
Signed-off-by: Hiltjo Posthuma <hiltjo@codemadness.org>
This allows having a feed on a different domain while specifying the base URL
for links if the links in the feed are relative.
Signed-off-by: Hiltjo Posthuma <hiltjo@codemadness.org>
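In sfeedrc terms this would look like the example below; the feed name and
URLs are placeholders, and the third argument is assumed to be the base site
URL used to resolve relative links:

    # sfeedrc: feed on one domain, base URL for relative links on another.
    feeds() {
        feed "planet" "https://feeds.example.org/planet.atom" "https://planet.example.org/"
    }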