- Do not show stderr of readlink.
- Show the reference to the example sfeedrc (like sfeed_update).
- Make the error message a bit shorter.
- Fix showing the path if it does not exist, for example:
$ sfeed_opml_export "a"
readlink: a: No such file or directory
Configuration file "" does not exist or is not readable.
Now shows:
$ sfeed_opml_export "a"
Configuration file "a" cannot be read.
See sfeedrc.example for an example.
|
|
This title format now matches the one used by sfeed_curses. It shows the count
at the far left, which makes it more readable imho. It also works better when
the titlebar is small.
|
|
When sfeed_update was called without a parameter it used the default path, and
if this path did not exist it would incorrectly print:
Configuration file "" does not exist or is not readable.
See sfeedrc.example for an example.
Make the error message a bit shorter too.
This was a partial regression of commit df74ba274c4ea5d9b7388c33500ba601ed0c991d
|
|
Input to reproduce:
<entry>
<link href="https://codemadness.org/a" href="https://codemadness.org/b"/>
</entry>
Old value:
"https://codemadness.org/ahttps://codemadness.org/b"
New value:
"https://codemadness.org/b"
The same applies to the RSS <enclosure url="" /> attribute.
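A minimal sketch of the idea in C (hypothetical code, not sfeed's actual XML
parser): reset the stored value whenever an attribute starts, so the last
occurrence wins instead of the values being concatenated:

#include <stdio.h>
#include <string.h>

static char value[256];

/* called when an attribute (re)starts: reset instead of concatenating,
 * so a repeated attribute keeps only its last value. */
static void
attr_start(void)
{
	value[0] = '\0';
}

/* called with (a chunk of) the attribute value */
static void
attr_data(const char *s)
{
	strncat(value, s, sizeof(value) - strlen(value) - 1);
}

int
main(void)
{
	/* simulates: <link href="https://codemadness.org/a" href="https://codemadness.org/b"/> */
	attr_start(); attr_data("https://codemadness.org/a");
	attr_start(); attr_data("https://codemadness.org/b");
	printf("%s\n", value); /* prints the last value */
	return 0;
}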
|
|
This standard was a draft used around 2005-2006.
Instead of the fields "published" and "updated" it used "issued" (a mandatory
field) and "modified" (optional). Add support for them, while preferring the
Atom 1.0 fields and creation dates first.
I don't know any real-life examples that still use this though.
Some references:
- http://rakaz.nl/2005/07/moving-from-atom-03-to-10.html
- https://www.dokuwiki.org/syndication (rss_type "atom" parameter value).
- https://support.google.com/merchants/answer/160598?hl=en
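For illustration, an Atom 0.3 entry using these fields could look like this (a
made-up sample, not taken from a real feed):

<entry>
	<title>Example entry</title>
	<link rel="alternate" type="text/html" href="https://codemadness.org/example.html"/>
	<issued>2005-07-31T12:29:29Z</issued>
	<modified>2005-08-01T07:30:00Z</modified>
</entry>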
|
|
... if there is no content.
|
|
getchar_unlocked is part of POSIX and should be supported by most platforms. On
all tested platforms it has a performance benefit, sometimes smallish (<12%),
sometimes large (~40%).
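For illustration, a minimal sketch of such an inner read loop (not sfeed's
actual code); getchar_unlocked() behaves like getchar() but skips the per-call
stdio locking, which is fine when only one thread reads the stream:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>

int
main(void)
{
	int c;
	unsigned long lines = 0;

	/* count lines on stdin without per-character stdio locking */
	while ((c = getchar_unlocked()) != EOF)
		if (c == '\n')
			lines++;
	printf("%lu\n", lines);
	return 0;
}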
|
|
Since newsboat version 2.22 (2020-12-21) it stores the content mime-type as a
field, so allow exporting it.
The older entries are empty and will be exported as "html" (even though they
might have been plain-text).
... also add the (empty) category field.
|
|
Reference:
https://www.w3.org/2003/01/xhtml-mimetype/
|
|
This fix is very important *ahem*.
|
|
This is useful so the script can be included, call main() and then run
additional post-main functionality.
|
|
Work around it by setting the empty "middle" fields to some value. The last
field can be empty.
Some feeds were using the wrong base URL if the `baseurl` field was empty but
the encoding field was set: the encoding field was incorrectly used instead.
This was only noticed now because the baseURL is validated since commit
f305b032bc19b4e81c0dd6c0398370028ea910ca and a feed with an invalid one fails
with a non-zero exit status.
This doesn't happen with GNU xargs, busybox or toybox xargs.
Affected (at least): OpenBSD, NetBSD, FreeBSD and DragonFlyBSD xargs, which share
similar code.
Simple way to reproduce the difference:
printf 'a\0\0c\0' | xargs -0 echo
Prints "a c" on *BSD.
Prints "a c" on GNU xargs (and some other implementations).
|
|
Follow-up from a rushed commit:
commit 58555779d123be68c0acf9ea898931d656ec6d63
Author: Hiltjo Posthuma <hiltjo@codemadness.org>
Date: Sun Feb 28 13:33:21 2021 +0100
sfeed_update: simplify, use feedurl directly
This also makes it possible to use non-authoritative URLs as a baseurl, like
"magnet:" URLs.
|
|
No functional difference because the base URI host is copied beforehand.
|
|
The shellscript is optional, but reference it in the documentation.
|
|
This also makes it possible to use non-authoritative URLs as a baseurl, like
"magnet:" URLs.
|
|
Removed/rewrote the functions absuri(), parseuri() and encodeuri() (used for
percent-encoding).
The functionality is now split into separate functions with the following
purposes:
- uri_format: format struct uri into a string.
- uri_hasscheme: quick check if a string is absolute or not.
- uri_makeabs: make a URI absolute using a base uri and the original URI.
- uri_parse: parse a string into a struct uri.
The following URLs are better parsed:
- URLs with extra "/"s prepended to the path are kept as-is; no "/" is added
for empty paths either.
- URLs like "http://codemadness.org" are not changed to
"http://codemadness.org/" anymore (paths are kept as-is, unless they are
non-empty and do not start with "/").
- Paths are not percent-encoded anymore.
- URLs with a userinfo field (username, password) are parsed,
like: ftp://user:password@[2001:db8::7]:2121/rfc/rfc1808.txt
- Non-authoritative URLs like mailto:some@email.org, magnet URIs and ISBN
URIs/urns, like urn:isbn:0-395-36341-1, are allowed and parsed correctly.
- Both local (file:///) and non-local (file://) are supported.
- Specifying a base URL with a port will now only use it when the relative URL
has no host and port set and follows RFC3986 5.2.2 more closely.
- Parsing of a numeric port: parse it as a signed long and check for <= 0; an
empty port is allowed.
- Parsing URIs containing query, fragment, but no path separator (/) will now
parse the component properly.
For sfeed:
- Parse the baseURI only once (no need to do it every time for making absolute
URIs).
- If a link/enclosure is already absolute or there is no base URL specified,
then just print the link directly.
There have also been other small performance improvements related to handling
URIs.
References:
- https://tools.ietf.org/html/rfc3986
- Section "5.2.2. Transform References" have also been helpful.
|
|
Combine E-Tags and If-Modified-Since in one section. Also mention the curl
--compressed option, typically for GZIP decompression.
Note that E-Tags were broken in curl <7.73 due to a bug with "weak" e-tags.
https://github.com/curl/curl/issues/5610
From a question/feedback by e-mail from Hadrien Lacour, thanks.
|
|
The commit that introduced the regression was:
commit 33c50db302957bca2a850ac8d0b960d05ee0520e
Author: Hiltjo Posthuma <hiltjo@codemadness.org>
Date: Mon Oct 12 18:55:35 2020 +0200
simplify time parsing
Noticed on an RSS feed with the following date:
<pubDate>2021-02-03 05:13:03</pubDate>
This format is non-standard, but sfeed should support this.
A standard format would be (for Atom): 2021-02-03T05:13:03Z
Partially revert it.
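A small sketch of the general idea using strptime() (illustrative only, not
sfeed's actual parser): try the standard 'T' separator first and fall back to
a space:

#define _XOPEN_SOURCE 700
#include <stdio.h>
#include <string.h>
#include <time.h>

static int
parsetime(const char *s, struct tm *tm)
{
	static const char *formats[] = {
		"%Y-%m-%dT%H:%M:%S",  /* standard: 2021-02-03T05:13:03Z */
		"%Y-%m-%d %H:%M:%S"   /* non-standard: 2021-02-03 05:13:03 */
	};
	size_t i;

	for (i = 0; i < sizeof(formats) / sizeof(formats[0]); i++) {
		memset(tm, 0, sizeof(*tm));
		if (strptime(s, formats[i], tm))
			return 0;
	}
	return -1;
}

int
main(void)
{
	struct tm tm;
	char buf[32];

	if (parsetime("2021-02-03 05:13:03", &tm) == 0) {
		strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%SZ", &tm);
		puts(buf);
	}
	return 0;
}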
|
|
Kind of a non-issue, but if there's a sfeedrc with no feeds then xargs will
still be executed and give an error. The xargs -r option (GNU extension) fixes
this:
From the OpenBSD xargs(1) man page:
"-r Do not run the command if there are no arguments. Normally the
command is executed at least once even if there are no arguments."
Reproducible with the sfeedrc:
feeds() {
true
}
|
|
This code uses the non-portable xargs -P option to more efficiently process
feeds in parallel.
|
|
This adds a main() function. When the environment variable
$SFEED_UPDATE_INCLUDE is set then it will not execute the main handler. The
other functions are included and can be reused. This is also useful for
unit-testing.
|
|
handler
This is useful to be able to reuse the code (together with using sfeed_update
as an included script, coming in the next commit).
|
|
basesiteurl
Move it closer to just before where it is used.
|
|
"(FAIL CONVERT)" -> "(FAIL PARSE)". Convert may be too similar to text encoding
conversion.
|
|
This can be useful to make connector scripts more cleanly.
This does not necessarily even have to be in the sfeed(5) format.
|
|
... and do not show stderr of readlink.
|
|
surrogate pair
Regression in commit 12b279581fbbcde2b36eb4b78d70a1c52d4a209a
0xdffff should be 0xdfff.
printf '<item><title>👈</title></item>' | sfeed
Before (bad):
👈
After:
👈
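A short sketch of the corrected range check (hypothetical code, not sfeed's
exact source): UTF-16 surrogate halves occupy U+D800..U+DFFF, so the upper
bound must be 0xdfff, not 0xdffff:

#include <stdio.h>

/* Reject lone UTF-16 surrogate halves and out-of-range values; with the
 * 0xdffff typo, valid codepoints such as U+1F448 were rejected too. */
static int
codepointvalid(unsigned long cp)
{
	if (cp >= 0xd800 && cp <= 0xdfff) /* surrogate halves */
		return 0;
	if (cp > 0x10ffff) /* beyond the Unicode range */
		return 0;
	return 1;
}

int
main(void)
{
	printf("%d %d\n", codepointvalid(0x1f448), codepointvalid(0xdc00));
	return 0;
}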
|
|
This regression was introduced in commit e43b7a48 on Tue Oct 6 18:51:33 2020 +0200.
After a content tag was parsed the "iscontenttag" variable was not reset.
This caused 2 regressions:
- It ignored other tags such as links after it.
- It incorrectly set the content-type of a lesser priority field.
Thanks to pazz0 for reporting it!
|
|
Interesting C compiler project:
lacc: A simple, self-hosting C compiler:
https://github.com/larmel/lacc
|
|
Simple way to reproduce:
printf '<item><title>&#xdc00;</title></item>' | sfeed | iconv -t utf-8
Result:
iconv: (stdin):1:8: cannot convert
Output result:
printf '<item><title>&#xdc00;</title></item>' | sfeed
Before:
00000000 09 ed b0 80 09 09 09 09 09 09 09 0a |............|
0000000c
After:
00000000 09 26 23 78 64 63 30 30 3b 09 09 09 09 09 09 09 |.&#xdc00;.......|
00000010 0a |.|
00000011
The entity is output as a literal string. This makes it easier to see what is
wrong and to debug the feed, and it is consistent with the current behaviour
for invalid named entities (&bla;). An alternative could be a UTF-8
replacement symbol (codepoint 0xfffd).
Reference: https://unicode.org/faq/utf_bom.html , specifically:
"Q: How do I convert an unpaired UTF-16 surrogate to UTF-8? "
"A: A different issue arises if an unpaired surrogate is encountered when
converting ill-formed UTF-16 data. By representing such an unpaired surrogate
on its own as a 3-byte sequence, the resulting UTF-8 data stream would become
ill-formed. While it faithfully reflects the nature of the input, Unicode
conformance requires that encoding form conversion always results in a valid
data stream. Therefore a converter must treat this as an error. [AF]"
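A self-contained sketch of this fallback (hypothetical code, not sfeed's
actual implementation): when a numeric character reference decodes to an
unpaired surrogate, print the entity string literally instead of emitting
ill-formed UTF-8:

#include <stdio.h>

/* Decode-and-print with a literal fallback: an unpaired surrogate (or an
 * out-of-range value) is not encoded; the original entity string is printed
 * instead, like is already done for invalid named entities. */
static void
printcodepoint(unsigned long cp, const char *entity)
{
	if ((cp >= 0xd800 && cp <= 0xdfff) || cp > 0x10ffff) {
		fputs(entity, stdout);
		return;
	}
	if (cp <= 0x7f) {
		putchar(cp);
	} else if (cp <= 0x7ff) {
		putchar(0xc0 | (cp >> 6));
		putchar(0x80 | (cp & 0x3f));
	} else if (cp <= 0xffff) {
		putchar(0xe0 | (cp >> 12));
		putchar(0x80 | ((cp >> 6) & 0x3f));
		putchar(0x80 | (cp & 0x3f));
	} else {
		putchar(0xf0 | (cp >> 18));
		putchar(0x80 | ((cp >> 12) & 0x3f));
		putchar(0x80 | ((cp >> 6) & 0x3f));
		putchar(0x80 | (cp & 0x3f));
	}
}

int
main(void)
{
	printcodepoint(0xdc00, "&#xdc00;");   /* unpaired surrogate: printed literally */
	putchar('\n');
	printcodepoint(0x1f448, "&#x1f448;"); /* valid codepoint: encoded as UTF-8 */
	putchar('\n');
	return 0;
}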
|
|
- Improve feed creation with empty results and new feed files.
Always make sure the file is created even when it is new and there are also no
items (after filtering).
- Consistency: always use the same feed file for merging.
Do not use "/dev/null" when it is a new file. This works using sort, but is
ugly when the merge() function is overridden and does something else. It should
be the feed file always.
|
|
This adds the name as the first parameter for the convertencoding() function,
like filter, merge, order, etc.
This can be useful to make an exception rule for text decoding in a cleaner
way.
|
|
OPML is a more generic format; this tool is specifically for "rss" types and
subscription lists.
|
|
- Export read/unread state to a separate plain-text "urls" file, line by line.
- Handle white-space control-chars better.
From the sfeed(1) man page:
" The fields: title, id, author are not allowed to have newlines and TABs,
all whitespace characters are replaced by a single space character.
Control characters are removed."
So do the reverse for newsboat as well: change whitespace characters which are
also control characters (such as TABs and newlines) to a single space
character.
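A small sketch of that rule in C (illustrative only, not the actual export
tool's code): whitespace characters that are also control characters become a
single space, other control characters are removed:

#include <ctype.h>
#include <stdio.h>

static void
sanitize(const char *s)
{
	for (; *s; s++) {
		unsigned char c = (unsigned char)*s;

		if (iscntrl(c)) {
			if (isspace(c))
				putchar(' '); /* TAB, newline, etc. -> single space */
			/* other control characters are removed */
		} else {
			putchar(c);
		}
	}
	putchar('\n');
}

int
main(void)
{
	sanitize("title\twith\ntabs and newlines");
	return 0;
}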
|
|
Makes a huge difference (cuts the time to process the same amount of lines in
half) on at least glibc 2.30 on Void Linux. Seems to make no difference on
OpenBSD.
- This removes at least one heap allocation per line (checked with valgrind).
This is because glibc will strdup() the environment variable $TZ and free it
each time, which is pointless here and wasteful.
- localtime_r is not required to set variables such as tzname.
In glibc-2.30/time/tzset.c in __tz_convert is the following code and comment:
/* Update internal database according to current TZ setting.
POSIX.1 8.3.7.2 says that localtime_r is not required to set tzname.
This is a good idea since this allows at least a bit more parallelism. */
tzset_internal (tp == &_tmbuf && use_localtime);
For localtime() this makes it always tzset() and inspect the environment
variable $TZ etc., while with localtime_r() it will only initialize it once:
static void tzset_internal (int always) {
[...]
if (is_initialized && !always)
return;
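A minimal sketch of the change described (illustrative only, not sfeed's
actual code): use localtime_r() with a caller-provided struct tm instead of
localtime(), avoiding the per-call overhead when formatting many lines:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int
main(void)
{
	time_t t = time(NULL);
	struct tm tm;
	char buf[32];

	/* before: struct tm *p = localtime(&t); -- may re-check $TZ and
	 * allocate on every call in glibc. */
	if (localtime_r(&t, &tm)) {
		strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tm);
		puts(buf);
	}
	return 0;
}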
|