The Issue:
When saving an entry that was already bookmarked on Pinboard,
Miniflux was overwriting all existing data on Pinboard. This
removed any extended content or, worse, flipped the private setting
to public, making previously private bookmarks publicly visible.
The Fix:
Now, upon saving an entry as a bookmark, I first fetch it. If it
already exists, I apply the modifications Miniflux would normally make
(adding tags and any state), then add it again. This way, no
data is lost in the process. Pinboard has a stable API, so I don't anticipate
any new fields being added soon.
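Roughly, the flow looks like the sketch below. It assumes Pinboard's documented `posts/get` and `posts/add` endpoints; the struct, helper names, and merge details are illustrative, not the actual Miniflux code.

```go
package pinboard

import (
	"encoding/json"
	"net/http"
	"net/url"
)

// bookmark mirrors the fields returned by Pinboard's posts/get endpoint.
type bookmark struct {
	Description string `json:"description"` // the bookmark title
	Extended    string `json:"extended"`
	Tags        string `json:"tags"`
	Shared      string `json:"shared"` // "yes" or "no"
	ToRead      string `json:"toread"`
}

// fetchExisting returns the existing bookmark for entryURL, or nil if the
// URL has not been bookmarked yet.
func fetchExisting(token, entryURL string) (*bookmark, error) {
	q := url.Values{"auth_token": {token}, "url": {entryURL}, "format": {"json"}}
	resp, err := http.Get("https://api.pinboard.in/v1/posts/get?" + q.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var result struct {
		Posts []bookmark `json:"posts"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	if len(result.Posts) == 0 {
		return nil, nil
	}
	return &result.Posts[0], nil
}

func saveBookmark(token, entryURL, title, tags string, markToRead bool) error {
	params := url.Values{"auth_token": {token}, "url": {entryURL}}
	params.Set("description", title)
	params.Set("tags", tags)
	if markToRead {
		params.Set("toread", "yes")
	}

	// If the bookmark already exists, start from its data and only layer
	// Miniflux's tags and "to read" state on top of it.
	if existing, err := fetchExisting(token, entryURL); err != nil {
		return err
	} else if existing != nil {
		params.Set("description", existing.Description) // keep the Pinboard title
		params.Set("extended", existing.Extended)       // keep extended content
		params.Set("shared", existing.Shared)           // keep private bookmarks private
		params.Set("tags", existing.Tags+" "+tags)
		if existing.ToRead == "yes" {
			params.Set("toread", "yes")
		}
	}

	// posts/add on an already-bookmarked URL replaces it, now with merged data.
	resp, err := http.Get("https://api.pinboard.in/v1/posts/add?" + params.Encode())
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```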
I manually tested the integration by hitting the save button in the following situations:
- Entry URL does not exist on Pinboard:
- Bookmark is properly added on Pinboard with tags and "to read" status according to Miniflux settings.
- Entry URL already exists on Pinboard:
- Existing data remains unchanged.
- Tags from Miniflux settings are properly added to the bookmark.
- "To read" status is set to yes when the option is checked in Miniflux. Nothing is changed otherwise.
39d752c removed the link on the feed name to solve a web preview issue. This change brings back the feed name, but without the link, so the feed name is restored without reintroducing the issue.
Fixes #2620
Some servers in the wild are badly configured, either by mistake or through
lack of maintenance, and which ciphers are considered safe or unsafe changes
over time as new weaknesses are discovered.
This proposal includes the ciphers considered unsafe when `Allow self-signed or invalid certificates` is enabled.
It could be put behind a separate option, but I felt it fits in here.
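In Go's `crypto/tls`, this boils down to extending the accepted cipher suites when that option is set. A minimal sketch of the idea (function and parameter names are illustrative, not the exact Miniflux code):

```go
package tlsutil

import "crypto/tls"

// clientTLSConfig also accepts the cipher suites Go marks as insecure when
// the user has opted into "Allow self-signed or invalid certificates".
func clientTLSConfig(allowInvalid bool) *tls.Config {
	if !allowInvalid {
		return &tls.Config{} // default: secure ciphers, full verification
	}
	var suites []uint16
	for _, s := range tls.CipherSuites() {
		suites = append(suites, s.ID)
	}
	for _, s := range tls.InsecureCipherSuites() {
		suites = append(suites, s.ID) // legacy ciphers for badly configured servers
	}
	return &tls.Config{
		InsecureSkipVerify: true, // already implied by the existing option
		CipherSuites:       suites,
	}
}
```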
Fixes #2671
In order to be more resilient to YouTube URL variations and
to address this feature request: https://github.com/miniflux/v2/issues/2628
I've reworked the way the YouTube feed extraction is done a bit.
I've kept all the `FindSubscriptionsFromYouTube*` functions in order
to keep all the existing unit tests as-is, ensuring little to no
regression. By doing so, I had to call `youtubeURLIDExtractor` twice.
A small performance penalty for peace of mind, in my opinion.
`youtubeURLIDExtractor` is written so that only one kind
of page can be detected at a time. This means I can
solve the "video in a playlist" feature request
by prioritizing the playlist ID over the video ID.
Also, by using `url.Parse()` to extract the IDs, the code is more
robust to URL mangling and variations. The most common variation
is the `t=42` parameter, which starts playback
at a given position. Previously, this kind of URL
would not be detected as a "YouTube URL".
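A simplified sketch of both points (the real `youtubeURLIDExtractor` signature and details differ):

```go
package subscription

import (
	"net/url"
	"strings"
)

// youtubeKindAndID detects one kind of YouTube page at a time: url.Parse
// tolerates extra parameters such as t=42, and checking the playlist ID
// first gives it priority over the video ID.
func youtubeKindAndID(rawURL string) (kind, id string) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "" // deliberately ignored: fall through to the other analyses
	}
	switch {
	case u.Query().Get("list") != "":
		return "playlist", u.Query().Get("list") // playlist wins over video
	case u.Query().Get("v") != "":
		return "video", u.Query().Get("v") // still detected with ?v=...&t=42
	case strings.HasPrefix(u.Path, "/channel/"):
		return "channel", strings.TrimPrefix(u.Path, "/channel/")
	}
	return "", ""
}
```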
I deliberately ignored the URL parsing error
to keep the previous behavior (skip the YouTube analysis and continue with the other analyses).
I also tried to keep the debug logs the same as before as much as I could.
I manually tested all the YouTube cases (video, channel, playlist)
and they all work as expected, except for the video case. But that one
does not work on main either: the `meta` HTML tag that was searched for
does not seem to exist anymore.
fix: #2628
The original idea was to use two-digit precision at all times
so that the length of the string is always the same.
This prevents the UI button from moving when pressed.
I completely missed that the precision was wrong on the first press.
On shared entries, there is no playback speed configured, as it is
bound to the user, and shared entries are displayed without user config.
I've changed the default view to reflect the
actual default playback speed in this case: 1x.
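The actual UI code is JavaScript; sketched in Go, the formatting idea looks like this (the zero-means-unset convention is an assumption for the example):

```go
package main

import "fmt"

func main() {
	for _, speed := range []float64{0, 1, 1.25} {
		if speed == 0 {
			speed = 1 // no user config (shared entries): the default is 1x
		}
		// Two-digit precision keeps the label a constant width ("1.00x",
		// "1.25x"), so the button doesn't move when pressed.
		fmt.Printf("%.2fx\n", speed)
	}
}
```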
It's possible to specify a rewrite regex that validates but doesn't compile, such
as:
`rewrite("(((unmatched-capture-group"|"rewrite)))")`
When we encounter one, exit early instead of letting the server panic.
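A minimal sketch of the early exit, using `regexp.Compile` instead of the panicking `regexp.MustCompile` (`applyRule` is a hypothetical helper, not the actual Miniflux function):

```go
package rewrite

import "regexp"

// applyRule compiles the user-supplied pattern and bails out early on
// error instead of panicking.
func applyRule(pattern, replacement, content string) string {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return content // invalid rule: leave the entry content untouched
	}
	return re.ReplaceAllString(content, replacement)
}
```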
This adds a new "description" field to the feed settings, which makes it
possible to save a custom description for a feed. It is also exported and
imported as "description" in OPML.
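For illustration, a hypothetical sketch of how the field could map to an OPML outline attribute (the actual struct in Miniflux may differ):

```go
package opml

// outline is a trimmed-down OPML outline element carrying the new field
// as a "description" attribute.
type outline struct {
	Title       string `xml:"title,attr,omitempty"`
	XMLURL      string `xml:"xmlUrl,attr,omitempty"`
	Description string `xml:"description,attr,omitempty"`
}
```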
This ensures that session cookies do not expire before the session is cleaned up from the database, as per CLEANUP_REMOVE_SESSIONS_DAYS.
Until now, the usefulness of this configuration option was diminished, as extending it had no effect on the actual browser session due to the cookie expiring first.
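A sketch of the intent: derive the cookie lifetime from the same setting that drives the database cleanup, so the cookie cannot expire first (names are illustrative, not the actual Miniflux code):

```go
package session

import (
	"net/http"
	"time"
)

// sessionCookie builds a session cookie whose expiry matches the cleanup
// window; cleanupRemoveSessionsDays stands in for CLEANUP_REMOVE_SESSIONS_DAYS.
func sessionCookie(sessionID string, cleanupRemoveSessionsDays int) *http.Cookie {
	return &http.Cookie{
		Name:     "sessionID",
		Value:    sessionID,
		Expires:  time.Now().AddDate(0, 0, cleanupRemoveSessionsDays),
		HttpOnly: true,
		Secure:   true,
		Path:     "/",
	}
}
```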
Fixes: #2214
When listening to podcasts, it is common to want to speed up playback.
https://github.com/miniflux/v2/pull/2521 addressed the need globally; this PR
allows addressing it for just the currently open enclosure media (no save). Some browsers
already include this control directly, but Firefox does not (not directly, anyway).
Also, it is often useful to be able to skip chunks of a podcast, to skip commercials
for example, or to go back a bit because we couldn't hear the last part. I added rudimentary
seek controls with the usual +/-10 and 30 second chunk sizes. This is pretty handy when podcasts
are very long and using the seek bar is way too tricky for just skipping 30s.
As always, I'm French and could only provide English and French translations for the few
strings I added to the locale/translations files. Any help is welcome.
Tested mostly on Firefox (121.0) and quickly on Vivaldi (6.5.3206.53), which is Chromium-based.
Fixes: #1845 #1846
Compress the HTML of feed entries before storing it. This should reduce the
size of the database a bit but, more importantly, reduce the amount of data
sent to clients.
minify being [stupidly fast](https://github.com/tdewolff/minify/?tab=readme-ov-file#performance), the performance impact should be lost in the noise.
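Using the library looks roughly like this (a sketch with an illustrative helper name; it falls back to the original content if minification fails):

```go
package storage

import (
	"github.com/tdewolff/minify/v2"
	"github.com/tdewolff/minify/v2/html"
)

// minifyEntryContent minifies entry HTML before it is stored.
func minifyEntryContent(content string) string {
	m := minify.New()
	m.AddFunc("text/html", html.Minify)
	if minified, err := m.String("text/html", content); err == nil {
		return minified
	}
	return content // on error, store the original content unchanged
}
```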
When clicking the unread counter, the following exception occurs:
```
Uncaught TypeError: Cannot read properties of null (reading 'getAttribute')
```
This is due to `onClickMainMenuListItem` not handling the unread counter
`span`s correctly: for those elements, `querySelector` returns `null`.
By default, Oglaf shows a disclaimer/warning about its content, and this
doesn't play well with RSS readers, so let's rewrite it to show the actual
comic instead of a placeholder.
rand.Intn(math.MaxInt64) causes tests to fail on 32-bit architectures,
where `int` is 32 bits wide and the constant overflows it.
Use the simpler rand.Int() instead, which still provides plenty of room
for generating pseudo-random test usernames.
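For example (the helper name is illustrative):

```go
package tests

import (
	"fmt"
	"math/rand"
)

// randomUsername returns a pseudo-random test username. rand.Int() yields a
// non-negative int and compiles on both 32-bit and 64-bit platforms.
func randomUsername() string {
	return fmt.Sprintf("user%d", rand.Int())
}
```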
This commit adds a bunch of checks to prevent reader/rss from adding empty tags
to RSS items, as well as some minor refactoring like nested conditions and loop
unrolling.
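An illustrative version of such a check (not the exact Miniflux code): skip tags that are empty or whitespace-only before attaching them to an item.

```go
package rss

import "strings"

// appendTag adds tag to tags unless it is empty or whitespace-only.
func appendTag(tags []string, tag string) []string {
	tag = strings.TrimSpace(tag)
	if tag == "" {
		return tags
	}
	return append(tags, tag)
}
```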
This commit adds a policy and makes use of it in the Content-Security-Policy.
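On the server side this amounts to adding trusted-types directives to the CSP header; a sketch, where the policy name "ttpolicy" is a placeholder rather than the one actually used:

```go
package middleware

import "net/http"

// setCSP sets a Content-Security-Policy that enforces trusted types for
// script sinks on browsers that support it.
func setCSP(w http.ResponseWriter) {
	w.Header().Set("Content-Security-Policy",
		"default-src 'self'; require-trusted-types-for 'script'; trusted-types ttpolicy")
}
```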
I've tested it as best I could, both on a modern browser supporting
trusted types (Chrome) and on one that doesn't (Firefox).
Thanks to @lweichselbaum for giving me a hand wrapping this up!
- Move the population of the feed's entries into a new function, to make
  `BuildFeed` easier to understand and to separate concerns and implementation details
- Use `sort+compact` instead of `compact+sort` to remove duplicates: `Compact`
  only removes adjacent duplicates, so sorting must happen first (see the sketch after this list)
- Change `if !a { a = } if !a {a = }` constructs into `if !a { a = ; if !a {a = }}`.
  This reduces the number of comparisons, but also improves the
  control-flow readability a tad.
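Both points, sketched (illustrative helpers, not the actual code):

```go
package feed

import "slices"

// dedupe removes duplicates: slices.Compact only drops *adjacent*
// duplicates, so the slice must be sorted first.
func dedupe(tags []string) []string {
	slices.Sort(tags)
	return slices.Compact(tags)
}

// pickTitle nests the fallbacks so the later checks are skipped entirely
// once a value is found.
func pickTitle(a, b, c string) string {
	title := a
	if title == "" {
		title = b
		if title == "" {
			title = c
		}
	}
	return title
}
```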