Compress the HTML of feed entries before storing it. This should reduce the
size of the database a bit but, more importantly, reduce the amount of data
sent to clients.
Since minify is [stupidly fast](https://github.com/tdewolff/minify/?tab=readme-ov-file#performance), the performance impact should be lost in the noise.
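A minimal sketch of the idea using the tdewolff/minify API; where exactly it hooks into entry processing is an assumption, not the commit's actual code:

```go
package main

import (
	"fmt"

	"github.com/tdewolff/minify/v2"
	"github.com/tdewolff/minify/v2/html"
)

func main() {
	// Register an HTML minifier once; the minifier can then be reused
	// for every entry.
	m := minify.New()
	m.AddFunc("text/html", html.Minify)

	// Minify the entry content before it is stored.
	content := `<p>  Hello,   <b>world</b>!  </p>`
	minified, err := m.String("text/html", content)
	if err != nil {
		// On failure, fall back to storing the original content.
		minified = content
	}
	fmt.Println(minified)
}
```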
When clicking the unread counter, the following exception occurs:
```
Uncaught TypeError: Cannot read properties of null (reading 'getAttribute')
```
This is due to `onClickMainMenuListItem` not handling the unread counter
`span`s correctly: for them, `querySelector` returns `null`.
By default, Oglaf shows a disclaimer/warning about its content, and this
doesn't play well with RSS readers, so let's rewrite the entries to show the
actual comic instead of a placeholder.
`rand.Intn(math.MaxInt64)` causes tests to fail on 32-bit architectures,
where `math.MaxInt64` overflows `int`. Use the simpler `rand.Int()` instead,
which still provides plenty of room for generating pseudo-random test
usernames.
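A sketch of the portable form; the helper name is made up for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
)

// testUsername shows the portable replacement: rand.Int() returns a
// non-negative pseudo-random int on every architecture, while
// rand.Intn(math.MaxInt64) fails to compile on 32-bit targets because
// math.MaxInt64 overflows int there.
func testUsername() string {
	return fmt.Sprintf("user%d", rand.Int())
}

func main() {
	fmt.Println(testUsername())
}
```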
This commit adds a bunch of checks to prevent reader/rss from adding empty tags
to RSS items, as well as some minor refactoring, like reworking nested
conditions and unrolling loops.
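A minimal sketch of the kind of guard involved, assuming "tags" here means entry categories parsed from the feed (the real checks and field names may differ):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical input: categories as parsed from the feed XML.
	categories := []string{"comics", "  ", ""}

	// Skip blank values so the entry never ends up with empty tags.
	var tags []string
	for _, c := range categories {
		if trimmed := strings.TrimSpace(c); trimmed != "" {
			tags = append(tags, trimmed)
		}
	}
	fmt.Println(tags) // [comics]
}
```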
This commit adds a Trusted Types policy and makes use of it in the
Content-Security-Policy. I've tested it as best I could, both on a modern
browser supporting trusted types (Chrome) and on one that doesn't (Firefox).
Thanks to @lweichselbaum for giving me a hand wrapping this up!
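For illustration, the header could be extended along these lines; the directive values and the policy name "ttpolicy" are assumptions, not the commit's exact header:

```go
package main

import "net/http"

// withCSP is a hypothetical middleware sketch: it requires Trusted Types
// for script sinks and allows only a policy named "ttpolicy".
func withCSP(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Security-Policy",
			"default-src 'self'; require-trusted-types-for 'script'; trusted-types ttpolicy")
		next.ServeHTTP(w, r)
	})
}

func main() {
	http.ListenAndServe("127.0.0.1:8080", withCSP(http.FileServer(http.Dir("."))))
}
```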
- Move the population of the feed's entries into a new function, to make
  `BuildFeed` easier to understand and to better separate concerns from
  implementation details
- Use `sort+compact` instead of `compact+sort` to remove duplicates
- Change `if !a { a = } if !a {a = }` constructs into `if !a { a = ; if !a {a = }}`.
  This reduces the number of comparisons, and also improves control-flow
  readability a tad (see the sketch after this list).
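A sketch of that last transformation; the type and field names are made up, the shape is what matters:

```go
package main

import "fmt"

type feedInfo struct{ Title, SiteURL string }

func main() {
	feed := feedInfo{SiteURL: "https://example.org"}
	parsed := feedInfo{}

	// Before:
	//   if feed.Title == "" { feed.Title = parsed.Title }
	//   if feed.Title == "" { feed.Title = feed.SiteURL }
	// The second comparison always runs, even when the first branch
	// already assigned a value.

	// After: the fallback check runs only when the first default was
	// also empty, saving a comparison and making the chain explicit.
	if feed.Title == "" {
		feed.Title = parsed.Title
		if feed.Title == "" {
			feed.Title = feed.SiteURL
		}
	}
	fmt.Println(feed.Title) // https://example.org
}
```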
Use a sort+compact construct instead of doing it by hand with a hashmap. The
time complexity is now O(n log n + n) instead of O(n), and the space
complexity around O(log n) instead of O(n + uniq(n)), but it shouldn't matter
anyway, since `removeDuplicates` is only called to deduplicate tags.
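A minimal sketch of the construct, assuming Go 1.21+ for the standard-library slices package:

```go
package main

import (
	"fmt"
	"slices"
)

// removeDuplicates sorts the tags, then collapses adjacent duplicates.
// slices.Compact only removes consecutive equal elements, which is why
// the sort has to come first.
func removeDuplicates(tags []string) []string {
	slices.Sort(tags)
	return slices.Compact(tags)
}

func main() {
	fmt.Println(removeDuplicates([]string{"b", "a", "b"})) // [a b]
}
```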
- Simplify a switch-case by moving a common condition above it.
- Remove a superfluous error-check: `strconv.ParseInt` returns `0` when passed
an empty string.
- Inline some one-line functions
- Transform a free-standing function into a method
- Massively simplify `removeClickbait`
- Use a proper constant instead of a magic number in `applyFuncOnTextContent`
  (see the sketch after this list)
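As an illustration of that last point, assuming the magic number was an HTML node type (the actual constant in the commit may differ):

```go
package main

import (
	"fmt"

	"golang.org/x/net/html"
)

func main() {
	node := &html.Node{Type: html.TextNode, Data: "hello"}

	// Before: if node.Type == 1 { ... } — a bare magic number.
	// After: compare against the named constant from x/net/html.
	if node.Type == html.TextNode {
		fmt.Println(node.Data)
	}
}
```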
No need to compile them once for matching on the URL,
once per tag, once per title, once per author, … compiling a single time is
enough. It also simplifies error handling: while regexp compilation can fail,
matching can't.
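A sketch of the compile-once pattern; the filter pattern and field values are illustrative, not the commit's actual code:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Compile the user-supplied pattern once; this is the only step
	// that can fail.
	re, err := regexp.Compile(`spam|sponsored`)
	if err != nil {
		fmt.Println("invalid filter:", err)
		return
	}

	// Reuse the compiled regexp for every field; matching cannot fail.
	for _, field := range []string{"https://example.org/post", "Sponsored content", "alice"} {
		fmt.Println(field, "->", re.MatchString(field))
	}
}
```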