* 'main' of github.com:msfjarvis/hugo-social-metadata:
layouts: add rel="canonical" link to each post
hugo: mark theme as a module
theme: update branch
* 'master' of github.com:vividvilla/ezhil: (56 commits)
adding base tag to header to fix static file linking
render code blocks as inline-block
Remove .DS_Store file
Include front matter in home and list pages if provided
chore: add doc for summary length config
replace .RawContent with .Summary - looks way better
fix: disable custom js from sample config
feat: default style for table
fix: don't prepend host url to external js if it starts with http:// or https://
fix: disable disqus if Site.DisqusShortname is not set
fix: update demo pygmentes style
Disable Google Analytics when hugo server
[Fix] Link to home in head partial.
feat: load custom CSS and JS
fix: anchor text color for dark theme
chore: add gitignore
fix: readability fixes
fix: change layout for tags
fix: readability fix for small screen devices
fix: anchor tag overflow on chrome bug
...
Signed-off-by: Harsh Shandilya <me@msfjarvis.dev>
- Backwards compatible with previous implementation
- Consumers can define their own favicon in a partial, depending on their
  preference of render/source
Previously, all posts had to show a date. If the `date:` attribute was removed from the md file, then something like "0000 Jan 01" would be shown as a default.
This is used for static pages that are not part of the blog structure, like a separate "about" or "contact" page.
- When clicking on the about logo in the main menu, the site was not redirecting to about page
- This was happening because the about url in the _config.yml was set to '/'
--> Fix that by defining the about page url.
Signed-off-by: prateekpunetha <prateekpunetha@gmail.com>
Prettier has been added as a dev dependency along with a few other tools
that run prettier on staged files before committing them. This prevents
any file from being committed that hasn't gone through the project's
formatting. Check the prettier.config.js file for details on how we
format files.
This commit includes a few adjustments to the layout of a post in order
to accommodate an optional ToC. The layout adjustments have been done in
such a way as to minimize the visual differences between a post with a
ToC and a post without one.
- Load minified code (3kB vs 30kB)
- Add prism autoloader so that we get highlighting for languages we
use.
- Hardcode exact prism version. Perhaps this is controversial, but it
saves a redirect for each request, shaving off 30ms on a fast
connection.
.Summary retains all formatting of posts for the paginator which looks much nicer than .RawContent. The length of the .Summary (in words, not characters) can be set via the config variable summaryLength.
* social/master:
Append BaseURL to all social image links
README: Document including hugo-social-metadata in themes
Signed-off-by: Harsh Shandilya <msfjarvis@gmail.com>
* cloak/master:
Update README.md
Add checks for address without @-symbol
Ensure spaces around span tag (#3)
Add parameter to display a text instead of the e-mail address
Update usage documentation
Add "class" parameter
Fix protocol scheme
Backport ideas by @mxmehl
Update README.md
Fix link
Added Awesome badge
Create .gitignore file
Add files to repository
Initial commit
Signed-off-by: Harsh Shandilya <msfjarvis@gmail.com>
This isn't needed to actually use Hugo highlighting, the fenced code blocks are automatically interpreted and
rendered at build-time.
This reverts commit 7e7477fdd3.
This allows having standalone pages at the top level without them showing up in the blog post list
Signed-off-by: Harsh Shandilya <msfjarvis@gmail.com>
* theme/master:
add gitbook svg
Added Stackoverflow icon
Cursor color customization
keybase
favicons color
Set container flex-basis to auto
Signed-off-by: Harsh Shandilya <msfjarvis@gmail.com>
We assume the PNG MIME type because, from my understanding, PNG is the main format
for favicons nowadays (even if they have an "ico" extension). Further
configuration options are needed to allow for specifying different MIME types.
* 'master' of github.com:rhazdon/hugo-theme-hello-friend-ng: (129 commits)
Update LICENSE.md
Update LICENSE.md
fix typo
added additional font-display configuration
Added font-display: auto to _fonts.scss
Add pre scrollbar custom like hermit
Small fix on name of language
sass: make logo font monospaced
icons: add Telegram icon
sass: fix h1 position
Add translations and support for pt-br.
Set correct theme color after page is loaded
Better multilingual support: absURL -> absLangURL
Fix menu items are not clickable on smart devices sometimes
Added spanish support for flag and translation
switching to RelPermalink instead of Permalink for JS and CSS files
Adding media query for reduced motion to turn off animation in that case
Add main tag in homepage
Rename css in pipeline
Fix assets generation path
...
Signed-off-by: Harsh Shandilya <msfjarvis@gmail.com>
Add a Telegram icon to be displayed in the social section. Although Feather
Icons doesn't have a Telegram icon, I believe this is a good
replacement.
Signed-off-by: André Almeida <andrealmeid@riseup.net>
Assets were generated into `resources/_gen/js/js/*` and `resources/_gen/scss/scss/*`. Now the files are in the correct folder --> no double nesting anymore.
Previously there was no H1 at all, and the title was written out as an H2. Ideally the H1 would be the main subject of the page (the post title seems appropriate for that), would be the first heading, and would be the only H1 on the page.
* Ubuntu and Ubuntu Mono fonts were being uppercased when combined
with a font-feature-setting called 'case'.
* Set the font feature settings for <code> tags to 'normal' so
that the Ubuntu Mono font is no longer uppercased
* Put back the 'case' font-feature setting for the body
Posts in the index page will be automatically summarized to their first 70
words, or until a user-defined <!--more--> divider. When summarized, the
"Read More" button also appears with a RelPermalink to the full post.
https://gohugo.io/content-management/summaries/
Using the old call was producing the following Warning:
WARNING: Page's Now is deprecated and will be removed in a future
release. Use now (the template func).
Variables under .Site.Params are accessed with all lower-case identifiers. Hugo won't find them with an upper-case first letter, regardless of how they're formatted in the site `config` file.
A minimal blog theme built for [Hugo](https://gohugo.io/) 🍜
## What this theme is
- An about page and a blog. No more. No less.
- Blog posts can be tagged
- You can view all blog posts that have a specific tag by going to /tags/:tag-name
## Archetypes
You can create a new blog post page by going to the root of your project and typing:
```
hugo new blog/post.md
```
Where `post.md` is the name of your new post.
## Configuration
There are a few configuration parameters you can add in your `config.toml` to customize the theme:
```toml
# config.toml
# values listed here are default values
[params]
name = "Codex"
description = "A minimal blog theme for hugo."
twitter = "hugo-theme-codex"
github = "jakewies/hugo-theme-codex"
```
1. `name`: This is the heading on the `/about` page
2. `description`: This is the subheading on the `/about` page
3. `twitter`: Your Twitter handle without the @ symbol (optional)
4. `github`: Your GitHub handle without the @ symbol (optional)
## Overriding / Customizing
Right now the way to customize the theme is not very user-friendly; that is the first thing to work on. If you get curious, just hop into the theme directory and go exploring through the code. What's going on isn't too complicated.
Source code for my website at [msfjarvis.dev](https://msfjarvis.dev). It's built with [Hugo](https://github.com/gohugoio/hugo) and deployed continuously to Netlify.
I'm an Android and Kotlin developer with over 5 years of experience building apps that have scaled up to hundreds of thousands of daily active users, and design systems for some of the largest startups in India.
### Work
I currently work at [Dyte] as an SDK Tooling engineer, primarily focusing on developer experience for the mobile team.
You can find the latest version of [my resume here].
### Projects
A few ideas that I've gone ahead and built in my spare time. You can support them if you like by [donating here].
- [Android Password Store]: A password manager for Android aiming to be fully compatible with the [pass] format.
- [Claw]: A read-only Android client for [lobste.rs], written entirely in [Jetpack Compose].
- [healthchecks-rs]: A Rust library for interacting with [healthchecks.io] and a couple CLI tools that utilise it.
- [linkleaner]: Telegram bot to automatically improve link previews.
### Contact
You can find me on [Mastodon]; I have a Twitter but I no longer use it, for well-understood reasons. I'd be happy to write back if you'd like to send me an [email]!
The Android Password Store Authors built the Android Password Store app as an Open Source app. This application is provided by The Android Password Store Authors at no cost and is intended for use as is.
This page is used to inform visitors regarding our policies with the collection, use, and disclosure of Personal Information if anyone decides to use our Service.
The terms used in this Privacy Policy have the same meanings as in our Terms and Conditions, which is accessible at Android Password Store unless otherwise defined in this Privacy Policy.
## Information Collection and Use
We collect absolutely no information about users through Android Password Store. Your privacy is respected and enforced.
## Log Data
We want to inform you that whenever you use our Service, in the case of an error in the app we collect data and information (through the Google Play Store, if available on your device) on your phone called Log Data. This Log Data may include information such as your device Internet Protocol (“IP”) address, device name, operating system version, the configuration of the app when utilizing our Service, the time and date of your use of the Service, and other statistics.
## Cookies
Cookies are files with a small amount of data that are commonly used as anonymous unique identifiers. These are sent to your browser from the websites that you visit and are stored on your device's internal memory.
This Service does not use these “cookies” explicitly. However, the app may use third party code and libraries that use “cookies” to collect information and improve their services. You have the option to either accept or refuse these cookies and know when a cookie is being sent to your device. If you choose to refuse our cookies, you may not be able to use some portions of this Service.
## Service Providers
We do not employ any third-party services that will collect your data; all operations on Android Password Store are offline and completely anonymous.
## Links to Other Sites
This Service may contain links to other sites. If you click on a third-party link, you will be directed to that site. Note that these external sites are not operated by us. Therefore, we strongly advise you to review the Privacy Policy of these websites. We have no control over and assume no responsibility for the content, privacy policies, or practices of any third-party sites or services.
## Changes to This Privacy Policy
We may update our Privacy Policy from time to time. Thus, you are advised to review this page periodically for any changes. We will notify you of any changes by posting the new Privacy Policy on this page. These changes are effective immediately after they are posted on this page.
## Contact Us
If you have any questions or suggestions about our Privacy Policy, do not hesitate to contact us.
summary = "The history of my Minecraft adventures as told by my screenshots folder"
slug = "a-tour-of-my-screenshots-folder"
tags = ["april-cools", "minecraft"]
title = "A tour of my screenshots folder"
+++
## Preface
> This is a post for the [April Cools Club](https://aprilcools.club) which encourages people to break away from the typical cringiness of April Fools and do things you don't normally do.
To set the stage, every screenshot you will see going forward is gonna be Minecraft. I just love the game and I have had such fun with it for the past 5 years that it feels remiss to not share every so often (which I do these days at [@msfjarvis@mstdn.games](https://mstdn.games/@msfjarvis)). These are gonna be in order from oldest to newest, and I'll try to annotate each screenshot with dates, alt text and the relevant anecdote as I remember them but honestly a bunch of this is just goofy shit I happened to capture.
## Just a bridge, really
{{<figuresrc="starter-base-bridge.webp"alt="A cinematic shot of a tiny wooden bridge spanning two cliff sides on either side of the frame. The bridge has evenly spaced poles on either side of it with a shroomlight to illuminate the entire thing. The scene is set in the night time and the Aurora Borealis is visible in the background."title="Date taken: August 2, 2022"loading="lazy">}}
This bridge is in a Minecraft world that [Sasikanth](https://sasikanth.dev/me/) and I started in the second half of 2022, and built entirely by him next to our starter base. After Sasi kinda moved on from playing on the server (as Minecraft players inevitably do, myself included) I copied the world and started using it as my singleplayer world and I still play on it to this day.
## An unlikely friendship
{{<figuresrc="an-unlikely-friendship.webp"alt="A blacksmith villager and a creeper standing right next to each other in the night. The villager is facing the creeper while it looks off into the distance, towards the right side of the camera."title="Date taken: August 13, 2022"loading="lazy">}}
I don't think I really remember where this is from, but if I had to guess it was the village Sasikanth and I discovered and promptly laid ruin to, which today happens to be my full-time base.
## The start of the storage room
{{<figuresrc="the-start-of-the-storage-room.webp"alt="An incomplete rectangular arrangement of double chests with item frames on them, with a mess of shulker boxes in between and me standing on top of them"title="Date taken: August 20, 2022"loading="lazy">}}
After commandeering the aforementioned village I decided to lay roots next door, and this is basically the start of my storage room. The basic design is still the same, but it has, like, walls and stuff now.
## I am a dwarf, and I'm digging a hole
{{<figuresrc="diggy-diggy-hole.webp"alt="A top down shot of my Minecraft character standing next to a one chunk big hole straight down to bedrock"title="Date taken: December 29, 2022"loading="lazy">}}
Honestly not much to say, I was in a bit of a slump with my mental health and decided the best use of my mushy brain was to dig down a whole chunk to eventually build a slime farm.
## Did I drain this Ocean Monument or did it drain me?
{{<figuresrc="draining-ocean-monument-part-1.webp"alt="Cinematic night time shot of my character standing on a wall of sand on the close left side of the screen while an Ocean Monument takes up the rest of the bottom half"title="Date taken: February 27, 2023"loading="lazy">}}
I didn't play much for a month or two so I decided to pick up a somewhat involved project to get me back in the swing of things, which happened to be a Guardian farm for the prismarine family of blocks. I'll let the other screenshots paint the picture, but suffice it to say I had underestimated the scope of this 😬
{{<figuresrc="draining-ocean-monument-part-2.webp"alt="Overhead world map shot of the Ocean monument with approximately 30% of it drained. There is a perimeter of sand around it as well as some evenly spaced walls running across the screen to section off slices to be drained."title="Date taken: March 7, 2023"loading="lazy">}}
{{<figuresrc="draining-ocean-monument-part-3.webp"alt="The same setup described before but with about 55% of the structure drained."title="Date taken: March 12, 2023"loading="lazy">}}
{{<figuresrc="draining-ocean-monument-part-4.webp"alt="The entire structure is now drained, with just the sand perimeter remaining around it"title="Date taken: March 15, 2023"loading="lazy">}}
{{<figuresrc="draining-ocean-monument-part-5.webp"alt="Me standing on top of one of the sand walls, looking inwards to the now completed Guardian farm. It's comprimised of two glass tanks full of water that funnel guardians into a central chamber where they fall and have their drops collected underground"title="Date taken: March 20, 2023"loading="lazy">}}
When this was finally done I genuinely used it like 4 times total; turns out I'm not really a prismarine guy, so that's a couple of weeks I am not getting back.
## Getting real personal with a Warden
{{<figuresrc="meeting-a-warden.webp"alt="A very close over the shoulder shot of me mere inches from a Warden which is staring right into my soul"title="Date taken: March 12, 2023"loading="lazy">}}
To break up the monotony of placing sand for the Guardian farm I paid a visit to a nearby Ancient City and ended up a little too close to a Warden, which did eventually kill me.
## The sea shanty era
{{<figuresrc="tragic-lovers.webp"alt="Two players in a bamboo raft that is positioned on the bow of a shipwreck which is poking out of water"title="Date taken: June 27, 2023"loading="lazy">}}
I took another couple months off and then set up a small server with a handful of friends to mess around on a fresh world with relative newbies to the game. A lot of chaos ensued, but a large chunk of the time I spent on the server ended up being me and [Yash](https://yashgarg.dev/) just boating across oceans looking for anything mildly interesting while in a voice call with our friends. It was fun times, but as always people's interest dwindled and I shut the server down a couple weeks later.
## Excavating the Nether
{{<figuresrc="nether-mining-pre-boom.webp"alt="A map of the Nether at elevation Y=0, showing parallel one block wide tunnels going across chunk borders and filled with unevenly spaced blocks of TNT"title="Date taken: July 11, 2023"loading="lazy">}}
I wanted Ancient Debris for a project and decided to just get a whole load of it at once, which resulted in this set of TNT-filled tunnels. Here's the damage all the TNT did:
{{<figuresrc="nether-mining-post-boom.webp"alt="The same tunnels from above after all the TNT in them was lit. They are now wider, more jagged and a lot more lava-filled"title="Date taken: July 11, 2023"loading="lazy">}}
## My first Sniffers
{{<figuresrc="first-sniffers.webp"alt="A fenced off patch of moss with me standing in the middle and one Sniffer egg on each side"title="Date taken: July 12, 2023"loading="lazy">}}
When the Sniffers were added to the game I just had to get them, and obviously then I created a farm for the seeds they "sniff" up.
{{<figuresrc="sniffer-for-a-sniffer-farm.webp"alt="Elevated shot of a large Sniffer build, the back of it is built with colored glass and you can see some 10 Sniffers inside on a floor of mud blocks"title="Date taken: September 13, 2023"loading="lazy">}}
## The End Ring Project
{{<figuresrc="incomplete-end-island-ring.webp"alt="Top down shot of the main End island, showing a Prismarine ring going through all the end gateways which are illuminated by Prismarine Lights"title="Date taken: March 28, 2024"loading="lazy">}}
I envisioned a continuous ring of Prismarine walkways around all the End gateways as a cool way to get to them rather than just pillaring up, but I biffed the "circle" so many times that it's kinda stalled at the moment.
## All Trimmed Up
{{<figuresrc="obtaining-every-armor-trim.webp"alt="A short hallway of Spruce planks showcasing every single Armor Trim as of Minecraft 1.20.4"title="Date taken: March 16, 2024"loading="lazy">}}
I went on a quest to obtain every single armor trim and enough Netherite to create armor to put them on, which in total probably took 20-odd hours including all the Diamond and Ancient Debris mining as well as actually locating all the trims. My Minecraft closet has more variety than my IRL one which feels like cause for concern.
## Bonus randomness
### The Minecraft x FaZe collab
For some reason Minecraft likes to spawn Nether Fossils that look way too much like the [FaZe Clan] logo and I apparently have a bunch of them screenshotted, so here they are:
{{<figuresrc="fazeup-1.webp"alt="The FaZe Fossil in a soul sand valley"loading="lazy">}}
{{<figuresrc="fazeup-2.webp"alt="Another FaZe Fossil in a soul sand valley"loading="lazy">}}
{{<figuresrc="fazeup-3.webp"alt="Yet Another FaZe Fossil in a soul sand valley"loading="lazy">}}
{{<figuresrc="fazeup-4.webp"alt="Would you believe it? Another FaZe logo in a soul sand valley"loading="lazy">}}
{{<figuresrc="fazeup-5.webp"alt="To break up the monotony, this FaZe logo was in a 'Quartz Flats' custom biome from the Incendium datapack"loading="lazy">}}
### The impossible portal
{{<figuresrc="chunk-pruned-portal.webp"alt="A Nether portal with its left side of Obsidian blocks missing but the portal is still intact"title="Date taken: June 12, 2023"loading="lazy">}}
I manually prune unused chunks before Minecraft updates in order to let them regenerate with the updated terrain and accidentally sliced off this portal as it happened to be on a chunk boundary. It's all the way back at my starter base so it does not see much use, but it's there.
## The end...?
I don't know why I picked this of all the things I could have done for April Cools. However, reliving all the memories of fun times I had with my friends, and even by myself, was worth the pain of going through 800-odd screenshots to find moderately interesting things :)
Hopefully I'll have something cooler for next year, and will actually give myself more than 4 hours to write it up.
## Automatically adding social metadata to Hugo sites
After coming across [this list](https://github.com/budparr/awesome-hugo#theme-components) I realized theme components was a thing so I've extracted my [social metadata commit](https://github.com/msfjarvis/msfjarvis.dev/commit/cc08039a6b4a6b649bdd8710295383d2388c9955) into a separate component for re-use by the community. It's available on GitHub at [msfjarvis/hugo-social-metadata](https://github.com/msfjarvis/hugo-social-metadata). The README goes through the installation steps so here I will simply cover what the component is actually adding. Here's the generated metadata for this very post.
- `og:type` - Allowed values are specified in the OpenGraph protocol's documentation [here](https://ogp.me/#types). I use `website` to reflect the content I serve.
- `twitter:card` - One of `summary`, `summary_large_image`, `app`, or `player`. `summary_large_image` indicates that I want to see a social image as well as the description I provide when this is rendered on Twitter.
- `twitter:site` - Twitter username of the owner of this website.
- `description` - HTML5 tag that describes the content of this page. The content of this can be replicated in `og:description` and `twitter:description` to satisfy Facebook and Twitter respectively.
- `og:url` and `twitter:url` - Permalink to the content that this page is for. You can use this to provide a link with tracking-related metadata to track social origins.
- `og:title` and `twitter:title` - Title of the page as you want it to be shown on social media.
- `twitter:image:src` - Absolute link to an image that will be used in your Twitter card.
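Putting those together, here's a hedged illustration of what the rendered tags can look like; the values below are placeholders rather than this post's actual output:

```html
<meta property="og:type" content="website" />
<meta property="og:title" content="Page title" />
<meta property="og:description" content="Page description" />
<meta property="og:url" content="https://example.com/posts/some-post/" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@username" />
<meta name="twitter:title" content="Page title" />
<meta name="twitter:url" content="https://example.com/posts/some-post/" />
<meta name="twitter:image:src" content="https://example.com/social.webp" />
<meta name="description" content="Page description" />
```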
title = "Android Password Store 1.10.1 patch release"
+++
Hot on the heels of the [v1.10.0](https://github.com/android-password-store/Android-Password-Store/releases/tag/v1.10.0) release we have an incremental bugfix update ready to go!
As mentioned in the [previous release notes](/posts/aps-july-release), the algorithm for handling GPG keys was significantly overhauled and thus had the potential to cause some breakage. Well, it did.
This release includes 3 separate fixes for different bugs around GPG.
- [#959](https://msfjarvis.dev/aps/pr/959) ensures long key IDs are correctly parsed as hex numbers.
- [#960](https://msfjarvis.dev/aps/pr/960) fixes a type problem where we incorrectly used an `Array<Long>` that gets interpreted as a `Serializable`, as opposed to the `Long[]` expected by OpenKeychain (sketched after this list).
- [#958](https://msfjarvis.dev/aps/pr/958) reintroduces the key selection flow, adding it as a fallback for when no key has been entered into the `.gpg-id` file. This notably helps users who generate stores within the app.
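For the curious, here is a minimal Kotlin sketch of the type distinction behind #960; the extra name is illustrative, not OpenKeychain's actual constant:

```kotlin
import android.content.Intent

// Kotlin's Array<Long> is a boxed Long[], which only matches Intent's
// putExtra(String, Serializable) overload, while LongArray maps to the
// primitive long[] overload that the receiving side reads back.
fun attachKeyIds(intent: Intent, keyIds: List<Long>) {
    // Broken: boxed Long[] falls through to the Serializable overload.
    // intent.putExtra("key_ids", keyIds.toTypedArray())
    // Fixed: primitive long[] uses the dedicated array overload.
    intent.putExtra("key_ids", keyIds.toLongArray())
}
```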
The release is going up on the [Play Store](https://play.google.com/store/apps/details?id=dev.msfjarvis.aps) over the next few hours; [F-Droid](https://f-droid.org/packages/dev.msfjarvis.aps/) builds will be delayed until our patch [shifting F-Droid to the free flavor](https://gitlab.com/fdroid/fdroiddata/-/merge_requests/7141) is merged.
title = "Android Password Store 1.10.2 patch release"
+++
Exactly one week after the [previous patch release](/posts/aps-1.10.1-release), we have another small release fixing a few bugs that were deemed too high-priority for our usual release cadence.
List of the patches included in this release:
- [#985](https://github.com/android-password-store/Android-Password-Store/pull/985) fixes a couple of crashes originating in the new SMS OTP autofill feature that completely broke it.
- [#982](https://github.com/android-password-store/Android-Password-Store/pull/982) ensures that the 'Add TOTP' button only shows when it's needed.
- [#969](https://github.com/android-password-store/Android-Password-Store/pull/969) improves support for pass entries that only contain TOTP URIs, and no password.
This release has been uploaded to the Play Store and should reach users in a few hours. F-Droid is [yet to merge](https://gitlab.com/fdroid/fdroiddata/-/merge_requests/7141) our MR to support the free flavor we've created for them, so just like the previous two releases in the 1.10.x generation, this too shall not be available on their store just yet.
Continuing this new tradition, here are the detailed release notes for the [v1.11.0](https://github.com/android-password-store/Android-Password-Store/releases/tag/v1.11.0) build of Android Password Store that is going out right now on the Play Store and to F-Droid in the coming days. The overall focus of this release has been to improve UX and resolve bugs. Regular feature development has already resumed for next month's release where we'll be bringing [Android Keystore](https://source.android.com/security/keystore) backed SSH key generation as well as a rewritten OpenKeychain integration for SSH connections.
# New features
## One URL field to rule them all
Previously you'd have to set the URL to your repository across multiple fields like username, server, repository name and whatnot. Annoying! These things make sense to us as developers, but users should not have to deal with all that complexity when all they want to do is enter a single URL. We've received numerous bug reports over time as a result of people misunderstanding and ultimately misconfiguring things when exposed to this hellscape. Thanks to some _amazing_ work from Fabian, we now have a single URL field for users to fill in.
![Single URL field in repository information](/uploads/aps-august-release-single-url-field.webp)
## Custom branch support
A long-requested feature ([from 2017](https://msfjarvis.dev/aps/issue/298)!) has been the ability to change the default branch that APS uses. It was previously hard-coded to `master`, which was an issue for people who don't use that term or who keep separate stores on separate branches of their repository and would like to be able to switch easily. Now you can set the branch while cloning or make the change by setting it in the git server config screen, then using the 'Hard reset to remote branch' option in Git utils to switch to it.
## XkPasswd generator improvements
We made a number of UI improvements in this area for the last series, and for this release the original contributor [glowinthedark](https://github.com/glowinthedark) has returned to add the ability to append extra symbols and numbers to the password. Sometimes you'll see sites that require that each password have at least 1 symbol and 1 number to agree with some arbitrary logic's idea of a 'secure' password, and while it can be done manually, automatic is just better :)
![XkPasswd generator with the new symbol/number append option](/uploads/aps-august-release-xkpasswd.webp)
To add 1 symbol and 1 number to the end of a password, input `sd` and press generate. Each instance of `s` means one symbol, and `d` means one digit. These can be combined in any order and in any amount to create passwords conforming to any arbitrary snake-oil check. Remember, in passwords, length is king!
## Improved subdirectory key support
In the last major release we added support for [per-directory keys](/posts/aps-july-release/#proper-support-for-per-directory-keys). Building upon this, we now have support for also setting the key for a subdirectory when creating it.
![Create folder dialog with key selection checkbox](/uploads/aps-august-release-subdir-key-support.webp)
When selected, you will be prompted to select a key from OpenKeychain that will then be written into `your-new-directory/.gpg-id` which makes it compatible with all `pass` compliant apps.
# Bugfixes
## Detect missing OpenKeychain properly instead of crashing
Many, many people reported being unable to edit/create passwords and the app abruptly crashing. This is pretty bad UX, and we've now fixed it. Users will be prompted to install OpenKeychain, and once you install it and return to Password Store, the app will pick up from where you left off and continue the operation. Pretty neat, even if I say so myself :)
A couple of regressions resulted in cloning to external storage being completely broken. This has now been fixed, along with a workaround for a possible freezing scenario during deletion of existing files from the selected directory. We've also improved the UX around cloning to external storage to be more straightforward and reliable.
## Creating nested directories
Previously, attempting to create directories like `directory1/subdirectory` would fail if `directory1` didn't already exist. This has now been fixed.
# Misc changes
## UI/UX tweaks
We're constantly working towards a better UI for APS, and to that end we've made some more improvements in this release. The password list now has dividers between individual items, and the parent path that was previously only shown on files is now also shown on directories. We hope this will help reduce ambiguity in results when searching, for example when you have a `github.com` subdirectory in both `work` and `personal` categories and need to find the right one quickly.
A longstanding to-do has been addressed as well, where the user will now be notified after a push operation if there was nothing to be pushed. Previously this would just do nothing which wasn't very intuitive.
We've completely rewritten the Git operation code to use a simpler progress UI and cleaner patterns which made a lot of these improvements possible.
## Disabling keyboard copy by default
The default behaviour of automatically copying to the clipboard was both a bit insecure on most devices (w.r.t. unfettered clipboard access before Android Q) and counterproductive for some use cases. In light of this, we've flipped the default for clipboard copy to off. Existing users will not have their settings changed.
# Conclusion
There are more smaller improvements peppered around. We're constantly making improvements and adding new features, and welcome all constructive feedback through [Gitter](https://gitter.im/android-password-store/public) or [GitHub issues](https://github.com/android-password-store/Android-Password-Store/issues).
Lastly, Android Password Store development thrives on your donations. You can sponsor the project on [Open Collective](https://opencollective.com/Android-Password-Store), or me directly through GitHub Sponsors by clicking [here](https://github.com/sponsors/msfjarvis?o=esc). GitHub Sponsors on Tier 2 and above get expedited triage times and priority on issues. You can now also buy features, faster support with issues as well as quicker bugfixes through our [xs:code](https://xscode.com/msfjarvis/Android-Password-Store) page.
As promised, here are detailed release notes for the [v1.10.0](https://github.com/android-password-store/Android-Password-Store/releases/tag/v1.10.0) build of Android Password Store that is going out right now on the Play Store and to F-Droid in the coming days. This is a massive one even compared to our previous v1.9.0 major release, which was our largest release when it went out. Let's dive into the changes!
## New features
### TOTP support
I [removed support for HOTP and TOTP secrets](https://msfjarvis.dev/aps/pr/806) back in v1.9.0 due to multiple reasons, a) it was blocking important refactoring efforts, b) it had zero test coverage, and c) none of the maintainers used it. Play Store reviews swiftly reminded us that people did use the feature even in its wonky state, and demanded its return. I stuck to our decision as maintainers for a while, but active members of the pass community like [erayd](https://github.com/erayd) (who happens to be the maintainer for [browserpass](https://github.com/browserpass)!) were able to convince us otherwise and provided good, actionable feedback allowing us to [bring back TOTP](https://msfjarvis.dev/aps/pr/890) support into APS, better than ever before.
The new implementation is backed by a solid suite of tests and contains new features like the ability to import TOTP URIs using QR codes, Autofill them into webpages, and extract OTPs from SMSes (not available on F-Droid due to GMS dependencies for SMS monitoring).
### Support for ED25519/ECDSA keys
With our ongoing efforts to switch over from the dated [Jsch](http://www.jcraft.com/jsch/) SSH library to the more up-to-date and maintained [SSHJ](https://github.com/hierynomus/sshj), we now fully support ED25519 and ECDSA keys! You no longer need to rely on RSA to authenticate from your phone to your Git host :)
In a future release, we'll be bringing more improvements to this area including generating and storing SSH keys in the [Android Keystore](https://source.android.com/security/keystore/) for enhanced security as well as support for fallback authentication.
### Proper support for per-directory keys
[pass](https://www.passwordstore.org/) has a neat feature where it allows you to use a separate GPG key for a subdirectory, such as for sharing passwords across a team. It achieves this by looking for a `.gpg-id` file starting from the current directory, up to the root of the store. The first file it finds is what it uses as the key for the GPG operations.
```shell
$ tree -a store
store
├── .gpg-id           <-- contains the key ABCDE01234
└── subdirectory1
    └── .gpg-id       <-- contains the key FGHIJ56789
```
In this directory structure, `pass generate subdirectory1/example.com` will use the `FGHIJ56789` key, and `pass generate example.com` will use `ABCDE01234`.
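A minimal Kotlin sketch of that lookup (not APS's actual implementation) could look like this:

```kotlin
import java.io.File

// Walk upwards from the entry's directory to the store root and return the
// first .gpg-id file encountered, mirroring the pass CLI's behaviour.
fun findGpgId(storeRoot: File, startDir: File): File? {
    var dir: File? = startDir
    while (dir != null) {
        val candidate = File(dir, ".gpg-id")
        if (candidate.isFile) return candidate
        if (dir == storeRoot) break // never look above the store root
        dir = dir.parentFile
    }
    return null
}
```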
Previously, Password Store would only correctly handle decryption in this situation, and fail to select the right key for encrypting. The workaround for this was to manually select the key from settings that you wished to use, before creating a password. That's pretty stupid, and we're sorry you had to do that earlier. Now, Password Store uses an algorithm similar to the `pass` CLI to find the correct `.gpg-id` file and read the key from it. GnuPG is more 'forgiving', if you will, in what type of key values it can work with so there's a slim chance that your current workflow might now be broken. If this happens, please immediately either file an issue over on the [GitHub repository](https://msfjarvis.dev/aps) or email us at [aps@msfjarvis.dev](mailto:aps@msfjarvis.dev) with as much detail as you can and we'll resolve it ASAP.
## Bugfixes
### Better protection against invalid filename changes
Over the past few releases we've been hard at work improving the password edit flow, making it more accessible and 'obvious' to users while simultaneously preventing any hidden footguns from souring the experience. We received a bug report about [file renaming](https://msfjarvis.dev/aps/issue/928) having unexpected behavior that caused destructive actions in the store, and in response we [now have better safeguards against this](https://msfjarvis.dev/aps/pr/929) and have improved the UI to make things more clear to users.
### Export passwords asynchronously
Previously the password export would run on the main thread, potentially causing the app to completely freeze and throw a 'Password Store is not responding' error. This has been rectified, and the export now occurs in an entirely separate process.
### UI fixes
A bunch of UI feedback was provided to us after the last major release and we've worked to address it in this one. Long file/folder names now correctly wrap across lines, and the error UI for wrong password/passphrase is now aesthetically correct [[PR](https://msfjarvis.dev/aps/pr/892)].
### QoL improvements
We've been aggressively refactoring the codebase to use modern APIs like [ActivityResultContracts](https://msfjarvis.dev/aps/pr/910) and making large scale architectural changes to our old code in efforts to improve maintainability in the future. We also have work-in-progress rewrites of the [Git commands pipeline](https://msfjarvis.dev/aps/pr/865) and incoming support for [fallback authentication](https://msfjarvis.dev/aps/pr/825).
## General changes and improvements
### New icon and color scheme
Right off the bat, you will notice a brand new icon for Password Store. This was created for us by [Radek Błędowski](https://twitter.com/RKBDI), go check him out!
![New icon](/uploads/aps_banner.webp)
To complement the new icon, we've also updated our color scheme to better suit this new branding.
### Simplified XkPasswd implementation
While revisiting our UI during the icon change, we realised that the alternate XkPasswd password generator option we introduced back in v1.6.0 was a tad too complicated to use with a lot more knobs and switches than necessary. This has been fixed, and we hope that it's now at a level of accessibility that allows more users to try it out.
### Improvements to biometric lock transition and password list UI
The biometric authentication UI flow has been updated to show the authentication dialog over a transparent screen, before starting the app upon success. We've also retouched the password list to remove the leading icons, as we have been consistently receiving numerous comments about them being unnecessary and a bit ugly. In v1.4.0 we introduced child counts and iconographic hints to directories, and we feel they are more than sufficient to communicate the difference between them and password files. We welcome all feedback about these changes at [me@msfjarvis.dev](mailto:me@msfjarvis.dev).
## In conclusion
There are a lot more changes in this release than those included in this post, which you can check out [here](https://github.com/android-password-store/Android-Password-Store/milestone/10). We're constantly at work improving APS and all constructive feedback helps us create a better experience for users and ourselves, so please keep it coming (over email, if it's a suggestion. Play Store reviews are not good for back-and-forth communication).
Lastly, Android Password Store development thrives on your donations. You can sponsor the project on [Open Collective](https://opencollective.com/Android-Password-Store), or me directly through GitHub Sponsors by clicking [here](https://github.com/sponsors/msfjarvis?o=esc). GitHub Sponsors on Tier 2 and above get expedited triage times and priority on issues :)
We're back with yet another release! As I shared earlier this month, this is going to be our last release for a while. There's a lot of work left to be done, and we're simply not a big enough team to have these larger changes be done separately from our main development. We'll still be doing bugfix releases if and when required, so please do file bug reports as and when you encounter issues.
## New features
### GPG key selection added to onboarding
Creating a new store from the app previously created an unusable store, because we never configured a GPG key in the `.gpg-id` file. This has now been remedied in two ways: empty `.gpg-id` files are correctly handled as invalid and included in our quickfix solution, and creating a new store will now request you to select a key and then write it into the `.gpg-id` file. Here's what the key selection screen looks like:
![GPG key selection screen from the APS October release](/uploads/aps-october-release-gpg-key-selection.webp)
### Allow configuring an HTTPS proxy
Before we close the gates on our regularly scheduled releases, our focus has been to address most longstanding issues and one of the major ones there has been [Proxy support](https://github.com/android-password-store/Android-Password-Store/issues/163). This has now been added, and can be accessed from the settings screen. Unfortunately, there are still a few caveats with this current implementation that may or may not change in a future patch release:
- No SOCKS5 support
- Relatively unhelpful error messages when proxy connection fails
### Add option to automatically sync repository
~~This too, has been a [consistent request](https://github.com/android-password-store/Android-Password-Store/issues/277) in the past. While our implementation does not exactly match what was requested, we feel it's good enough to be shipped. You now have the option to sync your repository on every launch to ensure things are always up-to-date when you get in the app.~~
Due to multiple bugs, this feature has been rolled back in [v1.13.1](https://github.com/android-password-store/Android-Password-Store/releases/tag/v1.13.1).
<!--![App launch screen showing the repository being synced](/uploads/aps-october-release-syncing-repository.webp)-->
## Fixes
### Improved error messaging
For a large set of connection related errors, the failure message would simply be 'Invalid remote: origin'. That is exactly as unhelpful as one might think, and now we try harder to extract the actual, more meaningful error message.
### Use Git's default user and email when none are configured
We don't force users to set a name and email before they make any changes requiring Git commits, but somewhere in the last couple releases we regressed our behavior around this. Rather than the `root <root@localhost>` committer, we were incorrectly using empty strings resulting in all commits being authored by ` <>`. This has now been resolved, and your commit history will now be adorned by `root@localhost` once more (but seriously, just set your name and email already).
### Improvements around phishing detection UX
APS has had comprehensive phishing detection built into our Autofill since day one. Our phishing-resistant search will not show your `google.com` passwords when you try to fill into `goggle.com`, and if the signature of an application changes after you first filled a password into it, we will warn you about the change. There were a couple issues with the way this was happening.
First, the phishing detection UI was a bit complicated, and also had some unreadable, black-on-dark text. Since this was never reported to us, I believe none of our users are being phished by their apps which is great news :) Regardless, it is now fixed.
Secondly, some complexity in how Android's Autofill APIs work resulted in the "no, I'm not being phished, accept this new signature" case not working correctly. This caused the user to be continually shown the phishing detection prompt until they force closed the target app and started it again. That's cumbersome, so we've fixed it now. Cheers to Fabian for his stellar work as always!
### Conclusion
As you may notice, this is a bit of a small release by our standards. Fabian's been busy with his Ph.D. (!!) and the new job he's starting soon (!!), and Aditya and I have been busy with our day jobs as well. This doesn't spell doom for the project (yet), but your financial contributions over on [GitHub Sponsors](https://github.com/sponsors/msfjarvis) and [OpenCollective](https://opencollective.com/Android-Password-Store) are now more important than ever to sustain the project during this time, via bountied issues and simply compensating the current crop of developers for their time.
title = "Android Password Store September release"
toc = true
+++
Continuing with this new-ish tradition we have going, here are the detailed release notes for the [v1.12.0](https://github.com/Android-Password-Store/android-password-store/releases/tag/v1.12.0) release.
> Multiple important announcements at the end of the page, make sure to read the whole thing!
## New features
### Extend Autofill support to more browsers
[Devin J. Pohly](https://github.com/djpohly) and [Rounak Dutta](https://github.com/rounakdatta) collectively contributed support for 3 new Chromium-based browsers: [Bromite](https://www.bromite.org/), [Ungoogled Chromium](https://git.droidware.info/wchen342/ungoogled-chromium-android) and [Kiwi](https://kiwibrowser.com/).
### Allow sorting by recently used
This feature was requested [a while ago](https://msfjarvis.dev/aps/issue/535) and was [implemented by Alex Molinares](https://msfjarvis.dev/aps/pr/1031) early in the cycle. The database that keeps track of the recently used passwords is always active, so if and when you switch to this sorting mode you'll see everything already sorted based on your old usage patterns. Neat!
### Add ability to view Git commit log
Another, [even older](https://msfjarvis.dev/aps/issue/284) feature request has finally been addressed. This too, [came from an external contributor](https://msfjarvis.dev/aps/pr/1056) and was one of the best pull requests I have ever seen. It's a great feature, and I thoroughly enjoyed the entire process of its inclusion.
### SSH key generation and handling improvements
The old SSH key generation has been [scrapped and rewritten](https://msfjarvis.dev/aps/pr/1070) to use a set of safer cryptographic curve options that strike a balance between being widely supported and very secure. The [wiki page](https://github.com/android-password-store/Android-Password-Store/wiki/Generate-SSH-Key) has been updated for these changes with information on how we're securing access to the actual SSH keys, like storing the key file in the Android Keystore and requiring screen lock authentication before the key can be used.
### Fallback authentication for SSH
SSH servers are often configured to have multiple authentication methods, where you first attempt to authenticate with private keys and if that fails, fall back to passwords. This wasn't previously supported in APS, which would quit after the first failure. We've changed that to now offer the option of entering a password if the server is configured to fall back to it.
### Rewritten and redesigned onboarding flow
In a multi-step refactoring process, the initial flow of setting up the app has been completely revamped. The internals were completely overhauled to improve stability, weed out some gnarly hacks, and make the whole thing easier to test and understand. Maintainer [Aditya Wasan](https://github.com/Skrilltrax) did a fabulous job giving the [UI a facelift](https://msfjarvis.dev/aps/pr/1099). It's real pretty now ✨
### Show hidden folders now also shows hidden files
Our old 'Show hidden folders' feature has now been simplified to show _all_ hidden files and folders in the repository. It is intended to make it easier to perform trivial maintenance tasks that would normally require access to a PC.
## Bugfixes
### SSH connection problems with Bitbucket
In our last major release, we included a change to [re-use SSH connections](https://msfjarvis.dev/aps/pr/1012) to speed up Git operations. This had an unfortunate side effect: Bitbucket users were unable to use SSH to connect to their repositories. Atlassian has been [aware of this problem](https://community.atlassian.com/t5/Bitbucket-questions/Can-t-repo-sync-anymore/qaq-p/354231) for quite some time now and did nothing about it, so we now include a [helpful message and an internal workaround](https://msfjarvis.dev/aps/pr/1093) when this particular type of error is encountered.
### Symlink support
While still potentially finicky, we're now confident that this is ready to be shipped to all users without the risk of crashes.
### Assorted UX improvements
As always, there are a handful of Quality of Life changes to make the app more enjoyable to use:
- When retrying password authentication, the option to see what you're typing would be obscured by the error icon for wrong password. This has been remedied, and the error state will now be cleared as soon as you enter anything into the password field.
- Authentication modes will now be dynamically hidden and shown based on the URL's scheme so you're aware of what methods you have for authentication for any given remote repository.
- Since decryption can sometimes take a couple of seconds due to how OpenKeychain works, we now hide the action buttons at the top of the screen until the decrypt operation has completed, as using the buttons before that can leave the app in an odd state.
- Users will be prompted if they need to provide a username in their URLs. For example, if your repository is at `https://github.com/john.doe/passwords`, you will have to change the URL to `https://john.doe@github.com/john.doe/passwords` for HTTPS authentication to work.
- If it appears that an SSH URL contains a custom port but does not specify the `ssh://` scheme, the user will be prompted to accept a quickfix that adds it for them.
- Pressing the save button is no longer necessary to save changes to authentication mode.
- TOTP values might sometimes be outdated because we always waited 30 seconds to generate a new one. Now the app will calculate the time left before the first generated value goes stale, generate a new one once it does, and then resume the 30-second cycle.
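The timing fix in that last item boils down to a simple calculation; here's a hypothetical Kotlin sketch, assuming the standard 30-second TOTP period:

```kotlin
// Instead of refreshing on a fixed 30-second timer, compute how far into the
// current TOTP period we are and schedule the first refresh for the moment
// that period actually ends, then fall back to the regular cycle.
fun millisUntilNextTotpPeriod(periodSeconds: Long = 30): Long {
    val periodMillis = periodSeconds * 1000
    return periodMillis - (System.currentTimeMillis() % periodMillis)
}
```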
There's definitely more fixes here, but we ended up rewriting, breaking and fixing so many things for this release that it's hard to tell what was actually broken in the previous release and what is just us fixing regressions during refactoring. We've been busy :)
## Important announcements
### Autofill parser is now a standalone library!
Our excellent Autofill capabilities are now bundled as a separate Android library and can be used by other password managers to improve their Autofill experiences. Detailed documentation will be coming over the next few days, keep an eye out [here](https://github.com/android-password-store/Android-Password-Store/tree/develop/autofill-parser) if it's something you're interested in.
### RFC for removal of Git support in external repos
Based on the issues raised in the repository and the support emails I've received, the maintainers have come to the conclusion that nearly all users who choose to store their pass repositories in their device storage or external SD card as opposed to the app's private, hidden directory are not users of Git and rely on solutions like Syncthing and Nextcloud to keep the repository in sync with their other devices.
As such, we are now in the process of removing Git support from these repositories. We've carefully evaluated how we want to do this, and have started with removing the ability to clone repositories to public storage in this release. If this doesn't blow up in our faces, we will be completing the transition in v1.13.0. If you believe the change adversely affects your usage of the app, we wanna know! Drop a comment on [GitHub](https://msfjarvis.dev/aps/issue/1118) and we will do our best to either propose an alternative for your use case or entirely scrap our plans if we discover that our initial inferences were misguided.
title = "Backing up your content from Google Photos"
+++
Google Photos has established itself as one of the most popular photo storage services, and like a typical Google service, it makes it impressively difficult to get your data back out of it :D
There are many good reasons why you'd want to archive your pictures outside of Google Photos: having an extra backup never hurts, maybe you want an offline copy for reasons, or you just want to get your stuff out so you can switch away from Google Photos entirely.
### How to archive your images from Google Photos
1. You can use [Takeout], except it **always** strips the EXIF metadata of your images, and often generates incomplete archives. Losing EXIF metadata is a deal-breaker, because you can no longer organize images automatically based on properties like date, location, and camera type.
2. You can download directly from [photos.google.com] which preserves metadata, but is embarrassingly manual and basically impossible to use if you're trying to archive a few years of history.
So, what's the solution?
### gphotos-cdp
[gphotos-cdp] is a tool that automates the nearly-perfect method 2 above. It does so by using the [Chrome DevTools Protocol] to drive an instance of the Google Chrome browser, emulating all the manual actions you'd take as a human to ensure you get copies of your pictures with all the EXIF metadata retained.
### Setting up gphotos-cdp
> Disclaimer: I've only tested this on Linux. This _should_ be doable on other platforms, but it's not relevant to my needs so I will not be investigating that.
Ideally you'd want to run this tool on a schedule on a NAS or a server to keep archiving images automatically as they get added to your Google Photos. I personally run this inside a hosted VM on a daily schedule.
For gphotos-cdp to run in a non-interactive manner, it requires your browser data directory with your Google login cookies. You can easily create this with the following command:
```bash
google-chrome \
--user-data-dir=gphotos-cdp \
--no-first-run \
--password-store=basic \
--use-mock-keychain \
https://photos.google.com/
```
This will launch Google Chrome with a brand new profile. Log in to Photos, and then close the browser. Optionally, re-run the command to ensure that you do not need to log in again.
> The flags passed to google-chrome are extracted from the default set of parameters used by gphotos-cdp. I wish I could explain why each flag is necessary, but all I know is that it does the trick. I got them from [this GitHub comment] on the issue tracker for gphotos-cdp.
Once done, you'll have a `gphotos-cdp` directory that you'll need to move to the `/tmp` directory of whichever machine you wish to run gphotos-cdp on.
gphotos-cdp is written in [Golang], so you'll need to install that first. Once done, run the following command to install the latest version of gphotos-cdp:
```bash
go install github.com/perkeep/gphotos-cdp@latest
```
Then you can go ahead and start using gphotos-cdp, as shown below:
```bash
# go install puts binaries in ~/go/bin by default.
#   -v        -> enable verbose logging
#   -dev      -> enable dev mode, which always uses /tmp/gphotos-cdp as the profile directory
#   -headless -> run Chrome in headless mode so it works on servers and such
#   -dldir    -> download everything to ~/photos
~/go/bin/gphotos-cdp -v -dev -headless -dldir ~/photos
```
The automation techniques used are not completely reliable and can often fail. You'll want to implement some kind of retry-on-failure logic to ensure this is run a few times every day.
### Monitoring
With anything built on such a brittle foundation, it's useful to be able to constantly monitor that things are working as they should.
Using [healthchecks.io] you can easily set up alerts that notify you of failures running the tool or unintentional gaps in the schedule you run gphotos-cdp on. I use my [healthchecks-monitor] CLI in a [cron] job to run gphotos-cdp every day, and healthchecks.io notifies me via Telegram when it fails. The script running in cron looks roughly like the sketch below, with healthchecks.io's documented curl pings standing in for the exact healthchecks-monitor invocation, which isn't reproduced here:
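```bash
#!/usr/bin/env bash
# Reconstructed sketch: the check UUID is a placeholder, and healthchecks.io's
# documented curl pings stand in for the healthchecks-monitor CLI.
set -uo pipefail

PING_URL="https://hc-ping.com/your-check-uuid"

# Signal the start of the run so gaps in the schedule are detectable.
curl -fsS -m 10 --retry 3 "${PING_URL}/start" > /dev/null

# gphotos-cdp can fail transiently, so retry a few times before giving up.
for attempt in 1 2 3; do
  if ~/go/bin/gphotos-cdp -dev -headless -dldir ~/photos; then
    # Success ping; the check stays green.
    curl -fsS -m 10 --retry 3 "${PING_URL}" > /dev/null
    exit 0
  fi
  echo "gphotos-cdp failed (attempt ${attempt}), retrying..." >&2
done

# All attempts failed; report it so the Telegram alert fires.
curl -fsS -m 10 --retry 3 "${PING_URL}/fail" > /dev/null
exit 1
```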
As is evident, it's not an easy task to automatically archive your pictures from Google Photos. The setup is tedious and prone to breakage when any authentication-related change happens, such as accidentally logging out the "device" being used by gphotos-cdp or changing your password, in which case you will need to recreate the `gphotos-cdp` directory with Chrome.
Also, the technique in this post could stop working at any time if Google chooses to break it. That being said, gphotos-cdp was last updated in 2020 and still continues to function as-is, so there is some hope that it will keep working for a while yet.
Hopefully this setup causes you minimal grief and allows you to back up your precious memories without relying only on Google :)
Rust has supported producing statically linked binaries since [RFC #1721] which proposed the `target-feature=+crt-static` flag to statically link the platform C library into the final binary. This was initially only supported for Windows MSVC and the MUSL C library. While MUSL works for _most_ people, it
has many problems by virtue of being a work in progress, such as [unpredictable performance] and many unimplemented features that programs tend to assume are present because glibc is ubiquitous. In light of these concerns, support was added to Rust in 2019 to be able to [statically link against glibc].
Unfortunately, if you try to directly use it with `RUSTFLAGS='-C target-feature=+crt-static' cargo build` there is a good chance you'll run into an error similar to this:
```
cannot produce proc-macro for `async-trait v0.1.51` as the target `x86_64-unknown-linux-gnu` does not support these crate types
```
This is a bit of a head scratcher, because the target (your host machine) _definitely_ supports proc-macro crates. Turns out, even Rust contributors [were confused by this]. The "fix" is to pass `--target` explicitly. The reason appears to be a quirk in cargo: when `--target` is provided, `RUSTFLAGS` is applied only to the target artifacts, but without it the flags are also applied to host artifacts such as proc-macros, which cannot be built against a statically linked C runtime, producing the error we see. More details are available in [Rust issue #78210].
Therefore, the correct way to build a statically linked glibc executable for an x86_64 machine is this:
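```bash
RUSTFLAGS='-C target-feature=+crt-static' cargo build --target x86_64-unknown-linux-gnu
```

If you'd rather not remember to export `RUSTFLAGS` every time, the same flag can live in `.cargo/config.toml` under a `[target.x86_64-unknown-linux-gnu]` section as `rustflags = ["-C", "target-feature=+crt-static"]`.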
You may be unable to statically link your binary even after all this, due to dependencies that _mandate_ dynamic linking. In some cases this is avoidable, such as by using [rustls] in place of OpenSSL for cryptography and [hyper] in place of bindings to cURL for HTTP; in others, not so much. Thanks to the convention of native-linking crates using the `-sys` suffix in their name, it is fairly simple to find out whether your build has dependencies that dynamically link to system libraries. Using `cargo`'s native `tree` subcommand and `grep`ing (or [ripgrep]ing for me), you can locate native dependencies. Running `cargo tree | rg -- -sys` against [androidx-release-watcher]'s `v4.1.0` release gives us this:
```bash
$ cargo tree | rg -- -sys
│ │ │ │ ├── curl-sys v0.4.45+curl-7.78.0
│ │ │ │ │ ├── libnghttp2-sys v0.1.6+1.43.0
│ │ │ │ │ ├── libz-sys v1.1.3
│ │ │ │ │ └── openssl-sys v0.9.66
│ │ │ │ ├── openssl-sys v0.9.66 (*)
│ │ │ ├── curl-sys v0.4.45+curl-7.78.0 (*)
│ └── web-sys v0.3.53
│ ├── js-sys v0.3.53
```
This indicates that curl, zlib, openssl, and libnghttp2, as well as a bunch of WASM-related things, are being dynamically linked into my executable. To resolve this, I looked at the build features exposed by [surf] and found that it selects the `"curl_client"` feature by default, which can be turned off and replaced with `"h1-client-rustls"`, which uses an HTTP client backed by [rustls] and [async-std] with no dynamically linked libraries. Enabling [this build feature] removed all `-sys` dependencies from [androidx-release-watcher], allowing me to build static executables of it.
title = "Converting Gradle convention plugins to binary plugins"
socialImage = "uploads/gradle-social.webp"
+++
### Introduction
Gradle's [convention plugins] are a powerful feature that allow creating simple, reusable Gradle plugins that can be used across your multi-module projects to ensure all modules of a certain type are configured the same way. As an example, if you want to enforce that none of your Android library projects contain a `BuildConfig` class then the convention plugin for it could look something like this:
> `com.example.android-library.gradle.kts`
>
> ```kotlin
> plugins {
>   id("com.android.library")
> }
>
> android {
>   buildFeatures {
>     buildConfig = false
>   }
> }
> ```
Then in your modules, you can use this plugin like so:
> `library-module/build.gradle.kts`
>
> ```kotlin
> plugins {
>   id("com.example.android-library")
> }
> ```
## Setting up convention plugins in your project
Gradle's official sample linked above mentions `buildSrc` as the location for your convention plugins. I'm inclined to disagree: `buildSrc` has historically had issues with IDE support, and its special status within Gradle's project handling means any change within `buildSrc` invalidates caches for your **entire** project, resulting in incredible amounts of time lost during incremental builds.
The solution to all of these problems is [composite builds], and [Josef Raska has a fantastic article] that thoroughly explains the shortcomings of `buildSrc` and how composite builds solve them.
A full explainer on the topic is slightly out of scope for this post, but I can wholeheartedly endorse Jendrik Johannes' [idiomatic-gradle] repository as an example of setting up the Gradle build of a real-world project while leveraging features introduced in recent versions of Gradle. I highly recommend also checking out their 'Understanding Gradle' [video series].
## Why would you want to make binary plugins out of convention plugins
First, let's answer this: what is a binary plugin?
A Gradle plugin that is resolved as a dependency rather than compiled from source is a binary plugin. Binary plugins are cool because the next best thing after a cached compilation task is one that doesn't exist in the first place.
For most use cases, convention plugins will need to be updated very infrequently. This means that having each developer execute the plugin build as part of their development process is needlessly wasteful, and we can instead just distribute them as maven dependencies.
This also makes it significantly easier to share convention plugins between projects without resorting to historically painful solutions like Git submodules or just straight up copy-pasting.
## Publishing your convention plugins
To their credit, Gradle supports this ability very well and you can actually publish all plugins within a build/project with minimal configuration. The changes required to publish [Android Password Store]'s convention plugins for Android are:
> `build-logic/android-plugins/build.gradle.kts`
>
> ```diff
> -plugins { `kotlin-dsl` }
> +plugins {
> +  `kotlin-dsl`
> +  id("maven-publish")
> +}
> +
> +group = "com.github.android-password-store"
> +
> +version = "1.0.0"
> ```
After that you can run `gradle -p build-logic publishToMavenLocal` and it will Just Work:tm:. You can configure additional publishing repositories in a similar fashion to how you'd do it for a library project.
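For example, pointing the publication at an internal repository could look like this (the URL is hypothetical, and `credentials(PasswordCredentials::class)` makes Gradle look up `internalUsername`/`internalPassword` properties based on the repository name):

```kotlin
// build-logic/android-plugins/build.gradle.kts
publishing {
  repositories {
    maven {
      name = "internal"
      url = uri("https://maven.example.com/releases")
      credentials(PasswordCredentials::class)
    }
  }
}
```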
If like me you need to publish these to [Maven Central], you'll need slightly more setup since it enforces multiple security and publishing related best practices. Here's how I use [gradle-maven-publish-plugin] to configure the same (`gradle.properties` changes omitted for brevity, the GitHub repository explains what you need):
> `build-logic/settings.gradle.kts`
>
> ```diff
> +pluginManagement {
> +  repositories {
> +    mavenCentral()
> +    gradlePluginPortal()
> +  }
> +  plugins {
> +    id("com.vanniktech.maven.publish.base") version "0.19.0"
> +    id("com.github.android-password-store.published-android-library") version "1.0.0"
> +    id("com.github.android-password-store.kotlin-android") version "1.0.0"
> +    id("com.github.android-password-store.kotlin-library") version "1.0.0"
> +    id("com.github.android-password-store.psl-plugin") version "1.0.0"
> +  }
> +}
> ```
However, this fails because the `kotlin-android` and `kotlin-library` plugins resolve to the same binary JAR, which encompasses all plugins from the `build-logic/kotlin-plugins` module and results in a classpath conflict. To better understand how this resolution works, check out the docs on [plugin markers].
The way to resolve this problem is to define the plugin versions in your `settings.gradle.kts` file, where these classpath conflicts will be resolved automatically by Gradle:
> `settings.gradle.kts`
>
> ```diff
> @@ -14,6 +14,25 @@ pluginManagement {
>      mavenCentral()
>      gradlePluginPortal()
>    }
> +  plugins {
> +    id("com.github.android-password-store.kotlin-android") version "1.0.0"
> +    id("com.github.android-password-store.kotlin-library") version "1.0.0"
> +    id("com.github.android-password-store.psl-plugin") version "1.0.0"
> +    id("com.github.android-password-store.published-android-library") version "1.0.0"
> +  }
>  }
> ```
And you're off to the races!
## Closing notes
This post was motivated by my goal of sharing a common set of Gradle configurations across my projects such as [Android Password Store] and [Claw], which maintain a nearly identical set of convention plugins that I manually copy-paste back and forth. I've extracted the `build-logic` subproject of APS to a separate [aps-build-logic] repository, set it up for standalone development, and configured publishing support. My goal is to supplement this with a continuous deployment workflow where an automatic version bump + release happens after each commit to the main branch, after which I can migrate my projects to it.
title = "Creating a continuously deploying static statuspage with GitHub"
+++
A status page is essentially a web page that reports the health and uptime of an organization's various online services. [GitHub](https://www.githubstatus.com/) has one and so does [Cloudflare](https://www.cloudflarestatus.com/). Most of these are powered by an [Atlassian](https://www.atlassian.com/) product called [Statuspage](https://www.statuspage.io/) but it's not always the [cheapest solution](https://www.statuspage.io/pricing?tab=public).
For hobbyist projects without any real budget (like this site and the couple of others I run), Statuspage pricing is often too steep. To this effect, many open source projects exist that let you generate your own status page through an application that handles continuously updating it. That works too! But what if you don't have a separate server to run the status page service on? Hosting it on the same server as the applications it's supposed to track is obviously not an option. Enter [static_status](https://github.com/Cyclenerd/static_status).
[static_status](https://github.com/Cyclenerd/static_status) is a bash script that as its name suggests, generates a fully static webpage that functions as a status page for the services you ask it to monitor. You can check out how it looks at [status.msfjarvis.dev](https://status.msfjarvis.dev). Pretty neat, right?
[status.msfjarvis.dev](https://status.msfjarvis.dev) is powered by a GitHub Action running every 30 minutes to deploy the generated static page to GitHub Pages and barely takes any time to set up. Here's how it works.
- The first thing you want to do is set up the `CNAME` record that will let GitHub Pages serve your status page from a subdomain of your website. Head to your domain registrar (Cloudflare for me) and add a CNAME record pointing to `<your github username>.github.io`.
![CNAME record for status.msfjarvis.dev at Cloudflare](/uploads/statuspage_cname_record.webp)
- Next, create a GitHub repository that will hold the Actions workflow for generating your status page as well as the actual status page itself. This repo can be private, as the generated sites are always publicly available.
![GitHub repository for our status page](/uploads/statuspage_github_repo.webp)
- Clone this empty repository. Now create a file named `CNAME` and enter your custom domain into it. This lets GitHub Pages know where to redirect users if they ever access the site through your `.github.io` subdomain. Commit this file.
![CNAME file in repository](/uploads/statuspage_cname_file.webp)
- A quick glance at the static_status README will inform you about the `config` file that it uses to configure itself, and `status_hostname_list.txt`, which lists all the services it needs to check. `config` is easy to understand and modify, so I'll skip it (you can diff [mine](https://github.com/msfjarvis/status.msfjarvis.dev/blob/master/config) against upstream and use the changes to educate yourself should the need arise). This part should be very straightforward, though I did encounter a problem where using `ping` as the detection mechanism caused sites to falsely report as down. Switching to `curl` resolved the issue.
- Finally, time to add the automation to this whole thing. Any CI solution with a cron/schedule option will work; I used GitHub Actions, but you don't have to. I set a schedule of once every 30 minutes; depending on what platform you're using for CD and what services you're hosting, you might want to choose a shorter period. Here's my GitHub Actions workflow.
```yaml
name: "Update status page"
on:
schedule:
- cron: "*/30 * ** *"
push:
jobs:
update-status-page:
runs-on: ubuntu-latest
steps:
- name: Install traceroute
run: sudo apt-get install traceroute -y
- name: Checkout config
uses: actions/checkout@v2
- name: Checkout static_status
uses: actions/checkout@v2
with:
repository: Cyclenerd/static_status
path: static_status
clean: false
- name: Generate status page
run: |
mkdir -p static_status/status/
cp config static_status/
cp status_hostname_list.txt static_status/
cp CNAME static_status/status/
cd static_status/
rm status_maintenance_text.txt
./status.sh
- name: Deploy
uses: peaceiris/actions-gh-pages@v2
env:
PERSONAL_TOKEN: ${{ secrets.PERSONAL_TOKEN }}
PUBLISH_BRANCH: gh-pages
PUBLISH_DIR: ./static_status/status
SCRIPT_MODE: true
with:
username: "MSF-Jarvis"
useremail: "msfjarvis+github_alt@gmail.com"
```
This installs `traceroute` which is needed by static_status, checks out my repository, clones static_status to the static_status directory, copies over config and hostname list to that folder, places the `CNAME` file in the static_status/status directory, runs the script to generate the status page, and finally publishes the static_status/status folder to the `gh-pages` branch, as my bot account.
The result of all this is a simple and fast status page that can be hosted anywhere by simply copying the single `index.html` over. If you have a separate server to run this off, you can get away with replacing this entire process with a single crontab entry. Being a bash script lets static_status run on essentially any Linux-based platform, so you can even deploy this from a Raspberry Pi with no effort. Hope this helps you create your own status pages!
> Updated on 22 Jan 2020 with some additional comments from [@arunkumar_9t2](https://twitter.com/arunkumar_9t2). Look out for them as block quotes similar to this one.
This is not your average coding tutorial. I'm going to show you how to write actual Dagger code and skip all the scary and off-putting parts about the implementation details of the things we're using and how Dagger does everything under the hood.
With that out of the way, onwards to the actual content. We're going to be building a very simple app that does just one thing, show a Toast with some text depending on whether it was the first run or not. Nothing super fancy here, but with some overkill abstraction I'll hopefully be able to demonstrate a straightforward and understandable use of Dagger.
I've set up a repository at [msfjarvis/dagger-the-easy-way](https://github.com/msfjarvis/dagger-the-easy-way) that shows every logical collection of changes in its own separate commit, along with a PR going from no DI to Dagger so you can browse the changes in bulk as well.
## The mandatory theory
I know what I said, but this is just necessary. Bear with me.
### `Component`
A Component defines an interface that Dagger constructs to know the entry points where dependencies can be injected. It can also hold the component factory that instructs Dagger how to construct said component. A Component _also_ holds the list of modules.
### `Module`
A Module is any logical unit that contributes to Dagger's object graph. In simpler terms, any `class` or `object` that has declarations which tell Dagger how to construct a particular dependency, is annotated with `@Module`.
> _Arun's notes_
>
> Modules should be mentioned first here, as they're the smallest units of a Dagger setup, and Components build upon them. An alternate definition for a module can also be this: if we draw a graph, methods in @Module classes become the nodes and @Component is the holder of those nodes.
## Getting Started
To get started, clone the repository which contains all the useless grunt work already done for you. Use `git clone https://github.com/msfjarvis/dagger-the-easy-way` if you're unfamiliar with branch selection during clone.
The repo in this stage is very bare - it has the usual boilerplate and just one class, `MainActivity`. We're going to make this a bit more interesting shortly.
Switch to the `part-1` branch, which has a bit more in terms of commit history and code. This is what we're going to work with.
## Setting up the object graph
Remember `Component` and `Module`? It's gonna come in handy here.
Start off with [adding the Dagger dependencies](https://github.com/msfjarvis/dagger-the-easy-way/commit/f86208b89cee2c05becd4341e1b209dc2479aa2f), then add an **empty** Component and Module, which we did [here](https://github.com/msfjarvis/dagger-the-easy-way/commit/f1604adb4e99f342b213cefa9fada21efb6f49a2).
```kotlin
@Singleton
@Component(modules = [AppModule::class])
interface AppComponent {
}
@Module
object AppModule {
}
```
What we're doing here is marking our `AppComponent` as a 'singleton', to indicate that it only needs to be constructed _once_ for the lifecycle of the application. We're also annotating it with `@Component` for obvious reasons, and adding our module to it to indicate that they go together. Both are empty right now, but that's going to change soon.
> _Arun's notes_
>
> Annotating with @Singleton is only effective when configured properly. For this to be a singleton, you need to ensure you're creating this only once and that's your responsibility to fulfill. This is part of scoping and is a great topic to be covered in part 2.
If you check `MainActivity`, you'll notice that we're using [SharedPreferences](https://developer.android.com/reference/android/content/SharedPreferences.html). To demonstrate the use of Dagger, I'm going to replace that usage with one provided through Dagger. For that to happen though, Dagger needs to know how to create a `SharedPreferences`. Let's get that going!
```kotlin
@Module
object AppModule {
    @Provides
    @Reusable
    fun provideSharedPrefs(context: Context): SharedPreferences =
        PreferenceManager.getDefaultSharedPreferences(context)
}
```
Breaking this down: `Provides` tells Dagger to bind the return value of the method to the object graph, and `Reusable` tells Dagger that you want to use one copy of this as many times as you can, but it's _okay_ to create a new instance if that's not possible.
If you pay attention to the [commit](https://github.com/msfjarvis/dagger-the-easy-way/commit/f1a60ffaf6f07f8654bde27fbd65bef08c248f4e) for this step, you'll see that we're also adding preferences to the `AppComponent`. This is just one of the many different patterns one can use with Dagger, and I'm using it just for the simplicity. We'll look into another way of doing this for the next part.
## Initializing our component
Now for Dagger to know when to create this graph, it needs to be able to know how to initialize the `Component` we wrote earlier. For this, we'll be adding a factory that constructs the `AppComponent`. Since we need a Context to be able to create `SharedPreferences`, we'll make our factory accept a context parameter.
> _Arun's notes_
>
> Worth noting that the reason we create a factory method accepting a Context, instead of letting Dagger provide one, is that we don't have hold of a Context at compile time. The instance is created by the Android system and handed to us, and we then use the Factory to pass it on to Dagger.
Here's what the finished `AppComponent` looks like with the factory method.
```kotlin
@Singleton
@Component(modules = [AppModule::class])
interface AppComponent {
    @Component.Factory
    interface Factory {
        fun create(@BindsInstance applicationContext: Context): AppComponent
    }

    val preferences: SharedPreferences
}
```
The `BindsInstance` annotation tells Dagger that we'll be providing our own Context and that it does not have to know how to create one.
As the parameter name suggests, we'll be using an application-scoped Context for this, so let's initialize the Component in an Application class. We'll be accessing our dependencies through this initialized component, and since the Application class is always initialized first, this lets us avoid any situation where we try to refer to the component and find that it's null.
Create an Application class, make it extend `android.app.Application`, and add it to the manifest ([Reference commit](https://github.com/msfjarvis/dagger-the-easy-way/commit/25d4dc223bfafd40ac9801e23ca9b09526ed9362)).
Now we'll be adding our component here. Since we'll be accessing it from other classes, we'll make it static. The Application class lives as long as our process does, so we're safe from a life-cycle perspective. Here's the finished `ExampleApplication` class.
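A minimal sketch of what that can look like; the `lateinit var` in a companion object is just one way to shape it:

```kotlin
class ExampleApplication : Application() {
    companion object {
        lateinit var component: AppComponent
            private set
    }

    override fun onCreate() {
        super.onCreate()
        // DaggerAppComponent is generated by Dagger from our AppComponent interface
        component = DaggerAppComponent.factory().create(applicationContext)
    }
}
```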
> Nit: I like to do something like [this](https://github.com/arunkumar9t2/scabbard/blob/004116cf6a548022982c7869d7758725c18991f8/scabbard-sample/src/main/java/dev/arunkumar/scabbard/App.kt#L10). The reason is, since it is a val, it will not be editable and also lazy being lazy means it will cached.
Notice the `DaggerAppComponent` class that did not exist before. This is a Dagger-generated version of our `AppComponent` interface that is suitable for instantiation. This class holds the factory method we created before, and returns an instance of `AppComponent` that lets us access the dependencies we installed into the component. When we initialize our component, Dagger also intelligently creates all the dependencies in our graph. Now all that's left for us is to use the dependencies we declared in our app.
## Injecting dependencies
Head on over to `MainActivity` now. Notice that we initialize a `SharedPreferences` object there, which can be replaced with the one we asked Dagger to create for us. Let's do that!
```diff
 class MainActivity : AppCompatActivity() {
+    private val prefs = ExampleApplication.component.preferences
+
     override fun onCreate(savedInstanceState: Bundle?) {
         super.onCreate(savedInstanceState)
         setContentView(R.layout.activity_main)
-        val prefs: SharedPreferences = PreferenceManager.getDefaultSharedPreferences(this)
```
And that's it. Really. Now you're using Dagger to provide a dependency. It's that simple!
## Conclusion
As you've seen here, using Dagger does not always have to involve complexity. Dagger can be used in projects of any size, of any complexity, and in any fashion that you deem fit. The example above is a very simple use of Dagger, and has scope for further improvement which we'll be looking into.
This is my first time writing about using Dagger, having only [recently started using and liking it](/posts/my-dagger-story/). Please let me know about any parts that were too complex, factually incorrect or just lacking in any way, and I will be more than glad to improve this.
In the next part, we'll be looking into constructor injection, why it's generally better, and how to inject dependencies into classes that we don't own (like activities and fragments) with the help of the `@Inject` annotation. Thanks for reading this far!
Welcome back! In this post I'm taking a bit of detour from my planned schedule to write about **scoping**. We'll _definitely_ cover constructor injection in the next part :)
> All the code from this post is available on GitHub: [msfjarvis/dagger-the-easy-way](https://github.com/msfjarvis/dagger-the-easy-way/commits/part-2)
Dagger 2 provides `@Scope` as a mechanism to handle scoping. Scoping allows you to keep an object instance for the duration of your scope. This means that no matter how many times the object is requested from Dagger, it returns the same instance.
## Default scopes
In the previous tutorial, we looked at _two_ scopes, namely `@Singleton` and `@Reusable`. Singleton does what its name suggests, and "caches" the dependency instance for the lifecycle of the `@Component`, and Reusable tells Dagger that while we'd prefer that a cached instance be used, we're fine if Dagger needs to create another one. The new Dagger 2 [user guide](https://dagger.dev/users-guide) does a pretty good job differentiating between Singleton, Reusable and unscoped dependencies which I'll reproduce here.
```java
// It doesn't matter how many scoopers we use, but don't waste them.
@Reusable
class CoffeeScooper {
  @Inject CoffeeScooper() {}
}

@Module
class CashRegisterModule {
  @Provides
  // DON'T DO THIS! You do care which register you put your cash in.
  // Use a specific scope instead.
  @Reusable
  static CashRegister badIdeaCashRegister() {
    return new CashRegister();
  }
}

// DON'T DO THIS! You really do want a new filter each time, so this
// should be unscoped.
@Reusable
class CoffeeFilter {
  @Inject CoffeeFilter() {}
}
```
## Why do we need scopes
I'll do a small demo to show the difference between unscoped and singleton dependencies, then we'll move on to defining our own scopes.
```kotlin
// AppComponent.kt
data class Counter(val name: String)

@Component(modules = [AppModule::class])
interface AppComponent {
    fun getCounter(): Counter
}

@Module
class AppModule {
    private var index = 0

    @Provides
    fun provideCounter(): Counter {
        index++
        return Counter("Counter $index")
    }
}
```
These dependencies are all unscoped, along with the `AppComponent`. Knowing what we do about unscoped elements in a Dagger graph, predict the output of the following code:
```kotlin
class CounterApplication : Application() {
    private val TAG = "CounterApplication"

    override fun onCreate() {
        super.onCreate()
        val appComponent = DaggerAppComponent.builder()
            .appModule(AppModule())
            .build()
        Log.d(TAG, appComponent.getCounter().name)
        Log.d(TAG, appComponent.getCounter().name)
    }
}
```
Running this on a device will print the following in your logcat
```
D/CounterApplication: Counter 1
D/CounterApplication: Counter 2
```
Totally expected, because unscoped dependencies have no lifecycle in the component, and hence are created every time you ask for one. Let's make them all into Singletons and see how that changes things.
```diff
 data class Counter(val name: String)
+@Singleton
 @Component(modules = [AppModule::class])
 interface AppComponent {
     fun getCounter(): Counter
@@ -12,6 +13,7 @@ class AppModule {
     private var index = 0

     @Provides
+    @Singleton
     fun provideCounter(): Counter {
         index++
         return Counter("Counter $index")
```
Running the same code again, we get
```
D/CounterApplication: Counter 1
D/CounterApplication: Counter 1
```
Notice that we were handed the same instance. This is the power of scoping. It lets us have singletons within the defined scope.
Like Arun mentioned in the [additional notes](/posts/dagger-the-easy-way--part-1/#setting-up-the-object-graph) for the previous article, ensuring a singleton Component stays that way is the user's job. If you initialize the component again within the same scope, the new component instance will have a new set of instances. That is part of why we store our component in the [Application](https://developer.android.com/reference/android/app/Application.html) class: it is the natural singleton for our apps.
## Creating our own scopes
In its most basic form, a scope is an annotation class that itself has two annotations, `@Scope` and `@Retention`. Assuming we follow an MVP architecture (purely for nomenclature purposes, scoping is not necessarily tied to your architecture), let's create a scope for our `CounterPresenter`.
```kotlin
@Scope
@Retention(AnnotationRetention.RUNTIME)
annotation class CounterScreenScope
```
Putting this annotation together with our presenter and our component, we finally get this:
```kotlin
@Scope
@Retention(AnnotationRetention.RUNTIME)
annotation class CounterScreenScope

data class Counter(val name: String)

class CounterPresenter(val counter: Counter)

@Module
class CounterScreenModule {
    @Provides
    @CounterScreenScope
    fun provideCounterPresenter(counter: Counter): CounterPresenter {
        return CounterPresenter(counter)
    }
}

@CounterScreenScope
@Subcomponent(modules = [CounterScreenModule::class])
interface CounterScreenComponent {
    fun inject(activity: MainActivity)
}

@Singleton
@Component(modules = [AppModule::class])
interface AppComponent {
    fun counterScreenComponent(counterScreenModule: CounterScreenModule): CounterScreenComponent
}

@Module
class AppModule {
    private var index = 0

    @Provides
    fun getCounter(): Counter {
        index++
        return Counter("Counter $index")
    }
}
```
Phew, a lot happened there. Let's break it down.
```kotlin
class CounterPresenter(val counter: Counter)
```
This is simply a class that represents our presenter. We don't care much for implementation details here, so the class does nothing.
```kotlin
@Module
class CounterScreenModule {
    @Provides
    @CounterScreenScope
    fun provideCounterPresenter(counter: Counter): CounterPresenter {
        return CounterPresenter(counter)
    }
}
```
`CounterScreenModule` holds the provider method for our presenter. The method is annotated with `@CounterScreenScope` to indicate that we want to scope its lifetime to our screen. Rather than being an `object` like the `AppModule` from the previous part, it's a `class` because we need to instantiate it manually later.
```kotlin
@Singleton
@Component(modules = [AppModule::class])
interface AppComponent {
    fun counterScreenComponent(counterScreenModule: CounterScreenModule): CounterScreenComponent
}
```
To our `AppComponent`, we've simply added a method to provide the `CounterScreenComponent`.
`CounterScreenComponent` is a [Subcomponent](https://dagger.dev/api/latest/dagger/Subcomponent.html). In simple, OOP terms, it's a Component that inherits from another Component. A Subcomponent can only have one parent, and the Subcomponent doesn't get to pick who, much like real life :P
The parent Component is responsible for ensuring that all the dependencies of a Subcomponent, other than those supplied by the Subcomponent's own modules, are available.
## Putting it all together
After setting up our Dagger graph, instantiating everything becomes pretty easy.
```kotlin
class MainActivity : AppCompatActivity() {
    @Inject
    lateinit var presenter: CounterPresenter

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val appComponent = DaggerAppComponent.builder()
            .appModule(AppModule())
            .build()
        val counterScreenComponent = appComponent
            .counterScreenComponent(CounterScreenModule())
        counterScreenComponent.inject(this)
        Log.d(TAG, presenter.counter.name)
    }

    companion object {
        private const val TAG = "MainActivity"
    }
}
```
Thanks to how our graph is laid out, it is very easy to get subcomponent instances from our parent components.
## Alternative initialization
We can also use a `@Subcomponent.Factory` for `CounterScreenComponent` to initialize it in a fashion similar to our `AppComponent` from the previous part. The diff from this change goes something like this:
```diff
 @CounterScreenScope
 @Subcomponent(modules = [CounterScreenModule::class])
 interface CounterScreenComponent {
     fun inject(activity: MainActivity)
+
+    @Subcomponent.Factory
+    interface Factory {
+        fun create(counterScreenModule: CounterScreenModule): CounterScreenComponent
+    }
 }

 @Singleton
 @Component(modules = [AppModule::class])
 interface AppComponent {
-    fun counterScreenComponent(counterScreenModule: CounterScreenModule): CounterScreenComponent
+    val counterScreenComponentFactory: CounterScreenComponent.Factory
 }

 @Module
```
## Closing Notes
That's it for this tutorial! Scoping is a rather complex concept, and it took me a long (really, really long) time to grasp it and put this together. It's perfectly fine to not understand it immediately; take your time, and refer to one of the reference articles I used (listed below) to see if their explanations work better for you. Dagger away!
### References
- [Dagger 2: Scopes and Subcomponents](https://medium.com/tompee/dagger-2-scopes-and-subcomponents-d54d58511781)
summary = "GitHub Actions are awesome! Learn how to use it for continuous delivery of your static sites."
slug = "deploying-hugo-sites-with-github-actions"
socialImage = "uploads/actions_social.webp"
tags = ["static sites"]
title = "Deploying Hugo sites with GitHub Actions"
+++
For the longest time, I have used the [caddy-git] middleware for [caddyserver](https://caddyserver.com) to constantly deploy my [Hugo](https://gohugo.io) site from [GitHub](https://github.com/msfjarvis/msfjarvis.dev).
But this approach had a few problems, notably force pushing (I know, shush) caused the repository to break because the plugin didn't support those. While not frequent, it was annoying enough to seek alternatives.
Enter [GitHub Actions](https://github.com/features/actions).
GitHub's in-built CI/CD solution is quite powerful and easily extensible. I decided to give it a shot and use it for automated deployments.
Now, my use case isn't the most straightforward. I maintain two sites out of the same source repository, one production site with all my published posts, and another with all my drafts enabled so I can check my WIP posts live to find any formatting mistakes I may have overlooked when writing through [Forestry](https://forestry.io) or a text editor. I am also in the habit of creating and fixing my own problems so I prefer self-hosted solutions as and when possible.
## Step 1 - Deployment
The first part of this endeavour involved finding a new way to move static assets to the server. I thought about emulating how [caddy-git] works and using `ssh` to do a pull-and-build on my server itself. Then I found [this](https://github.com/peaceiris/actions-hugo) action that allows me to install `hugo` in the container for the build. That's when I decided to do the building in the Actions pipeline and push built assets using `rsync`.
## Step 2 - Execution
To handle my two-sites-from-one-repo usecase, I set up a build staging -> publish staging -> build prod -> publish prod pipeline.
```yaml
- name: Build staging
  run: hugo --minify -DEFb=https://staging.msfjarvis.dev
```
You can find the `ci/deploy.sh` script [here](https://github.com/msfjarvis/msfjarvis.dev/blob/src/ci/deploy.sh). It's a very basic script that sets up the SSH authentication and rsync's the built site over.
summary = "I've decided to learn Zig, and here's how I'm preparing for it."
slug = "first-steps-with-zig"
tags = ["learn"]
title = "First steps with Zig"
+++
[Zig] is a systems programming language much akin to [Rust] and C, and has been showing up in my feeds a lot as of late. Many Zig programmers have [documented their experience with Zig] as much better than with Rust, which I have been programming in for the last year or so, citing simplicity and ease. I tend to agree that Rust can often be _complex_ to enforce the guarantee of being _correct_, so I set out to finally buy into the promise of Zig and give it a shot.
# Compiler and IDE setup
The [installing Zig] page notes that while the stable releases are fine for evaluating the language, their [stable release cadence] matches LLVM's ~6-month cycle, which means stable releases are often rendered outdated by the fast pace of Zig development.
Since I wanted to stick with using [Nix] to manage my (currently) temporary Zig environment, I went with the stable 0.7.1 release available on nixpkgs.
A quick `nix-shell -p zig` later, I now had access to the Zig compiler.
```shell
➜ nix-shell -p zig
➜ zig version
0.7.1
```
To be able to use VSCode for writing Zig, I also installed the official [zls] language server for Zig. This did get me go-to-declaration support for the standard library, ~~but not syntax highlighting. I'm not sure if that's intended, or a bug with my local setup~~. Syntax highlighting is also present; thanks to Lewis Gaul for suggesting the `tiehuis.zig` extension.
# Learning resources
The Zig team frankly admits that they do not yet have the resources to maintain extensive learning resources, but the Zig community has stepped forward to fill in those gaps. [ziglearn.org] is a great jumping off point for people who prefer to learn language basics directly, and there is a [rustlings] counterpart in [ziglings] for learning by looking at code.
On the official side of things, you get the [standard library reference] as one would expect, as well as a fairly detailed [language reference].
This is in contrast with Rust, which has an officially maintained [book] and maintains [rustlings] as a first-party learning resource. The Rust team is, however, significantly larger and older, so maybe with sufficient funding we'll see Zig be able to devote effort towards this as well.
# Your first program
The `zig` CLI contains commands to generate new projects easily, so let's create a new binary project.
```
➜ zig init-exe
info: Created build.zig
info: Created src/main.zig
info: Next, try `zig build --help` or `zig build run`
```
The `build.zig` file appears to describe to the `zig` CLI how to build this program, and `src/main.zig` is our application code. Here's what `zig init-exe` gives you for a "hello world" program:
```zig
// src/main.zig
const std = @import("std");
pub fn main() anyerror!void {
    std.log.info("All your codebase are belong to us.", .{});
}
```
Cheeky.
This post is just a brief overview of how I went about setting things up for learning Zig. I intend to post more detailed blogs as I progress :)
[zig]: https://ziglang.org
[rust]: https://rust-lang.org
[documented their experience with zig]: https://kevinlynagh.com/rust-zig/
summary = "Everybody probably understands how Cloudflare proxies A/AAAA records, but how it proxies CNAME records is also pretty interesting. Let's dive into how that happens and why it can often break other products that need you to set CNAME records."
slug = "how-cloudflare-proxies-cname-records"
socialImage = "uploads/cf_proxy_social.webp"
tags = ["cloudflare"]
title = "How Cloudflare proxies CNAME records"
+++
As people who've read my previous post would know, I recently started using [Purelymail](https://purelymail.com/) for my email needs (the how and why of it can be found [here](/posts/switching-my-email-to-purelymail/)). I also mentioned there, that Cloudflare's proxy-by-default nature caused Purelymail to not detect my CNAME settings and disabling the proxy did the job. I contacted Purelymail's Scott about this and he eventually pushed a fix out that \*should\* have fixed it, but since he did not have a Cloudflare account, he couldn't verify this exact case.
Well, the fix didn't work.
This made me wonder: _why?_ I trust that Scott is more aware of what he's doing than I am, so the fix must have been legitimate, and something must be special about Cloudflare's handling of this. So I did some testing! (Yes, I still use [dnscontrol](https://stackexchange.github.io/dnscontrol/).)
Here's what a DNS lookup for the proxied record returned:
```
test_domain_with_proxy.msfjarvis.dev. 300 IN A 104.28.14.93
test_domain_with_proxy.msfjarvis.dev. 300 IN A 104.28.15.93
...
```
The proxied CNAME record isn't actually a CNAME after all! Cloudflare creates an A record for it and handles the redirection internally. This makes the CNAME aspect of the record opaque to DNS lookups which in turn trips software like Purelymail's backend. I've reported my findings to Scott and am awaiting his response.
And that's it! Nothing too fancy, just something I found kinda weird.
summary = "Everyone uses Git their way. This is how I do it."
draft = true
slug = "how-i-use-git"
tags = ["developer workflow"]
title = "How I use Git"
+++
Every developer ends up using Git in some way or another these days, whether through the good old CLI or various Git GUI clients or the integrated options in their favorite IDEs. Because there are a myriad ways to use Git, and practically infinite extensibility, everyone internalizes patterns around how they work with Git. A favorite way of merging, a preferred pull strategy, a bunch of shell aliases to save keystrokes, and so on and so forth.
Over time I have also developed habits and patterns, augmented by features present within Git itself as well as external tools that integrate with Git.
This post is going to be an overview of the tools I use with Git, and by extension GitHub, to make my daily workflow more productive.
## `.gitconfig`
The `.gitconfig` file is essentially the backbone of Git customisation. Any Git-specific setting will reside in one of these config files.
> I say _files_, because Git uses a hierarchy of config files to determine which settings take effect. Within a repository, settings are first looked up in `.git/config`, and then in `$HOME/.gitconfig`, which allows having per-repository settings as necessary.
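If you're ever unsure which file a particular setting is coming from, Git can tell you:

```bash
# Lists every effective setting along with the file it was read from
git config --list --show-origin
```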
My `.gitconfig` file can be found in my dotfiles [here][1]. Most of the settings keys are self-explanatory, the rest I'll go over below.
- `core.pager` / `interactive.difffilter` switch my `git diff` and `git add --patch` views to use [diff-so-fancy][2].
![diff-so-fancy rendering the diff of a commit][3]
- `pretty.fixes` adds a ['pretty'][10] format called `fixes` which lists commits in the style that is used by the Linux kernel developers to link to the commits which introduced the bug they're fixing in their current commit. It adds a chain of historical reference to identify common bug patterns that are repeatedly occurring, which can then allow teams to devise ways of avoiding them.
```
➜ git log --pretty=fixes
Fixes: 689a369a3a3d ("Upgrade ConstraintLayout, Material and Timber (#1484)")
Fixes: a82f8dda8607 ("Disable explicit API for tests (#1483)")
Fixes: 70137f31917b ("gradle: switch to our fork of preference testing library (#1481)")
```
- `alias.branches` defines a subcommand that lists all remote branches in order of their last updates. This provides an easy overview of the status of your project's branches.
```
➜ git branches
70 minutes ago fork/develop
70 minutes ago origin/HEAD
70 minutes ago origin/develop
3 days ago origin/release-1.13
5 days ago origin/compose-decrypt-screen
12 days ago fork/migrate-to-kotest
2 weeks ago origin/import-keys
4 weeks ago fork/storage-refactor
8 months ago origin/api_30_support
10 months ago origin/release
```
From this overview you can see that the `develop` branch of my fork is synced with the main APS repo, a release was made 3 days ago, and I was trying out Jetpack Compose 5 days ago for one of the app's screens.
- `log.follow` will track files through renames, relying on Git's [in-built rename detection][11] that it also uses for `git cherry-pick` and `git merge`.
## `hub`

Before GitHub had the [GitHub CLI][4], it had [hub][5]. `hub` wraps `git` and adds a bunch of extra niceties on top, such as being able to do `git clone <repo>` to clone any of your own repositories. The one I still use, though, is `hub sync`, which fast-forwards all local checkouts of remote branches to the latest state on the remote.
{{< asciinema oHZZ68hPpZdcTmiyI9n9k2XRF >}}
## `git-absorb`
[`git-absorb`][6] is a Git extension that mimics Mercurial's [`hg absorb`][7] extension: it inspects the currently staged changes and looks through your recent commits to determine which of them each change should be amended to. More information on this can be found [here][8].
// Add asciinema recording
## git-quickfix
[`git-quickfix`][9] is another Git extension, which allows moving commits to a new branch quickly. The most common use case for it is this: Imagine you're working on a feature branch, and notice a small problem that is unrelated to your current branch. `git-quickfix` would allow you to make a commit on your current branch, then _move it to a new branch_ in just one command.
// Add asciinema recording
## `gh`
[`gh`][4] is GitHub's very own CLI for interacting with their platform. Since my day-to-day work revolves around GitHub, `gh` is extremely helpful in being able to triage issues, raise PRs, view the status of CI jobs and much more. Definitely a must-have if you're a terminal fiend and use GitHub!
The most common question I get when I recommend open source as a launching pad for budding developers is "Where do I start?".
The answer: _anywhere!_
There's a plethora of open source software out there, and not everybody needs to have an encyclopaedic knowledge of the codebase to contribute. You can contribute small things like [fixing dead links in the README](https://github.com/portainer/portainer/commit/173c673d37ea2e4bb82d159b601e60109a435601) to [resolving trivial compilation warnings](https://github.com/mozilla-mobile/fenix/commits/master?author=msfjarvis) to simply [tweaking an issue template](https://github.com/opengapps/opengapps/commits/master/.github/ISSUE_TEMPLATE.md).
The reason I'm linking my own commits is because I want to let people know that the guy helming a [theme engine](https://github.com/substratum) is also out of his element at times and there's no shame in admitting it :)
Thanks to the adoption of specs like [all-contributors](https://allcontributors.org), OSS is more friendly and welcoming than ever. Contribute literally **anything** to a project you use and scale up from there.
Remember: You _will_ make mistakes in the process. Don't give up! There's always a project looking for any kind of help it can get. Start your search at home -- see what apps and desktop software you use that are open source, and whether there's something you'd like to give back to or fix, even if it's driven more by the need to improve your own experience than by goodwill. Linus Torvalds, the creator of Linux, [famously said](https://www.bbc.com/news/technology-18419231) this:
> I do not see open source as some big goody-goody "let's all sing kumbaya around the campfire and make the world a better place". No, open source only really works if everybody is contributing for their own selfish reasons.
And it's true! Most apps I contribute to right now, like [AdAway](https://github.com/AdAway/AdAway) and [Android Password Store](https://github.com/zeapo/Android-Password-Store), began as a manifestation of personal annoyance. I found things to be lacking, and decided to address it. In the end that benefitted both me and the project.
In conclusion, I'd like to reiterate this -- Contributing **anything** is contributing!
P.S. It's okay to be nervous about it. I spent two weeks researching SSL before submitting a simple null check to Google's [conscrypt](https://github.com/google/conscrypt/pull/471) library :P
With all my involvement in OSS development around Android, I come across a lot of new things daily. This blog will hopefully serve as an index of those findings, and an excuse for me to properly research and document them for myself and others.
title: Improving dependency sync speeds for your Gradle project
date: 2024-03-30T21:43:07.031Z
summary: Waiting for Gradle to download dependencies is so 2023.
socialImage: "uploads/gradle-social.webp"
categories:
- gradle
tags:
- gradle
- kotlin-multiplatform
- perf
---
Android developers are intimately familiar with the ritual of staring at your IDE for tens of minutes while Gradle imports a new project before they can start working on it. While not fully avoidable, there are many ways to improve the situation. For small to medium projects, the time spent on this import phase can be largely dominated by dependency downloads.
## Preface
This post is going to assume some things about you and your project, but you should be fine even if these aren't true for you.
- You're somewhat comfortable mucking around with Gradle
- Your project is using Gradle 8.7, the latest as of writing
If you're stuck on a lower version of Gradle, you will hit [this bug] with the code samples in the post. Replacing all calls to `includeGroupAndSubgroups` with `includeGroupByRegex` can let you work around it temporarily (Note the addition of the `.*` at the end):
```diff
- includeGroupAndSubgroups("com.example")
+ includeGroupByRegex("com.example.*")
```
## Obtaining a baseline
To get an idea of how long it actually takes your project to fetch its dependencies, and to establish a baseline to compare improvements against, we can leverage Android Studio's relatively new [Downloads Info] view to see how many network requests are being made and how many of those are failing and slowing down our build. Gradle's `--refresh-dependencies` flag ignores the existing cache of downloaded dependencies and re-downloads them from the remote repositories, which allows us to get consistent results, barring network and disk fluctuations.
In Android Studio, create a new run configuration for Gradle's in-built `dependencies` task that will resolve all configurations and give us a more representative number. The `--refresh-dependencies` flag will force a full re-download to ensure caches do not affect our benchmarks:
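The Downloads info view only populates for builds launched from the IDE, but if you just want the wall-clock numbers, the equivalent terminal invocation would be this (the `:android:` module path comes from the screenshot below; substitute your own):

```bash
./gradlew :android:dependencies --refresh-dependencies
```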
{{<figuresrc="run-configuration.webp"title="The Android Studio Run configuration window configured with the task ':android:dependencies --refresh-dependencies'">}}
When you run this task, you'll see the Build tool window at the bottom get populated with logs of Gradle downloading dependencies and the Downloads info tab will start accumulating the statistics for it.
{{<figuresrc="download-info.webp"title="Download info window showing a list of network requests and their sources while the task is running">}}
## How Gradle fetches your dependencies
The Gradle documentation for [dependency resolution] explains in some depth how the version conflict resolution and caching systems work, but that's pretty jargon-heavy and too much of a detour from what we're really here to do so I'll just Spark Notes™️ it and move on to more fun stuff.
- Gradle requires declaring **repositories** where dependencies are fetched from.
- Dependencies are looked up in each repository, **in declaration order**, until they are found in one.
- Gradle makes a lot of network requests as part of this lookup, and it is in our interest to reduce them.
## A quick attempt at optimisation
In the previous section I started off by mentioning **repositories**, which define _where_ Gradle will look for dependencies. The repositories setup for a typical Android project might look something like this (a representative sketch; the snapshots repository URL in particular is an assumption):
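```kotlin
// settings.gradle.kts
pluginManagement {
  repositories {
    mavenCentral()
    google()
  }
}

dependencyResolutionManagement {
  repositories {
    maven("https://jitpack.io") { name = "JitPack" }
    google()
    mavenCentral()
    // The exact snapshots repository URL here is an assumption
    maven("https://s01.oss.sonatype.org/content/repositories/snapshots/") { name = "SonatypeSnapshots" }
  }
}
```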
This tells Gradle that you want plugin dependencies to be looked up from [Maven Central] and [gMaven], and for all other dependencies to come from [JitPack], [gMaven], [Maven Central], or the [Maven Central snapshots] repository; **in that order**. The first simple tweak you can make is to reorder these based on how many of your dependencies you expect to come from what repository.
For example, in a typical Android build most of your plugin classpath will be dominated by the Android Gradle Plugin (AGP), so you'd want gMaven to come first so that Gradle does not waste time trying to find AGP on Maven Central.
```diff
 pluginManagement {
   repositories {
-    mavenCentral()
     google()
+    mavenCentral()
   }
 }
```
For the `dependencyResolutionManagement` block, JitPack is very likely to only host one or two of the dependencies you require, while most of them will be coming from gMaven and Maven Central, so shifting JitPack to the very end will significantly reduce the number of failed requests.
```diff
 dependencyResolutionManagement {
   repositories {
-    maven("https://jitpack.io") { name = "JitPack" }
     google()
     mavenCentral()
+    maven("https://jitpack.io") { name = "JitPack" }
   }
 }
```
With the minor changes made above we have already significantly improved on our failed requests metric, but why stop at good when we can have _perfect_.
Gradle's repositories APIs also support the notion of specifying the expected "contents" of individual repositories, which tells Gradle what groups of dependencies are supposed to be available in what repositories. This allows it to prevent redundant network requests and significantly boosts sync performance.
These filters can be of two types:
- Declaring that certain artifacts can _only_ be resolved from certain repositories: [`exclusiveContent`]
- Declaring that a repository _only_ contains certain artifacts: [`content`]
The difference is subtle, but should become clearer shortly as we start hacking on our setup.
For our plugins block, we want only gMaven to supply AGP and everything else can come from Maven Central. Here's how to achieve that:
```diff
 // settings.gradle.kts
 pluginManagement {
   repositories {
-    google()
+    exclusiveContent { // First type of filter
+      forRepository { google() } // Specify the repository this applies to
+      filter { // Start specifying what dependencies are *only* found in this repo
+        includeGroupAndSubgroups("androidx")
+        includeGroupAndSubgroups("com.android")
+        includeGroup("com.google.testing.platform")
+      }
+    }
     mavenCentral()
   }
 }
```
For other dependencies that are governed by the `dependencyResolutionManagement` block, the setup is similar. To demonstrate the usage of the second kind of filter, we're introducing an additional constraint: assume the build relies on the [Jetpack Compose Compiler], and we go back and forth between stable and pre-release builds of it. The pre-release builds can only be obtained from [androidx.dev], while the stable builds only exist on [gMaven]. If we tried to use `exclusiveContent` here, it would make Gradle only check one of the declared repositories for the artifact and fail if it doesn't find it there. To allow this fallback, we instead use a `content` filter as follows.
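A sketch of that setup; the androidx.dev repository URL and the Compose Compiler group are assumptions for illustration:

```kotlin
dependencyResolutionManagement {
  repositories {
    google()
    maven("https://androidx.dev/storage/compose-compiler/repository/") {
      name = "ComposeCompilerSnapshots"
      // Declares what this repository contains, without preventing
      // other repositories from supplying the same artifacts
      content { includeGroup("androidx.compose.compiler") }
    }
  }
}
```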
This setup tells Gradle the specific artifacts present in these repositories but does not enforce any restrictions on which repository said artifacts can come from. Now, if I use a pre-release version of the Compose Compiler, Gradle will first try to look it up in [gMaven] and then fall back to the `androidx.dev` repository.
In the above example we also see [JitPack] being mentioned, which we only wish to use for a specific dependency that's unavailable elsewhere. The [`exclusiveContent`] filter is precisely for this use case:
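A sketch, with a hypothetical group for the JitPack-only dependency:

```kotlin
dependencyResolutionManagement {
  repositories {
    exclusiveContent {
      forRepository { maven("https://jitpack.io") { name = "JitPack" } }
      filter {
        // Hypothetical coordinates; only this group will ever be
        // resolved from JitPack, and it will be resolved from nowhere else
        includeGroup("com.github.example")
      }
    }
  }
}
```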
The Sonatype OSS snapshots repository is only intended to be used for snapshot releases of dependencies we'd otherwise source from Maven Central, so we can indicate to Gradle to only search for snapshots in there with a [`mavenContent`] directive:
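Something like this (the snapshots repository URL is again an assumption):

```kotlin
dependencyResolutionManagement {
  repositories {
    maven("https://s01.oss.sonatype.org/content/repositories/snapshots/") {
      name = "SonatypeSnapshots"
      // Only ever consult this repository for -SNAPSHOT versions
      mavenContent { snapshotsOnly() }
    }
  }
}
```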
If you're working with Kotlin Multiplatform, these directions will sadly not cover all the dependencies being fetched during your build. There's a YouTrack issue ([KT-51379]) that you can subscribe to for updates, but in the meantime here are the missing bits:
```kotlin
dependencyResolutionManagement {
  repositories {
    // workaround for https://youtrack.jetbrains.com/issue/KT-51379
    // (a sketch of the commonly shared workaround from the issue thread)
    ivy("https://nodejs.org/dist/") {
      name = "Node Distributions at $url"
      patternLayout { artifact("v[revision]/[artifact](-v[revision]-[classifier]).[ext]") }
      metadataSources { artifact() }
      content { includeModule("org.nodejs", "node") }
    }
    ivy("https://github.com/yarnpkg/yarn/releases/download") {
      name = "Yarn Distributions at $url"
      patternLayout { artifact("v[revision]/[artifact](-v[revision]).[ext]") }
      metadataSources { artifact() }
      content { includeModule("com.yarnpkg", "yarn") }
    }
  }
}
```
If you're not using a JavaScript target in your project, it should be safe to skip the NodeJS and Yarn repositories, but it's probably easier to keep them configured ahead of time in case you adopt JavaScript in the future.
## Conclusion
The exact improvement you can expect will vary depending on how many dependencies you have, how many repositories were previously declared, and in what order, but you should definitely see a consistent, noticeable difference. These are the before and after numbers for a project I optimised for my day job.
### Before
{{<figuresrc="before-fixes.webp"title="The Android Studio Dependency Sync window, showing a total sync duration of 5 minutes and 56 seconds of which 1 minute and 30 seconds went into failed network requests">}}
### After
{{<figuresrc="after-fixes.webp"title="The Android Studio Dependency Sync window, now showing the total sync taking only 3 minutes and 17 seconds with 0 failed requests">}}
Like and subscribe, and hit that notification bell so you don't miss my next post some time within this decade (hopefully).
title = "Integrating comments in Hugo sites with commento"
+++
Disqus is unequivocally the leader when it comes to hosted comments, and it works rather swimmingly with sites of all kinds with minimal hassle. But this ease has a gnarly flipside: [annoying referral links](https://stiobhart.net/2017-02-21-disqusting/) and a [huge bundle size](https://victorzhou.com/blog/replacing-disqus/) that significantly affects page load speeds.
As I was considering adding comments to this blog, I went through these posts and realised that Disqus was not going to be satisfactory, especially after the time and effort I had put into improving bundle sizes and page loading. I started looking into alternatives and shortlisted [Isso](https://posativ.org/isso) and [Commento](https://commento.io/). Going through the Isso documentation and [this post](https://stiobhart.net/2017-02-24-isso-comments/) it was clear that setup was going to be a bit of a chore, and that was the end of it.
Commento is open source just like Isso, but has a cloud-hosted option. I was interested in self-hosting, though, and I was glad to find that Commento delivered very well on that front too. [docker-compose](https://docs.commento.io/installation/self-hosting/on-your-server/docker.html#with-docker-compose) is an officially supported deployment method and I was pleased to see that setup went forward without a problem.
## Integrating with Hugo
The interesting part! Hugo offers a Disqus template internally, but any other comment system's going to need some legwork done. Commento's integration code is just two lines, as you can see below.
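For reference, the stock embed from the Commento docs looks like this; a self-hosted instance serves the same `commento.js` from its own domain:

```html
<script defer src="https://cdn.commento.io/js/commento.js"></script>
<div id="commento"></div>
```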
Hugo offers a powerful tool called [partials](https://gohugo.io/templates/partials/#use-partials-in-your-templates) that allows injecting code into pages from another HTML file. I quickly created a partial with the integration code, scoped out the domain with a variable, and ended up with this.
```html
<div id="commento"></div>
<script defer src="{{ .Site.Params.CommentoURL }}/js/commento.js"></script>
<noscript>Please enable JavaScript to load the comments.</noscript>
```
With this saved as `layouts/partials/commento.html` and `CommentoURL` set in my `config.toml`, I set out to wire this into the posts. Because of a [pre-existing hack](https://github.com/msfjarvis/msfjarvis.dev/commit/5447bb36258934d6a5bc86be99ef91a9eeb9eb17) that I use for linkifying headings, I already had the `single.html` file from my theme copied into `layouts/_default/single.html`. If you don't, copy it over and open it. Add the following lines, removing any mention of Disqus if you find it.
```go
{{ if and .Site.Params.CommentoURL (not .Site.IsServer) -}}
<h2>Comments</h2>
{{ partial "commento.html" . }}
{{- end }}
```
With this, the comments section is only loaded when CommentoURL is defined, and the site is not running in server mode. This allows me to exclude showing comments when using the preview server on [Forestry](https://forestry.io) (highly recommended CMS for Hugo, by far my personal favorite). Since I also have a copy of my site with drafts enabled hosted on a separate subdomain, I had to factor that into the partial as well. Here's what I deploy on my own website.
```go
{{ if and .Site.Params.CommentoURL (and (not .Site.BuildDrafts) (not .Site.IsServer)) -}}
<!-- Rest is identical to the previous -->
```
And that's it! Now you should have a fully functioning comment system on your static sites that does not bloat the bundle size unnecessarily.
P.S. If anybody's interested in having me cover the template language for Hugo (conditionals, loops and the like), put it down in the comments :P
In the [previous post] I documented how I went about setting up my Zig environment, and it's now time to start learning things.
My preferred method of learning new languages is rebuilding an existing project in them, like I did when going from [Python] to [Kotlin] to [Rust]. For Zig, I've elected to rebuild my [healthchecks-rs] library. It's something I use on a day-to-day basis for keeping an eye on my backup jobs, and it would be a great addition to the [healthchecks.io ecosystem].
# Getting the basics down
Among the resources listed on the Zig [getting started] page, I opted to go with [ziglearn.org] for learning the ropes of the language. It is concise yet detailed, and the chapter-wise breakdown makes for great mental "checkpoints", much like the [Rust book].
For this post I'm going through [chapter 1].
# Thoughts™️
I'm going to use this section to jot down my thoughts about Zig, broken down by the sections on ZigLearn. I'll skip the parts that I don't have anything to say on.
## Assignment
The presence of `undefined` is _very_ interesting to me. It appears to be functionally identical to Rust's [Default trait], as shown in this snippet (had to skip to structs for this since I was so curious about it).
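A minimal sketch of what I mean, with a stand-in `Vec2` struct:

```zig
const Vec2 = struct {
    x: f32,
    y: f32,
};

test "assigning undefined" {
    // The memory stays uninitialized until a real value shows up, where
    // Rust's Default trait would have constructed a known default value.
    var point: Vec2 = undefined;
    point = Vec2{ .x = 1.0, .y = 2.0 };
}
```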
I have clearly not gotten very far yet, but initial thoughts here: Rust's trait-based implementation means I can customize the "default" values for my structs, which I'm not seeing in this implicit coercion yet. Guess we'll find out soon whether or not this can be handled explicitly :D
## Arrays
The syntax for array declarations is quite clear and explicit, which I like. Notably, while you can do this:
```zig
const implicitly_sized_array = [_]u8{}; // _ means "infer the size"
```
you cannot use the inferred size in a type annotation:
```zig
const implicitly_sized_array: [_]u8 = .{}; // compile error: [_] is only valid when the size can be inferred from the value
```
Rust also [disallows this](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=f27a1a0b20feebe3e6d0a3417f25ce45), but the error is surprisingly worse than with Zig. Rust's resident diagnostics magician Esteban [assures me](https://twitter.com/ekuber/status/1393566561005314048) this is a regression and is being tracked.
The only problem I encountered here was that I can't figure out how to print an array!
```zig
const implicitly_sized_array = [_]u8{0, 1, 2, 3};
std.debug.print("This is an array: {}\n", .{implicitly_sized_array});
// Outputs: "This is an array: "
```
I found a [PR that overhauls formatting] but nothing there gave me any pointers on why my code doesn't work. Hopefully this'll get cleared up later.
## If
Nothing special here, aside from the early introduction to testing, which is slightly more pleasant than with Rust. I do however have qualms about the test output, which is unnecessarily noisy:
```shell
Test [2/2] test "while with continue expression"... expected 2080, found 10
/nix/store/nhd75c4sr3l9wlaspilkwawx5ixkn74w-zig-0.7.1/lib/zig/std/testing.zig:74:32: 0x206f9a in std.testing.expectEqual (test)
std.debug.panic("expected {}, found {}", .{ expected, actual });
^
/home/msfjarvis/git-repos/zig-playground/src/main.zig:29:24: 0x205abd in test "while with continue expression" (test)
testing.expectEqual(sum, 10);
^
/nix/store/nhd75c4sr3l9wlaspilkwawx5ixkn74w-zig-0.7.1/lib/zig/std/special/test_runner.zig:61:28: 0x22e161 in std.special.main (test)
} else test_fn.func();
^
/nix/store/nhd75c4sr3l9wlaspilkwawx5ixkn74w-zig-0.7.1/lib/zig/std/start.zig:334:37: 0x20749d in std.start.posixCallMainAndExit (test)
const result = root.main() catch |err| {
^
/nix/store/nhd75c4sr3l9wlaspilkwawx5ixkn74w-zig-0.7.1/lib/zig/std/start.zig:162:5: 0x2071d2 in std.start._start (test)
```
## Defer
The `defer` language feature is something I've been curious about ever since seeing it in Go, so I'm excited to discover use-cases for it now that I finally have it available. Through ZigLearn I discovered that `defer` calls can be stacked, executing in LIFO order, and Golang does it in the exact same fashion.
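A quick sketch of the stacking behaviour:

```zig
const std = @import("std");

test "stacked defers" {
    // Defers execute last-in-first-out, so this prints 2 before 1.
    defer std.debug.print("1\n", .{});
    defer std.debug.print("2\n", .{});
}
```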
## Errors
I like how easy it is to define errors, but the syntax feels kinda icky. Having each error enum I declare 'magically' become a property on the `error` keyword doesn't sit right with me :(
```zig
// Declaring a member in an error set makes it addressable as `error.NumericError`.
const MyErrors = error{NumericError};
fn mayError(shouldError: bool) anyerror!u32 {
return if (shouldError)
// This is different from what I'm accustomed to as a user of
// Either/Result type monads.
error.NumericError
else
10;
}
```
## Runtime Safety
Being able to turn off runtime safety features (like bounds checking) in specific blocks is pretty interesting! Not sure if I'll ever have a valid use for it though...
# Conclusion
I like most of what I've seen so far in Zig. Aside from the issues I mentioned above, the lack of a string type is very confusing to me. I've kinda come to expect one everywhere based on my previous experiences with Python, Java, Kotlin, and Rust; but maybe I'll now learn to appreciate how every character on my screen is just numbers :D.
I was very easily distracted today, so I only made it a third of the way in 3 hours for a chapter that is supposed to take 1 hour for the whole thing. Hoping to finish it tomorrow!
[Yesterday's post] was a bit shorter than I planned, since I didn't manage to go through as much of the ZigLearn [chapter 1] as I thought I would. Today we'll be wrapping it up.
# Thoughts™️
Same as yesterday, this section will be a brain dump of what I think of the things I learn today.
## Pointers
Rust made pointers a very friendly concept thanks to the borrow checker and amazing compiler diagnostics, and Zig seems to follow the same path in keeping them straightforward. I'm not the biggest fan of the `variable.*` syntax for dereferencing a pointer, since it breaks my existing muscle memory in a major way, but I'm sure I'll get used to it in no time.
Just like Rust, mutable and immutable pointers are explicitly distinct which is a ✅ in my book.
There wasn't a lot of content on ZigLearn about [many-item pointers] so I'm still not sure I understand any of it. That's probably just me though.
I've known and used Rust's `usize` in my programs, but only after reading about [pointer-sized integers] on ZigLearn did I actually make the connection that the size of a `usize` is that of a pointer. 💡
## Enums
On a syntactic level, Zig enums are closer to Kotlin than to Rust w.r.t. declaring functions in them, which is very nice.
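Something like this reads almost like a Kotlin enum with member functions (`Suit` is just an illustrative example):

```zig
const Suit = enum {
    spades,
    clubs,
    diamonds,
    hearts,

    // Functions live right inside the enum declaration, Kotlin-style.
    pub fn isRed(self: Suit) bool {
        return self == .diamonds or self == .hearts;
    }
};
```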
## Structs
The syntax note from enums applies here as well, with an additional nicety about pointers. Specifically, a struct function that accepts a pointer value will automatically dereference the value inside the function body. This only goes one level deep though, so keep that in mind.
```zig
const Rectangle = struct {
length: i32,
width: i32,
pub fn swap(self: *Rectangle) void {
// No explicit dereferencing needed!
const tmp = self.length;
self.length = self.width;
self.width = tmp;
}
};
```
## Unions
I have never worked with `union`s, but [tagged unions] gave me awful ideas about matching Kotlin's [sealed classes] functionality so I'm looking forward to writing some cursed code :D
## Integer rules and Floats
Zig's type coercion syntax is nicer than Rust's, though the lack of a runtime error concerned me initially. Rust's [TryInto] trait is explicit about the fact that the conversion is fallible, and thus returns a [Result]. Zig, on the other hand, attempts to validate these conversions at compile time.
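A sketch of the failing case (the variable names are mine):

```zig
test "narrowing coercion" {
    var wide: u32 = 300;
    // This fails to compile: a u32 cannot implicitly coerce to the
    // smaller u8, even when the runtime value would have fit.
    var narrow: u8 = wide;
    _ = narrow;
}
```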
This is good, but I haven't yet found a way to force the conversion to go through in instances where I can confirm that the incoming u32 definitely fits in a u8.
## Optionals
Zig's [Optionals] are a very good parallel for Rust's [Option], though Zig provides a lot more syntactic niceties.
The fact that you can use a while loop to capture values until they become null is pretty damn sweet.
```zig
const expect = @import("std").testing.expect;

var numbers_left: u32 = 4;
fn eventuallyNullSequence() ?u32 {
if (numbers_left == 0) return null;
numbers_left -= 1;
return numbers_left;
}
test "while null capture" {
var sum: u32 = 0;
while (eventuallyNullSequence()) |value| {
sum += value;
}
expect(sum == 6); // 3 + 2 + 1
}
```
# Conclusion
Everything following optionals was either uncontroversial or too powerful/low level for my current interests, so I admittedly glossed over some of the gory details.
[Chapter 2] introduces JSON, which we'll need for our eventual healthchecks.io library, so I'm looking forward to it! I do have work tomorrow, so we'll have to see if I can keep up the daily streak :)
Today I'll be getting familiar with common patterns in Zig, like providing explicit allocators, and learning about more of the standard library. This is, of course, [chapter 2] of the ZigLearn.org curriculum.
# Thoughts™️
## Allocators
Rust's standard library performs allocations automatically, as and when necessary, and provides a separate `no_std` mode that drops the standard library from the build and retains only [libcore], which is suitable for bare-metal deployments (more on that in [The Embedded Rust Book]). Zig takes an alternative approach: by convention, every function in the standard library that requires allocations takes an explicit [Allocator], which can either be implemented by the user or picked from the many options available in the standard library itself.
I've never given an _extreme_ amount of thought to the allocations my programs perform, so I'm rather unopinionated on this. That being said, whenever I start working on the healthchecks library I'm definitely going to follow the ecosystem's practices and make allocations explicit for consumers.
## Filesystem
The FS APIs appear to be quite expansive, covering everything that I can think of, and I finally discovered a practical use-case for `defer`! I'm pretty sure I'll end up passing the incorrect [CreateFlags] on more than one occasion, but at least they're documented and discoverable, and I have _some_ memory of doing the same with Python back when I still believed in my country's education system :P
## Formatting
Finally an answer to my "how to format an array" mystery! It's fascinating that formatting also requires allocations, but then Rust does return an owned `String` rather than a `&str` when you format things so that should have been obvious in hindsight.
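The pattern, going off ZigLearn's own example:

```zig
const std = @import("std");
const test_allocator = std.testing.allocator;

test "fmt" {
    // Formatting allocates, so the caller supplies the allocator and
    // owns (and must free) the resulting string.
    const string = try std.fmt.allocPrint(test_allocator, "{d} + {d} = {d}", .{ 9, 10, 19 });
    defer test_allocator.free(string);
    std.testing.expect(std.mem.eql(u8, string, "9 + 10 = 19"));
}
```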
## JSON
The native JSON support looks pretty great, though I am left to wonder how much of [serde]'s flexibility is available here. Guess we'll find out soon enough :)
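For a taste, here's the kind of usage ZigLearn demonstrates (API as of the Zig version I'm on):

```zig
const std = @import("std");

const Place = struct {
    lat: f32,
    long: f32,
};

test "json parse" {
    var stream = std.json.TokenStream.init(
        \\{ "lat": 40.684540, "long": -74.401422 }
    );
    const parsed = try std.json.parse(Place, &stream, .{});
    std.testing.expect(parsed.lat == 40.684540);
}
```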
## Random numbers and crypto
Great to see native crypto in Zig! I'm not the target audience, but the lack of crypto primitives in Rust's standard library seems to come up often, so it's a nice plus for Zig.
## Formatting specifiers and Advanced Formatting
Zig's formatting system seems about as powerful as Rust's, _maybe more_, so it's definitely a plus for me since I've ended up debugging a lot of code with `println!("{:?}", $field)` in Rust 😅
# Conclusion
I skipped through the parts about HashMaps, Stacks, sorting and iterators since those are fairly straightforward concepts that Zig does not appear to reinvent in any way.
Overall, I'm liking everything I'm seeing. Very excited to start building things in Zig!
summary = "Getting a USB Bluetooth dongle to function properly on Linux proved to be somewhat of a trip, which I'm documenting here."
slug = "making-a-bluetooth-adapter-work-on-linux"
socialImage = "uploads/bluetooth_social.webp"
tags = ["bluetooth", "bt-audio"]
title = "Making a Bluetooth adapter work on Linux"
+++
I made a couple of purchases yesterday, including a Bluetooth speaker and a USB Bluetooth dongle to pair it with my computer. Now here are a couple of things you need to know about said computer:
- It runs Linux
- It runs a customized build of the Zen kernel with a very slimmed down config
- It has never had Bluetooth connectivity before
Thanks to this combination of factors, things got weird. I tried a bunch of things before getting it working, so it's entirely possible I've missed a step that was important but didn't seem so while writing this. Let me know in the comments if I missed something.
### Getting the right packages
You're gonna need 1) a GUI to handle BT devices, and 2) the PulseAudio module for Bluetooth. For the GUI I used [blueberry](http://packages.linuxmint.com/search.php?release=ulyana&section=main&keyword=blueberry), and [pulseaudio-module-bluetooth](https://packages.ubuntu.com/focal/pulseaudio-module-bluetooth) for PulseAudio support.
I ran `apt install -y blueberry pulseaudio-module-bluetooth` to get these on Linux Mint; use whatever your distro's preferred package management interface is.
### Fixing up the kernel (optional)
I mentioned earlier that I run a very slimmed down config, which means nothing that I didn't already use was enabled. This included Bluetooth, so I went ahead and enabled all the configs for it [here](https://msfjarvis.dev/g/linux/992c2d8bce8b), then installed the new kernel and rebooted into it. You shouldn't need to do this if you do not run a custom kernel. To be completely sure, check your dmesg for Bluetooth initialization logs:
```shell
$ dmesg | rg Bluetooth
[ 0.146115] Bluetooth: Core ver 2.22
[ 0.146118] Bluetooth: HCI device and connection manager initialized
```
### Configuring PulseAudio
If you're not on a relatively up-to-date distro, you might need to make some more manual adjustments before everything works. Open up `/etc/pulse/default.pa` in any editor with root access (so you can write your changes back), then look for `module-bluetooth-discover`. In my version of the file, I have this:
```pa
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif
```
This means that the module will be loaded if it exists. On older versions the line might just be `# load-module module-bluetooth-discover`; in that case, uncomment it.
Next, open up `/usr/bin/start-pulseaudio-x11` in the same way. Look for this:
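In my copy the relevant bit looked roughly like this; the `pactl` line at the end is the one you're adding (exact contents will differ across distros, so treat this as a sketch):

```bash
if [ x"$SESSION_MANAGER" != x ] ; then
    /usr/bin/pactl load-module module-x11-xsmp "display=$DISPLAY session_manager=$SESSION_MANAGER" > /dev/null
fi

# Add this line to load the Bluetooth module when X11 starts PulseAudio
/usr/bin/pactl load-module module-bluetooth-discover
```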
This will manually load the module when X11 triggers PulseAudio's init. It ideally shouldn't be required, so you can try without this change first, but it won't break anything if you add it anyway.
Once done, reboot your computer and you should be able to pair and connect to devices and play audio through them.
summary = "Moshi is a fast and powerful JSON parsing library for the JVM and Android. Today we look into manually parsing JSON to and from Java/Kotlin classes"
slug = "manually-parsing-json-with-moshi"
socialImage = "/uploads/moshi_social.webp"
tags = ["moshi", "json parsing", "moshi read json from file"]
title = "Manually parsing JSON with Moshi"
toc = true
+++
### What is Moshi?
[Moshi] is a fast and powerful JSON parsing library for the JVM and Android, built by the former creators of Google's Gson to address some of its shortcomings and to have an alternative that was actively maintained.
Unlike Gson, Moshi has excellent Kotlin support, offering both reflection-based parsing and a kapt-backed codegen backend that eliminates the runtime performance cost in favor of generating adapters at build time. The `kotlin-reflect` dependency required for reflection-based parsing can add up to 1.8 MB to the final binary, so it's recommended to use the codegen method if possible.
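For reference, wiring up the codegen backend is just the kapt plugin plus the codegen artifact (the versions here are what was current at the time of writing; pin your own):

```kotlin
plugins {
    // alongside your existing Android and Kotlin plugins
    kotlin("kapt")
}

dependencies {
    implementation("com.squareup.moshi:moshi:1.11.0")
    // Generates adapters at build time, keeping kotlin-reflect out of the binary
    kapt("com.squareup.moshi:moshi-kotlin-codegen:1.11.0")
}
```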
### What is an adapter?
An adapter is Moshi-speak for a class that can convert JSON into an object and an instance of that object into JSON. Moshi supports multiple types of adapters. The first is the one demonstrated in their README, which contains two methods annotated `@ToJson` and `@FromJson`: the former takes an instance of the object and returns a String, and the latter takes a String and returns an instance of the object. This is the simplest type, and should be used for non-complex types that can typically be represented in simpler forms. [Here's the example Moshi uses](https://github.com/square/moshi#custom-type-adapters), which should be all the introduction you need for this particular type.
The other type is similar to what Moshi generates for its kapt-generated adapters, but leverages the `@ToJson`/`@FromJson` annotations. The method signatures here are a bit verbose, and these are the ones we're going to try to build.
### Why write your own adapters?
Good question. Consider this example class:
```kotlin
@JsonClass(generateAdapter = true)
class TextParts(val heading: String, val body: String? = null)
```
Pretty straightforward. The `JsonClass` annotation with `generateAdapter = true` will attempt to use the codegen backend to write an adapter automatically for this. Let's try converting this to JSON.
```kotlin
val text = TextParts("This is the heading", "And this is the body")
val moshi = Moshi.Builder().build()
// TextPartsJsonAdapter was generated by the codegen backend
println(TextPartsJsonAdapter(moshi).toJson(text))
{"heading":"This is the heading","body":"And this is the body"}
```
What this means is, given a JSON object that looks like this
```json
{ "heading": "This is the heading", "body": "And this is the body" }
```
We can get an instance of `TextParts` that looks like this
```kotlin
val text = TextParts("This is the heading", "And this is the body")
```
Cool! Now, let's make things unfortunate. Imagine your backend team is stretched thin, and due to a limitation with how they initially built their database schema, you can only get the above JSON in this form
```json
{
"heading": "This is the heading",
"extras": { "body": "And this is the body" }
}
```
If you try to parse this with the old `TextPartsJsonAdapter`, your app is going to crash, because the JSON and its Kotlin representation have diverged. The equivalent Kotlin for this new JSON is going to be something like this:
```kotlin
@JsonClass(generateAdapter = true)
class Extras(val body: String? = null)
@JsonClass(generateAdapter = true)
class TextParts(val heading: String, val extras: Extras? = null)
```
Many things changed here. Your direct access to the `body` field now needs to go through `extras`, which just isn't that nice. You're also now incurring the (albeit minuscule) overhead of generating two adapters rather than one. Wouldn't it be great if we could continue to have a flat object like before? Let's try to make that happen.
### How to write your own Moshi adapter?
With less effort than one might think! Let's put down the basic building blocks.
```kotlin
class TextPartsJsonAdapter {
// Moshi is flexible about the parameters of these two methods, and for simpler types
// you will find it easier to follow the example from the Moshi README which does not
// use JsonReader/JsonWriter and instead directly converts items to and from their String
// representations. The method names are also not enforced, as Moshi only uses the
// annotations to find relevant methods. The internal implementation of how they do it
// can be found here: https://git.io/JLwnb
@FromJson
fun fromJson(reader: JsonReader): TextParts? {
TODO("Not implemented")
}
@ToJson
fun toJson(writer: JsonWriter, value: TextParts?) {
TODO("Not implemented")
}
}
```
Now we're ready to start parsing. First, let's implement the `toJson` part, where we take an instance of the object and then try to write the equivalent JSON for it. Since this is comparatively easier, I'm going to do it in one go and leave comments inline to explain what's happening.
```kotlin
@ToJson
fun toJson(writer: JsonWriter, value: TextParts?) {
// Null values shouldn't arrive at the adapter; this error lets callers know
// what builder options need to be passed to the Moshi.Builder() instance
// to avoid this particular situation.
if (value == null) {
throw NullPointerException("value was null! Wrap in .nullSafe() to write nullable values.")
}
// Use the Kotlin `with` scoping method so we don't need to call
// all methods with the `writer.` prefix.
with(writer) {
// Start the JSON object.
beginObject()
// Since our `extras` field is nullable, and our backend will send
// it as a literal null rather than skip it, we want null values to
// be written into the final JSON.
serializeNulls = true
// Create a JSON field with the name 'heading'
name("heading")
// Set the value of the 'heading' field to the actual heading
value(value.heading)
// Create the 'extras' field
name("extras")
if (value.body != null) {
// If the body text exists, then start a new object and add a
// body field
beginObject()
name("body")
value(value.body)
endObject()
} else {
// Otherwise we put down a literal null
nullValue()
}
// End the top-level object.
endObject()
}
}
```
Parsing JSON manually is relatively easy to screw up. Moshi will let you know if you get the nesting wrong (a missed `endObject()` or `endArray()`) and catch other easily detectable problems, but you should definitely have tests for all possible cases. I'll let the readers do that on their own, but if you _really_ need to see an example then let me know below.
Anyways, that's the object -> JSON part sorted. Now let's try to do the reverse. Here's where we are as of now.
```kotlin
fun fromJson(reader: JsonReader): TextParts? {
TODO("Not implemented")
}
```
Same as writing JSON, we need to start by making an object.
```diff
fun fromJson(reader: JsonReader): TextParts? {
+ // We'll be constructing the object at the end so these
+ // will store the values we read.
+ var heading: String? = null
+ var body: String? = null
+ with(reader) {
+ beginObject()
+ endObject()
+ }
TODO("Not implemented")
}
```
We have a fixed set of keys that we expect to read, so go ahead and configure a couple instances of `JsonReader.Options` that we will use to find the keys in this JSON.
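For our JSON, that's one set of options per level of nesting:

```kotlin
val topLevelKeys = JsonReader.Options.of("heading", "extras")
val extrasKeys = JsonReader.Options.of("body")
```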
`reader.hasNext()` is going to continue iterating through the document's tokens until it's completed, which lets us look through the entire document for the parts we need. The `selectName(JsonReader.Options)` method will return the index of a matched key, so `0` there means that the `heading` key was found. In response to that, we want to read it as a string and throw if it is null (since it's non-nullable in `TextParts`). The `Util.unexpectedNull` method is a little nicety that is part of Moshi's internals and is used by its kapt-generated adapters to provide better error messages and we're going to do the same.
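Here's the `heading` half of that loop:

```diff
 with(reader) {
   beginObject()
+  while (hasNext()) {
+    when (selectName(topLevelKeys)) {
+      0 -> heading = nextString() ?: throw Util.unexpectedNull(
+        "heading",
+        "text",
+        this
+      )
+      -1 -> {
+        // Skip unknown values
+        skipName()
+        skipValue()
+      }
+    }
+  }
   endObject()
 }
```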
When I said that `selectName` returns the index of the matched key, I didn't mention that it returns -1 when it comes across a key that isn't in the Options object. Since we don't care about them, we're going to skip both their name and value and continue right on ahead. Now, we're going to try and parse that inner `extras` object. A lot is about to happen quickly, but bear with me as I explain things.
```diff
"text",
this
)
+ 1 -> {
+ // "extras" is nullable, so we first try to see if it is null.
+ // If it isn't, this will throw and we can then safely assume
+ // a non-null value and proceed.
+ try {
+ nextNull<Any>()
+ } catch (_: JsonDataException) {
+ beginObject()
+ while (hasNext()) {
+ when (selectName(extrasKeys)) {
+ 0 -> body = nextString()
+ -1 -> {
+ // Skip unknown values
+ skipName()
+ skipValue()
+ }
+ }
+ }
+ endObject()
+ }
+ }
-1 -> {
// Skip unknown values
skipName()
skipValue()
```
Now that you look at it, not really that different from what we did above. The only new thing here is the `nextNull` method, which simply tries to find a null value and throws the `JsonDataException` if the value wasn't null.
```diff
}
endObject()
}
- TODO("Not implemented")
+ // Satisfy the typechecker and throw in case the JSON body
+ // didn't contain the 'heading' field at all
+ require(heading != null) { "heading must not be null" }
+ return TextParts(heading, body)
}
```
And that's it! The final adapter is going to look like this
```kotlin
class TextPartsJsonAdapter {
    val topLevelKeys = JsonReader.Options.of("heading", "extras")
    val extrasKeys = JsonReader.Options.of("body")

    @FromJson
    fun fromJson(reader: JsonReader): TextParts? {
        // We'll be constructing the object at the end so these
        // will store the values we read.
        var heading: String? = null
        var body: String? = null
        with(reader) {
            beginObject()
            while (hasNext()) {
                when (selectName(topLevelKeys)) {
                    0 -> heading = nextString() ?: throw Util.unexpectedNull(
                        "heading",
                        "text",
                        this
                    )
                    1 -> {
                        // "extras" is nullable, so we first try to see if it is null.
                        // If it isn't, this will throw and we can then safely assume
                        // a non-null value and proceed.
                        try {
                            nextNull<Any>()
                        } catch (_: JsonDataException) {
                            beginObject()
                            while (hasNext()) {
                                when (selectName(extrasKeys)) {
                                    0 -> body = nextString()
                                    -1 -> {
                                        // Skip unknown values
                                        skipName()
                                        skipValue()
                                    }
                                }
                            }
                            endObject()
                        }
                    }
                    -1 -> {
                        // Skip unknown values
                        skipName()
                        skipValue()
                    }
                }
            }
            endObject()
        }
        // Satisfy the typechecker and throw in case the JSON body
        // didn't contain the 'heading' field at all
        require(heading != null) { "heading must not be null" }
        return TextParts(heading, body)
    }

    @ToJson
    fun toJson(writer: JsonWriter, value: TextParts?) {
        // Null values shouldn't arrive at the adapter; this error lets callers know
        // what builder options need to be passed to the Moshi.Builder() instance
        // to avoid this particular situation.
        if (value == null) {
            throw NullPointerException("value was null! Wrap in .nullSafe() to write nullable values.")
        }
        // Use the Kotlin `with` scoping method so we don't need to call
        // all methods with the `writer.` prefix.
        with(writer) {
            // Start the JSON object.
            beginObject()
            // Since our `extras` field is nullable, and our backend will send
            // it as a literal null rather than skip it, we want null values to
            // be written into the final JSON.
            serializeNulls = true
            // Create a JSON field with the name 'heading'
            name("heading")
            // Set the value of the 'heading' field to the actual heading
            value(value.heading)
            // Create the 'extras' field
            name("extras")
            if (value.body != null) {
                // If the body text exists, then start a new object and add a body field
                beginObject()
                name("body")
                value(value.body)
                endObject()
            } else {
                // Otherwise we put down a literal null
                nullValue()
            }
            // End the top-level object.
            endObject()
        }
    }
}
```
This is certainly a lengthy job to do, and this blog post is a result of nearly 8 hours I spent writing JSON adapters by hand. Certainly not recommended if avoidable, but sometimes you just need to. When it comes to it, now you hopefully know how :)
title = "Mastodon on your own domain without hosting a server, Netlify edition"
+++
## Preface
I recently came across [a blog post](https://blog.maartenballiauw.be/post/2022/11/05/mastodon-own-donain-without-hosting-server.html) from [Maarten Balliauw](https://mastodon.online/@maartenballiauw) that explains how they managed to create an ActivityPub-compatible identity for themselves without hosting Mastodon or any other ActivityPub server.
I recommend going to their blog and reading the whole thing, but here's a TL;DR
- [ActivityPub](https://activitypub.rocks/) has the notion of an "actor" that sends messages
- This "actor" must be discoverable via a protocol called [WebFinger](https://webfinger.net)
- WebFinger is ridiculously easy to implement
For all practical purposes, WebFinger is essentially a JSON document that is served at `/.well-known/webfinger` from a domain and is used to identify "actors" across the Fediverse.
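Here's a trimmed-down example of the shape of such a document (the handle and URLs are placeholders):

```json
{
  "subject": "acct:user@example.com",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://example.com/users/user"
    }
  ]
}
```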
Maarten's approach to implementing this was to simply place the JSON document at `/.well-known/webfinger` on their domain `balliauw.be`, which allowed `@maarten@balliauw.be` to become a WebFinger-compatible identity that can be searched for on Mastodon and will return their actual `@maartenballiauw@mastodon.online` profile.
Maarten did note, however, that since they're relying on static hosting, they're unable to restrict which identities are considered valid, and thus a search for `@anything@balliauw.be` will also return their `mastodon.online` identity.
## The implementation
I wanted to also set up something like this, but without the limitation Maarten had run into. Since my website runs on Netlify, I decided to try out using an [Edge Function](https://docs.netlify.com/edge-functions/overview/) to build this up.
Similar to Maarten, I first obtained my current Fediverse identity from the Mastodon server I am on: [androiddev.social](https://androiddev.social) (incredible props to [Mikhail](https://androiddev.social/@friendlymike) for making it a reality).
With this in hand, now we can get started on wiring this up into our website.
First, create an Edge Function using the Netlify CLI. Here are the options I chose.
```
➜ yarn exec ntl functions:create --name webfinger
? Select the type of function you'd like to create: Edge function (Deno)
? Select the language of your function: TypeScript
? Pick a template: typescript-json
? Name your function: webfinger
◈ Creating function webfinger
◈ Created netlify/edge-functions/webfinger/webfinger.ts
? What route do you want your edge function to be invoked on?: /.well-known/webfinger
◈ Function 'webfinger' registered for route `/.well-known/webfinger`. To change, edit your `netlify.toml` file.
```
Next, add the following code to the TypeScript file just created for you. I've added comments inline to explain what each part of the code does so you can customize it according to your needs.
```typescript
// Netlify Edge Functions run on Deno (https://deno.land), so imports use URLs rather than package names.
import { Status } from "https://deno.land/std@0.136.0/http/http_status.ts";
import type { Context } from "https://edge.netlify.com";
```
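The rest of the function is essentially a gatekeeper. Here's a minimal sketch of the shape mine takes; the handles and the WebFinger payload are placeholders that you'd substitute with your own:

```typescript
// The WebFinger document for your identity, copied from your Mastodon
// server's own /.well-known/webfinger response.
const webfinger = {
  subject: "acct:msfjarvis@androiddev.social",
  // aliases and links from the original document go here
};

export default async (request: Request, _context: Context) => {
  const resource = new URL(request.url).searchParams.get("resource");
  // Unlike static hosting, we can reject every identity except the one we own.
  if (resource !== "acct:me@msfjarvis.dev") {
    return new Response(null, { status: Status.NotFound });
  }
  return new Response(JSON.stringify(webfinger), {
    status: Status.OK,
    headers: { "content-type": "application/jrd+json" },
  });
};
```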
summary = "I recently migrated Password Store to Material You, Google's latest iteration of Material Design. Here's how it went."
slug = "migrating-aps-to-material-you"
tags = ["android-password-store"]
title = "Migrating APS to Material You"
socialImage = "uploads/m3-social.webp"
+++
With much fanfare, Google released the next iteration of Material Design: **Material You**. It's received mixed reviews, but I found it extremely pleasant to use and the homogeneity of Google apps following the platform colors felt great. That's what prompted me to update APS to Material You and join in :)
As expected, the library ecosystem (specifically the [material-components-android] library) took a while to support Material You but with the [1.5.0-alpha05] release of the MDC Android library, things are finally at a place where migration to Material You (henceforth referred to as M3) is viable for simpler apps like APS.
APS has had some design work done to it but for the most part remains the culmination of ad-hoc choices (often bad) over a period of several years. With this migration I sought to change that and make things a bit more cohesive as well as give the app some much needed _oomph_.
## Getting the basics in
I began by scouring the resources on the dedicated [M3 website], where the Material Design team has helpfully created a lot of great tools and content to help developers and designers through the process. There's the "[Migration to Material 3]" blog post, and the [Material Theme Builder] to generate palettes/styles/themes for apps, for both Jetpack Compose and Android's View-based system. These were extremely helpful in getting a headstart on the whole process. It's all documented in the commit history of the [migration PR], but I figure some additional context can't hurt.
Once I had the themes in, I decided to take the opportunity to also introduce a custom typeface. The app has been using Roboto since forever and it felt like it was time to spice things up. I decided to go with [Manrope] since it's a font I've previously used and found to be excellent for visual appeal and accessibility. I'm still not 100% confident in my choice, so if people have better options in mind I'd love to hear about them down in the comments.
Once the new font face was in, I opted to enable dynamic colors. Admittedly not the right choice, since I should've validated the "default" palette first but that's what I did ¯\\_(ツ)_/¯.
## Bugfixes and improvements™️
Once the M3 themes were all prepped, it was time to actually start migrating.
I switched our activities to use the M3 themes, and immediately started noticing bugs from non-idiomatic and straight up incorrect theming we've been lugging around for the past couple years.
First step was to update our iconography, which was using inconsistent tints throughout. I updated all of them to use `?attr/colorControlNormal` which made them blend in correctly with the rest of the updated UI.
Earlier this year we had migrated a selection UI in one of our screens to use [Chips], but we never got the theming right so it always looked kind of wrong. With M3, we were able to revert back to `MaterialButtonToggleGroup` without regressing on the [accessibility issue] which made us do it in the first place.
There were a lot of smaller changes made to address the remaining visual bugs:
- Our onboarding flow was using an incorrect interpretation of `?attr/colorPrimary` for theming, and was migrated to use `?android:attr/colorBackground`
- A lot of screens were using hard-coded colors, which were migrated to theme attributes
- Many screens used hard-coded styles for buttons and text fields, and were also migrated to theme attributes
- Multiple layouts also referenced typography styles directly and were migrated to the corresponding M3 attributes, based on the mapping table in the "[Migration to Material 3]" article.
- System bars and Toolbar had to be given explicit styles and colors to match the "flat" aesthetic from our M2 designs.
## The final stretch
With the visual fixes out of the way, I went in and cleaned up the themes and styles. I commonized shared attributes such as fonts and widget styles, created M3 variants of other special-purpose themes we had, and got rid of all the now unused M2 theming. Overall, the PR touched 60+ separate files and generated a final diff of `+603,-314` lines. The PR can be seen [here](https://msfjarvis.dev/aps/pr/1532).
We use a third-party library by [Max Rumpf] called [ModernAndroidPreferences] for our settings UI, and it hard-coded the use of AppCompat dialogs. Max was extremely helpful and made that customisable for us over the weekend which allowed us to use the appropriate Material You dialogs consistently. Huge thanks to Max, and check out his library! <3
## Screenshots!
### Before
![Screenshot gallery of a few APS screens before the Material 3 migration](/uploads/aps_m2_gallery.webp)
### After
![Screenshot gallery of a few APS screens after the Material 3 migration](/uploads/aps_m3_gallery.webp)
## Closing notes
APS is a very low-effort app when it comes to UI work. We do not have a custom design system, everything follows Material to a T, and we try to stay in that lane. Our migration took me around 9 hours of work over two days, most of which was really spent on menial work such as manually checking all layouts for hard-coded styles and replacing them with attributes. This isn't representative of what this process would look like for any project which rolls its own design system on top of Material, since they have a lot more to do before they can even _begin_ the migration of their screens.
I'd like to thank the Material Design team once more for the fabulous work they have done both in creating Material You as well as the technical documentation around it. [Material Theme Builder] was an extremely crucial tool for me that set the tone of the whole process, and I would have certainly repeated the same mistakes I did with Material 2 if it wasn't for the tooling and guidance from the team.
[Dagger](https://dagger.dev) is infamous for very good reasons. It's complicated to use, the documentation is an absolute shitshow, and simpler 'alternatives' exist. While [Koin](http://insert-koin.io/) and to a lesser extent [Kodein](https://kodein.org/di/) do the job, they're still service locators at their core and don't automatically inject dependencies like Dagger does.
## Background
Before I start, some introductions. I'm the sole developer of [Viscerion](https://play.google.com/store/apps/details?id=me.msfjarvis.viscerion), an Android client for [WireGuard](https://www.wireguard.com/). It is a fully [open source](https://github.com/msfjarvis/viscerion) project just like the upstream Android client which served as the base for it. I forked Viscerion as a playground project that'd let me learn new Android tech in an unencumbered way and be something that I'd actually use, hence guaranteeing some level of sustained interest. I do still contribute major fixes back upstream, so put down your pitchforks :P
Like I said before, Viscerion is a learning endeavour, so I decided to learn. I rewrote most of the app in Kotlin, implemented some UI changes to make the app friendlier to humans, and added a couple of features here and there.
And then I decided to tackle the Dependency Injection monster.
## The beginnings
The dependency injection story always begins with the search for a library that does it for you. You look at Dagger, go to the documentation, look up what a Thermosiphon is, then scratch Dagger off the list and move on. Kotlin users will then end up on one of [Koin](http://insert-koin.io/) or [Kodein](https://kodein.org/di/) and that'll be the end of their story.
That was mine as well! Before Viscerion was forked, the app used to have Dagger (albeit sorely underused as I can tell now that I know a fair bit about it) but then it was [swapped out](https://github.com/WireGuard/wireguard-android/commit/712b6c6f600ef6eb683d356a6e9a05e9415b7e12) for singleton access from the Application class. I really, really wanted to try out the fancy 'Dependency Injection' thing everybody loved so I did some searching around, went through the aforementioned motions and [settled on Koin](https://github.com/msfjarvis/viscerion/pull/131).
It was great! I could get any dependency anywhere and that allowed me to write all kinds of hot garbage, and garbage I wrote. But despite all that, I still really wanted to give Dagger another shot. I tried multiple times to make it work, the remains of which have been force pushed away long since. Then I came across [Fred Porciúncula](https://twitter.com/tfcporciuncula)'s [dagger-journey](https://github.com/tfcporciuncula/dagger-journey) repository and the accompanying talk that tried to fill the usability gap that Dagger's documentation never could. He was largely successful in being able to teach me how to use Dagger, and the first "proper" attempt I made at using Dagger was [largely decent](https://github.com/msfjarvis/viscerion/pull/196/files). I was still missing a lot of knowledge that made me [slip back into my hate-train](https://github.com/msfjarvis/viscerion/pull/196#issuecomment-557907972).
## The turning point
Around mid-December 2019, [Arun](https://twitter.com/arunkumar_9t2) released the 0.1 version of his Dagger 2 dependency graph visualizer, [Scabbard](https://arunkumar.dev/introducing-scabbard-a-tool-to-visualize-dagger-2-dependency-graphs/). It looked **awesome**. I reshared it and shoved in my residual Dagger hate for good measure, because isn't that what the internet is for. I was confident that Dagger would never find a place in my code, and my friend [Sasikanth](https://sasikanth.dev) was hell-bent on ensuring otherwise.
Together, we dug up my previous efforts and I started [a PR](https://github.com/msfjarvis/viscerion/pull/214) so he could review it and help me past the point where I dropped out last time. He helped [me on GitHub](https://github.com/msfjarvis/viscerion/pull/214#pullrequestreview-336919368) and privately on Telegram, and together, in about 2 days, Viscerion was completely Koin-free and ready to kill. I put down my thoughts about the migration briefly [on the PR](https://github.com/msfjarvis/viscerion/pull/214#issuecomment-569541678), which I'll reproduce and expand on below.
> - Dagger is ridiculously complex without a human to guide you around.
I will die on this hill. Without Sasikanth's help I would have never gotten around to even _trying_ Dagger again.
> - Koin's service locator pattern makes it far too easy to write bad code because you can inject anything anywhere.
Again, very strong opinion that I will continue to have. I overlooked a clean way of implementing a feature and went for a quick and dirty version because Koin allowed me the freedom to do it. Dagger forced me to re-evaluate my code and I ended up being able to extract all Android dependencies from that package and move it into a separate module.
> - Dagger can feel like a lot of boilerplate but some clever techniques can mitigate that.
Because the Dagger documentation wasn't helpful, I didn't realise that a `@Provides`-annotated method and an `@Inject`ed constructor are an either-or situation, and that I didn't need to write both for a class to be injectable. Sasikanth [came to the rescue again](https://github.com/msfjarvis/viscerion/pull/214#discussion_r361800427).
> - Writing `inject` methods for every single class can feel like a drag because it is.
> - Injecting into Kotlin `object`s appears to be a no-go. I opted to [refactor out the staticity where possible](https://github.com/msfjarvis/viscerion/pull/214/commits/9eb532521f51d0f7bb66a2a78aa1fc5688128a22), [pass injected dependencies to the function](https://github.com/msfjarvis/viscerion/commit/e23f878140d4bda9e2c54d6c2684e07994066fd6#diff-28007a5799b03e7b556f5bb942754031) or [fall back to \'dirty\' patterns](https://github.com/msfjarvis/viscerion/pull/214/commits/fc54ec6bb8e99ec639c6617765e814e12d91ea1a#diff-74f75ab44e1cd2909c4ec4d704bbbab7R65) as needed. Do what you feel like.
I have no idea if that's even a good ability to begin with, so I chose to change myself rather than fight the system.
> - I still do not _love_ Dagger. Fuck you Google.
This, I probably don't subscribe to anymore. Dagger was horrible to get started with, but I can now claim passing knowledge and familiarity with it, enough to be able to use it for simple projects and be comfortable while doing so.
## To summarize
Like RxJava, Dagger has become an industry standard of sorts and a required skill at a lot of Android positions, so you might eventually wind up needing to learn it anyway. Why wait? Dagger is not _terrible_, just badly presented. Learning from existing code is always helpful, and that was part of how I learned. Use my PR, and post questions below; I'll do my best to help you like I was helped, and hopefully we'll both learn something new :)
summary = "GitHub recently rolled out Packages to the general public, allowing the entire develop-test-deploy pipeline to get centralized at GitHub. Learn how to use it to publish your Android library packages."
title = "Publishing an Android library to GitHub Packages"
+++
> UPDATE(06/06/2020): The Android Gradle Plugin supports Gradle's inbuilt `maven-publish` plugin since version 4.0.0, so I've added the updated process for utilising it at the beginning of this guide. The previous post follows that section.
GitHub released the Package Registry beta in May of this year, and graduated it to public availability in Universe 2019, rebranded as [GitHub Packages](https://github.com/features/packages "GitHub Packages"). It supports NodeJS, Docker, Maven, Gradle, NuGet, and RubyGems. That's a LOT of ground covered for a service that's about one year old.
Naturally, I was excited to try this out. The [documentation](https://help.github.com/en/github/managing-packages-with-github-packages/about-github-packages) is by no means lacking, but the [official instructions](https://help.github.com/en/github/managing-packages-with-github-packages/configuring-gradle-for-use-with-github-packages) for using Packages with Gradle do not work for Android libraries. To make it compatible with Android libraries, some small but non-obvious edits are needed which I've documented here for everybody's benefit.
> GitHub Packages currently does **NOT** support unauthenticated access to packages, which means you will always require a personal access token with the `read:packages` scope to be able to download packages during build. I emailed GitHub support about this, and their reply is attached at the end of this post.
I've also created a [sample repository](https://github.com/msfjarvis/github-packages-deployment-sample/) with incremental commits corresponding to the steps given below, for people who prefer to see the code directly.
To be able to deploy packages, you will require a Personal Access Token from GitHub with the `write:packages` scope. Follow the steps [here](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/creating-a-personal-access-token#creating-a-token) to create the token if you have never done so before.
### For AGP >= 4.0.0
All you need to do is ensure you're on at least Gradle 6.5 and AGP 4.0.0, then configure as follows.
```kotlin
// Simple convenience function to hide the nullability of `findProperty`.
private fun getProperty(key: String): String {
    return findProperty(key)?.toString() ?: error("Failed to find property for $key")
}

afterEvaluate {
    publishing {
        repositories {
            maven {
                name = "GitHubPackages"
                // Substitute your own <owner>/<repository> here
                url = uri("https://maven.pkg.github.com/msfjarvis/github-packages-deployment-sample")
                credentials {
                    username = getProperty("gpr.user")
                    password = getProperty("gpr.key")
                }
            }
        }
        publications {
            create<MavenPublication>("release") {
                from(components.getByName("release"))
                groupId = getProperty("GROUP")
                artifactId = "deployment-sample-library"
                version = getProperty("VERSION")
            }
        }
    }
}
```
Then, set the `GROUP` and `VERSION` properties in `gradle.properties`
```groovy
GROUP=msfjarvis
VERSION=0.1.0-SNAPSHOT
```
And that should be it! You can check the migration commit [here](https://github.com/msfjarvis/github-packages-deployment-sample/commit/260fd3154fd393d3969afd048dc2c77d03619b1d).
When you are ready to publish, run `./gradlew -Pgpr.user=<username> -Pgpr.key=<personal access token> publish` from your repository and everything should correctly deploy.
### For AGP <4.0.0
#### Step 1
Copy the official integration step from GitHub's [guide](https://help.github.com/en/github/managing-packages-with-github-packages/configuring-gradle-for-use-with-github-packages#authenticating-with-a-personal-access-token), into your Android library's `build.gradle` / `build.gradle.kts`. If you try to run `./gradlew publish` now, you'll run into errors. We'll be fixing that shortly. \[[Commit link](https://github.com/msfjarvis/github-packages-deployment-sample/commit/d69235577a1d4345cecb364a3a3d366bf894c5a6)\]
#### Step 2
Switch out the `maven-publish` plugin with [this](https://github.com/wupdigital/android-maven-publish) one. It provides an Android component that's compatible with publications, which is precisely what we need. \[[Commit link](https://github.com/msfjarvis/github-packages-deployment-sample/commit/1452c4a0c15d394b73dc3384f02834788dfe1bda)\]
#### Step 3
Switch to using the `android` component provided by `wup.digital.android-maven-publish`. This is the one we require to be able to upload an [AAR](https://developer.android.com/studio/projects/android-library) artifact. \[[Commit link](https://github.com/msfjarvis/github-packages-deployment-sample/commit/7cc6fcd6ffa5774433bce76ac6929435dbbb77cc)\]
```diff
--- library/build.gradle
+++ library/build.gradle
@@ -42,7 +42,7 @@ publishing {
}
publications {
gpr(MavenPublication) {
- from(components.java)
+ from(components.android)
}
}
}
```
#### Step 4
Every Gradle/Maven dependency's address has three attributes, a group ID, an artifact ID, and a version.
We'll need to configure these too. I prefer using the `gradle.properties` file for this purpose since it's very easy to access variables from it, but if you have a favorite way of configuring build properties, use that instead! \[[Commit link](https://github.com/msfjarvis/github-packages-deployment-sample/commit/cee74a5e0b3b76d1d7a2d4eb9636d80fb1db49d6)\]
```diff
--- gradle.properties
+++ gradle.properties
@@ -19,3 +19,7 @@ android.useAndroidX=true
android.enableJetifier=true
# Kotlin code style for this project: "official" or "obsolete":
kotlin.code.style=official
+
+# Publishing config
+GROUP=msfjarvis
+VERSION=0.1.0-SNAPSHOT
--- library/build.gradle
+++ library/build.gradle
@@ -43,6 +43,10 @@ publishing {
publications {
gpr(MavenPublication) {
from(components.android)
+ groupId "$GROUP"
+ artifactId "deployment-sample-library"
+ // Use your configured version outside CI, the SHA of the top commit inside.
+ version System.env['GITHUB_SHA'] == null ? "$VERSION" : System.env['GITHUB_SHA']
}
}
}
```
#### Step 5
Now all that's left to do is configure GitHub Actions. Go to the Secrets menu in your repository's settings, then create a `PACKAGES_TOKEN` secret and provide the access token you generated earlier. Head over to the [documentation](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets#creating-encrypted-secrets) for Secrets if you wanna know how this works under the hood.
Now, let's add the actual configuration that'll get Actions up and running.
```diff
--- /dev/null
+++ .github/workflows/publish_snapshot.yml
@@ -0,0 +1,13 @@
+name: "Release per-commit snapshots"
+on: push
+
+jobs:
+ setup-android:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@master
+ - name: Publish snapshot
+ run: ./gradlew publish
+ env:
+ USERNAME: msfjarvis
+ PASSWORD: ${{ secrets.PACKAGES_TOKEN }}
```
That's it! Once you push to GitHub, you'll see the [action running](https://github.com/msfjarvis/github-packages-deployment-sample/commit/42e1f6609bf9f2abe8e181296a57d86df648b4d4/checks?check_suite_id=322323808) in your repository's Actions tab and a [corresponding package](https://github.com/msfjarvis/github-packages-deployment-sample/packages/60429) in the Packages tab once the workflow finishes executing.
### Closing notes
The requirement to authenticate for packages is a significant problem with GitHub Packages' adoption, giving an edge to solutions like [JitPack](https://jitpack.io) which handle the entire process automagically. As mentioned earlier, I did contact GitHub support about it and got this back.
![GitHub support reply about authentication requirement for packages](/uploads/github_packages_support_response.webp)
My interpretation of this is quite simply that **it's gonna take a while**. I hope not :)
summary = "Analytics platforms are often overwhelming and a privacy nightmare -- here's how to bring analytics to the backend with very simple tooling"
Analytics are a very helpful aspect of any development effort. They let developers know which parts of their apps are visited most often and could use more attention, and bloggers know what content does or does not resonate with their readers.
There are many, many analytics providers and software stacks each with their specific pros and cons, but nearly all managed analytics come with the overarching concern of privacy of user data. [Google Analytics](https://analytics.google.com/) is a _huge_ analytics vendor, with the capabilities to almost accurately extrapolate even the **age** of your visitors. That's nuts, and honestly scary.
Analytics platforms often drown us in data and statistics, most of which we don't really care for or use. Wouldn't it be far easier if we could remove the client-side aspect of analytics entirely, drop the unused information, and focus on what we need? Enter [Goaccess](https://goaccess.io).
## What is Goaccess
Goaccess is an **open-source**, **real-time** web log analyzer. In other words, it parses your webserver's logs and generates actionable reports from them in HTML, JSON or CSV, based on your needs. It is highly configurable and allows you to also modify the generated report by anonymizing IPs, ignoring crawlers, determining the real operating systems of the users and some more.
## The setup
To create a compelling analytics experience, we'll need to use Goaccess' `--real-time-html` option, which creates an HTML report along with an accompanying WebSocket server that pushes updated page data every time Goaccess parses new log entries. Here's a peek at Goaccess' terminal visualizer, to get an idea of the datasets you can expect from the web version.
![Goaccess in the terminal](/uploads/goaccess_terminal.webp)
Goaccess supports most common webserver log formats, and [some more](https://goaccess.io/man#options) with the option to provide your own format if you're using custom solutions. I'm using `VCOMMON`, as that is the default log format of my webserver of choice, [Caddy](https://caddyserver.com). Here's the command executed by the systemd unit that I use for goaccess. I'll explain every option in a bit.
```bash
goaccess --log-format=VCOMMON \
--ws-url=wss://stats.example.com/ws \
--output=${STATS_DIR}/index.html \
--log-file=/etc/logs/requests.log \
--no-query-string \
--anonymize-ip \
--double-decode \
--real-os \
--real-time-html
```
- `--ws-url`: This option allows us to specify the path for our WebSocket server that's responsible for dispatching updates.
- `--output`: File to dump HTML reports into.
- `--log-file`: The source file to read logs from.
- `--no-query-string`: Strips the query string from request URLs (`example.org/contact?utm_source=twitter` => `example.org/contact`). This can greatly decrease memory consumption, and query strings are rarely useful in reports anyway.
- `--anonymize-ip`: Anonymizes client IP addresses before they land in the report, by zeroing out the last octet of IPv4 addresses and the last 80 bits of IPv6 addresses.
- `--double-decode`: Decodes values like the user-agent, request and referrer that are often encoded twice.
- `--real-os`: Displays the real OS names behind the browsers.
- `--real-time-html`: The hero of the show -- the option that makes our analytics real-time and self-updating in the browser.
The final step in this process is to expose the local WebSocket server to the `/ws` endpoint of your domain to allow real-time updates to work. Here's how I do it in Caddy.
```bash
https://stats.example.com/ws {
proxy / localhost:7890 {
websocket
}
}
```
And that's it! Your analytics page should be up at your specified URL, updating on every new request and visitor.
summary = "Rust programs are pretty fast on their own, but you can slightly augment their performance with some simple tricks."
slug = "simple-tricks-for-faster-rust-programs"
socialImage = "uploads/cuddlyferris.webp"
tags = ["perf"]
title = "Simple tricks for faster Rust programs"
+++
Rust is _pretty_ fast. Let's get that out of the way. But sometimes, _pretty_ fast is not fast enough.
Fortunately, it's also _pretty_ easy to slightly improve the performance of your Rust binaries with minimal code changes. I'm gonna go over some of these tricks that I've picked up from many sources across the web (there's a small list of **very** good blogs run by smart Rustaceans who cover interesting Rust-related things at the end of this post).
## Turn on full LTO
Rust by default runs a "thin" LTO pass across each individual [codegen unit](https://doc.rust-lang.org/rustc/codegen-options/index.html#codegen-units). This can be optimized with a very simple addition to your `Cargo.toml`
```toml
[profile.release]
codegen-units = 1
lto = "fat"
```
This makes the following changes to the `release` profile:
- Forces `rustc` to build the entire crate as a single unit, which lets LLVM make smarter decisions about optimization thanks to all the code being together.
- Switches LTO to the `fat` variant. In `fat` mode, LTO will perform [optimization across the entire dependency graph](https://doc.rust-lang.org/rustc/codegen-options/index.html#lto) as opposed to the default option of doing it just to the local crate.
## Use a different memory allocator
Some time ago, Rust switched from using `jemalloc` on all platforms to the OS-native allocator. This caused serious performance regressions in many programs like [fd](https://github.com/sharkdp/fd). To switch back to `jemalloc`, check out [this](https://github.com/sharkdp/fd/pull/481) PR for the changes required.
Note that this alone is not guaranteed to be helpful, and a lot of programs see little to no benefit, so please run your own benchmarks with [hyperfine](https://github.com/sharkdp/hyperfine) to confirm whether or not it helped you.
## Cows!
Rustaceans [love their cows](https://www.reddit.com/r/rust/comments/8o1pxh/the_secret_life_of_cows/), and `Cow` is one of the most underrated APIs in the Rust standard library. Its claim to fame is relatively simple: it's a smart copy-on-write pointer. Or rather, a smart clone-on-write pointer, since copying means something different in Rust than in other languages.
Given data wrapped in a `std::borrow::Cow`, you can avoid cloning it if you only need immutable read access, which saves memory and improves runtime performance. Over a large codebase, these savings pile up into a noticeable difference. Here's an example from the Rust standard library that explains this well.
```rust
use std::borrow::Cow;

fn abs_all(input: &mut Cow<[i32]>) {
    for i in 0..input.len() {
        let v = input[i];
        if v < 0 {
            // Clones into a vector if not already owned.
            input.to_mut()[i] = -v;
        }
    }
}

fn main() {
    // No clone occurs because `input` doesn't need to be mutated.
    let slice = [0, 1, 2];
    let mut input = Cow::from(&slice[..]);
    abs_all(&mut input);

    // Clone occurs because `input` needs to be mutated.
    let slice = [-1, 0, 1];
    let mut input = Cow::from(&slice[..]);
    abs_all(&mut input);

    // No clone occurs because `input` is already owned.
    let mut input = Cow::from(vec![-1, 0, 1]);
    abs_all(&mut input);
}
```
# References
- Pascal Hertleif's [blog](https://deterministic.space/) - He's a very popular and active Rust developer and writes amazing, insightful articles.
- Amos Wenger's [blog](https://fasterthanli.me) - Amos' articles often go over important topics like API design through a comparison angle between Rust and another language to highlight differences and benefits to each approach.
- Stjepan Glavina's [blog](https://stjepang.github.io/) - He's done a lot of interesting perf-related work including optimising sorting in the stdlib and building async libraries. His writeups for the library work are very intriguing and go into great detail about the process.
summary = "The Viscerion experiment that started more than a year ago is now coming to an end. Here's what's happening."
slug = "sunsetting-viscerion"
socialImage = "uploads/viscerion_social.webp"
tags = ["personal"]
title = "Sunsetting Viscerion"
+++
Viscerion is one of my better-known and loved apps, one that I myself continued to enjoy working on and using. The project started back in 2018, following a short stint working on WireGuard's own Android app, and is now being shut down.
> TL;DR: The work I have been doing on Viscerion for the past year will become a part of the upstream WireGuard app over the next 6 months under an agreement between me and Jason Donenfeld, the WireGuard creator and lead developer.
## The story behind Viscerion
When I initially started this project, it was called WireGuard-KT, and as the dumb and literal name suggests, it began with me rewriting the app in Kotlin. I was not a huge fan of Kotlin at that point in time, but I was eager to learn and this was the perfect opportunity. My ambitions were rather too lofty for the upstream project at that time, and they had to let me go from the internship position, while presenting the option to pursue everything I had planned in a personal capacity.
While I was working on the upstream app, I was seeding builds of my staging branches to a group of friends, who also became the first users and testers of WireGuard-KT. They encouraged me to publish the app to the Play Store; that listing has since been unpublished over copyright concerns about the similarity of the name, which resulted in the rebranding of the project as Viscerion.
## Fast-forwarding to today
Jason contacted me, extending an invitation to bring my work from Viscerion to upstream under a paid contract. This would involve shutting down Viscerion, since the reason it was created in the first place was now void (think Inbox and Gmail, but in an alternate universe where the most important features weren't being skipped over). After coming to a mutual agreement over which features and changes would be ported and what the process of deprecating Viscerion would look like, I was officially hired and given full push access.
## What's going to happen with Viscerion
I have submitted a final [5.2.11](https://github.com/msfjarvis/viscerion/releases/latest) release to the [Play Store](https://play.google.com/store/apps/details?id=me.msfjarvis.viscerion), and the repository has been made read-only. The Play Store listing will be unpublished after 60 days and will only be available to existing users. Hearty thanks to every single user of Viscerion that has helped make this experiment a roaring success and to Jason for finally coming around :p
summary = "I recently moved from forwarding my email through Google to hosting it through Purelymail.com. Here are some thoughts about the process and the motivation behind it"
Email is a very crucial part of my workflow, and I enjoy using it (which is also why I'm beyond excited for what Basecamp has in store with [hey.com](https://hey.com)). I have switched emails a couple of times over the many years I have had an internet presence, finally settling on [me@msfjarvis.dev](mailto:me@msfjarvis.dev) when I bought my domain. There began the problem.
I attempt to self-host things when reasonable, to retain some control and avoid a single point of failure outside my control that could lock me out. With email, self-hosting is a constant, uphill battle against spam filters to ensure your domain doesn't land in a big filter list that will start trashing all your email and make life hard. Because of this, I never self-hosted email, instead choosing to forward it through Google Domains (the registrar for this domain) to my existing Google account. While this is a very reliable approach, it still means depending heavily on my Google account. That dependence has proven to be a problem in many ways, from people being locked out after opting into Advanced Protection to accounts being banned for reasons completely unrelated to email. If something like this were to happen to me, I would lose both my Google and my domain email instantly. A very scary position to be in for anybody.
A couple days ago, [Jake Wharton](https://twitter.com/JakeWharton) retweeted a blog post from [Roland Szabo](https://twitter.com/rolisz) titled ['Moving away from GMail'](https://rolisz.ro/2020/04/11/moving-away-from-gmail/). I read through it, looked at PurelyMail, and was convinced that it was really the solution for my little problem. I am a big believer in paying in dollaroos rather than data so I really loved the transparency behind pricing, data use, infrastructure and just about everything else. Signed up!
## Migration
Like any other email provider, all you need to configure for PurelyMail to work is DNS. I use Cloudflare for my sites, so there was nothing to do on the Google Domains side of things. I left the forwarding setup as-is to allow any lagging DNS resolvers to still get email to me, even if it's to my Google account. I hope to get rid of that setting in the near future, once I'm confident the change has propagated. I maintain my DNS settings in a git repository, using StackExchange's excellent [dnscontrol](http://stackexchange.github.io/dnscontrol/) tool. DNSControl operates on a JS-like syntax that is parsed, evaluated and then used to publish to the DNS provider of choice. Neat stuff! The changes required looked something like this:
```diff
diff --git dnsconfig.js dnsconfig.js
index 29b8d1a927ab..01ea2af1d448 100644
--- dnsconfig.js
+++ dnsconfig.js
@@ -6,7 +6,7 @@
var REG_NONE = NewRegistrar('none', 'NONE');
var DNS_CF = NewDnsProvider('cloudflare', 'CLOUDFLAREAPI');
```
The only 'unexpected' change I had to make was to disable Cloudflare's proxy feature for the CNAME records. Once that was done, PurelyMail was instantly able to verify all DNS records and I was in business.
## Pros and Cons of the switch
I've been on PurelyMail for about a day now and poked around enough to have a comprehensive idea of what's different from my usual GMail flow, so let's get into that.
### Pros
#### You pay for it
By now everybody must have realized a simple fact: if you're not paying, you're the product. I do not wish to be a product. Your hard-earned money is more likely to keep companies from being shady than your emails are. PurelyMail is a one-man operation, which makes it more trustworthy to me than Google's massive scale. Google does not care about a single user; PurelyMail will.
#### Transparency
PurelyMail tells you upfront about what they charge and how they arrive at that number. There is no contract period, and if you wish to have fine grained control over what you pay, you can use their advanced pricing section to calculate your costs based on your exact needs. The website is straightforward and to the point, there is no glossy advertising to obscure flaws, and their security practices are all [documented](https://purelymail.com/docs/security) on their site, front and center.
#### Failsafe
My GMail is tied to my Google account, which means anything that flags my Google account takes my email down with it. This is a scary position to be in. Having my email separate from my Google account frees me from that looming danger.
#### Easy export
PurelyMail has a tool called [`mailPort`](https://purelymail.com/docs/mailPort) that lets you move email between PurelyMail and other providers. You can bring your entire mailbox to PurelyMail when switching to it, or back to wherever you go next should it not feel sufficient for your needs. No questions asked, and no bullshit. It just works.
#### No client lock-in
Because PurelyMail has no bells and whistles, you won't be penalized on the feature side of things if you use one client compared to another. Things stay consistent.
### Cons
#### You pay for it
I am in a fortunate position where I can pay for things solely based on principle, without having to worry _too_ much. Not everybody is similarly blessed, and you may simply face practical hurdles when paying for things online: Stripe and PayPal are not available globally, and fees are often insane. I completely understand.
#### Roundcube is great, but it ain't no GMail
PurelyMail uses the Roundcube frontend for its webmail offering, with a couple of extra themes. It's not the prettiest, and it does not have a lot of the bells and whistles that you might be accustomed to from GMail. The change is honestly a bit rough, but the pros certainly outweigh the cons. On the bright side, it's easier to influence product direction at PurelyMail, so get on the issue tracker and request or vote for features!
#### No dedicated client
Not having a specialized client unfortunately also means that you'll have to shop around for what works. I still use the GMail mobile app, but K-9 Mail is also pretty decent.
## Conclusion
I have begun moving my various accounts to my domain mail as and when they remind me of their existence (55 left still, if my [pass](https://passwordstore.org/) repository is to be believed), and hope to eventually be able to get by without the pinned GMail tab in my browser :)
PurelyMail has proven to be an excellent platform so far. Support has been swift and helpful, and I haven't had any bad surprises. I hope to be a content user for as long as I possibly can :)
summary = "Kotlin's been great for me -- and millions others, as evident by its explosive growth. Long-time Java developers may feel hesitant to give it a shot. This series aims to smoothen this transition, letting people know what benefits they might reap from Kotlin, and what differences should they be careful about."
title = "#TeachingKotlin - Kotlin for Android Java developers"
+++
Anybody familiar with my work knows that I am a fan of the [Kotlin](https://kotlinlang.org/ "Kotlin") programming language, especially its interoperability with Java on Android. I'll admit, I've not been a fan since day one. The abundant lambdas worried me, and everything being that much shorter to implement was confusing to a person whose first real programming task was in Java.
As I leaped over the initial hurdle of hesitation and really got into Kotlin, I was mindblown. Everything is so much better! Being able to break away from Java's explicit verbosity into letting the language do things for you is a bit daunting at first but over time you'll come to appreciate the time you save and in turn how many potential problems you can avoid by simply not having to do everything yourself. [Can't have bugs if you don't write code](https://github.com/kelseyhightower/nocode) :p
As I've gotten deeper into the Kotlin ecosystem and community and converted developers into taking that first step towards adopting Kotlin, I've realised most of them have a common set of concerns, and often a lack of knowledge about what Kotlin actually brings to the table and what the drawbacks of using a "new" language over an established behemoth like Java are.
Hence I've decided to publish a series of posts outlining exactly that -- what to expect when moving to Kotlin from Java: the benefits, the common pitfalls, and the current limitations that may or may not hinder said move. The first post of the series will go up on the upcoming Monday at 6:00 PM IST (Indian Standard Time), and all following ones will be published at the same time every week. I'd like to keep this up for as long as possible, so I'm not declaring this an `n`-part series right off the bat. We'll figure it out as we go :)
summary = "Part 1 of my #TeachingKotlin, this post goes over Kotlin classes, objects and how things like finality and staticity vary between Java and Kotlin."
slug = "teaching-kotlin--classes-and-objects"
tags = []
title = "#TeachingKotlin Part 1 - Classes and Objects and everything in between"
Classes in Kotlin closely mimic their Java counterparts in implementation, with some crucial changes that I will attempt to outline here.
Let's declare two identical classes in Kotlin and Java as a starting point. We'll be making changes to them alongside to show how different patterns are implemented in the two languages.
Java:
{{<highlight java>}}
class Person {
    private final String name;

    public Person(String name) {
        this.name = name;
    }
}
{{</highlight>}}
Kotlin:
{{<highlight kotlin>}}
class Person(val name: String)
{{</highlight>}}
The benefits of using Kotlin immediately start showing! But let's go over this in a systematic fashion and break down each aspect of what makes Kotlin so great.
## Constructors and parameters
Kotlin uses a very compact syntax for describing primary constructors. With some clever tricks around default values, we can create many constructors out of a single one!
Notice the `val` in the parameter list. It's a concise syntax for declaring properties and initializing them from the constructor itself. Like any other property, they can be mutable (`var`) or immutable (`val`). If you remove the `val` in our `Person` constructor, you will not have a `name` property available on instances, i.e., `Person("Person 1").name` will not resolve.
The primary constructor cannot contain any code, so Kotlin provides 'initializer blocks' to let you run initialization code from your constructor. Try running the code below in the [Kotlin playground](https://play.kotlinlang.org/):
{{<highlight kotlin>}}
class Person(val name: String) {
    init {
        println("Invoking constructor!")
    }
}

fun main() {
    Person("Matt")
}
{{</highlight>}}
Moving on, let's add an optional age parameter to our classes, with a default value of 18. To make it easy to see how different constructors affect values, we're also including an implementation of the `toString` method for some classic print debugging.
Java:
{{<highlight java>}}
class Person {
    private final String name;
    private int age = 18;

    public Person(String name) {
        this.name = name;
    }

    public Person(String name, int age) {
        this(name);
        this.age = age;
    }

    @Override
    public String toString() {
        return "Name=" + name + ",age=" + Integer.toString(age);
    }
}
{{</highlight>}}
Kotlin:
{{<highlight kotlin>}}
class Person(val name: String, val age: Int = 18) {
    override fun toString() : String {
        // I'll go over string templates in a future post, hold me to it :)
        return "Name=$name,age=$age"
    }
}
{{</highlight>}}
Lots of new things here! Let's break them down.
Kotlin has a feature called 'default parameters', that allows you to specify default values for parameters, thus making them optional when creating an instance of the class.
Let's take these for a spin on [repl.it](https://repl.it)!
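Here's roughly what that spin looks like, with the output in comments (the names are made up for illustration):
{{<highlight kotlin>}}
fun main() {
    println(Person("John"))     // Name=John,age=18 -- the default value kicks in
    println(Person("Jane", 25)) // Name=Jane,age=25
}
{{</highlight>}}
The Java version needs the explicit two-constructor dance from above to support the same two call shapes.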
Both work perfectly well, but you know which one you'd enjoy writing more ;)
An important note here is that constructors with default values don't directly work from Java, which matters if you're writing a library or any code that needs to interop with Java. Use Kotlin's `@JvmOverloads` annotation to handle that for you.
{{<highlight kotlin>}}
class Person @JvmOverloads constructor(val name: String, val age: Int = 18) {
    override fun toString() : String {
        return "Name=$name,age=$age"
    }
}
{{</highlight>}}
Doing this will generate constructors similar to the ones we previously wrote by hand in Java, allowing both Kotlin and Java callers to work.
## Finality of classes
In Kotlin, all classes are final by default and cannot be inherited from, while Java defaults to extensible classes. The `open` keyword marks Kotlin classes as extensible, and the `final` keyword does the opposite in Java.
Java:
{{<highlight java>}}
public class Man extends Person { /* Class body */ } // Valid in Java
{{</highlight>}}
Kotlin:
{{<highlight kotlin>}}
class Man(val firstName: String) : Person(firstName) // Errors!
{{</highlight>}}
Trying it out in the Kotlin REPL:
{{<highlight kotlin>}}
>>> class Person @JvmOverloads constructor(val name: String, val age: Int = 18) {
...     override fun toString() : String {
...         return "Name=$name,age=$age"
...     }
... }
>>> class Man(val firstName: String) : Person(firstName)
error: this type is final, so it cannot be inherited from
class Man(val firstName: String) : Person(firstName)
                                   ^
{{</highlight>}}
Makes sense, since that's the default in Kotlin. Let's add the `open` keyword to our definition of `Person` and try again.
{{<highlight kotlin>}}
>>> open class Person @JvmOverloads constructor(val name: String, val age: Int = 18) {
...     override fun toString() : String {
...         return "Name=$name,age=$age"
...     }
... }
>>> class Man(val firstName: String) : Person(firstName)
>>> println(Man("Henry"))
Name=Henry,age=18
{{</highlight>}}
And everything works as we'd expect it to. This default is a behavior change that is confusing and undesirable to a lot of people, so Kotlin provides a compiler plugin to mark all classes as `open` by default. Check out the [`kotlin-allopen`](https://kotlinlang.org/docs/reference/compiler-plugins.html#all-open-compiler-plugin) page for more information on how to configure the plugin for your needs.
## Static utils classes
Everybody knows that you don't have a real project until you have a `StringUtils` class. Usually it'd be a `public static final` class with a bunch of static methods. While Kotlin has a sweeter option of [extension functions and properties](https://kotlinlang.org/docs/tutorials/kotlin-for-py/extension-functionsproperties.html), for purposes of comparison we'll stick with the old Java way of doing things.
Here's a small function I use to convert Android's URI paths to human-readable versions.
Java:
{{<highlight java>}}
public static final class StringUtils {
    public static String normalizePath(final String str) {
        return str.replace("/document/primary:", "/sdcard/");
    }
}
{{</highlight>}}
Kotlin:
{{<highlight kotlin>}}
// I'll cover this declaration style too. It's just the first post!
fun normalizePath(str: String): String = str.replace("/document/primary:", "/sdcard/")
{{</highlight>}}
A recurring pattern with Kotlin is concise code, as you can see in this case.
That's all for this one! Let me know in the comments about what you'd prefer to be next week's post about or if you feel I missed something in this one and I'll definitely try to make it happen :)
summary = "The second post in #TeachingKotlin series, this post goes over Kotlin's variables and their attributes, like visibility and getters/setters."
Let's start with a simple [data class](https://kotlinlang.org/docs/reference/data-classes.html#data-classes) and see how the variables in there behave.
```kotlin
data class Student(val name: String, val age: Int, val subjects: ArrayList<String>)
```
To use the variables in this class, Kotlin lets you directly use dot notation for access.
```kotlin
>>> val s1 = Student("Keith Hernandez", 21, arrayListOf("Mathematics", "Social Studies"))
>>> println(s1.name)
Keith Hernandez
>>> println(s1) // data classes automatically generate `toString` and `hashCode`
Student(name=Keith Hernandez, age=21, subjects=[Mathematics, Social Studies])
```
For Java callers, Kotlin also generates getter and setter methods.
```java
// (assumes java.util.ArrayList and java.util.Arrays are imported)
final Student s1 = new Student("Keith Hernandez", 21, new ArrayList<>(Arrays.asList("Mathematics", "Social Studies")));
System.out.println(s1.getName());
System.out.println(s1);
```
The same properties apply to variables in non-data classes as well.
```kotlin
>>> class Item(id: Int, name: String) {
... val itemId = id
... val itemName = name
... }
>>> val item = Item(0, "Bricks")
>>> println(item.itemId)
0
>>> println(item)
Line_4$Item@46fb460a
```
As you can see, the default `toString` implementation is nothing like our data class's, but that's a topic for another post. Back to variables!
## Customizing getters and setters
While Kotlin creates getters and setters automatically, we can customize their behavior.
```kotlin
class Item(id: Int, name: String) {
    var itemId = id
    var itemName = name
    var currentState: Pair<Int, String> = Pair(itemId, itemName)
        set(value) {
            itemId = value.first
            itemName = value.second
            field = value
        }

    override fun toString() : String {
        return "id=$itemId,name=$itemName"
    }
}
```
Let's take this for a spin in the Kotlin REPL and see how our `currentState` field behaves.
```kotlin
>>> val item = Item(0, "Nails")
>>> println(item)
id=0,name=Nails
>>> item.currentState = Pair(1, "Bricks")
>>> println(item)
id=1,name=Bricks
```
Notice how setting a new value to `currentState` mutates the other properties as well? That's because of our custom setter. These setters behave like normal functions, except that the property's backing field is available as the variable `field` for manipulation.
## Visibility modifiers
Kotlin's visibility modifiers aren't very well explained. There are the standard `public`, `private` and `protected` modifiers, but also the new `inner` and `internal` keywords. I'll attempt to fill in those gaps.
### `inner`
`inner` is a modifier that only applies to classes declared within another one. It allows you to access members of the enclosing class. A sample might help explain this better.
```kotlin
class Outer {
private val bar: Int = 1
inner class Inner {
fun foo() = bar
}
}
val demo = Outer().Inner().foo() // == 1
```
The keyword `this` does not behave the way some would expect in inner classes; go through the Kotlin documentation for `this` [here](https://kotlinlang.org/docs/reference/this-expressions.html) and I'll be happy to answer any further questions :)
### `internal`
`internal` applies to classes as well as methods and properties. It makes the declaration 'module-local', allowing it to be accessed within the same module and nowhere else. A module in this context is a logical compilation unit, like a Gradle subproject.
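A tiny sketch, assuming two Gradle subprojects where `app` depends on `core`:
```kotlin
// In module `core`:
internal fun debugToken(): String = "s3cr3t"

// In module `app`, this does not compile:
// fun main() = println(debugToken())
// error (roughly): cannot access 'debugToken': it is internal in its module
```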
That's all for today! Hope you're liking the series so far. I'd love to hear feedback on what you want me to cover next and how to improve what I write :)
summary = "Part 3 of #TeachingKotlin covers some subtle differences between Kotlin and Java that might affect your codebases as you start migrating to or writing new code in Kotlin."
title = "#TeachingKotlin Part 3 - Caveats coming from Java"
+++
When you start migrating your Java code to Kotlin, you will encounter multiple subtle changes that might catch you off guard. I'll document some of these gotchas that I and other people I follow have found and written about.
## Splitting strings
Java's `java.lang.String#split` [method](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#split-java.lang.String-) takes a `String` as its first argument and creates a `Regex` out of it before attempting to split. Kotlin, however, has two variants of this method: one takes a `String` and uses it as a plain-text delimiter, and the other takes a `Regex`, behaving like the Java method we mentioned earlier. Code that was directly converted from Java to Kotlin will fail to accommodate this difference, so be on the lookout.
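A quick sketch of the difference (values are illustrative):
```kotlin
fun main() {
    // Kotlin's String overload treats "." as a literal delimiter:
    println("192.168.0.1".split("."))           // [192, 168, 0, 1]
    // The Regex overload behaves like Java's split, where "." matches any character:
    println("192.168.0.1".split(".".toRegex())) // a list of empty strings
}
```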
## Runtime asserts
Square's [Jesse Wilson](https://twitter.com/jessewilson) found through an [OkHttp bug](https://github.com/square/okhttp/issues/5586) that Kotlin's `assert` function differs from Java's in a very critical way - the asserted expression is _always_ executed. He's written about it on his blog which you can check out for a proper write up: [Kotlin’s Assert Is Not Like Java’s Assert](https://publicobject.com/2019/11/18/kotlins-assert-is-not-like-javas-assert/).
TL;DR: Java's `assert` checks the `java.lang.Class#desiredAssertionStatus` method **before** evaluating the expression, but Kotlin does it **after**, which results in unnecessary and potentially significant overhead.
```java
// Good :)
@Override void flush() {
    if (Http2Stream.class.desiredAssertionStatus()) {
        if (!Thread.holdsLock(Http2Stream.this) == false) {
            throw new AssertionError();
        }
    }
}
```
```kotlin
// Bad :(
override fun flush() {
    if (!Thread.holdsLock(this@Http2Stream) == false) {
        if (Http2Stream::class.java.desiredAssertionStatus()) {
            throw AssertionError()
        }
    }
}
```
## Binary incompatibility challenges
[Jake Wharton](https://twitter.com/JakeWharton) wrote with his usual in-depth detail about how Kotlin's `data` class modifier makes it a challenge to modify a public API without breaking source and binary compatibility. Kotlin's sweet language features that provide things like default values in constructors and destructuring components become the very thing that inhibits binary compatibility.
Take about 10 minutes out and give Jake's article a read: [Public API challenges in Kotlin](https://jakewharton.com/public-api-challenges-in-kotlin/).
## Summary
While migrating from Java to Kotlin is great, there are many subtle differences between the languages that can blindside you and must be taken into account. It's more than likely that these problems will never affect you, but it's helpful to know what's up when they do :)
summary = "GitHub Actions is a power CI/CD platform that can do a lot more than your traditional CI systems. Here's some tips to get you started with exploring its true potential."
slug = "github-actions-tips-tricks"
socialImage = "/uploads/actions_social.webp"
tags = ["tips and tricks", "schedules", "jobs", "workflows"]
title = "Tips and Tricks for GitHub Actions"
+++
GitHub Actions has grown at a rapid pace and become the CI platform of choice for most open source projects. The recent changes to Travis CI's pricing for open source are certainly bound to accelerate this even more.
Being a first-party addition to GitHub, Actions has nearly infinite potential to run jobs in reaction to changes on GitHub. You can automatically add labels to newly opened pull requests, greet first-time contributors, and more.
Let's go over some things that you can do with Actions, and we'll end it with some safety related tips to ensure that your workflows are secure from both rogue action authors as well as rogue pull requests.
## Running workflows based on a cron trigger
GitHub Actions can trigger the execution of a workflow in response to a large list of events as given [here](https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows), one of them being a cron schedule. Let's see how we can use the schedule feature to automate repetitive tasks.
For [Android Password Store](https://msfjarvis.dev/aps), we maintain a list of known [public suffixes](https://publicsuffix.org/) to be able to efficiently detect the 'base' domain of the website we're autofilling into. This list changes frequently, and we typically sync our repository with the latest copy on a weekly basis. Actions enables us to do this automatically:
```yaml
name: Update Public Suffix List data
on:
schedule:
- cron: "0 0 * * 6"
jobs:
update-publicsuffix-data:
# The actual workflow doing the update job
```
Putting the cron expression into [crontab guru](https://crontab.guru/#0_*_*_*_6), you can see that it executes at 12 AM every Saturday. Going through the merged pull requests in APS, you will also notice that the [publicsuffixlist pull requests](https://github.com/android-password-store/Android-Password-Store/pulls?q=is%3Apr+is%3Amerged+sort%3Aupdated-desc+label%3APSL) indeed land no sooner than 7 days apart.
Mine is a very simple example of how you can use cron triggers to automate parts of your workflow. The [Rust](https://github.com/rust-lang) project uses these same triggers to implement a significantly more important aspect of their daily workings. Rust maintains a repository called [glacier](https://github.com/rust-lang/glacier) which contains a list of internal compiler errors (ICEs) and code fragments to reproduce each of them. Using a similar cron trigger, this repository checks each new nightly release of Rust to see if any of these compiler crashes were resolved silently by a refactor. When it comes across an ICE that was fixed (the fragment compiles correctly or fails with errors rather than crashing the compiler), it files a [pull request](https://github.com/rust-lang/glacier/pulls?q=is%3Apr+author%3Aapp%2Fgithub-actions+sort%3Aupdated-desc) moving the reproduction file to the `fixed` pile.
## Running jobs based on commit message
Continuous delivery is great, but sometimes you want slightly more control. Rather than run a deployment task on each push to your repository, what if you want it to only run when a specific keyword is in the commit message? Actions supports this natively, and the deployment pipeline of this very site relies on the feature. A trimmed-down sketch of the relevant bits (the job names are illustrative):
```yaml
jobs:
  deploy:
    if: "contains(github.event.head_commit.message, '[deploy]')"
    # Set up wrangler and push to the production environment
  deploy-staging:
    if: "contains(github.event.head_commit.message, '[staging]')"
    # Set up wrangler and push to the staging environment
```
This snippet defines a job that is only executed when the top commit of the push contains the text `[deploy]` in its message, and another that only runs when the commit message contains `[staging]`. Together, these let me control if I want a change to not be immediately deployed, deployed to either the main or staging site, or to both at the same time. So now I can update a draft post without a full re-deployment of the main site, or make a quick edit to a published post that doesn't need to be reflected in the staging environment.
The core logic of this operation is composed of three parts. The [github context](https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions#github-context), the [if conditional](https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#jobsjob_idif) and the [contains](https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions#contains) method. The linked documentation for each does a great job at explaining them, and has further references to allow you to fulfill even more advanced use cases.
## Testing across multiple configurations in parallel
Jobs in a workflow run in parallel by default, and GitHub comes with an amazing matrix functionality that can automatically generate multiple jobs for you from a single definition. Take this specific example: say we want to test some Rust code across multiple platforms and release channels.
In GitHub Actions, we can simply provide the platforms (Windows, macOS, and Ubuntu) and the Rust channels (stable, beta, and nightly) inside a single job and let it figure out how to generate the permutations and create separate jobs for them. To configure such a matrix, we write something like this:
```yaml
jobs:
check-rust-code:
strategy:
# Defines a matrix strategy
matrix:
# Sets the OSes we want to run jobs on
os: [ubuntu-latest, windows-latest, macOS-latest]
# Sets the Rust channels we want to test against
rust: [stable, beta, nightly]
# Make the job run on the OS picked by the matrix
runs-on: ${{ matrix.os }}
steps:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
components: rustfmt, clippy
# Installs the Rust toolchain for the channel picked by the matrix
toolchain: ${{ matrix.rust }}
```
This will automatically generate 9 (3 platforms \* 3 Rust channels) parallel jobs to test this entire configuration, without requiring us to manually define each of them. [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) at its finest :)
## Make a job run after another
By default, jobs defined in a workflow file run in parallel. However, we might need a more sequential order of execution in some cases, and GHA includes support for this too. Let's try another real-world example!
[LeakCanary](https://github.com/square/leakcanary) has a [checks job](https://github.com/square/leakcanary/blob/f5343aca6e019994f7e69a28fac14ca18e071b88/.github/workflows/main.yml) that runs on each push to the main branch and on each pull request. They wanted to add support for snapshot deployment in order to finally retire Travis CI. To make this happen, I simply added a [new job](https://github.com/square/leakcanary/pull/2044/commits/a6f6c204559396120836b27c0b2a46d3e444c728) to the same workflow, having it run only on push events and giving it a dependency on the checks job. This ensures that there won't be a snapshot deployment until all tests are passing on the main branch. The relevant parts of the workflow configuration are here:
```yaml
on:
pull_request:
push:
branches:
- main
jobs:
checks:
# Runs automated unit and instrumentation tests
snapshot-deployment:
# Only run if the push event triggered this workflow run
if: "github.event_name == 'push'"
# Run after the 'checks' job has passed
needs: [checks]
```
# Mitigating security concerns with Actions
GitHub Actions benefits from a vibrant ecosystem of user-authored actions, which also opens it up to abuse. It is relatively easy to work around the common attack vectors, and I'm going to outline them here. I'm no authority on security, and these recommendations are based on a combination of my reading and understanding. These _should_ be helpful, but the list is not exhaustive, and you should exercise all the caution you can.
## Use exact commit hashes rather than tags
Tags are moving qualifiers, and can be [force pushed at any moment](https://julienrenaux.fr/2019/12/20/github-actions-security-risk/). If the repository for an Action you use in your workflows is compromised, the tag you use could be force pushed with a malicious version that sends your repository secrets to a third-party server. Auditing the source of a repository at a given tag, then using the SHA1 commit hash it currently points to as the version, addresses that concern, since it is practically impossible to forge a new commit with the exact same hash.
To get the commit hash for a specific tag, head to the Releases page of the repository, then click the short SHA1 hash below the tag name and copy the full hash from the URL.
![A tag along with its commit hash](/uploads/actions_tips_tricks_commit_hash.webp)
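In the workflow file, the pinned version then looks something like this (the hash below is a placeholder; keeping the human-readable tag in a comment helps future you):
```yaml
steps:
  - uses: actions/checkout@<full-40-character-commit-sha> # v2
```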
A more extreme fix for this problem is to [vendor](https://stackoverflow.com/questions/26217488/what-is-vendoring) each third-party action you use into your own repository, and then use the local copy as the source. This puts you in charge of manually syncing the source to each version, but allows you to restrict the allowed Actions to the ones in your repository, greatly increasing security. Having to manually sync can get tedious if your workflows involve a lot of third-party actions, but it also gives you slightly better visibility into the changes between versions, since they'd be available in a single PR diff.
To use an Action from a local directory, replace the `uses:` line with the relative path to the local copy in the repository.
```diff
 jobs:
   checks:
     steps:
       - name: Checkout repository
         # Assuming the copy of actions/checkout is at .github/actions/checkout
-        uses: actions/checkout@v2
+        uses: ./.github/actions/checkout
```
## Replace `pull_request_target` with `pull_request`
[`pull_request_target`](https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#pull_request_target) grants a PR access to a github token that can write to your repository, exposing your code to modification by a malicious third-party who simply needs to open a PR against your repository. Most people will already be using the safe [`pull_request`](https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#pull_request) event, but if you are not, audit your requirements for `pull_request_target` and make the switch.
```diff
-on: [push, pull_request_target]
+on: [push, pull_request]
```
{{<horizontal_line>}}
I'm still learning about Actions, and there is a lot that I did not cover here. I highly encourage readers to refer to the GitHub docs for [Workflow syntax](https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions) and [Context and expression syntax](https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions) to learn more about the workflow configuration capabilities. Let me know if you find something cool that I did not cover here!
summary: Renovate is an extremely powerful tool for keeping your dependencies
up-to-date, and its flexibility is often left unexplored. I'm hoping to change
that.
draft: false
slug: tips-and-tricks-for-using-renovate
tags:
- dependency-management
- renovate
title: Tips and tricks for using Renovate
---
[Mend Renovate](https://www.mend.io/free-developer-tools/renovate/) is a free-to-use dependency update management service powered by the open-source [renovate](https://github.com/renovatebot/renovate), and is a compelling alternative to GitHub's blessed solution for this problem space: [Dependabot](https://docs.github.com/en/code-security/dependabot). Renovate offers a significantly larger suite of supported language ecosystems compared to Dependabot, as well as fine-grained control over where it finds dependencies, how it chooses updated versions, and a lot more. TL;DR: Renovate is a massive upgrade over Dependabot, and you should evaluate it if _any_ aspect of Dependabot has caused you grief; there's a good chance Renovate does it better.
I'm collecting some tips here about "fancy" things I've done using Renovate that may be helpful to other folks. You'll be able to find more details about all of these in their very high quality docs at [docs.renovatebot.com](https://docs.renovatebot.com/).
## Disabling updates for individual packages
There are times when you're sticking with an older version of a package (temporarily or otherwise) and you just don't want to see PRs bumping it, wasting CI resources on an upgrade that will probably fail and is definitely not going to be merged. Renovate offers a convenient way to do this:
```json
{
"packageRules": [
{
"managers": ["gradle"],
"packagePatterns": ["^com.squareup.okhttp3"],
"enabled": false
}
]
}
```
## Grouping updates together
Renovate already includes preset configurations for [monorepos](https://github.com/renovatebot/renovate/blob/b4d1ad8e5210017a3550c9da4342b0953a70330a/lib/config/presets/internal/monorepo.ts) that publish multiple packages with identical versions, but you can also easily add more of your own. As an example, here's how you can combine updates of the serde crate and its derive macro.
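A sketch of such a rule (the group name is arbitrary):
```json
{
  "packageRules": [
    {
      "description": "Group serde and serde_derive updates",
      "matchPackageNames": ["serde", "serde_derive"],
      "groupName": "serde"
    }
  ]
}
```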
## Limiting the allowed versions of a package
Sometimes you may need to set an upper bound on a package dependency to avoid breaking changes or regressions. Renovate offers intuitive support for this, as sketched below.
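A sketch using the `allowedVersions` option (the package name is illustrative):
```json
{
  "packageRules": [
    {
      "matchPackageNames": ["django"],
      "allowedVersions": "<3.0"
    }
  ]
}
```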
## Updating dependencies Renovate cannot detect on its own
Dependency versions are sometimes specified without their package names, for example in config files. These cannot be automatically detected by Renovate, but you can use a regular expression to teach it how to identify them.
For example, you can specify the version of Hugo to build your Netlify site with in the `netlify.toml` file in your repository.
```toml
[build.environment]
HUGO_VERSION = "0.109.0"
```
Here's roughly how the relevant configuration might look with Renovate (the match string and datasource below are an illustrative sketch):
```json
{
  "regexManagers": [
    {
      "description": "Update Hugo version in Netlify config",
      "fileMatch": ["^netlify\\.toml$"],
      "matchStrings": ["HUGO_VERSION = \"(?<currentValue>.*?)\""],
      "depNameTemplate": "gohugoio/hugo",
      "datasourceTemplate": "github-releases"
    }
  ]
}
```
You can read more about Regex Managers [here](https://docs.renovatebot.com/modules/manager/regex/).
## Making your GitHub Actions usage more secure
According to GitHub's [official recommendations](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-third-party-actions), you should be using exact commit SHAs instead of tags for third-party actions. However, this is a pain to do manually. Instead, allow Renovate to manage it for you!
```json
{
"extends": [
"config:base",
":dependencyDashboard",
"helpers:pinGitHubActionDigests"
]
}
```
## Automatically merging compatible updates
Every person with a JavaScript project has definitely "loved" getting 20 PRs from Dependabot about arbitrary transitive dependencies they didn't even realise they had. With Renovate, that pain can be automated away, provided you have a robust enough test suite to permit automatic merging of minor updates. A sketch of such a configuration follows.
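A minimal sketch that enables branch-based automerging for minor and patch updates (adjust to taste):
```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true,
      "automergeType": "branch"
    }
  ]
}
```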
With this configuration, Renovate will push compatible updates to `renovate/$depName` branches and merge them automatically into your main branch if CI runs on the branch and passes. To make that happen, you will also need to update your GitHub Actions workflows.
```diff
name: Run tests
on:
pull_request:
branches:
- main
+ push:
+ branches:
+ - renovate/**
```
## Closing notes
This list currently consists exclusively of things I've used in my own projects. There is way more you can achieve with Renovate, and I recommend going through the docs at [docs.renovatebot.com](https://docs.renovatebot.com/) to find any useful knobs for the language ecosystem you wish to use it with. If you come across something interesting not covered here, let me know either below or on Mastodon at [@msfjarvis@androiddev.social](https://androiddev.social/@msfjarvis)!
summary = "Building libraries is hard, and keeping track of your public API surface harder. Kotlin 1.4's explicit API mode tries to make the latter not be difficult anymore."
draft = true
slug = "tips-for-building-kotlin-libraries"
socialImage = "/uploads/kotlin_social.webp"
tags = ["libraries"]
title = "Tips and tricks for building libraries in Kotlin"
+++
Building a library is arguably a far more involved task than building an application. You need to be _extra_ mindful of your dependencies, and ensure that you are not breaking source and/or binary compatibility unintentionally. When doing so in Kotlin, you may also need to provide an idiomatic API surface for Java callers if you're offering JVM support.
I have _some_ experience building libraries, and have had the fortune of seeing a **lot** of other, much smarter people do it. This post aims to serve as a collection of what I've learned by doing things myself and observing others, that will hopefully be helpful to people trying their hand at library development.
## Avoid `data` classes in your public API
Kotlin's [data classes](https://kotlinlang.org/docs/reference/data-classes.html#data-classes) are a fantastic language feature, but unfortunately they pose many challenges. Jake Wharton has written about this in great detail over on [his blog](https://jakewharton.com/public-api-challenges-in-kotlin/), but I will reproduce the problem here as a TL;DR for people who just want to get an overview of the problem.
Here's an example class:
```kotlin
data class Example(
    val username: String,
    val id: Int,
)
```
Compiling this with `kotlinc` then disassembling it with `javap` gives us this:
```java
public final class Example {
  public final java.lang.String getUsername();
  public final int getId();
  public Example(java.lang.String, int);
  public final java.lang.String component1();
  public final int component2();
  public final Example copy(java.lang.String, int);
  public static Example copy$default(Example, java.lang.String, int, int, java.lang.Object);
  public java.lang.String toString();
  public int hashCode();
  public boolean equals(java.lang.Object);
}
```
Now, let's add a new field there, giving it a default value and adding a secondary constructor with the previous signature to try to keep the old API intact. The change looks something like this (the new field's name is illustrative):
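```diff
 data class Example(
     val username: String,
+    val realname: String = "",
     val id: Int,
-)
+) {
+    constructor(username: String, id: Int) : this(username, "", id)
+}
```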
What we did here was add a secondary constructor with the previous signature, as a way of preserving backwards compatibility. As Jake notes in his article, even this effort breaks the public API. Let's compile and disassemble this again to see why.
```diff
 Compiled from "Example.kt"
 public final class Example {
   public final java.lang.String getUsername();
+  public final java.lang.String getRealname();
   public final int getId();
+  public Example(java.lang.String, java.lang.String, int);
+  public Example(java.lang.String, java.lang.String, int, int, kotlin.jvm.internal.DefaultConstructorMarker);
   public Example(java.lang.String, int);
   public final java.lang.String component1();
-  public final int component2();
-  public final Example copy(java.lang.String, int);
-  public static Example copy$default(Example, java.lang.String, int, int, java.lang.Object);
+  public final java.lang.String component2();
+  public final int component3();
+  public final Example copy(java.lang.String, java.lang.String, int);
+  public static Example copy$default(Example, java.lang.String, java.lang.String, int, int, java.lang.Object);
   public java.lang.String toString();
   public int hashCode();
   public boolean equals(java.lang.Object);
 }
```
If the problem is not immediately apparent, consider this: `component2()` is no longer returning an `int`. This breaks destructuring from Kotlin. The `copy` method's signature also changed, which is another binary-incompatible change.
You can read more details about how to structure your public classes to avoid this, in Jake's post that I linked above.
## (Ab)use `@SinceKotlin` for offering Java-only APIs
Full disclosure: I picked this up from [LeakCanary](https://github.com/square/leakcanary) so credit goes entirely to [Piwai](https://twitter.com/piwai) for thinking of it.
[`@SinceKotlin`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-since-kotlin/) is an annotation offered by Kotlin that allows declarations to be marked with the Kotlin version they were first introduced in. Usage of the annotated classes/methods/properties et al. is then checked at compile time against the `-api-version` compiler flag.
For example, if you write this code:
```kotlin
@SinceKotlin("1.4")
class Example(val username: String)
```
and try to compile it like this:
```bash
$ kotlinc example.kt -Werror -api-version 1.3
error: warnings found and -Werror specified
example.kt:1:1: warning: the version is greater than the specified API version 1.3
@SinceKotlin("1.4")
^
```
you can see that compilation fails. I had to pass in `-Werror` manually here, but I believe the Kotlin Gradle Plugin handles making this an error automatically.
How does that help us offer Java-only APIs though? Well, here's how Piwai [did it](https://github.com/square/leakcanary/blob/69d54f36ed9d3204624d214835ba99898665a346/leakcanary-android-core/src/main/java/leakcanary/LeakCanary.kt#L177-L184):
```kotlin
/**
* Construct a new Config via [LeakCanary.Config.Builder].
* Note: this method is intended to be used from Java code only. For idiomatic Kotlin use
* `copy()` to modify [LeakCanary.config].
*/
@Suppress("NEWER_VERSION_IN_SINCE_KOTLIN")
@SinceKotlin("999.9") // Hide from Kotlin code, this method is only for Java code
fun newBuilder() = Builder(this)
```
Since the version here is set to `999.9`, which Kotlin will hopefully never reach, any attempt to use this from Kotlin will result in a compiler error. The only way to work around this madness is to be equally mad and pass `-api-version 999.9`, which you'd never do, right? 😬
summary = "Rust is an amazing systems language that is on an explosive rise thanks to its memory safety guarantees and fast, iterative development. In this post, I recap some of the tooling that I use with Rust to make coding in it even more fun and intuitive"
[Rust] is a memory-safe systems language that is blazing fast, and comes with no runtime or garbage collector overhead. It can be used to build very performant web services, CLI tools, and even [Linux kernel modules](https://github.com/fishinabarrel/linux-kernel-module-rust)!
[Rust] also provides an assortment of tools to make development faster and more user-friendly. I'll be going over some of them here that I've personally used and found to be amazing.
## cargo-edit
[cargo-edit] is a crate that extends Rust's Cargo tool with `add`, `remove` and `upgrade` commands that allow you to manage dependencies with ease. The [documentation](https://github.com/killercup/cargo-edit/blob/master/README.md#available-subcommands) goes over these options in detail.
I personally find `cargo-edit` useful in projects with a lot of dependencies as it gets tiresome to manually hunt down updated versions.
## cargo-clippy
[cargo-clippy] is an advanced linter for Rust that brings together **331** ([at the time of writing](https://rust-lang.github.io/rust-clippy/stable/index.html)) different lints in one package that's built and maintained by the Rust team.
I've found it to be a great help alongside the official documentation and ["the book"](https://doc.rust-lang.org/book/) as a way of writing cleaner and more efficient Rust code. As a beginner Rustacean, I find it very helpful in breaking away from my patterns from other languages and using more "rust-y" constructs and expressions in my code.
## rustfmt
[rustfmt] is the official formatting tool for Rust code. It's an opinionated, zero-configuration tool that "just works". It has not reached a `1.0` release yet, which entails some [caveats](https://github.com/rust-lang/rustfmt#limitations) with its usage, but in my experience it works for most people and codebases without any hassle.
As a Kotlin programmer I am very used to having an official style guide for consistent formatting across all projects. `rustfmt` brings that same convenience to Rust development, which matters since Rust does not have an official IDE that would handle formatting automatically.
## rls
[rls] is Rust's implementation of Microsoft's [language-server-protocol](https://microsoft.github.io/language-server-protocol/), an attempt at standardizing the interface between language tooling and IDEs to allow things like code completion, find all references and documentation on hover to work seamlessly across different IDEs. [VSCode](https://code.visualstudio.com/) implements the `language-server-protocol` and integrates seamlessly with `rls` using the [rust-lang.rust](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust) extension to create a compelling IDE experience.
Being a beginner, the ability for code to be checked within the editor and not requiring builds for each change is a huge speed-up in the learning and development process. Documentation about crates and errors being available directly on hover is certainly helpful in furthering my knowledge and understanding of the language.
## Conclusion
So this is my list of must-have tooling that has helped me continuously improve as a Rustacean. I'm VERY curious to hear what others are using! I opted to stick with official tools where possible since they've proven very reliable and I seem to find considerably more help online with them, but I'd love to try out non-official alternatives that offer significant benefits :)
This post was supposed to be a monolithic directory of all the CLI-based tooling I use to get things done throughout my day, but it turned out just a bit too long, so I elected to split it into separate posts.
Let's talk about [direnv](https://github.com/direnv/direnv).
## What is direnv?
On the face of it, it's not very interesting. Its GitHub description simply reads 'Unclutter your .profile', which gives you a general idea of what to expect but also grossly undersells it.
What direnv does is improve the experience of working with things like [12-factor apps](https://en.wikipedia.org/wiki/Twelve-Factor_App_methodology). It enables per-directory configurations that would otherwise be 'global'. Let's look into how I use it, to get a solid idea of what you can expect.
## Why do I use it?
I have a separate account for proprietary, work-related things [here](https://github.com/hshandilya-navana), which means that any GitHub tooling I use now needs to be configured with separate credentials for when I'm interacting with work repositories. Bummer!
`direnv` makes this simpler by allowing for environment variables to be set for those repositories only. I mostly use the official GitHub CLI from [here](https://github.com/cli/cli) to interact with the remote repo, so providing a separate GitHub token is just a matter of setting the `GITHUB_TOKEN` environment variable to one that is allowed to interact with the current repo. With direnv, all you need to do is create a `.envrc` file in the repository directory with this:
```bash
export GITHUB_TOKEN=<redacted>
```
and `direnv` will automatically set it when you enter the directory, and more importantly: **reset** it back to its previous value when you exit. This 'unloading' feature makes `direnv` extremely powerful.
{{<asciinema qMkuyVjPSkhNqO6Jo0eQnLiyt>}}
`direnv` also comes with a rich stdlib that lets you do far more than just export environment variables.
Setting up a Python virtualenv:
{{<asciinema irkZWRh00gFVIcH41BRcOvowm>}}
Stripping entries from `$PATH`:
{{<asciinema vbzolwrYnXzBFvhAqMJEFBNRv>}}
Adding entries into `$PATH`:
{{<asciinema C1EhhAoy1y3vSwJaIc0R8o0RY>}}
> You'll notice an unfamiliar `rg -c` command there, it's [ripgrep](https://github.com/BurntSushi/ripgrep), and the `-c` flag counts the number of matches in the string if there are any, and nothing otherwise. We'll talk about it later in this series :)
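For reference, the `.envrc` helpers behind those kinds of tricks are stdlib one-liners (the paths here are illustrative):
```bash
# .envrc
layout python3      # create and activate a virtualenv scoped to this directory
PATH_add ./scripts  # prepend ./scripts to PATH while inside this project
```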
The possibilities are huge! To check out the stdlib yourself, run `direnv stdlib` after installing `direnv`.
This was part 1 of the [Tools of the trade](/categories/tools-of-the-trade/) series.
Continuing [this series](/categories/tools-of-the-trade/), let's talk about [fd](https://github.com/sharkdp/fd).
## What is fd?
`fd` is an extremely fast replacement for the GNU coreutils' `find(1)` tool. It's written in Rust, and is built for humans, arguably unlike `find(1)`.
## Why do I use it?
Other than the obvious speed benefits, one of the most critical improvements you'll notice in your workflow with `fd` is the presence of good defaults. By default `fd` ignores hidden files and folders, and respects `.gitignore` and similar files. Here's a small comparison to show you the differences between `fd` and `find(1)`'s default behaviors.
Running both `find` and `fd` on the repository for this website, then piping the results into [del.dog](https://del.dog):
```bash
$ find | paste
https://del.dog/raw/greconillo
```
```bash
$ fd | paste
https://del.dog/raw/thelerrell
```
If you check both those links, you'll observe that `find(1)` returns a significantly higher number of results compared to `fd`. Looking closely, you'll also notice that `find(1)` has dumped the entire `.git` directory into the results, along with the `public` directory of Hugo which contains the built site. These are surely important directories, but you almost **never** want to search through your `.git` directory or build artifacts. `fd` shines here by excluding them automatically, while being significantly faster than `find(1)` even when both return the exact same set of results.
On top of these, `fd` also comes with a very rich set of options that let you do many typically complex operations within `fd` itself.
### Converting all JPEG files to PNG
```bash
$ fd -tf jpg$ -x convert {} {.}.png
```
Some new things here!
- `-tf` means we only want files. There are multiple options for this in `fd`, including directory, executable, symlink, and even UNIX pipes and sockets.
- `jpg$` is our search term, in RegEx. `fd` makes use of [BurntSushi](https://github.com/BurntSushi)'s excellent [regex](https://github.com/rust-lang/regex) library for extremely quick RegEx parsing, and is able to thus support it by default. You can override this by passing `-g`/`--glob` to use glob-based matching instead. RegEx itself is too complicated, and my experience with it too limited, to actually cover it all here. All you need to know here is that the `$` at the end simply means that we want `jpg` to be the final characters of our matching term.
- `-x` is one of two exec modes provided by `fd`. It runs the provided command once per result, in a multi-threaded fashion, so for long-running tasks you might want to reduce CPU load by restricting the thread count with `--threads <num>`.
- `{}` and `{.}` are part of `fd`'s execution placeholders that let you manipulate search results a bit more before handing them off to external commands. `{}` is replaced with the result as-is, and `{.}` strips the file extension. There are a couple more that you can check out using `fd --help`.
- `convert` is an external command from the ImageMagick suite of tools.
### Finding and deleting all files with a specific extension
```bash
$ fd -HItf \\.xml$ -X rm -v
```
Mostly familiar now, but with some key differences.
- `-H` and `-I` combined are used to include **h**idden and **i**gnored files into the results.
- `\\.xml$` is a more precise RegEx that ensures you only delete files matching `a_file.xml` and not `this_is_not_an_xml`, by matching on `.xml` rather than just `xml`. The double backslash is an escape sequence: the shell consumes one backslash, and the remaining `\.` makes the RegEx engine match a literal dot, since a bare `.` has the special meaning of 'any character'.
- `-X` is the other exec mode, batch. It runs the given command by passing all results as parameters in one go. Since we want to delete files, and `rm` lets you specify an arbitrary amount of arguments, we can use this and thus only run `rm` once.
### Updating all git repositories in a directory
```bash
$ fd -Htd ^.git$ --maxdepth 1 -x hub -C {//} sync
```
Already feels like home!
- `-Htd` combines `-H` (include hidden entries) with `-t d` (directories only), since `.git` is a hidden directory.
- `^.git$` matches exactly on `.git` by anchoring the match to both the start (`^`) and end (`$`) of the name.
- `--maxdepth 1` is a speed optimization to make `fd` only check the current directory and not traverse deeper.
- `-x` again runs the command once per result.
- `{//}` gives us the parent directory. For `msfjarvis.dev/.git`, this will give you `msfjarvis.dev`.
[hub](https://hub.github.com) is a `git` wrapper that provides some handy features on top, like `sync`, which updates all locally checked out branches from their upstream remotes. You can re-implement this with some legwork but I'll leave that as an exercise for you.
And that's about it! Let me know what you think of `fd` and if you're switching to it.
This was part 3 of the [Tools of the trade](/categories/tools-of-the-trade/) series.
In the second post of [this series](/categories/tools-of-the-trade/), let's talk about [fzf](https://github.com/junegunn/fzf).
## What is fzf?
In its simplest form, `fzf` is a **f**u**zz**y **f**inder. It lets you search through files, folders, any line-based text using a simple fuzzy and/or regex backed system.
On-demand, `fzf` can also be super fancy.
## Why do I use it?
Because `fzf` is a search tool, you can use it to find files and folders. My most common use-case for it is a simple bash function that goes like this:
```bash
# find-and-open, gedit? Sorry I'll just stop.
function fao() {
  local ARG="${1}"
  if [ -z "${ARG}" ]; then
    nano "$(fzf)"
  else
    nano "$(fzf -q "${ARG}")"
  fi
}
```
It starts up an fzf session and then opens the selected file in `nano`.
{{< asciinema gCwYg97C1NbRVgCUK0Dd1byVl >}}
By default, `fzf` is a full-screen tool and takes up the entire height of your terminal. I've restricted it to 40% of that, as it looks a bit nicer IMO. You can make more such changes by setting the `FZF_DEFAULT_OPTS` environment variable as described in the [layout section](https://github.com/junegunn/fzf#layout) of the fzf docs.
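As a minimal sketch of that tweak, assuming you want the same 40% height, something like this in your shell's rc file should do it:
```bash
# ~/.bashrc or equivalent: make fzf use 40% of the terminal
# instead of going full-screen
export FZF_DEFAULT_OPTS="--height 40%"
```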
But that's not all! You can get _real_ fancy with `fzf`.
For example, check out the output of `fzf --preview 'bat --style=numbers --color=always --line-range :500 {}'` [here](https://asciinema.org/a/WFFx2negPw5iXbCZe1YlAZeqj) (a bit too wide to embed here :()
> `bat` is a `cat(1)` clone with syntax highlighting and other nifty features, and also a tool I use on the daily. We'll probably be covering it soon :)
You can also bind arbitrary keys to actions with relative ease.
{{< asciinema l7OPG4xQv5QVtvyxQfmly2eiE >}}
The syntax, as you can see, is pretty simple:
```
<key-shortcut>:execute(<command>)<+abort>
```
The `+abort` there is optional, and signals `fzf` that we want to exit after running the command. Detailed instructions are available in the `fzf` [README](https://github.com/junegunn/fzf#readme).
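For a concrete sketch, this binds `Ctrl-O` to open the current selection in `nano` and then exit; the key and command here are arbitrary choices:
```bash
fzf --bind 'ctrl-o:execute(nano {})+abort'
```
Here `{}` is replaced with the selected entry, just like in the preview example above.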
And that's it from me. Post any fancy `fzf` recipes you come up with in the comments below!
This was part 2 of the [Tools of the trade](/categories/tools-of-the-trade/) series.
In the fourth post of [this series](/categories/tools-of-the-trade/), we're talking about [SDKMAN!](https://sdkman.io).
## What is SDKMAN?
SDKMAN is an **SDK** **Man**ager. Using its CLI, you can install a plethora of JVM-related tools and SDKs on your computer, without ever needing root access.
## Why do I use it?
Since I primarily work with [Kotlin](https://kotlinlang.org/), having an up-to-date copy of the Kotlin compiler is helpful for quickly testing code samples with its inbuilt REPL (**R**ead **E**valuate **P**rint **L**oop). I also use [Gradle](https://gradle.org) as my build tool of choice, and tend to stay up-to-date with their releases in my projects. Finally, to run all these Java-based tools, you're gonna need Java itself. Linux distros tend to package very outdated versions of the JDK, which becomes a hindrance when building standalone JVM apps that I want to use the latest Java APIs in. SDKMAN allows you to install Java from multiple vendors including AdoptOpenJDK, Azul Systems' Zulu, and many more.
The real kicker here is that you can keep multiple versions of the same thing installed. Using Java 14 everywhere, but one specific project breaks on anything newer than Java 8? Just install Java 8 alongside!
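In practice that looks something like this; the version identifiers are examples, `sdk list java` shows what's actually available:
```bash
sdk install java 14.0.2-zulu   # the daily driver
sdk install java 8.0.262-zulu  # for that one legacy project
sdk use java 8.0.262-zulu      # switch only the current shell session
```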
To make this side-by-side ability even more useful, SDKMAN lets you create a `.sdkmanrc` file in a directory, and it will switch your currently active version of any installed tool to the version specified in the file. People following the series from the beginning might recall that this sounds awfully like direnv, because it is. However, SDKMAN is noticeably slow when executing these changes, presumably because it's tearing down an existing symlink and creating a new one. For that reason, SDKMAN ships with the `sdk_auto_env` feature (automatically parse `.sdkmanrc` when you change directories) off by default, requiring you to manually type `sdk env` each time you enter a directory where you have a `.sdkmanrc`.
Since the auto env feature matches what direnv does, I just use it directly. So, rather than doing this:
```bash
# .sdkmanrc
java=8.0.262-zulu
```
I do:
```bash
# .envrc
# In bash, doing `${VARIABLE/src/dest}` replaces `src` with
# `dest` in `${VARIABLE}`, but you still need to write it
# out once for every variable you want to rewrite.
# (The two lines below are a reconstruction, as the original
# snippet was truncated here; they assume SDKMAN's
# candidates/java/current symlink layout.)
export JAVA_HOME="${JAVA_HOME/current/8.0.262-zulu}"
export PATH="${PATH/java\/current/java/8.0.262-zulu}"
```
And that's really it! SDKMAN's pretty neat, but for the most part it just stays out of your way, so there's not a lot more to say about it. The next post's gonna be much more hands-on :-)
This was part 4 of the [Tools of the trade](/categories/tools-of-the-trade/) series.
summary: NixOS allows running arbitrary Docker containers declaratively; these
  are some of my notes on my usage of this functionality.
draft: true
---
NixOS comes with the ability to [declaratively manage docker containers](https://nixos.wiki/wiki/NixOS_Containers#Declarative_docker_containers), which functions as a nice escape hatch when something you want to run doesn't have a native Nix package or is not easy to run within NixOS.
All the available configuration options can be found [here](https://search.nixos.org/options?channel=unstable&from=0&size=50&sort=alpha_desc&query=virtualisation.oci-containers.containers), so rather than explain all of it I'll just walk through my own experience of getting a container up for [Linkding](https://github.com/sissbruecker/linkding).
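As a rough sketch of where we'll end up; the image tag, port, and volume path below are assumptions based on Linkding's own Docker docs, not gospel:
```nix
{
  virtualisation.oci-containers = {
    backend = "podman";
    containers.linkding = {
      # Assumed image and paths; check Linkding's documentation
      image = "sissbruecker/linkding:latest";
      ports = ["9090:9090"];
      volumes = ["/var/lib/linkding:/etc/linkding/data"];
    };
  };
}
```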
`podman container list` works only if you're root, not with `sudo podman container list`.
title: Using Retrofit to disguise scraping as a REST API
date: 2023-09-13T07:08:10.659Z
summary: We've all used Retrofit to interact with REST APIs for as long as we
  can remember, but what if there was no API?
draft: true
---
Square's Retrofit is best known for being the gold standard of REST clients in the JVM/Android ecosystem, but its excellent API design also lends itself to great extensibility, which we will leverage today.
While trying to implement post search functionality in [Claw](https://msfjarvis.dev/g/compose-lobsters), my [lobste.rs](https://lobste.rs) client, I stumbled into a _tiny_ problem: there was no API! lobste.rs has a [web-based search](https://lobste.rs/search) but no equivalent mechanism via the JSON API I was using for doing everything else within the app.
The search page uses URL query parameters to specify the search term which made it quite easy to reliably construct a URL which would contain the posts we were interested in, and it looked something like this: `/search?q={query}&what=stories&order=newest&page={page}`.
Retrofit has a [Converter](https://github.com/square/retrofit/blob/40c4326e2c608a07d2709bfe9544cb1d12850d11/retrofit/src/main/java/retrofit2/Converter.java) API which lets users convert request/response bodies to and from their HTTP representations. We will leverage this to convert the raw HTML body we receive from the search page into a list of `LobstersPost` objects.
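To make this concrete, here's a hedged sketch of the idea; `LobstersPost`, the service interface, and the jsoup selectors are illustrative stand-ins rather than Claw's actual code:
```kotlin
import java.lang.reflect.Type
import okhttp3.ResponseBody
import org.jsoup.Jsoup
import retrofit2.Converter
import retrofit2.Retrofit
import retrofit2.http.GET
import retrofit2.http.Query

data class LobstersPost(val title: String, val url: String)

// The search "API", expressed as a regular Retrofit service.
interface SearchApi {
  @GET("search?what=stories&order=newest")
  suspend fun searchPosts(
    @Query("q") query: String,
    @Query("page") page: Int,
  ): List<LobstersPost>
}

// Converts the HTML search page into a List<LobstersPost>.
class SearchConverterFactory : Converter.Factory() {
  override fun responseBodyConverter(
    type: Type,
    annotations: Array<out Annotation>,
    retrofit: Retrofit,
  ): Converter<ResponseBody, *> =
    Converter<ResponseBody, List<LobstersPost>> { body ->
      Jsoup.parse(body.string())
        // These CSS selectors are made up; inspect the real markup instead.
        .select(".story .u-url")
        .map { link -> LobstersPost(title = link.text(), url = link.attr("href")) }
    }
}
```
A real factory should also inspect `type` and return `null` for anything it can't handle, so the other converters in the chain get a chance.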
summary = "I was an early adopter of the Gradle Kotlin DSL, deploying it to multiple Android projects of mine, but lately it has been more trouble than I could care for. Here are my grievances with it."
title = "Why I went back to the Gradle Groovy DSL"
+++
About a year ago when I first discovered the [Gradle Kotlin DSL](https://docs.gradle.org/current/userguide/kotlin_dsl.html), I was very quick to [jump](https://github.com/msfjarvis/viscerion/commit/c16d11a816c3c7e3f7bab51ef2f32569b6b657bf) [on](https://github.com/android-password-store/Android-Password-Store/commit/3c06063153d0b7f71998128dc6fb4e5967e33624) [that](https://github.com/substratum/substratum/commit/ebff9a3a88781d093565526b171d9d5b8e9c1bed) [train](https://github.com/substratum/substratum/commit/5065e082055cde19e41ee02920ca07d0e33c89f5). Now it feels like a mistake.
The initial premise of the Gradle Kotlin DSL was very cool. You get first class code completion in the IDE, and you get to write Kotlin rather than the arguably weird Groovy. People were excited to finally be able to write complex build logic using the `buildSrc` functionality that this change introduced.
However, the dream slowly started fading as more and more people adopted the Kotlin DSL and its shortcomings became apparent. My grievances with it are manifold, as I'll detail below.
Just a disclaimer: this post is not meant to completely trash the Kotlin DSL's usability. It has its own very real benefits, and people who leverage those should continue using it and disregard this post :-)
### Build times
The Gradle Kotlin DSL inflates build times _significantly_. Compiling `buildSrc` and all the `*.gradle.kts` files for my [app](http://github.com/msfjarvis/viscerion/tree/1ea6f07f8219aa42139977f37ebbcb230d7f78e7 "app") takes up to 10 seconds longer than the Groovy DSL. Couple that with the fact that changing any file in `buildSrc` invalidated the entire compiler cache for me, and iterative development became extremely painful.
### Half-baked API surface
Gradle doesn't seem to have invested any actual time in converting the original Groovy APIs into Kotlin-friendly versions before peddling the Kotlin DSL to us. Check the samples below and decide for yourself.
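For illustration, here's the flavor of mismatch being described, as it stood in the Kotlin DSL of that era; a representative stand-in, not the original samples:
```kotlin
// Groovy DSL: settings read like properties.
//   android {
//     compileSdkVersion 29
//     defaultConfig { minSdkVersion 23 }
//   }

// Kotlin DSL at the time: the same settings were setter-style
// function calls, with no property access in sight.
android {
  compileSdkVersion(29)
  defaultConfig {
    minSdkVersion(23)
  }
}
```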
Property access syntax and discoverable variable names should have been the norm since day one for it to actually be a good Kotlin DSL.
### Complexity
The Kotlin DSL is not well documented outside of the bits and pieces in Gradle's own documentation. Things like [this](https://github.com/msfjarvis/viscerion/commit/c851571e33189c345329ea3934ad1af15edbe6fb "this") were incredibly problematic to implement in the Kotlin DSL, at least for me, and I found the experience deeply frustrating.
## Conclusion
Again, these are my pain points with the Kotlin DSL. I still use it for some of my projects but I am not going to use it in new projects until Gradle addresses these pains.
summary = "(Mostly) everybody agrees that Android upgrades are good, but how very crucial they are to security and privacy often gets overlooked. Let's dig into that."
A couple of days ago I came across a security-conscious user who was quick to point out why a particular feature had to be added to APS, but failed to realise that the problem wouldn't even exist if they were running the latest version of Android (we'll get to the behavior change that fixed it later in this post).
Android upgrades bring massive changes to the platform, improving security against both known and unknown threats. You sign away that benefit when you buy into an incompetent OEM's cheap phones, and that has become a bit more 'normal' than anybody would prefer.
That's not what we're going to talk about, though. This post is going to be purely about privacy, and how it has changed, nay, improved, over the years of Android. My apps support a minimum of Android 6, so I will begin with the next version, Android 7, and go through Google's release notes, singling out privacy related changes.
## Android 7
Android 7 had only a passing focus on privacy and thus did not have a lot of obvious or concrete changes around it. The background execution limits introduced in Android 6 were improved in Android 7 to apply even more restrictions once a device became stationary, which can be loosely interpreted as 'bad' for the data-exfiltration SDKs that apps ship, but in reality didn't do much.
## Android 8
### Locking down background location access
In Android 8, access to background location was [severely throttled](https://developer.android.com/about/versions/oreo/android-8.0-changes#abll). Apps received less frequent updates for location and thus couldn't track you in real time.
### Introduction of Autofill
The Android Autofill framework made its debut, along with support for [Web form Autofill](https://developer.android.com/about/versions/oreo/android-8.0-changes#wfa). This paved the way for password managers to fill fields for you without relying on hacked-up accessibility services or the Android clipboard. This was a major win!
### Better HTTPS defaults
Android 8.0's implementation of HttpsURLConnection did not perform [insecure TLS/SSL protocol version fallback](https://developer.android.com/about/versions/oreo/android-8.0-changes#networking-all), which means connections that failed to negotiate a requested TLS version would now abort rather than fall back to an older version of TLS.
### ANDROID_ID changes
Access to the `ANDROID_ID` field [was changed significantly](https://developer.android.com/about/versions/oreo/android-8.0-changes#privacy-all). It is now generated per-app and per-signature, as opposed to being shared across the entire system, making it harder to fingerprint users who have multiple apps installed with the same advertising-related SDKs.
## Android 9
### Limited access to sensors
Beginning with Android 9, [background access to device sensors](https://developer.android.com/about/versions/pie/android-9.0-changes-all#bg-sensor-access) was greatly reduced. Access to the microphone and camera was completely denied, and so was access to the gyroscope, accelerometer, and other sensors of that class.
### Granular call log access
For apps that need to access the user's call logs for any reason, a [new permission group was introduced](https://developer.android.com/about/versions/pie/android-9.0-changes-all#restrict-access-call-logs). You no longer need to grant access to all phone-related permissions just to let an app back up your call logs.
### Restricted access to phone numbers
There are multiple ways to monitor phone calls on Android, and with the introduction of the `CALL_LOG` permission group, [these were locked down](https://developer.android.com/about/versions/pie/android-9.0-changes-all#restrict-access-phone-numbers) to only expose phone numbers to apps that were allowed explicit access to call logs.
### Making Wi-Fi and cellular networks less privacy invasive
A combination of changes to [what permissions apps require](https://developer.android.com/about/versions/pie/android-9.0-changes-all#restricted_access_to_wi-fi_location_and_connection_information) to know about your WiFi and [how much personally identifiable data is provided by these APIs](https://developer.android.com/about/versions/pie/android-9.0-changes-all#information_removed_from_wi-fi_service_methods) further improves your privacy against rogue apps. Disabling device location [now disables the ability to get information on cell towers](https://developer.android.com/about/versions/pie/android-9.0-changes-all#telephony_information_now_relies_on_device_location_setting) your phone is connected to.
### No more serials
Requesting access to the device serial number [now requires phone state permissions](https://developer.android.com/about/versions/pie/android-9.0-changes-28#build-serial-deprecation) making it more explicit when apps are trying to fingerprint you.
## Android 10
### Scoped Storage
Probably the most controversial change in 10, Scoped Storage segregated the device storage into scopes and [gave apps access to them without needing extra permissions](https://developer.android.com/about/versions/10/privacy/changes#scoped-storage).
### Explicit background permission access
Android 10 introduces the `ACCESS_BACKGROUND_LOCATION` permission and [completely disables background access](https://developer.android.com/about/versions/10/privacy/changes#app-access-device-location) for apps targeting SDK 29 that don't declare it. For older apps, the framework treats granting location access as effectively background location access. When the app upgrades to target SDK 29, the background permission is revoked and must be explicitly requested again.
### Removal of contacts affinity
Beginning with Android 10, the system no longer [keeps track of which contacts you interact with most](https://developer.android.com/about/versions/10/privacy/changes#contacts-affinity), and thus search results are not weighted anymore.
### MAC randomization enabled by default
Connecting to a Wi-Fi network now uses a [randomized MAC address](https://developer.android.com/about/versions/10/privacy/changes#randomized-mac-addresses) to prevent fingerprinting.
### Removal of access to non-resettable identifiers
Access to identifiers such as IMEI and serial was [restricted to privileged apps](https://developer.android.com/about/versions/10/privacy/changes#non-resettable-device-ids) which means apps served by the Play Store can no longer see them.
### Restriction on clipboard access
This is the problem we first talked about. Before Android 10, apps could monitor clipboard events and potentially exfiltrate confidential data like passwords. [In Android 10 this was completely disabled](https://developer.android.com/about/versions/10/privacy/changes#clipboard-data) for apps that were not in the foreground or were not your active input method. This change was made with **no** compatibility concessions, which means even older apps cannot access clipboard data out of turn.
### More WiFi and location improvements
Apps can no longer [toggle WiFi](https://developer.android.com/about/versions/10/privacy/changes#enable-disable-wifi) or [read a list of configured networks](https://developer.android.com/about/versions/10/privacy/changes#configure-wifi), and getting access to methods that expose device location requires the `ACCESS_FINE_LOCATION` permission to make it obvious that an app is doing it. The last change also affects telephony related APIs, a full list is available [here](https://developer.android.com/about/versions/10/privacy/changes#telephony-apis).
### Permissions controls
Apps no longer have [silent access to screen contents](https://developer.android.com/about/versions/10/privacy/changes#screen-contents), and the platform now prompts users [to disallow permissions for legacy apps](https://developer.android.com/about/versions/10/privacy/changes#user-permission-legacy-apps) targeting Android 5.1 or below, whose permissions would earlier be granted at install time. [Physical activity recognition](https://developer.android.com/about/versions/10/privacy/changes#physical-activity-recognition) now has its own permission, and common libraries for the purpose, like Google's Play Services APIs, will send empty data when an app requests activity information without it.
## Android 11 (tentative)
### Storage changes
- Apps targeting Android 11 are [no longer allowed to opt out of scoped storage](https://developer.android.com/about/versions/11/privacy/storage#scoped-storage).
- All-encompassing access to a large set of directories and files is [completely disabled](https://developer.android.com/about/versions/11/privacy/storage#file-directory-restrictions), including the root of the internal storage, the `Download` folder, and the `data` and `obb` subdirectories of the `Android` folder.
### Permission changes
- Location, microphone and camera related permissions can now [be granted on a one-off basis](https://developer.android.com/about/versions/11/privacy/permissions#one-time), meaning they'll automatically get revoked when the app process exits.
- Apps that are not used for a few months will [have their permissions automatically revoked](https://developer.android.com/about/versions/11/privacy/permissions#auto-reset).
- A new `READ_PHONE_NUMBERS` permission [has been added](https://developer.android.com/about/versions/11/privacy/permissions#phone-numbers) to call certain APIs that expose phone numbers.
### Location changes
- [One time access](https://developer.android.com/about/versions/11/privacy/location#one-time-access) is now an option for location, allowing users to not grant persistent access when they don't wish to.
- Background location needs to [be requested separately now](https://developer.android.com/about/versions/11/privacy/location#background-location) and asking for it together with foreground location will throw an exception.
### Data access auditing
To allow apps to audit their own usage of user data, [a new callback is provided](https://developer.android.com/about/versions/11/privacy/data-access-auditing#log-access). Apps can implement it and then log all accesses to see if there's any unexpected data use that needs to be resolved.
### Redacted MAC addresses
Unprivileged apps targeting SDK 30 will no longer be able to get the device's real MAC address.
## Closing notes
As you can tell, improving user privacy is a constant journey and Android is doing a better job of it with every new release. This makes it crucial that you stay up-to-date, either by buying phones from an OEM that delivers timely updates for a sufficiently long support period, or by using a trusted custom ROM like [GrapheneOS](https://grapheneos.org/) or [LineageOS](https://lineageos.org/).
summary = "Paparazzi enables a radically faster and improved UI testing workflow, and using a small workaround we can bring that to our multiplatform Compose projects"
title = "Writing Paparazzi tests for your Kotlin Multiplatform projects"
+++
## Introduction
[Paparazzi] is a Gradle plugin and library that enables writing UI tests for Android screens that run entirely on the JVM, without needing a physical device or emulator. This is massive, since it significantly increases the speed of UI tests as well as allows them to run on any CI system, not just ones using macOS or Linux with KVM enabled.
Unfortunately, Paparazzi does not directly work with Kotlin Multiplatform projects so you cannot apply it to a KMP + Android module and start putting your tests in the `androidTest` source set (not to be confused with `androidAndroidTest`. Yes, I know). Why would you want to do this in the first place? Like everything cool and new in Android land, [Compose]! Specifically, [compose-jb], JetBrains' redistribution of Jetpack Compose optimised for Kotlin Multiplatform.
I've [sent a PR] to Paparazzi that will resolve this issue, and in the meantime we can work around this limitation.
## Setting things up
To begin, we'll need a new Gradle module for our Paparazzi tests. Since Paparazzi doesn't understand Kotlin Multiplatform yet, we're gonna hide that aspect of our project and present it as a pure Android library project. Set up the module like so:
```kotlin
// paparazzi-tests/build.gradle.kts
plugins {
  id("com.android.library")
  id("app.cash.paparazzi")
}

android {
  buildFeatures { compose = true }
}
```
Now, add dependencies in this module to the modules that contain the composables you'd like to test. As you might have guessed, this approach currently limits you to only being able to test public composables. However, if you're trying to test the UI exposed by a "common" module like I am, that might not be such a big deal.
```kotlin
// paparazzi-tests/build.gradle.kts
dependencies {
  testImplementation(projects.common)
}
```
And that's pretty much it! You can now be off to the races and start writing your tests:
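Here's a minimal sketch of what such a test can look like; `CommonContent` is a placeholder for whatever public composable your common module exposes:
```kotlin
// paparazzi-tests/src/test/java/CommonContentTest.kt
import app.cash.paparazzi.Paparazzi
import org.junit.Rule
import org.junit.Test

class CommonContentTest {
  @get:Rule val paparazzi = Paparazzi()

  @Test
  fun commonContentRendersCorrectly() {
    // Renders the composable on the JVM and compares it
    // against the recorded golden image.
    paparazzi.snapshot {
      CommonContent()
    }
  }
}
```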
Consult the [Paparazzi documentation] for the Gradle tasks reference and customization options.
## Recipes
### Disable release build type for test module
If you use `./gradlew check` in your CI, our new module will be tested in both release and debug build types. This is fairly redundant, so you can disable the release build type altogether:
```kotlin
// paparazzi-tests/build.gradle.kts
androidComponents {
  beforeVariants { variant ->
    variant.enable = variant.buildType == "debug"
  }
}
```
### Running with JDK 12+
You will run into [this issue] if you use JDK 12 or above to run Paparazzi-backed tests. I've [started working] on a fix for it upstream, and in the meantime it can be worked around by forcing the test tasks to run with JDK 11.
```kotlin
// paparazzi-tests/build.gradle.kts
tasks.withType<Test>().configureEach {
  javaLauncher.set(javaToolchains.launcherFor {
    languageVersion.set(JavaLanguageVersion.of(11))
  })
}
```
### Testing with multiple themes easily
Using an enum and Google's [TestParameterInjector] you can write a single test and have it run against all your themes.
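A hedged sketch of how that can look; `AppTheme` and `CommonContent` are placeholders for your own theme wrapper and composable:
```kotlin
import app.cash.paparazzi.Paparazzi
import com.google.testing.junit.testparameterinjector.TestParameter
import com.google.testing.junit.testparameterinjector.TestParameterInjector
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

enum class Theme { LIGHT, DARK }

@RunWith(TestParameterInjector::class)
class ThemedContentTest {
  @get:Rule val paparazzi = Paparazzi()

  // TestParameterInjector runs this test once per enum value.
  @Test
  fun snapshotTheme(@TestParameter theme: Theme) {
    paparazzi.snapshot(name = theme.name) {
      AppTheme(darkTheme = theme == Theme.DARK) {
        CommonContent()
      }
    }
  }
}
```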
summary = "Quick how-to for writing ad-hoc checks for your own Nix Flakes"
slug = "writing-your-own-nix-flake-checks"
tags = ["nix", "nix flakes", "flake checks"]
title = "Writing your own Nix Flake checks"
+++
## Preface
Ever since discovering [nix(3) flake check](https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-flake-check.html) from [crane](https://github.com/ipetkov/crane) (wonderful tool btw, highly recommend it if you're building Rust things), I've wanted to be able to quickly write my own flake checks. Unfortunately, as with everything Nix, dummy-friendly documentation was hard to come by, so I started trying out a bunch of things until I ended up with something that worked, which I'll share below.
## The premise
I had been using a basic shell script with a `nix-shell` shebang for a while to run formatters on my scripts repo, and while it worked, `nix-shell` startup is fairly slow and it just wasn't cutting it for me. So I decided to try porting it to `nix flake check`, which would benefit from evaluation caching and be faster, while removing the `nix-shell` overhead from the utility script.
## The thing you're here for
Like everything in Nix, the checks needed to be derivations, which Nix will build while running their respective `checkPhase`. So, naively, I put together this to run the [alejandra](https://github.com/kamadorueda/alejandra) Nix formatter, [shfmt](https://github.com/mvdan/sh) to format shell scripts, and [shellcheck](https://shellcheck.net/) to lint them:
```nix
outputs = {
  self,
  nixpkgs,
  flake-utils,
}:
  flake-utils.lib.eachDefaultSystem (system: let
    pkgs = import nixpkgs {inherit system;};
    files = pkgs.lib.concatStringsSep " " [
      # Individual shell scripts from the repository
    ];
    fmt-check = pkgs.stdenv.mkDerivation {
      name = "fmt-check";
      src = ./.;
      doCheck = true;
      nativeBuildInputs = with pkgs; [alejandra shellcheck shfmt];
      checkPhase = ''
        shfmt -d -s -i 2 -ci ${files}
        alejandra -c .
        shellcheck -x ${files}
      '';
    };
  in {
    checks = {inherit fmt-check;};
  });
```
I needed a space-separated list of my shell scripts to pass to shfmt and shellcheck, so I used a library function from nixpkgs called `concatStringsSep` that takes a list and concatenates it together with the given separator. That's the `files` binding declared in the snippet above.
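In isolation, it behaves like you'd expect (the file names are made up):
```nix
# pkgs.lib.concatStringsSep delegates to the builtin of the same name:
builtins.concatStringsSep " " ["scripts/a.sh" "scripts/b.sh"]
# => "scripts/a.sh scripts/b.sh"
```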
Here I ran into my first problem: Nix expects every derivation to generate an output, which meant this didn't actually build.
```plaintext
➜ nix flake check
error: flake attribute 'checks.fmt-check.outPath' is not a derivation
```
There's been [some discussion](https://github.com/NixOS/nixpkgs/issues/16182) about this but the TL;DR is that `mkDerivation` must produce an output. So I tried to cheat around this requirement by faking an output.
```diff
diff --git flake.nix flake.nix
index b7fef3b99110..a531a30ad88e 100644
--- flake.nix
+++ flake.nix
@@ -18,6 +18,7 @@
     ];
     fmt-check = pkgs.stdenv.mkDerivation {
       name = "fmt-check";
+      dontBuild = true;
       src = ./.;
       doCheck = true;
       nativeBuildInputs = with pkgs; [alejandra shellcheck shfmt];
       checkPhase = ''
@@ -25,6 +26,11 @@
         alejandra -c .
         shellcheck -x ${files}
       '';
+      installPhase = ''
+        mkdir "$out"
+      '';
     };
   in {
     checks = {inherit fmt-check;};
```
`dontBuild` does exactly what you'd think and makes Nix skip the `buildPhase` of the derivation, while the `mkdir "$out"` in the `installPhase` generates the output directory Nix was looking for, which is still valid even if completely empty.
You can make this slightly faster by using a smaller stdenv that won't pull in a compiler toolchain or be rebuilt when said toolchain is updated:
```diff
diff --git flake.nix flake.nix
index 7ce7a2ba80f8..b69db13fbc6d 100644
--- flake.nix
+++ flake.nix
@@ -16,7 +16,7 @@
     files = pkgs.lib.concatStringsSep " " [
       # bunch of shell scripts since I didn't have an extension I could glob against
     ];
-    fmt-check = pkgs.stdenv.mkDerivation {
+    fmt-check = pkgs.stdenvNoCC.mkDerivation {
       name = "fmt-check";
       dontBuild = true;
       src = ./.;
```
## The final result
This is what the flake looked like for me after all this, with both diffs applied to the original snippet:
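```nix
outputs = {
  self,
  nixpkgs,
  flake-utils,
}:
  flake-utils.lib.eachDefaultSystem (system: let
    pkgs = import nixpkgs {inherit system;};
    files = pkgs.lib.concatStringsSep " " [
      # Individual shell scripts from the repository
    ];
    fmt-check = pkgs.stdenvNoCC.mkDerivation {
      name = "fmt-check";
      dontBuild = true;
      src = ./.;
      doCheck = true;
      nativeBuildInputs = with pkgs; [alejandra shellcheck shfmt];
      checkPhase = ''
        shfmt -d -s -i 2 -ci ${files}
        alejandra -c .
        shellcheck -x ${files}
      '';
      installPhase = ''
        mkdir "$out"
      '';
    };
  in {
    checks = {inherit fmt-check;};
  });
```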
I use the `gnome-terminal` that ships with Linux Mint's Cinnamon Edition with `bash` and a custom prompt from [starship](https://starship.rs). The editor I use depends on what code I am working with:
- Web, Python: VS Code
- Rust: VS Code with rust-analyzer or IntelliJ IDEA with intellij-rust
- Android: Android Studio
- Kotlin: IntelliJ IDEA
My terminal-based text editor of choice is currently [micro](https://micro-editor.com/).
### Development environment
I use [Nix](https://nixos.org/nix/) with [home-manager](https://github.com/nix-community/home-manager) to maintain my development environment and dotfiles. My current Nix configuration can be found in my [dotfiles repo](https://github.com/msfjarvis/dotfiles/blob/main/nixos/ryzenbox-configuration.nix).
## Hardware
### PC
- CPU: Ryzen 5 1600 (6C/12T) @ 3.2 GHz
- GPU: Nvidia GeForce GTX 1650 Super
- RAM: 16GB Kingston HyperX
- Motherboard: ASRock A320M Pro4
- SSD: 250GB Samsung 860 EVO
### Phone
- 128GB Google Pixel 7 running Android 13.