Roles which include all the Set IAM Policy permissions for the necessary services;
Roles which include all the non-Set IAM Policy permissions for the necessary services.
Currently these are defined by explicitly listing all the permissions that should be granted (as far as they are known at the time we edit the definition). I’ve recently been thinking about an approach that might make these definitions more manageable.
Use the Google pre-defined roles where possible. This will help make sure that documentation, error messages, examples, etc. are more directly applicable to the client’s environment.
Some pre-defined roles can’t be used because they mix Set IAM Policy with other permissions that the client wants to manage separately. In this case, use Terraform to define custom roles, but do so based on the definition of the pre-defined role.
Here’s a sketch of what this might look like.
variable "target_role" {
  type        = string
  description = "ID of the target role."
}

# Fetch the existing role.
data "google_iam_role" "role" {
  name = var.target_role
}

locals {
  role_components = split("/", var.target_role)
  role_name       = element(local.role_components, length(local.role_components) - 1)

  # Every permission in the target role that **IS** a setIamPolicy permission.
  setiam_permissions = [
    for permission in data.google_iam_role.role.included_permissions :
    permission if length(regexall("^.*[.]setIamPolicy$", permission)) == 1
  ]

  # Every permission in the target role that **IS NOT** a setIamPolicy permission.
  normal_permissions = [
    for permission in data.google_iam_role.role.included_permissions :
    permission if length(regexall("^.*[.]setIamPolicy$", permission)) == 0
  ]
}

resource "google_project_iam_custom_role" "nonpriv_role" {
  role_id     = "custom.${local.role_name}.nonpriv"
  title       = "${data.google_iam_role.role.title} - (Non-priv)"
  description = "(Custom non-privileged version) ${data.google_iam_role.role.description}."
  permissions = local.normal_permissions
}

resource "google_project_iam_custom_role" "priv_role" {
  role_id     = "custom.${local.role_name}.priv"
  title       = "${data.google_iam_role.role.title} - (Priv)"
  description = "(Custom privileged version) ${data.google_iam_role.role.description}."
  permissions = local.setiam_permissions
}

output "permissions_role" {
  value = google_project_iam_custom_role.nonpriv_role
}

output "set_iam_policy_role" {
  value = google_project_iam_custom_role.priv_role
}
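Granting these custom roles then works like any other role; here’s a minimal sketch using google_project_iam_member, where the project ID and group are placeholders:

```
resource "google_project_iam_member" "nonpriv" {
  project = "my-project"
  role    = google_project_iam_custom_role.nonpriv_role.id
  member  = "group:service-admins@example.com"
}
```

Because the custom roles are derived from the pre-defined role at plan time, re-running Terraform after Google updates the pre-defined role will propagate any new permissions.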
You can export the system CA certificates with the security tool, added in Mac OS X 10.3:
security find-certificate -a -p /System/Library/Keychains/SystemRootCertificates.keychain > cacerts.pem
security find-certificate -a -p /Library/Keychains/System.keychain >> cacerts.pem
If there are missing CA certificates you need to trust, just append them to the end of the file:
cat MyTlsStrippingCorporateProxyCA.pem >> cacerts.pem
You can store the cacerts.pem file somewhere convenient – maybe somewhere under ~/Library/ would be sensible on macOS – and then export the many and varied environment variables that will configure various tools to use the file:
export AWS_CA_BUNDLE="$HOME/Library/cacerts.pem"
export CURL_CA_BUNDLE="$HOME/Library/cacerts.pem"
export HTTPLIB2_CA_CERTS="$HOME/Library/cacerts.pem"
export REQUESTS_CA_BUNDLE="$HOME/Library/cacerts.pem"
export SSL_CERT_FILE="$HOME/Library/cacerts.pem"
export NODE_EXTRA_CA_CERTS="$HOME/Library/cacerts.pem"
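A quick way to sanity-check the bundle is to count the PEM blocks in it. Here’s a sketch using throwaway fake certificates in a scratch directory; run against your real cacerts.pem you’d expect a few hundred plus whatever you appended:

```shell
cd "$(mktemp -d)"

# Build a stand-in bundle with two fake certificates.
for n in 1 2; do
  printf -- '-----BEGIN CERTIFICATE-----\nfake-%s\n-----END CERTIFICATE-----\n' "$n" >> cacerts.pem
done

# Count the certificates in the bundle.
grep -c -- '-----BEGIN CERTIFICATE-----' cacerts.pem
```
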
$ ledger --version
Ledger 3.3.2-20230330, the command-line accounting tool
with support for gpg encrypted journals and without Python support
Copyright (c) 2003-2023, John Wiegley. All rights reserved.
This program is made available under the terms of the BSD Public License.
See LICENSE file included with the distribution for details and disclaimer.
Check the output for “with support for gpg encrypted journals”.
If it’s present, then all you have to do is encrypt your journal files and ledger will transparently decrypt the data as it reads. Doing this is a simple matter of encrypting the files with yourself as the recipient. If you only use gpg --encrypt to encrypt files for yourself (and not to send to other people) then the easiest way might be to configure GnuPG to encrypt with your own key by default:
$ echo default-recipient-self >> ~/.gnupg/gpg.conf
Alternatively, you can remember to pass the --recipient argument specifying the key ID or email address for your own key when you run gpg --encrypt.
Now you can just create some encrypted journal files:
$ cat 2024.journal | gpg --encrypt --armor > 2024.journal.gpg
$ ledger -f 2024.journal.gpg bal
AUD 2,187.50 Assets
AUD 2,150.00 Bank
AUD 37.50 Cash
AUD 12.50 Expenses:Food:Dining
AUD -2,200.00 Income:Gifts
--------------------
0
For the sake of convenience, I give the encrypted files an extension like .asc or .gpg so that tools like vim-gnupg can transparently decrypt them for editing.
Ledger handles encryption transparently at the file-access level, so you can split up your configuration and journal postings into different files and make each encrypted or unencrypted as you like. Personally, I like to have a single plain-text file that defines my chart of accounts and then includes encrypted journal files containing the actual postings.
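That layout might look something like this sketch (file and account names are illustrative): a plain-text top-level journal declaring the accounts, with the encrypted postings pulled in via include directives:

```
; main.journal – plain text
account Assets:Bank
account Expenses:Food:Dining
account Income:Gifts

; the actual postings live in encrypted files
include 2023.journal.gpg
include 2024.journal.gpg
```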
This post covers setting up a Yubikey for signing commits and tags with git. If anything below doesn’t make sense, consult the sources linked at the end of the post.
We’ll install a bunch of stuff with Homebrew:
git, to get a newer version than what Apple ships
ykman, to view and tweak Yubikey configuration
pinentry and pinentry-mac, to pop up a window to enter your Yubikey PIN when required
GnuPG 2, to take care of doing all the cryptography
brew install \
git \
ykman \
gnupg \
pinentry \
pinentry-mac
A modern Yubikey probably supports a lot of different interfaces: OTP, PIV, OpenPGP, FIDO U2F, FIDO2, OATH. Many have one or more PINs that help to prevent unauthorised usage. If you haven’t already, you should configure the various codes for all interfaces configured on your Yubikey.
We’re focussed on setting up OpenPGP so let’s just take care of that.
You can use ykman to check the current policy for OpenPGP on your Yubikey:
$ ykman openpgp info
OpenPGP version: 3.4
Application version: 5.4.3
PIN tries remaining: 3
Reset code tries remaining: 0
Admin PIN tries remaining: 3
Require PIN for signature: Once
Touch policies:
Signature key: Off
Encryption key: Off
Authentication key: Off
Attestation key: Off
You can use gpg --edit-card to modify the various passwords. When started, it’ll output the current card configuration and prompt for commands. Use admin to enter administration mode, then passwd to control the card passwords.
A brand new card will have the following details:
$ gpg --edit-card
...
gpg/card> admin
Admin commands are allowed
gpg/card> passwd
gpg: OpenPGP card no. D2760001240100000006196516380000 detected
1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit
You can change the number of attempts allowed for the PIN, reset code, and admin PIN:
$ ykman openpgp access set-retries 10 10 10
The general process looks something like:
$ gpg --gen-key
$ gpg --expert --edit-key KEY_ID
gpg> addkey
# Select whichever "set your own capabilities" option you like; it's probably a
# good idea to use the same key type as your main key.
#
# Enable Authenticate, disable the other actions (Sign and Encrypt)
#
# Set the key expiry to the same as the first key. In my case it was 3y
Export your key so that you can keep a backup offline somewhere. Make sure that it is safe, secure, and offline. Print it out or write it to a CD or something and keep it with your important papers.
Then you can move the new keys to your Yubikey:
$ gpg --edit-key KEY_ID
# Switch to viewing private keys:
gpg> toggle
# First, move the primary key to the Yubikey. The key list will show usage of SC
# denoting a signing key, so move it to the "Signature key" slot.
gpg> keytocard
# Then select the first sub-key.
gpg> key 1
# The key list will show usage of E denoting encryption, so move it to the
# "Encryption key" slot in the Yubikey.
gpg> keytocard
# Finally, deselect the first sub-key and select the second sub-key:
gpg> key 1
gpg> key 2
# This sub-key will have usage "A", so move it to the "Authentication key" slot.
gpg> keytocard
At various points in this process, GnuPG will ask for the private key passphrase (if you set one when generating the key) and the Yubikey Admin PIN.
Configure gpg-agent to use the graphical pinentry and restart the agent:
echo "pinentry-program $(which pinentry-mac)" >> ~/.gnupg/gpg-agent.conf
killall gpg-agent
You can verify that GnuPG is configured and working correctly by signing a sample message:
$ echo "Hello world" | gpg --clearsign
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Hello world
-----BEGIN PGP SIGNATURE-----
...
-----END PGP SIGNATURE-----
Depending on the settings for your key, you may need to enter your Yubikey PIN (usually only the first time you’ve used it this session) and/or touch the key (see the ykman settings above to control this).
git
You need to tell git which key to use when signing things, and might like to set a few other options which control when things are signed and when signatures are displayed:
$ git config --global user.signingkey KEY_ID
Some settings which may help git to run GnuPG correctly:
$ git config --global gpg.program gpg
Some settings that control when git will sign things and display signatures:
$ git config --global log.showSignature true
$ git config --global commit.gpgSign true
$ git config --global tag.gpgSign true
Export your public key (note the PUBLIC KEY in the output below) and copy the output:
$ gpg --export --armor KEY_ID
-----BEGIN PGP PUBLIC KEY BLOCK-----
....
-----END PGP PUBLIC KEY BLOCK-----
Go to the “SSH and GPG keys” page of your GitHub settings and click “New GPG key”. Paste the key into the text area, add a useful comment to help identify the key, and click the save button.
When you use the GitHub.com web-site to commit changes to your code, GitHub signs your commits with an internal key. To validate these commits, you’ll need to import and sign the key:
$ curl https://github.com/web-flow.gpg | gpg --import
$ gpg --lsign-key noreply@github.com
Yubikey PGP Card edit
Yubikey PGP Importing keys
Yubikey PGP Git signing
git-commit(1) man page
git-tag(1) man page
git-log(1) man page
Yudo (湯道) A comedy focussed on two brothers and the bath-house they inherit after their father’s death. It felt like the subplot around the older brother’s plans might have suffered in editing but the rest of the film hung together well. I enjoyed it.
The lines that define me (線は、僕を書く) A drama following a university student who is invited to become an apprentice of a famous sumi-e painter and his relationships with the others in the group, his friends, and his tragic history. I really enjoyed it.
We’re broke, my lord (大名倒産) A period comedy set in the late Edo period. A young man who was raised by a salted salmon merchant discovers he’s the illegitimate son of the lord of the estate. I thought it was very, very trope-y, but good fun.
Brave: Gunjo Senki A high-school is transported back to the warring states period. Samurai attack the school, slaughter what seemed like most of the students, and kidnap several hostages. Some of the [very few] survivors set out to rescue their classmates before the school is carried back to the present day. An uneven mix of high-school sports drama, samurai slasher film, and isekai manga. Did not enjoy at all.
Mondays: See you “this” week A clever comedy with touches of The Office and Groundhog Day. A well put together film, I really enjoyed it.
Citizen Kitano A film about Takeshi Kitano and his work as a filmmaker and artist. I’m not much of a film buff so this wasn’t really my cup of tea but it seemed well made and was interesting enough.
I didn’t get to see as many films as in previous years and my miss-rate was higher than previous years, but it was still a fun week of cinema. Thanks Japan Foundation and sponsors.
Many large AWK scripts are wrapped in a useless shell script. If all of your logic is in AWK then there’s no need for the shell to get involved; just use the correct shebang!
#!/usr/bin/awk -f
BEGIN { print("start") }
END { print("end") }
Now you have a script written directly in AWK with no shell involved. Just make it executable (with chmod +x as usual) and you are good to go.
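As a slightly less trivial sketch, here’s a standalone AWK script (the file name is illustrative) that sums the second column of its input:

```shell
cd "$(mktemp -d)"

# Create the script (in practice you'd just edit sum.awk directly).
cat > sum.awk <<'EOF'
#!/usr/bin/awk -f
# Sum the second whitespace-separated column of the input.
{ total += $2 }
END { print total }
EOF
chmod +x sum.awk

# Run it; once executable, ./sum.awk works the same way.
printf 'apples 1\nbananas 2\ncherries 3\n' | awk -f sum.awk
# prints 6
```
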
Why bother? It makes the script simpler (e.g. no multiple levels of quoting and escaping), makes it slightly less resource intensive to start (only a single fork and exec) and run (no shell waiting around for the awk interpreter to finish), and makes the script slightly easier to handle with tools like syntax highlighting, code formatting, etc.
Like any body of code, the formatting of a longer AWK script can be an important help or hindrance to anyone trying to understand it. GNU AWK has a helpful option to format AWK scripts.
#!/usr/bin/awk -f
BEGIN { print("start"); }
END { print("end") }
We can format this AWK script like so:
$ gawk -f example.awk -o-
#!/usr/bin/awk -f
BEGIN {
print("start")
}
END {
print("end")
}
We use -f to read the AWK script from a file and -o to format and output the script. In this case, we’re writing it to the standard output (-o-) but we could also write it to another (different!) file with -oformatted.awk, or use -o alone and let gawk write it to the default output file (awkprof.out).
First, make sure your Homebrew formulas are all up to date:
brew update
Install Colima along with the Docker and Kubernetes command-line tools:
brew install colima docker kubectl
If you work on a network controlled by an organisation that uses TLS stripping security appliances you’ll probably need to install additional CA root certificates before you can pull container images from the Internet, etc. You can put them in the usual place in your home directory and Colima will automatically install them in the VMs it starts:
mkdir -p ~/.docker/certs.d
cd ~/.docker/certs.d
curl -o proxy-cert.crt https://insecurity.my.corp/proxy-cert.crt
(Do make sure you put each certificate in a separate file; if they are concatenated you’ll need to split them.)
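Splitting a concatenated bundle is a one-liner with awk; here’s a sketch (file names are illustrative) that starts a new numbered output file each time a BEGIN line appears, demonstrated against a fake two-certificate bundle:

```shell
cd "$(mktemp -d)"

# A fake two-certificate bundle standing in for the real thing.
printf -- '-----BEGIN CERTIFICATE-----\naaa\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nbbb\n-----END CERTIFICATE-----\n' > bundle.pem

# Bump the counter on each BEGIN line; every line goes to the current file.
awk '/BEGIN CERTIFICATE/ { n++ } { print > ("cert-" n ".crt") }' bundle.pem

ls cert-*.crt
```
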
With Colima installed, you should be able to start a Colima instance. There are a handful of options to control the CPU, disk, and memory allocation for the VM, the runtimes to configure on it, etc.
colima start --memory 8 --kubernetes
Check that the Docker and Kubernetes command-line tools have been configured to talk to the new Colima instance:
kubectl get pods -A
docker ps
If you installed custom certs, you’d better check that it’s all working correctly by pulling an image:
docker pull python:3.12-slim
Easy.
Boot the Surface and let Microsoft Update install a bunch of stuff. Most of this isn’t important as I blew away the Windows installation completely, but I guess the firmware updates will probably be useful (and there’s no chance of getting them installed except through Windows). This takes absolutely ages. Something on the order of an hour, much of it with a static “installing updates” screen.
Download an OpenBSD installer image and write it to a USB storage device. I used install72.img and the dd command but it doesn’t matter much. Don’t forget to validate the signature of the image!
Boot the Surface into firmware mode (hold volume-up, press the power button, and wait until it enters the firmware before you release volume-up). Disable Secure Boot and set the boot order to include USB devices.
Insert the USB device and reboot. I needed a USB-C to USB-A converter for my USB storage device.
Go through the OpenBSD installer process. While OpenBSD includes a driver for the Intel wireless device, the installer doesn’t include the firmware. I yanked out the USB storage device and inserted a USB Ethernet adaptor I had handy at the network configuration point. This allowed me to use an HTTP mirror as the source for the installation and, importantly, let the installer download the appropriate firmware.
https://jcs.org/2020/05/15/surface_go2
The installer identified the firmware packages needed for the built-in Intel wireless adaptor. But running fw_update is probably a good idea.
Configure the WiFi interface by creating /etc/hostname.iwx0 with content like the following:
join HOME_NETWORK wpakey PASSWORD
join WORK_NETWORK wpakey OTHERPASS
dhcp
There are more parameters that can be added; see hostname.if(5) and ifconfig(8) for details.
I also have a new Google Pixel 7 Pro phone and want to get USB tethering up and running. Recent Pixel models have stopped using the RNDIS protocol for USB tethering (though OpenBSD does have a driver for RNDIS). This is a good move by Google! And they’ve moved to using an actual standard: CDC NCM. Unfortunately OpenBSD does not, as far as I can tell, have a driver that supports CDC NCM devices.
If your USB tethering device is supported, you can configure hotplugd to bring up a network connection when you plug it in.
http://www.omarpolo.com/post/openbsd-tethering.html
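A sketch of what that might look like, assuming a matching /etc/hostname.if file exists for the tethering interface (the details are in hotplugd(8), which invokes the attach script with the device class and name):

```
#!/bin/sh
# /etc/hotplug/attach – run by hotplugd(8) on device attach.
DEVCLASS=$1
DEVNAME=$2

case $DEVCLASS in
3)
	# Class 3 is a network interface; configure it from its
	# /etc/hostname.if file via netstart.
	sh /etc/netstart "$DEVNAME"
	;;
esac
```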
You can use xtsscale to calibrate the touch screen, but you need to find the right XInput device. xinput --list will show a number of mouse devices (all alike except for the device and XInput IDs).
I consulted dmesg for the ims devices:
dmesg | grep -E 'ims[0-9]'
On my machine ims0 has “button, tip” while ims1 also has “barrel, eraser” listed: clearly ims0 is the touch screen and ims1 is the stylus support. The other matched lines in dmesg should mention the wsmouse that corresponds to each ims device: my machine had wsmouse0 and wsmouse1 respectively.
Use xinput --list and find the entry for wsmouse0. Then find the XInput ID (e.g. id=7) and calibrate it:
xtsscale -d 7
If your fingers are as fat as mine the calibration process might take a few attempts but eventually it will update the calibration parameters for the running X server and print those same parameters so you can update your configuration. The details are in xtsscale(1).
https://www.birkey.co/2022-01-29-openbsd-7-xfce-desktop.html
https://www.tumfatig.net/2019/customizing-openbsd-xenodm/
https://dataswamp.org/~solene/2021-07-30-openbsd-xidle-xlock.html
https://www.tumfatig.net/2021/calibrate-your-touch-screen-on-openbsd/
Over the course of nine months or so, I visited:
Kanpai Sake Brewery, London
Bimber Distillery, London
East London Liquor Company, London
Dojima Sake Brewery, Ely
Copper Rivet Distillery, Chatham
The Foundry, Canterbury
Spirit of Yorkshire, Filey
Cooper King, York
Cotswolds Distillery, Stourton
Penderyn Distillery, Penderyn, Wales
The Scotch Whisky Experience, Edinburgh
Lindores Abbey Distillery, Fife
Royal Lochnagar, Balmoral
Glenlivet, Speyside
Speyside Cooperage
Glen Moray Distillery, Speyside
Aberfeldy Distillery, Highlands
Dalwhinnie Distillery, Highlands
Glenkinchie, Lowlands
Oban Distillery, Highlands
Kilchoman, Islay
Ardnahoe, Islay
Bunnahabhain, Islay
Ardbeg, Islay
Lagavulin, Islay
Laphroaig, Islay
Holyrood Distillery, Edinburgh
Oxford Artisan Distillery, Oxford
Happily, you can use the Pipeline: Milestone Step plugin to have Jenkins terminate already-running builds of the same job. The goal here is straightforward:
My multibranch pipeline build is notified of PR-123. It creates a new job called PR-123 and starts build 1.
I push another commit to the branch. GitHub notifies Jenkins and Jenkins starts PR-123 build 2. I now have two running builds – build 1 and build 2 – but the outcome of build 1 is no longer useful.
Suppose I notice a typo and immediately push a fix. This would result in three running builds, two of them useless.
Milestones can be useful in a range of circumstances but I mostly want to take advantage of one feature:
When a build passes a milestone, any older build that passed the previous milestone but not this one is aborted.
If we use the BUILD_ID as the milestone ordinal, then we can use this to abort older builds when a new one starts.
node {
    stage("Prepare") {
        milestone label: '', ordinal: Integer.parseInt(env.BUILD_ID) - 1
        milestone label: '', ordinal: Integer.parseInt(env.BUILD_ID)
        checkout scm
    }
    stage("One") {
        sh """sleep 60"""
    }
    stage("Two") {
        sh """sleep 120"""
    }
    stage("Three") {
        sh """sleep 180"""
    }
}
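The same guard also fits in a declarative pipeline inside a script block; here’s a sketch (untested, with an explicit check so the first build doesn’t reference milestone 0):

```groovy
pipeline {
    agent any
    stages {
        stage('Prepare') {
            steps {
                script {
                    def n = env.BUILD_ID as Integer
                    // Passing the previous build's milestone first means any
                    // older build still short of it gets aborted.
                    if (n > 1) {
                        milestone(n - 1)
                    }
                    milestone(n)
                }
                checkout scm
            }
        }
    }
}
```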
Like everything involving Jenkins, there are bound to be heaps of interactions with other features and scenarios where it doesn’t work reliably. Good luck.