This document contains only my personal opinions and calls of judgement, and where any comment is made as to the quality of anybody's work, the comment is an opinion, in my judgement.
[file this blog page at: digg del.icio.us Technorati]
Six months ago I bought, in an online Lenovo.com sale, a ThinkPad E495 with a Ryzen 5 3500U CPU, 8GiB RAM, a 256GB SSD, and a 14in HD IPS screen. I have since upgraded it with 2×16GiB RAM SODIMMs and 2×2TB flash SSDs.
My impressions as to its negative points:
A dock is a very useful accessory.
Overall I am very pleased with the E495: I chose it for its excellent value for money, its amazing expandability, and for being a proper even if lower-range ThinkPad, and it delivered on all of that. Its largest issues are with the battery, but I mostly use it on a desk or for short trips.
I have been occasionally re-reading a great little book about the git version control system, the Git Pocket Guide by R.E. Silverman, and it got me thinking: it is a bit strange that it has not been updated since 2013, as there have been several extensions and improvements since. Similarly the bigger (and also good) book Version Control with Git has not been updated since 2012. I suspect this is because books are considered obsolete, and most people who should be reading and digesting books instead adopt noddy methods (Memorizing Six Git Commands, Copying and Pasting from Stack Overflow), as in so much other unreliable software development.
To start fixing the latter two issues these are some short notes on terminology:
The index should have been called the draft snapshot.
Commits should have been called snapshots.
Branches should have been called lineages.
The repository should have been called the …
The stash should have been called the …
The working tree should have been called the content pool.
Note: there are many other aspects that are misleading in the git subcommands themselves, mostly related to the commands not being clear as to what they affect; for example git add copies content from the content pool into the draft snapshot.
And some short notes as to the purposes:
The purpose of the index is to be able to select only a subset of content changes for a snapshot, and to be able to annotate them with a specific author and description.
Note: The design of git is meant to enable the existence of git add -i and git rebase -i.
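The distinction matters in practice: the index lets a commit capture only some of the current changes. A throwaway demonstration (the repository, file names and identity below are made up for the example):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q demo
cd demo
git config user.email demo@example.com   # local identity, just for the demo
git config user.name Demo
printf 'one\n' > a.txt
printf 'two\n' > b.txt
git add a.txt b.txt
git commit -q -m 'initial snapshot'
printf 'one more\n' >> a.txt
printf 'two more\n' >> b.txt
git add a.txt          # copy only a.txt's change into the draft snapshot
git commit -q -m 'change to a.txt only'
git status --short     # b.txt is still modified and uncommitted
```

The second commit records only the change to a.txt, even though both files were modified; git add -i (or git add -p) refines this further by selecting individual hunks within a file.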
Overall the bigger issue is that git has a misleading terminology and is documented in a way that suggests that it is a version control system similar to earlier ones, but distributed in a peer-to-peer architecture. It is indeed a distributed version control system, but that description is trivial or misleading, while the really big difference is that it is a system designed to track (annotated) patches as lineages of (annotated) content snapshots, rather than as versions and branches of files or collections of files.
Note: the intended workflow of traditional version control systems is for authors to write changes and commit them as versions on branches, as they are. The intended workflow of git is to receive and merge a lot of patches into a large set of changes locally, then to prepare them for remote publication by snapshotting them as subsets, and then to review those snapshots and rearrange them.
The overall purpose is an editorial workflow, treating the content as an encyclopedia, more than a development workflow, which is also supported.
Note: for a development workflow staging and rebasing may be more annoying than useful, and the aspects of the design that support them add unnecessary complications.
One of the limitations of traditional UNIX style authentication is that while it is possible to have multiple accounts (user names) for the same identity (user number), it is not possible to have multiple passwords for the same account.
Note: in UNIX systems user names are usually thought of as identities, but it is user numbers that are used as identities for authorization (access control and resource accounting). Conversely it is user names that are used for authentication, that is for verifying the right to access an account and thus an identity. It is also confusing that, for historical reasons, /etc/passwd usually contains no passwords and is just the (default) accounts database, while the passwords are in /etc/shadow, which is the (default) authentication database.
This is because while it is possible to have multiple lines in /etc/shadow with the same user name, only the first one is ever used. This would be easy to fix, and the fix would be very useful and even backwards compatible. The usefulness is both that one could use a different password in different places, and also that if the regular password is known to be compromised, another still-secret password would allow logging in and deleting the compromised one.
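The first-match lookup can be demonstrated without touching the real files; this sketch uses a throwaway file in shadow-like name:hash format (the names and hash strings are made up):

```shell
set -e
f=$(mktemp)
cat > "$f" <<'EOF'
alice:HASH-current
alice:HASH-backup
bob:HASH-bob
EOF
# the traditional lookup: stop at the first line whose name matches
first=$(awk -F: '$1 == "alice" { print $2; exit }' "$f")
echo "$first"
# a multi-password scheme would instead consider every entry for the user
all=$(awk -F: '$1 == "alice" { print $2 }' "$f")
echo "$all"
rm -f "$f"
```

The second awk invocation is the whole backwards-compatible change: accept a login if any of the user's entries verifies, instead of only the first.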
So a lot of places require two-factor authentication, and that usually means having one factor that is something you know (knowledge), like a password, and one that is something you have (possession), like an authentication token or a mobile phone.
Note: some people add something you are to the categories of authentication factors, but that is a very bad idea, while it is acceptable for identification: something you are is actually something you have in your body (like an eye or a finger or a face), so it is not a separate category.
However it is not as simple as that, because the traditional categories of something you know and something you have need to be looked at more carefully:
Something you know actually means something that is part of your body, because the knowledge is stored somewhere in your body, and as a rule possession of your body is much the same as possession of the knowledge that is part of it, as whoever controls your body can usually extract the knowledge from it, sometimes in very unpleasant ways. Thus biometrics are actually things you know, not things you are or have. Or better said, passwords and biometrics are things that are not separable from your body, while things you have are things that are separable from your body, and can be stored away from it.
Something you have must mean something that is not part of your body, so that it can be kept separately from your body. If someone who has possession of your body asks you to open a safe that requires a number and a fingerprint, that is not really two-factor authentication, just two-step, because your body contains both. It is quite different when a number and a physical key are required and the key is stored separately.
How can someone else steal it? In the case of both knowledge and biometrics they can do so by acquiring your body (or in some cases a recording of it, for example recording you typing or speaking the knowledge, or taking an impression of your fingerprint). A password written down on a piece of paper, conversely, becomes something you have, not something you know, because you need possession of the piece of paper to use it.
So for example I have previously described two situations that effectively involve two factors:
Then if someone gets to know the local login password they cannot log in to the system remotely, and if they get possession of the laptop they cannot use the SSH key to log in remotely to other systems, because they do not know the passphrase.
Note: using also a hardware token for both remote and local authentication helps, in particular if it requires an explicit physical touch every time it is requested to release the authentication factor it holds.
An important detail is that all authentication factors must exist in multiple separate and independent copies, to cope with the loss or compromise of any one. It is easy to both forget knowledge and to lose possessions. This is particularly important with authentication to massive sites "in the cloud", where you cannot turn up in person and authenticate yourself with your identity card and body to a system administrator who then resets your password. So if you use hardware tokens as an authentication factor make sure you have at least a backup one already registered with the site, just like you keep a spare set of keys to house and car in case you lose the main one.
Note: it is even possible to lose fingers, or at least damage the fingertips. So if you have to use fingerprints as an identification (or worse, authentication) factor, register at least two, one on each hand.
Rather nice "peak" transfer rates on a recent laptop, a Lenovo ThinkPad E495, where sda is a SATA SSD and sdc is a USB3 Corsair Slider X2 32GB flash key:
# lsscsi
[0:0:0:0]    disk    ATA       CT2000MX500SSD1   023    /dev/sda
[1:0:0:0]    disk    asmedia   ASMT1153e         0      /dev/sdb
[2:0:0:0]    disk    Corsair   Voyager SliderX2  000A   /dev/sdc
[N:0:1:1]    disk    APS-SE20G-2T__1                    /dev/nvme0n1
# time hdparm -t /dev/nvme0n1; time hdparm -t /dev/sda

/dev/nvme0n1:
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 3762 MB in 3.00 seconds = 1253.57 MB/sec

real    0m6.392s
user    0m0.052s
sys     0m2.383s

/dev/sda:
 Timing buffered disk reads: 1514 MB in 3.00 seconds = 504.48 MB/sec

real    0m6.221s
user    0m0.035s
sys     0m1.114s
# time hdparm -t /dev/sdb; time hdparm -t /dev/sdc

/dev/sdb:
 Timing buffered disk reads: 1242 MB in 3.00 seconds = 413.55 MB/sec

real    0m6.238s
user    0m0.041s
sys     0m0.994s

/dev/sdc:
SG_IO: bad/missing sense data, sb: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 Timing buffered disk reads: 556 MB in 3.01 seconds = 184.92 MB/sec

real    0m6.280s
user    0m0.008s
sys     0m0.451s
The internal speeds are as expected, it is nice to see such high transfer rates with USB3.
Because for various reasons (geekiness, job protection, ...) many people love complicated setups, the current Ubuntu (and most other distributions') boot sequence involves four different operating systems:
EFI variables held in the computer's non-volatile RAM.
The root filestore that contains the main GNU/Linux distribution collection.
In all this there are two important aspects missing as to GRUB2 for EFI in particular for Ubuntu (and similarly for other distributions):
What is quite important is that the /EFI/ubuntu/grub.cfg locates the filestore containing the /boot/grub/grub.cfg file by UUID, and that can break, just as /boot/grub/grub.cfg in turn locates by UUID the filestore containing the Linux kernel image and the initramfs of the OS. If these change or are not unique, things can get difficult.
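For illustration, the EFI-side stub /EFI/ubuntu/grub.cfg that Ubuntu generates typically amounts to just this kind of chain (the UUID and partition name here are made-up examples):

```
search.fs_uuid 0123abcd-5678-90ef-abcd-0123456789ab root hd0,gpt6
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
```

If the UUID no longer matches any filestore (for example after reformatting the partition), search.fs_uuid fails to set root and the boot drops to the GRUB2 shell, from where the values can be set by hand.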
However GRUB2, whether for EFI or for MBR boot, has a rather sophisticated interactive shell, and it can be configured on the fly with a few simple commands. In particular its configuration files expect two values to be set:
root, the filestore containing the Linux kernel and initramfs images under /boot, something like (hd0,gpt6), and prefix, the directory containing the GRUB2 configuration and modules, something like ($root)/boot/grub.
Given these it is easy to run the standard GRUB2 menu file with a command like configfile $prefix/grub.cfg or to manually load and boot a Linux kernel with something like:
linux $root/boot/vmlinuz-5.8.0-53-generic \
    rootfstype=jfs rootflags=rw root=/dev/sda6
initrd $root/boot/initrd.img-5.8.0-53-generic
boot
Note: actually $root is the default for GRUB2 commands, so it can be omitted from paths.
Every piece of software is a product of its time and of the organization that develops it, and reflects that organization's structure; UNIX and derivative systems reflect the needs and structure of Bell Laboratories, and in particular of the group that designed and developed them. One of the aspects that shows this is the authentication and authorization system.
It is based on a simple list of user accounts for authorization (/etc/passwd) and another of secrets for authentication (/etc/shadow), plus similar lists for groups, and on a simple lookup logic where the first match found is the relevant one, even if there are multiple entries with the same key; there is a single list that is centrally managed. This reflects its origin as a system shared by a close and closed group of people who just needed some way to access it locally and directly.
It is not necessarily adequate for modern use, and in particular for the situation where a system may be accessed from many different devices and locations by the same user.
It is quite useful to have at least a different password for the same identity for use in different places: remotely, from a cellphone, or from a desktop. This is because most passwords end up stored in several encrypted books of secrets, and different places have different vulnerabilities; being able to revoke or change just the password used in a particular place, rather than having to do without password books or having to change all of them, is quite useful.
Because the main source of insecurity is the cost of security.
Then the question arises: if the case is so compelling, why don't modern UNIX-like systems handle multiple passwords per user (it would be easy), with password lists stored in the user's own directory so they can change them when it suits them?
Fortunately this can be achieved within SSH by listing multiple keys in the authorized_keys file, which is a major advantage over using passwords for SSH (something that should not be done in general, because a compromised password can be used to access a system remotely, while a compromised passphrase for an SSH private key cannot, as possession of the private key is also needed).
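This can be sketched concretely; everything below uses throwaway paths and made-up key comments, and the empty passphrases are only to keep the demonstration non-interactive (real keys should of course get passphrases):

```shell
set -e
d=$(mktemp -d)
# generate two independent key pairs, e.g. one per client machine
ssh-keygen -q -t ed25519 -N '' -C laptop  -f "$d/id_laptop"
ssh-keygen -q -t ed25519 -N '' -C desktop -f "$d/id_desktop"
# on the server, register both: one line per key in authorized_keys
cat "$d/id_laptop.pub" "$d/id_desktop.pub" > "$d/authorized_keys"
# if the laptop key is compromised, revoke just its line
grep -v ' laptop$' "$d/authorized_keys" > "$d/authorized_keys.new"
grep -c . "$d/authorized_keys"       # number of registered keys
grep -c . "$d/authorized_keys.new"   # after revoking one
```

This is exactly the multiple-passwords-per-account scheme argued for above, except that SSH has supported it all along: each line is an independent credential that can be added or revoked on its own.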