Primes ≤ 100 in Rust

Posted by Michał ‘mina86’ Nazarewicz on 20th of June 2021

In a past life I talked about a challenge to write the shortest program which prints all prime numbers less than a hundred. Back then I discussed a 60-character-long solution written in C. Since Rust is the future, and inspired by a recent thread on the Sieve of Eratosthenes, I’ve decided to tackle the task in Rust as well.
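For reference, here is a plain, ungolfed Sieve of Eratosthenes in Rust which prints the same primes. Treat it as an illustration of the task rather than the golfed solution discussed below:

fn main() {
	const N: usize = 100;
	// sieve[i] is true while i is still a prime candidate.
	let mut sieve = [true; N];
	sieve[0] = false;
	sieve[1] = false;
	for i in 2..N {
		if sieve[i] {
			println!("{}", i);
			// Cross out all multiples of i starting from i².
			let mut j = i * i;
			while j < N {
				sieve[j] = false;
				j += i;
			}
		}
	}
}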

To avoid spoiling the solution, I’m padding this article with a bit of unrelated content. To jump straight to the code, skip the next block of paragraphs. Otherwise, here’s a joke for ya:

How do balanced audio cables work

Posted by Michał ‘mina86’ Nazarewicz on 13th of June 2021

Have you ever wondered how balanced audio cables work? For the longest time I have, until I finally decided to look into it. It turns out the principle is actually rather straightforward.

In a normal, unbalanced cable an analogue signal S is sent over a pair of wires: one carries the signal while the other carries a reference zero. The receiver interprets the voltage between the two as the signal. The issue is that noise is introduced over the length of the cable. While the transmitter sends S, the receiver gets S + e (where e denotes the noise).

[Diagram: transmitter and receiver connected by a cable which picks up noise along the way.]
Illustration of transmission of an analogue signal over a balanced cable. For brevity the diagram misuses symbols from digital signal processing and should not be taken as a technically correct representation.

A balanced cable addresses this problem by sending the information over three wires: hot (or positive), cold (or negative) and ground. The hot wire carries the signal S as before, the cold one carries the inverse of the signal -S and ground is zero as before. Just like before, noise is introduced as the information travels over the cable. Crucially, because it’s a single cable, the noise on the hot and cold wires is strongly correlated. The receiver therefore gets S + e on the hot wire and -S + e on the cold wire. All it needs to do is invert the signal on the cold wire and add the two signals together. The inversion flips the phase of the noise on the cold wire so that it cancels out the noise on the hot wire: (S + e) + -(-S + e) = S + e + S - e = 2S, i.e. the original signal (doubled) with the noise removed.
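As a quick sanity check, here is a small Rust sketch of the same idea (my own illustration, not part of the original article): identical noise is added to the hot and cold wires and the receiver recovers the signal by subtracting one from the other.

fn main() {
	// A toy signal and some noise picked up along the cable.
	let signal = [0.0_f64, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5];
	let noise = [0.1_f64, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1];

	for (s, e) in signal.iter().zip(noise.iter()) {
		let hot = s + e; // hot wire carries S + e
		let cold = -s + e; // cold wire carries -S + e
		// Receiver inverts the cold wire, sums and halves the result.
		let recovered = (hot - cold) / 2.0;
		assert!((recovered - s).abs() < 1e-12);
	}
	println!("noise cancelled, signal recovered");
}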

Explicit isn’t better than implicit

Posted by Michał ‘mina86’ Nazarewicz on 6th of June 2021

Continuing the new tradition of clickbaity titles, let’s talk about explicitness. It’s a subject that comes up when bike-shedding language and API designs. Pointing out that a construct or a function exhibits implicit behaviour is often touted as the ultimate winning argument against it.

There are two problems with such a line of reasoning. First of all, people claim to care about features being explicit but have come to accept a lot of implicit behaviour without batting an eye. Second of all, no one actually agrees on what the terms mean.

In this article I’ll demonstrate those two issues and show that ‘explicit over implicit’ is the wrong value to uphold. It’s merely a proxy for a much more useful goal that interfaces should strive for. By the end I’ll show what we should really look at instead.

Programmer (vs) Dvorak

Posted by Michał ‘mina86’ Nazarewicz on 30th of May 2021

A few years ago I made a decision that had the potential to change the course of history. Had I gone down a different path, the pl(dvp) layout might have never seen the light of day. But did I make a wise choice? Or had I chosen poorly?

I’m talking of course about the decision to learn Programmer Dvorak rather than the regular Dvorak keyboard layout. The main difference between the two is that in the former digits are entered with the Shift key pressed down, which allows several punctuation marks often used when programming to be typed without the need to reach for Shift. The hypothesis goes that developers use digits less often, so such a design optimises the layout for them.

To test this I grabbed all my git repositories and constructed a histogram of characters used in the text files present there. Since letters are in the same position on both layouts in question, only digits and punctuation characters are compared on the histogram:

[Histogram comparing characters outside the number row with unshifted and shifted number-row characters.]
Fig. 1. Histogram of characters used in text files authored by me present in my Git repositories.
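For the curious, a histogram like this can be gathered with a few lines of Rust; the sketch below (my own, not the script actually used for the figure) counts digit and punctuation characters in files passed on the command line.

use std::collections::HashMap;

fn main() {
	// Count character occurrences in files passed as command-line arguments.
	let mut counts: HashMap<char, u64> = HashMap::new();
	for path in std::env::args().skip(1) {
		if let Ok(text) = std::fs::read_to_string(&path) {
			for ch in text.chars() {
				*counts.entry(ch).or_insert(0) += 1;
			}
		}
	}
	// Print digits and punctuation sorted by frequency, most common first.
	let mut entries: Vec<_> = counts
		.into_iter()
		.filter(|(ch, _)| ch.is_ascii_digit() || ch.is_ascii_punctuation())
		.collect();
	entries.sort_by(|a, b| b.1.cmp(&a.1));
	for (ch, count) in entries {
		println!("{:?}\t{}", ch, count);
	}
}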

Computer Science vs Reality

Posted by Michał ‘mina86’ Nazarewicz on 23rd of May 2021

Robin: ‘Let’s use a linked li—’; Batman: *slaps Robin* ‘Vector is faster’

Some years ago, during a friendly discussion about C++, a colleague challenged me with a question: what’s the best way to represent a sequence of numbers if the delete operation needs to be supported? I argued in favour of a linked list, suggesting that with a sufficiently large number of elements it would be much preferred.

In a twist of fate, I’ve recently been discussing an algorithm which reminded me of that conversation from all those years ago. But this time I was arguing against a node-based data structure. Rather than leaving things at a conversation, I’ve decided to benchmark a few solutions to make sure which approach is the best.

The problem

The task at hand is simple. Design a data structure which stores a set of words, all of the same length, and is able to return all words matching globs of the form ‘prefix*suffix’, that is words which start with a given prefix and end with a given suffix. Either part of the pattern may be empty and their combined length never exceeds the length of the words in the collection. Initialisation time and memory footprint are not a concern. Complexity of returning a result can be assumed to be constant.

In this article I’m going to describe possible solutions — some using a boring vector while others taking advantage of an exciting prefix tree — and benchmark the implementations in an ultimate battle between contiguous-memory-based and node-based containers.
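To make the problem statement concrete, here is a minimal sketch of the boring vector approach (my own illustration, not one of the benchmarked implementations): a linear scan which checks every word’s prefix and suffix.

/// Stores fixed-length words and answers ‘prefix*suffix’ glob queries.
struct Words {
	words: Vec<String>,
}

impl Words {
	fn new(words: Vec<String>) -> Self {
		Self { words }
	}

	/// Returns all words starting with `prefix` and ending with `suffix`.
	fn matching(&self, prefix: &str, suffix: &str) -> Vec<&str> {
		self.words
			.iter()
			.filter(|word| word.starts_with(prefix) && word.ends_with(suffix))
			.map(|word| word.as_str())
			.collect()
	}
}

fn main() {
	let words = Words::new(vec!["foobar".into(), "foocar".into(), "bazbar".into()]);
	let matches = words.matching("foo", "bar");
	assert_eq!(matches, ["foobar"]);
	println!("{:?}", matches);
}

The prefix-tree variants discussed in the article trade this linear scan for more elaborate lookup structures.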

Embrace the Bloat

Posted by Michał ‘mina86’ Nazarewicz on 16th of May 2021

‘I’m using slock as my screen locker,’ a wise man once said. He had a beard so surely he was wise.

‘Oh?’ his colleague raised a brow intrigued. ‘Did they fix the PAM bug?’ he prodded inquisitively. Nothing but a confused stare came in reply. ‘slock crashes on systems using PAM,’ he offered an explanation and to demonstrate, he approached a nearby machine and pressed the Return key.

Screens, blanked by a locker just a few minutes prior, came back to life, unlocked without the need to enter the password.

The L*u*v* and LChuv colour spaces

Posted by Michał ‘mina86’ Nazarewicz on 9th of May 2021

I’ve written about L*a*b* so it’s only fair that I also describe its twin sister: the L*u*v* colour space (a.k.a. CIELUV). The two share a lot in common. For example, they use the same luminance value, base their chromaticity on the opponent process theory and each of them has a corresponding cylindrical LCh coordinate system. Yet, despite those similarities — or perhaps because of them — the CIELUV colour space is often overlooked.

Fig. 1. Picture of a panther chameleon with its decomposition into L*, u* and v* channels. Photo by Dr Pratt Datta.

Even though L*a*b* seems to be getting all the limelight, the L*u*v* model has its advantages. Before we start comparing the two colour spaces, let’s first go through the conversion formulæ.
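As a rough preview, here is a sketch of the commonly cited XYZ → L*u*v* conversion (my own summary using the D65 white point, so treat the exact constants as an assumption rather than this article’s derivation):

// Sketch of the commonly cited CIE XYZ → L*u*v* conversion,
// using the D65 white point; not this article’s own code.
fn xyz_to_luv(x: f64, y: f64, z: f64) -> (f64, f64, f64) {
	// Reference white (D65, Y normalised to 1).
	let (xn, yn, zn) = (0.95047, 1.0, 1.08883);

	// u′ and v′ chromaticity coordinates.
	let u_prime = |x: f64, y: f64, z: f64| 4.0 * x / (x + 15.0 * y + 3.0 * z);
	let v_prime = |x: f64, y: f64, z: f64| 9.0 * y / (x + 15.0 * y + 3.0 * z);

	// Lightness L* shared with L*a*b*.
	let t = y / yn;
	let l = if t > (6.0_f64 / 29.0).powi(3) {
		116.0 * t.cbrt() - 16.0
	} else {
		(29.0_f64 / 3.0).powi(3) * t
	};

	let u = 13.0 * l * (u_prime(x, y, z) - u_prime(xn, yn, zn));
	let v = 13.0 * l * (v_prime(x, y, z) - v_prime(xn, yn, zn));
	(l, u, v)
}

fn main() {
	// The white point itself should map to L* = 100, u* = v* = 0.
	let (l, u, v) = xyz_to_luv(0.95047, 1.0, 1.08883);
	println!("L* = {:.1}, u* = {:.3}, v* = {:.3}", l, u, v);
}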

Names of operands of arithmetic operations

Posted by Michał ‘mina86’ Nazarewicz on 2nd of May 2021

Every now and again I need a specific name for operands or results of various arithmetic operations. It usually takes me embarrassingly long time to look that information up. To save time in the future, here’s the list: $$ \begin{align} \left. \begin{matrix} \text{augend} + \text{addend†} \\ \text{summand} + \text{summand} \\ \text{term} + \text{term} \end{matrix} \right\} & = \text{sum} \\[.5em] \left. \begin{matrix} \text{minuend} - \text{subtrahend} \\ \text{term} - \text{term} \end{matrix} \right\} & = \text{difference} \\[.5em] \left. \begin{matrix} \text{multiplier} × \text{multiplicand} \\ \text{factor} × \text{factor} \\ \end{matrix} \right\} & = \text{product} \\[.5em] \left. \begin{matrix} \text{dividend} ÷ \text{divisor} \\ {\text{numerator}\over\text{denominator}} \end{matrix} \right\} & = \left\{ \begin{matrix} \text{ratio} \\ \text{fraction} \\ \text{quotient‡} + \text{remainder} \end{matrix} \right. \\[.5em] \text{base}^{\text{exponent}} & = \text{power} \\[.5em] \sqrt[\text{degree}]{\text{radicand}} & = \text{root} \\[.5em] \log_\text{base}(\text{anti-logarithm}) & = \text{logarithm} \end{align} $$

† Occasionally used to mean any operand of addition.
‡ Occasionally used to mean the fraction itself rather than just the integer part.

List in big part thanks to Wikipedia.

Most vexing parse

Posted by Michał ‘mina86’ Nazarewicz on 25th of April 2021

Here’s a puzzle: What does the following C++ code output:

#include <cstdio>
#include <string>

struct Foo {
	Foo(unsigned n = 1) {
		std::printf("Hell%s,", std::string(n, 'o').c_str());
	}
	~Foo() {
		std::printf("%s", " world");
	}
};

static constexpr double pi = 3.141592653589793238;

int main(void) {
	Foo foo();
	Foo bar(unsigned(pi));
}

Will the real ARG_MAX please stand up? Part 2

Posted by Michał ‘mina86’ Nazarewicz on 18th of April 2021

In part one we looked at the ARG_MAX parameter on Linux-based systems. We established experimentally how it affects arguments passed to programs and what influences its value. This time, we’ll look directly at the source to verify our findings and see how the limit looks from the point of view of the system libraries and the kernel itself.
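As a quick refresher before diving into the sources, the limit can also be queried at runtime through sysconf(3); the Rust snippet below is my own and assumes the libc crate is available as a dependency.

// Query the ARG_MAX limit at runtime via sysconf(3).
// Requires the `libc` crate.
fn main() {
	let arg_max = unsafe { libc::sysconf(libc::_SC_ARG_MAX) };
	if arg_max < 0 {
		println!("ARG_MAX is indeterminate on this system");
	} else {
		println!("ARG_MAX = {}", arg_max);
	}
}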