If you're really cool like me and own a decade-old Dodge Grand Caravan, you may find that the Totally Integrated Power Module (TIPM) will start failing.
In common cases (like mine) it will stop activating the fuel pump relay. This means your car will turn over but not start. This is not a great feature but does potentially help with the climate change crisis.
Fortunately for me I have a mechanic just down the street, but I'd rather not tow this beast even that far. To diagnose all of this, the internet mechanics mention opening your TIPM and hot-wiring the fuel relay circuit. I took a look at the relay's fuse with a multimeter. No voltage. Spot-checked some others and was reading the expected volts.
I shoved a jumper wire from my battery-wired cigarette port fuse (not the key activated one) over to the fuel pump relay fuse and heard what sounded like an electric motor activate from beneath the car.
I turned the ignition and the car started right up. Off to the mechanic.
Don’t Trust the Crazy Car Owner
I explained my morning’s adventures to the mechanic. He looked skeptically at my patch wire and said he’d run a test to diagnose things. Could be the TIPM, maybe just the fuel pump.
We both knew it was the TIPM. Turns out it was the TIPM. Shocker.
The fix: new TIPM. The problem: since these things fail so often, they're year/make/model specific, and there's a worldwide computer chip shortage, I get a refurbed one. Oh yeah, and it's going to take a week to ship it.
Turns out no car for a week is fine. Thanks to COVID most necessities are delivered now.
Bugs in the System
After some delays and fakeouts from TIPM dealers I got the call that the van was ready to go.
Walked up to the shop and it started right up. Settled the bill (ouch) and brought it home.
Next day the right turn indicator started flipping out: “front right turn signal out”. Guess I get to make a stop at the auto supply joint. Looked up the bulb number but before making a purchase decided to physically check all the lights first. The front right fog light has been out forever, so I might as well fix that while I'm at it.
I activate the left turn signal: the left fog light starts flashing. What?
I activate the right turn signal: no lights flashing. Ok, expected.
I push the fog light button: both turn signals turn on. What?
Quick call to the mechanic to describe the situation. Basically got the “wasn't my fault” spiel, which is fine; I wasn't casting blame, just trying to problem-solve here.
There's a lot of downtime at kids' baseball games, so between innings I start asking the internet what it thinks of all of this. Eventually I put in the correct series of search terms and land on someone having the same problem. I searched the document number in the images: “k6855837”.
It lands me on a YouTube video that takes me step-by-step through the process of performing this fix.
Apparently my new-to-me TIPM has a firmware update that changed the behavior of some circuits. Just gotta flip some wires. Since it turned into a car maintenance day, I took the opportunity to pick up some new H11 headlight sockets and wire them in; Dodge seems to use janky wiring that melts every few years.
And hooray, a car that starts with correctly functioning lights.
Not a complete list, but these have not changed, even when I've been forced into environments actively hostile to them (WordPress PHP/JS code style is hideous).
GIF: team soft “g”
Tabs vs Spaces: spaces (but inserted by pressing the tab key; nobody presses the space key)
Pineapple: Excellent on a pizza when it also has Canadian bacon.
1. data.community.labels.filter(/* ... */) is removing certain Label instances from the list.
2. filteredLabels.sort(/* ... */) is sorting the filtered items first by their .type then by their .name (case-insensitive).
3. filteredLabels.map(/* ... */) is turning the list of Label instances into a JSX.Element.
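In other words, the shape of the original code is roughly this (a sketch; LabelMenuItem is a stand-in for whatever element it actually produced):

const filteredLabels = data.community.labels.filter(/* remove certain labels */);
filteredLabels.sort(/* by .type, then by .name */);
const items = filteredLabels.map((label) => <LabelMenuItem key={label.name} label={label} />);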
The hardest part for me to decipher as a reader of the code was step two: given two labels what was the intended sort order?
After spending a few moments internalizing those if statements I came to the conclusion the two properties being used for comparison were label.type and label.name.
A label of .type === LabelType.AutoLabel should appear before a label of .type === LabelType.UserDefined.
Labels with the same .type should then be sorted by their .name case-insensitively.
Ramda’s sortWith
The problem I was encountering with this bit of code is that my human brain works this way:
Given a list of Labels:
- Sort them by their .type with .AutoLabel preceding .UserDefined
- Sort labels of the same .type by their .name case-insensitively
Ramda’s sortWith gives us an API that sounds similar in theory:
Sorts a list according to a list of comparators.
A “comparator” is typed with (a, a) => Number. My list of comparators will be one for the label.type and one for the label.name.
A comparator's return value is a bit ambiguous, declared only as Number in the documentation. But the code example for sortWith points to some more handy functions: ascend and descend.
Here’s the description for ascend:
Makes an ascending comparator function out of a function that returns a value that can be compared with < and >.
To sort by label.type I need to map the LabelType to a value that will sort .AutoLabel to precede .UserDefined:
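Something like this sketch (sortLabels is my working name for it):

const sortLabels = sortWith<Label>([
  ascend((label) => (label.type === LabelType.AutoLabel ? -1 : 1)),
  ascend((label) => label.name.toLowerCase()),
]);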
Ramda's API is curried. This means that by leaving out the second argument, sortLabels now has the TypeScript signature of:
type LabelSort = (labels: Label[]) => Label[]
Since we hinted the generic type on sortWith<Label>() TypeScript has also inferred that the functions we give to ascend receive a Label type as their single argument (see on TS Playground).
Screen capture of TS Playground tooltip showing Label as the inferred type.
Given Ramda’s curried interface, we can extract that sorting business logic into a reusable constant.
/**
 * Sort a list of Labels such that
 * - AutoLabels appear before UserDefined
 * - Labels are sorted by name case-insensitively
 */
export const sortLabelsByTypeAndName = sortWith<Label>([
  ascend((label) => (label.type === LabelType.AutoLabel ? -1 : 1)),
  ascend((label) => label.name.toLowerCase()),
]);
Using this to replace the original code’s sorting we now have:
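A sketch of the call site:

const sortedLabels = sortLabelsByTypeAndName(filteredLabels);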
Ramda’s filter looks similar to Array.prototype.filter:
Filterable f => (a → Boolean) → f a → f a
Takes a predicate and a Filterable, and returns a new filterable of the same type containing the members of the given filterable which satisfy the given predicate. Filterable objects include plain objects or any object that has a filter method such as Array.
The first change will be conforming to this interface:
import { filter } from 'ramda';
const filteredLabels = filter<Label>((label) => {
  // boolean logic here
}, data.community.labels);
There are two if statements in our original filter code that both have early returns. This indicates there are two different conditions that we test for.
Remove a Label if:
- .type is AutoLabel and
- .name is in a list of predefined label names

Remove a Label if:
- .type is UserDefined and
- .stats.count is zero (or fewer)
To clear things up we can turn these into their own independent functions that capture the business logic they represent.
The AutoLabel scenario has one complication. The isGroup variable changes the behavior by changing the names the label is allowed to have.
In lambda calculus this is called a free variable. We can solve this by creating our own closure that accepts the string[] of names and returns the Label filter.
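A sketch of the two predicates (the names are my own; the real list of excluded names comes from the original code):

const isExcludedAutoLabel = (excludedNames: string[]) => (label: Label): boolean =>
  label.type === LabelType.AutoLabel && excludedNames.includes(label.name);

const isUnusedUserLabel = (label: Label): boolean =>
  label.type === LabelType.UserDefined && label.stats.count <= 0;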
So if our application of Ramda's pipe produces the exact signature of a React.FunctionComponent then it stands to reason we can get rid of the function body completely:
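Roughly like this (a sketch; filterLabelsForProps and renderLabels are hypothetical stand-ins for the filtering and rendering pieces):

const LabelMenu: React.VFC<Props> = pipe(
  (props: Props) => filterLabelsForProps(props),
  sortLabelsByTypeAndName,
  renderLabels
);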
The ergonomics of code like this are debatable. I personally like it for my own projects. I find the more I think and write in terms of data pipelines the clearer the code becomes.
Here's an interesting problem. What happens if we need to use a React hook in a component like this? We'll need a valid place to call something like React.useState(), which means we'll need to create a closure for the component implementation.
This makes sense though! A functionally pure component like this is not able to have side-effects. React hooks are side-effects.
Designing at the Type Level
The <LabelMenu /> component has a type signature of
type Props = {isGroupPage: boolean, labels: Label []};
type LabelMenu = React.VFC<Props>
It renders a list of the labels it is given while also sorting and filtering them due to some business logic.
We extracted much of this business logic into pure functions that encode our business rules and operate on our types.
When I use <LabelMenu /> I know that I must give it isGroupPage and labels props. The labels property seems pretty self-explanatory, but isGroupPage doesn't make it obvious what it does.
I could go into the <LabelMenu /> code and discover that isGroupPage changes which LabelType.AutoLabel labels are displayed.
But what if I wanted another <LabelMenu /> that looked exactly the same but behaved slightly differently?
I could add some more props to <LabelMenu /> that changed how it internally filtered and sorted the labels I give it, but adding more property flags to its interface feels like the wrong kind of complexity.
How about disconnecting the labels from the filtering and sorting completely?
Start by Simplifying
I’ll first simplify the <LabelMenu /> implementation:
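Something like this sketch (the exact markup isn't important; the point is it renders every label it's given):

type LabelMenuProps = { labels: Label[] };

export const LabelMenu: React.VFC<LabelMenuProps> = ({ labels }) => (
  <ul>
    {labels.map((label) => (
      <li key={label.name}>{label.name}</li>
    ))}
  </ul>
);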
This implementation contains everything about how these elements should look, and it renders every label it gets.
But what about our filtering and sorting logic?
We had a component with this type signature:
type Props = { isGroupPage: boolean, labels: Label[] };
type LabelMenu = React.VFC<Props>;
Can we express the original component's interface without changing <LabelMenu />'s implementation?
If we can write a function that maps from one set of props to the other, then we should also be able to write a function that maps from one React component to the other.
First write the function that uses our original Props interface as its input, and then returns the new Props interface as its return value.
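A sketch, reusing the predicates and sortLabelsByTypeAndName from earlier (the two name lists are placeholders):

type FilterPageLabelMenuProps = { isGroupPage: boolean; labels: Label[] };

export const propertiesForFilterPage = ({ isGroupPage, labels }: FilterPageLabelMenuProps): LabelMenuProps => ({
  labels: sortLabelsByTypeAndName(
    filter(
      (label) =>
        !isExcludedAutoLabel(isGroupPage ? groupLabelNames : memberLabelNames)(label) &&
        !isUnusedUserLabel(label),
      labels
    )
  ),
});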
There are our Ramda implementations again. We took out all of the React bits. It's the same business logic but without the React element rendering. The only difference is instead of mapping the labels into JSX.Elements the labels are returned in the form of LabelMenuProps.
We’ve encoded our business logic into a function that maps from FilterPageLabelMenuProps to LabelMenuProps.
That means the output of propertiesForFilterPage can be used as the input to <LabelMenu />, which is itself a function that returns a JSX.Element.
Piping one function’s output into a compatible function’s input, that sounds familiar, doesn’t it?
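Which means the original component can be recovered as a pipe of the two (a sketch):

export const FilterPageLabelMenu: React.VFC<FilterPageLabelMenuProps> = pipe(
  propertiesForFilterPage,
  LabelMenu
);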
In a real-time chat workplace spelling and grammar tend to take a back seat to speed.
I typed qwerty proficiently for many years. After switching to Dvorak I have found that my fingers tend to translate the words I type phonetically.
I don’t know how to explain it. In my mind I’m using the word “their” but then I read back the sentence I just typed: “I don’t know there thoughts on …”. I’m always surprised. It’s not the word I had visualized but it’s the word I typed.
Sometimes I catch it but usually I hit enter before I read what I typed and quickly press up-arrow then e so I can quickly edit the grammatical error before too many coworkers have read it. (I just did it there. I know the word is “read” but my fingers type “red” and then I go back and fix it).
The scenario that always gives me problems is weather vs whether vs wether.
weather: the state of the atmosphere at a place and time as regards heat, dryness, sunshine, wind, rain, etc.: if the weather’s good we can go for a walk.
whether: expressing a doubt or choice between alternatives: he seemed undecided whether to go or stay | it is still not clear whether or not he realizes.
wether: a castrated ram.
I think I always get “weather” right but my fingers never want to type an “h” after the “w”. They just aren’t used to that sequence of keys.
So I end up talking about castrated rams much more than I ever thought I would.
For me everything worthwhile starts with “what if we try to …”. But the magic moment where that dopamine is flooding the brain coincides with that phrase: “Oh my god, this is gonna work.”
There will no doubt be a million more things to do, but that's the moment the “how” starts falling into place.
The second in a series of posts that investigates using strongly-typed first-class functions with WordPress WP-API to create a composable, testable, verifiable, and productive method of REST API development.
Context switching is a productivity killer. What exactly constitutes a context switch though?
Moving to a ping in Slack away from a Vim window? Definitely a context switch.
Switching via cmd-tab between a source code editor and browser window? Also a context switch. Yes, even when duck-duck-going the error from the console.
Everything that reduces context switching during development is a productivity win.
Debugging is a Productivity Killer
Time spent searching logs and reconstructing failure cases from production bugs is time not spent shipping.
It is also time that was not accounted for in the 100% accurate development estimate given to the project manager to complete the task.
Passing a string value to a function that expects an int: bug. Typing the incorrect string name of a function in WordPress’s add_filter: another bug. Calling a method on a WP_Error instance because it was assumed to be a WP_User: bug.
They may all seem like small bugs but they can quickly add up to a non-trivial amount of time debugging. Perhaps these bugs will be discovered quickly at runtime, but that requires that the correct code paths actually get executed at runtime. Is every code path in a project going to be executed between each source code change? No.
Static analysis will increase productivity by uncovering these bugs. But even with a 100% typed, fully analyzed codebase, validating running code output is still necessary.
Automating runtime validation is another tool to increase productivity.
Runtime Verification
Psalm enforces correct types and API usage. Checking the correctness of the runtime code still requires some manual steps, like booting up an entire WordPress stack. Previously, wp-env was used to verify that the endpoint actually worked.
This isn’t going to scale well when the number of endpoints and the number of ways to call them increases. Jumping from an editor to a browser and back isn’t the best recipe for productive coding sessions either.
Time for automated tests.
In the world of PHP, that means PHPUnit.
The bare minimum code to test totes_not_buggy() is a single implementation of PHPUnit\Framework\TestCase with a single test method. It will live in tests/Totes/TotesTest.php:
<?php

namespace Totes;

use WP_REST_Request;
use WP_REST_Server;

class TotesTest extends \PHPUnit\Framework\TestCase {
	/**
	 * @return void
	 */
	function testTotesNotBuggy() {
		$request = new WP_REST_Request( 'GET', '/totes/not-buggy' );
		$response = totes_not_buggy( $request );
		$this->assertEquals( [ 'status' => 'not buggy' ], $response->get_data() );
	}
}
To run PHPUnit, the dependency needs to be installed.
composer require --dev phpunit/phpunit
Now run the test:
./vendor/bin/phpunit tests
// yadda yadda
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
The error shows that we don't have WordPress APIs available to our runtime:
1) Totes\TotesTest::testTotesNotBuggy
Error: Class 'WP_REST_Request' not found
WordPress is a dependency of this project. It won’t work without it. Time to install it:
composer require --dev johnpbloch/wordpress
The johnpbloch/wordpress package installs the WordPress source code in ./wordpress by default. Setting up a whole WordPress stack to work on some source code: productivity killer. “No install” is faster than any five-minute install, no matter how famous it is.
If WordPress were a PSR-4 compliant project there wouldn’t be anything left to do. But it isn’t. To illustrate, run the test again and observe the result is the same.
Since Composer doesn’t know how to autoload WordPress source code, PHPUnit needs to be taught how to find WordPress APIs during test execution. A perfect place for this is via PHPUnit’s "bootstrap" system.
Generate a config and tell PHPUnit to use a custom "bootstrap":
./vendor/bin/phpunit --generate-config
PHPUnit 9.0.1 by Sebastian Bergmann and contributors.
Generating phpunit.xml in /Users/beau/code/wp-api-fun
Bootstrap script (relative to path shown above; default: vendor/autoload.php): tests/bootstrap.php
Tests directory (relative to path shown above; default: tests):
Source directory (relative to path shown above; default: src):
Generated phpunit.xml in /Users/beau/code/wp-api-fun
This generates ./phpunit.xml and tells PHPUnit to run tests/bootstrap.php before executing tests.
Time to hunt down all of the WordPress dependencies for this test.
One way to find which PHP files need to be included is to keep running the tests and including the files that define the missing classes and functions.
For example, the current error is that WP_REST_Request is not defined.
Now add wordpress/wp-includes/rest-api/class-wp-rest-request.php.
Keep going until it passes. This is the end result for now. Note that this is – at this time in our development – 100% of our plugin’s runtime dependencies.
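For reference, a sketch of what tests/bootstrap.php can end up looking like (the exact includes depend on what the errors surface, and the plugin's entry file path here is illustrative):

<?php
// tests/bootstrap.php

// Composer's autoloader (the default bootstrap this file replaces)
require __DIR__ . '/../vendor/autoload.php';

// WordPress APIs the tests touch; grow this list as "Class not found" errors appear
require __DIR__ . '/../wordpress/wp-includes/class-wp-http-response.php';
require __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-request.php';
require __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-response.php';

// the plugin code under test
require __DIR__ . '/../totes.php';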
Now that Composer can install WordPress and PHPUnit, the CI can run these tests too. Add it to the GitHub action:
+
+ - name: Unit Tests
+ run: vendor/bin/phpunit
Runtime verification of any new route can now be captured in a unit test. Once in a unit test it can be run in all sorts of ways.
Bonus: with Xdebug configured, PHPUnit will also report coverage analysis when proper @covers annotations are added:
vendor/bin/phpunit tests --coverage-html coverage-report
PHPUnit 9.0.1 by Sebastian Bergmann and contributors.
. 1 / 1 (100%)
Time: 68 ms, Memory: 8.00 MB
OK (1 test, 1 assertion)
Generating code coverage report in HTML format ... done [12 ms]
A 68-millisecond execution time with 100% coverage of a one-line function, assigned a CRAP score of 1. Gotta love that new project smell.
Screen capture of a PHPUnit coverage report.
Safety Nets Engaged
Between Psalm and PHPUnit we now have static analysis and automated runtime tests.
Next up we’ll dive into Higher-Order Kinds with Psalm and start using them with WP-API to create a declarative, composable API.
When I shared some of this with a coworker to help figure out some type questions, they quickly pointed out that this is in fact a Parser (thanks, Dennis). These are things an informally trained developer (me) probably should have been able to identify at this point in their career.
Mapping the understanding of what a Parser is to what I had named it caused confusion. So all things Validator<T> have become Parser<T>. Naming: one of the two hard things.
Combining more than Two Parsers
In the Parser<T> library the function oneOf accepts two Parser<T> types and returns the union of them:
function oneOf<A, B>(a: Parser<A>, b: Parser<B>): Parser<A | B> {
  return value => mapFailure(a(value), () => b(value));
}
A more complex Parser<T> is now created out of simpler ones.
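For example, assuming an isNull parser is also in the mix (the exact nesting is a sketch):

const isThing = oneOf(isNull, oneOf(isPerson, isAnimal));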
Assuming isPerson is Parser<Person> and isAnimal is Parser<Animal>, const isThing is inferred by TypeScript to be:
type Parser<null | Person | Animal>
Each additional Parser<T> requires another call of oneOf. Writing a oneOf that takes one or more Parser<T> types is straightforward:
function oneOf(parser, ...parsers) {
  return value => parsers.reduce(
    (result, next) => mapFailure(result, () => next(value)),
    parser(value)
  );
}
However, writing the correct type signature for this function was beyond my grasp.
My first attempt I knew couldn’t work:
function oneOf<T>(parser: Parser<T>, ...parsers: Parser<T>[]): Parser<T> {
In use, TypeScript’s inference was not happy:
const example = oneOf(isString, isNumber, isBoolean);
Types of property 'value' are incompatible.
Type 'number' is not assignable to type 'string'.
The T was being captured as string because the first argument to oneOf is a Parser<string>. However isNumber is a Parser<number>, so the two T did not match and tsc was not happy. Removing the first parser: Parser<T> didn’t help.
If TypeScript is told what the union is, then everything is ok:
const example = oneOf<string|number|boolean>(isString, isNumber, isBoolean);
But the best API experience is one in which the correct type is inferred.
After several attempts at picking out similar cases in TypeScript's Advanced Types documentation I gave up and posed the question in the company's #typescript Slack channel.
The magical internet people debated about Parser<T> and Result<T> so I tried to simplify things to the “base case” and got rid of Result<T>:
type Machine<T> = () => T
Is it possible to create a function signature where, via variadic type arguments, a list of Machine<*>s of differing <T>s infers the union Machine<T1|T2|T3|...>:
function oneOf(...machines: Array<Machine<?>>): Machine<(UNION of ?)> {
The magical internet people came up with a solution (thank you, Tal).
type MachineType<T> = T extends Machine<infer U> ? U : never;
function oneOf<M extends Machine<any>[]>(...machines: M): Machine<MachineType<M[number]>> {
After mapping it into the Parser domain, it worked!
type ParserType<T> = T extends Parser<infer U> ? U : never;
function oneOf<P extends Parser<any>[]>(...parsers: P): Parser<ParserType<P[number]>> {
const example = oneOf(isNumber, isString, isBoolean);
Running tsc passed, and the inferred type of const example is:
const example: (value: any) => Result<string | number | boolean>
Now to understand why it works.
Conditional Types: ParserType<T>
The first thing to understand is ParserType<T>, which uses a Conditional Type:
type ParserType<T> = T extends Parser<infer U> ? U : never;
This is essentially a function within the type analysis stage of TypeScript (somewhat analogous to Flow’s $Call utility type). My first understanding of this reads as:
Given a type T, if it extends Parser<infer U> return U, otherwise never.
Using ParserType with any Parser<T> will give the type of T. So given any function that is a Parser<T>, the type of <T> can be inferred.
Within the extends clause of a conditional type, it is now possible to have infer declarations that introduce a type variable to be inferred. Such inferred type variables may be referenced in the true branch of the conditional type. It is possible to have multiple infer locations for the same type variable.
Take an example parsePerson parser which is defined using objectOf:
const parsePerson = objectOf({
  name: isString,
  email: isString,
  metInPerson: isBoolean
});

type Person = ParserType<typeof parsePerson>;

// This is ok!
const valid: Person = {
  name: 'Nausicaa',
  email: 'nausica@valleyofthewind.website',
  metInPerson: false,
};

// This fails!
const invalid: Person = {}; // Type Error
type Person is inferred to be:
type Person = {
  name: string;
  email: string;
  metInPerson: boolean;
}
const invalid: Person fails because:
Type '{}' is missing the following properties from type '{ name: string; email: string; metInPerson: boolean; }': name, email, metInPerson
So now the return value of oneOf is almost understood:
: Parser<ParserType<P[number]>>
This says:
Returns a Parser<T> whose T is the ParserType of P[number].
Well what is P[number]?
Mapped Types
In TypeScript, Mapped Types allow one to take the key and value types of one type, and transform them into another.
If you’ve used Partial<T> or ReadOnly<T>, you have used a Mapped Type. The example implementations of those are given as:
type Readonly<T> = {
  readonly [P in keyof T]: T[P];
}

type Partial<T> = {
  [P in keyof T]?: T[P];
}
Given a type with an index, the type that is used for the index’s value can be accessed using its key type:
type MyIndexedType = {[key: number]: (number|boolean|string)};
type ValueType = MyIndexedType[number];
In this example ValueType will have the type (number|boolean|string).
In the return signature of oneOf there is a P[number].
: Parser<ParserType<P[number]>>
Assuming P is an indexed type with keys and values whose key type is a number, this gives the type of the value stored in P.
So what is P?
function oneOf<P extends Parser<any>[]>(
P is an array of Parser<any>. Well, it extends Parser<any>[].
This is where the magic happens.
TypeScript captures the T of each Parser<any> and stores it in P. Because an Array is an indexed type whose key is number, the type of P can also be expressed like this:
type P = {[key: number]: (Parser<number>|Parser<string>|Parser<boolean>)};
There it is! The union is the value type at P[number].
Putting the Pieces Together
ParserType is a Conditional Type that given a Parser<T>, returns T.
What happens when ParserType is given a union of Parser<T> types?
type T = ParserType<(Parser<string> | Parser<number>)>
TypeScript infers the union for T:
type T = string | number
Given a Mapped Type P that extends Parser<T>[], the union of Parser<T> types is available at P[number].
It follows then that passing the P[number] into ParserType will provide the union of T types in Parser<T>. That is exactly what the return type in oneOf does.
Reading the new signature for oneOf is now less cryptic:
function oneOf<P extends Parser<any>[]>(
  ...parsers: P
): Parser<ParserType<P[number]>> {
Now to wrap up the implementation.
Using oneOf doesn’t work unless there is at least one Parser<T>. The signature can be updated to require one:
function oneOf<T, P extends Parser<any>[]>(
  parser: Parser<T>,
  ...parsers: P
): Parser<T | ParserType<P[number]>> {
  // no additional parsers, return the single parser to be used as is
  if (parsers.length === 0) {
    return parser;
  }
  return value => mapFailure(
    parsers.reduce(
      // with each reduction, only try to parse when the previous was a Failure
      (result, next) => mapFailure(result, () => next(value)),
      // seed the result with the first parser
      parser(value)
    ),
    // if all parsers fail, indicate that there were multiple parsers attempted
    () => failure(value, `'${value}' did not match any of ${parsers.length + 1} validators`)
  );
}