I’ve been around a little while now, so I’m imparting this wisdom to you.
The guaranteed way to become a 10x developer:
Hire ten developers whose mean productivity matches yours.
In the day job I recently recommended using Ramda to help clean up the readability of our UI code.
Ramda is a collection of pure functions designed to fit together using functional programming patterns.
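If you haven’t used Ramda, here’s a minimal sketch of the style (illustrative only, not code from our app):

import { filter, pipe, prop, sortBy } from 'ramda';

type User = { name: string; active: boolean };

// Each function is curried: supplying only the logic returns a new
// function that still expects the data.
const activeByName = pipe(
  filter((user: User) => user.active),
  sortBy(prop('name'))
);

activeByName([
  { name: 'b', active: true },
  { name: 'a', active: true },
  { name: 'c', active: false },
]);
// => [{ name: 'a', active: true }, { name: 'b', active: true }]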
We had a piece of TypeScript code landing that processed some data and rendered a React component.
const filteredLabels =
data.community.labels.filter((label) => {
if (label.type === LabelType.AutoLabel &&
(isGroupPage === true ?
['system_a', 'special'] :
['system_a']
).includes(label.name)
) {
return false;
}
if (label.type === LabelType.UserDefined &&
label.stats.timesUsed === 0) {
return false;
}
return true;
});
filteredLabels.sort((labelA, labelB) => {
if (labelA.type === LabelType.AutoLabel) {
if (labelB.type === LabelType.UserDefined) {
return -1;
}
return labelA.name.toLowerCase() < labelB.name.toLowerCase() ? -1 : 1;
}
if (labelB.type === LabelType.AutoLabel) {
return 1;
}
return labelA.name.toLowerCase() < labelB.name.toLowerCase() ? -1 : 1;
});
return filteredLabels.map((label) => <>{/* React UI */}</>);
Three distinct things are happening here:
- data.community.labels.filter(/* ... */) is removing certain Label instances from the list.
- filteredLabels.sort(/* ... */) is sorting the filtered items first by their .type, then by their .name (case-insensitive).
- filteredLabels.map(/* ... */) is turning the list of Label instances into a JSX.Element.

The hardest part for me to decipher as a reader of the code was step two: given two labels, what was the intended sort order?
After spending a few moments internalizing those if statements I came to the conclusion that the two properties being used for comparison were label.type and label.name.

A label of .type === LabelType.AutoLabel should appear before a label of .type === LabelType.UserDefined.

Labels with the same .type should then be sorted by their .name case-insensitively.
sortWith
The problem I was encountering with this bit of code is that my human brain works this way:
Given a list of Labels:
- Sort them by their .type with .AutoLabel preceding .UserDefined
- Sort labels of the same .type by their .name case-insensitively
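To make the intended order concrete, here’s a hypothetical input and the ordering those two rules should produce (label shapes simplified for illustration):

const labels = [
  { type: LabelType.UserDefined, name: 'zebra' },
  { type: LabelType.AutoLabel, name: 'Beta' },
  { type: LabelType.UserDefined, name: 'Apple' },
  { type: LabelType.AutoLabel, name: 'alpha' },
];

// Expected order after sorting:
// alpha (AutoLabel), Beta (AutoLabel), Apple (UserDefined), zebra (UserDefined)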
Ramda’s sortWith
gives us an API that sounds similar in theory:
Sorts a list according to a list of comparators.
A “comparator” is typed with (a, a) => Number
. My list of comparators will be one for the label.type
and one for the label.name
.
import { sortWith } from 'ramda';
const sortLabels = sortWith<Label>([
// 1. compare label types
// 2. compare label names
]);
A comparator’s return value is a bit ambiguous, declared only as Number in the documentation. But the code example for sortWith points to some more handy functions: ascend and descend.
Here’s the description for ascend
:
Makes an ascending comparator function out of a function that returns a value that can be compared with < and >.
To sort by label.type
I need to map the LabelType
to a value that will sort .AutoLabel
to precede .UserDefined
:
const sortLabels = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  // 2. compare label names
]);
To sort by the .name
I can ascend with a case-insensitive value for label.name
:
const sortLabels = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  ascend((label) => label.name.toLowerCase()),
]);
Ramda is a curried API. This means by leaving out the second argument, sortLabels
now has the TypeScript signature of:
type LabelSort = (labels: Label[]) => Label[]
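To see that currying in isolation, here’s a sketch with a plain number sort (not our Label type):

import { ascend, sortWith } from 'ramda';

const byValue = ascend((n: number) => n);

// Supplying both arguments sorts immediately:
sortWith([byValue], [3, 1, 2]); // => [1, 2, 3]

// Supplying only the comparators returns a reusable (list) => list function:
const sortAscending = sortWith([byValue]);
sortAscending([3, 1, 2]); // => [1, 2, 3]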
Since we hinted the generic type on sortWith<Label>()
TypeScript has also inferred that the functions we give to ascend
receive a Label
type as their single argument (see on TS Playground).
Given Ramda’s curried interface, we can extract that sorting business logic into a reusable constant.
/**
* Sort a list of Labels such that
* - AutoLabels appear before UserDefined
* - Labels are sorted by name case-insensitively
*/
export const sortLabelsByTypeAndName = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  ascend((label) => label.name.toLowerCase()),
]);
Using this to replace the original code’s sorting we now have:
const filteredLabels =
data.community.labels.filter((label) => {
if (label.type === LabelType.AutoLabel &&
(isGroupPage === true ?
['system_a', 'special'] :
['system_a']
).includes(label.name)
) {
return false;
}
if (label.type === LabelType.UserDefined &&
label.stats.timesUsed === 0) {
return false;
}
return true;
});
const sortedLabels = sortLabelsByTypeAndName(filteredLabels);
return sortedLabels.map((label) => <>{/* React UI */}</>);
Now let’s see what Ramda’s filter can do for us.
filter
Ramda’s filter
looks similar to Array.prototype.filter
:
Filterable f => (a → Boolean) → f a → f a
Takes a predicate and a Filterable, and returns a new filterable of the same type containing the members of the given filterable which satisfy the given predicate. Filterable objects include plain objects or any object that has a filter method such as Array.
The first change will be conforming to this interface:
import { filter } from 'ramda';
const filteredLabels = filter<Label>((label) => {
// boolean logic here
}, data.community.labels);
There are two if statements in our original filter code that both have early returns. This indicates there are two different conditions that we test for:
- Remove a Label if .type is AutoLabel and .name is in a list of predefined label names
- Remove a Label if .type is UserDefined and .stats.timesUsed is zero (or fewer)

To clear things up we can turn these into their own independent functions that capture the business logic they represent.
The AutoLabel
scenario has one complication. The isGroupPage
variable changes the behavior by changing the names the label is allowed to have.
In Lambda calculus this is called a free variable. We can solve this now by creating our own closure that accepts the string[]
of names and returns the Label
filter.
const isAutoLabelWithName = (names: string[]) =>
  (label: Label) =>
    label.type === LabelType.AutoLabel
      && names.includes(label.name);
Now isAutoLabelWithName
can be used without needing to know anything about isGroupPage
.
We can now use this with filter
:
const filteredLabels = filter<Label>(
  isAutoLabelWithName(
    isGroupPage
      ? ['system_a', 'special']
      : ['system_a']
  ),
  data.community.labels
);
But there’s a problem here. In the original code, we wanted to remove the labels that evaluated to true
. This is the opposite of that.
In set theory, this is called the complement. Ramda has a complement
function for this exact purpose.
const filteredLabels = filter<Label>(
  complement(
    isAutoLabelWithName(
      isGroupPage
        ? ['system_a', 'special']
        : ['system_a']
    )
  ),
  data.community.labels
);
The second condition is simpler given it uses no free variables.
const isUnusedUserDefinedLabel = (label: Label) =>
label.type === LabelType.UserDefined
&& label.stats.timesUsed <= 0;
Similar to isAutoLabelWithName
any Label
that is true for isUnusedUserDefinedLabel
should be removed from the list.
Since either being true
should remove the Label
from the collection, Ramda’s anyPass
can combine the two conditions:
const filteredLabels = filter<Label>(
  complement(
    anyPass([
      isAutoLabelWithName(
        isGroupPage
          ? ['system_a', 'special']
          : ['system_a']
      ),
      isUnusedUserDefinedLabel
    ])
  ),
  data.community.labels
);
Addressing the free variable, this can be extracted into its own globally declared function that describes its purpose:
const filterLabelsForMenu = (isGroupPage: boolean) =>
  filter<Label>(
    complement(
      anyPass([
        isAutoLabelWithName(
          isGroupPage
            ? ['system_a', 'special']
            : ['system_a']
        ),
        isUnusedUserDefinedLabel
      ])
    )
  );
The <LabelMenu>
component cleans up to:
import { anyPass, ascend, complement, filter, sortWith } from 'ramda';
import { Label, LabelType } from '../generated/graphql';

type Props = { isGroupPage: boolean, labels: Label[] };

const isAutoLabelWithName = (names: string[]) =>
  (label: Label) =>
    label.type === LabelType.AutoLabel
      && names.includes(label.name);

const isUnusedUserDefinedLabel = (label: Label) =>
  label.type === LabelType.UserDefined
    && label.stats.timesUsed <= 0;

const filterLabelsForMenu = (isGroupPage: boolean): ((labels: Label[]) => Label[]) =>
  filter<Label>(
    complement(
      anyPass([
        isAutoLabelWithName(
          isGroupPage
            ? ['system_a', 'special']
            : ['system_a']
        ),
        isUnusedUserDefinedLabel
      ])
    )
  );

export const sortLabelsByTypeAndName = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  ascend((label) => label.name.toLowerCase()),
]);

const LabelMenu = ({ isGroupPage, labels }: Props): JSX.Element => {
  const filterForGroup = filterLabelsForMenu(isGroupPage);
  const filteredLabels = filterForGroup(labels);
  const sortedLabels = sortLabelsByTypeAndName(filteredLabels);
  return (
    <>{
      sortedLabels.map((label) => <>{/* React UI */}</>)
    }</>
  );
};
The example above is very close to what we ended up landing.
However, since I like to get a little too ridiculous with functional programming patterns I decided to take it a little further in my own time.
pipe
The <LabelMenu />
component has one more step that can be converted over to Ramda using map
.
Ramda’s map
is similar to Array.prototype.map
but using Ramda’s curried, data-as-final-argument style of API.
const labelOptions = map<Label>(
(label) => <>{/* React UI */}</>
);
return <>{labelOptions(sortedLabels)}</>;
labelOptions
is now a function that takes a list of labels (Label[]
) and returns a list of React nodes (JSX.Element[]
).
The <LabelMenu />
component now has a very interesting implementation.
- filterLabelsForMenu returns a function of type (labels: Label[]) => Label[]
- sortLabelsByTypeAndName is a function of type (labels: Label[]) => Label[]
- labelOptions is a function of type (labels: Label[]) => JSX.Element[]

The output of each of those functions is given as the input of the next.
Taking away all of the variable assignments this looks like:
const LabelMenu = ({ isGroupPage, labels }: Props): JSX.Element => {
const labelOptions = map(
(label) => <>{/* React UI */}</>,
sortLabelsByTypeAndName(
filterLabelsForMenu(isGroupPage)(
labels
)
)
);
return <>{labelOptions}</>;
};
To understand how labelOptions becomes JSX.Element[] we are required to read from the innermost parentheses to the outermost:
- filterLabelsForMenu is applied with props.isGroupPage
- the resulting filter is applied to props.labels
- the filtered list is passed through sortLabelsByTypeAndName
- map(<></>) turns the sorted list into JSX.Element[]
We can take advantage of Ramda’s pipe
to express these operations in list form.
Performs left-to-right function composition. The first argument may have any arity; the remaining arguments must be unary.
We’re in luck, all of our functions are unary. We can line them up:
const LabelMenu = ({ isGroupPage, labels }: Props) => {
  const createLabelOptions = pipe(
    filterLabelsForMenu(isGroupPage),
    sortLabelsByTypeAndName,
    map(label => <li key={label.id}>{label.name}</li>)
  );
  return <>{createLabelOptions(labels)}</>;
}
The application of pipe
assigned to createLabelOptions
produces a function with the type signature:
const createLabelOptions: (labels: Label[]) => JSX.Element[];
React’s functional components are also plain functions. Ramda can use those too!
The type signature of <LabelMenu />
is:
type LabelMenu = ({isGroupPage: boolean, labels: Label[]}) => JSX.Element;
We can update our pipe
to wrap the list in a single element as its final operation:
export const LabelMenu = ({isGroupPage, labels}: Props): JSX.Element => {
const createLabelOptions = pipe(
filterLabelsForMenu(isGroupPage),
sortLabelsByTypeAndName,
map(label =>
<li key={label.id}>
{label.name}
</li>
),
(elements): JSX.Element =>
<ul>{elements}</ul>
);
return createLabelOptions(labels);
}
The type signature of our pipe
application (createLabelOptions
) is now:
const createLabelOptions: (x: Label[]) => JSX.Element
Wait a second, that looks very close to a React.VFC
compatible signature.
Our pipe
expects input of a single argument of Label[]
. But what if we changed it to accept an instance of Props
?
export const LabelMenu = (props: Props): JSX.Element => {
const createLabelOptions = pipe(
(props: Props) =>
filterLabelsForMenu(props.isGroupPage)(props.labels),
sortLabelsByTypeAndName,
map((label: Label) =>
<li key={label.id}>{label.name}</li>
),
(elements): JSX.Element =>
<ul>{elements}</ul>
);
return createLabelOptions(props);
}
Now the type signature of createLabelOptions
is:
const createLabelOptions: (x: Props) => JSX.Element
So if our application of Ramda’s pipe
produces the exact signature of a React.FunctionComponent
then it stands to reason we can get rid of the function body completely:
type Props = { isGroupPage: boolean, labels: Label[] };
export const LabelMenu: React.VFC<Props> = pipe(
(props: Props) =>
filterLabelsForMenu(props.isGroupPage)(props.labels),
sortLabelsByTypeAndName,
map(label => <li key={label.id}>{label.name}</li>),
(elements) => <ul>{elements}</ul>
);
The ergonomics of code like this are debatable. I personally like it for my own projects. I find the more I think and write in terms of data pipelines, the clearer the code becomes.
Here’s an interesting problem. What happens if we need to use a React hook in a component like this? We’ll need a valid place to call something like React.useState()
which means we’ll need to create a closure for component implementation.
This makes sense though! A functionally pure component like this is not able to have side-effects. React hooks are side-effects.
The <LabelMenu />
component has a type signature of
type Props = {isGroupPage: boolean, labels: Label []};
type LabelMenu = React.VFC<Props>
It renders a list of the labels
it is given while also sorting and filtering them due to some business logic.
We extracted much of this business logic into pure functions that encode our business rules and operate on our types.
When I use <LabelMenu />
I know that I must give it isGroupPage
and labels
props. The labels
property seems pretty self-explanatory, but the isGroupPage
doesn’t really make it obvious what it does.
I could go into the <LabelMenu />
code and discover that isGroupPage
changes which LabelType.AutoLabel
labels are displayed.
But what if I wanted another <LabelMenu />
that looked exactly the same but behaved slightly differently?
I could add some more props to <LabelMenu />
that changed how it internally filtered and sorted the labels
I give it, but adding more property flags to its interface feels like the wrong kind of complexity.
How about disconnecting the labels
from the filtering and sorting completely?
I’ll first simplify the <LabelMenu />
implementation:
type Props = { labels: Label[] };

const LabelMenu = ({ labels }: Props) => (
  <ul>
    {labels.map(
      (label) => <li key={label.id}>{label.name}</li>
    )}
  </ul>
);
This implementation should contain everything about how these elements should look and render every label it gets.
But what about our filtering and sorting logic?
We had a component with this type signature:
type Props = { isGroupPage: boolean, labels: Label[] };
type LabelMenu = React.VFC<Props>;
Can we express the original component’s interface without changing <LabelMenu />
‘s implementation?
If we can write a function that maps from one set of props to the other, then we should also be able to write a function that maps from one React component to the other.
First write the function that uses our original Props
interface as its input, and then returns the new Props
interface as its return value.
type LabelMenuProps = { labels: Label[] };
type FilterPageLabelMenuProps = {
isGroupPage: boolean,
labels: Label []
};
const propertiesForFilterPage = pipe(
(props: FilterPageLabelMenuProps) =>
filterLabelsForMenu(props.isGroupPage)(props.labels),
sortLabelsByTypeAndName,
(labels) => ({ labels })
);
There are our Ramda implementations again. We took out all of the React bits. It’s the same business logic but without the React element rendering. The only difference is instead of mapping the labels into JSX.Elements, the labels are returned in the form of LabelMenuProps.
We’ve encoded our business logic into a function that maps from FilterPageLabelMenuProps
to LabelMenuProps
.
That means the output of propertiesForFilterPage
can be used as the input to <LabelMenu />
, which is itself a function that returns a JSX.Element
.
Piping one function’s output into a compatible function’s input, that sounds familiar, doesn’t it?
export const FilterPageMenuLabel: React.VFC<FilterPageLabelMenuProps> =
pipe(
(props: FilterPageLabelMenuProps) =>
filterLabelsForMenu(props.isGroupPage)(props.labels),
sortLabelsByTypeAndName,
(labels) => ({ labels }),
LabelMenu
);
We’ve leveraged our existing view-specific code, but changed its behavior at the Props level.
import { FilterPageMenuLabel, LabelMenu } from './components/LabelMenu';

const Foo = () => {
  const { data } = useQuery(LabelsQuery);
  return (
    <FilterPageMenuLabel
      isGroupPage={isGroupPage}
      labels={data?.labels ?? []}
    />
  );
}

const Bar = () => {
  const { data } = useQuery(LabelsQuery);
  return (
    <LabelMenu labels={data?.labels ?? []} />
  );
}
When hovering over the implementation of <FilterPageMenuLabel> the tooltip shows exactly how it’s implemented.
Whenever I’m hiking I think of this talk by Feynman and it brings a sense of awe as I walk through the trees.
People look at trees and think it comes out of the ground … they come out of the air!
In a real-time chat workplace spelling and grammar tend to take a back seat to speed.
I typed qwerty proficiently for many years. After switching to Dvorak I have found that my fingers tend to translate the words I type phonetically.
I don’t know how to explain it. In my mind I’m using the word “their” but then I read back the sentence I just typed: “I don’t know there thoughts on …”. I’m always surprised. It’s not the word I had visualized but it’s the word I typed.
Sometimes I catch it but usually I hit enter before I read what I typed and quickly press up-arrow
then e
so I can quickly edit the grammatical error before too many coworkers have read it. (I just did it there. I know the word is “read” but my fingers type “red” and then I go back and fix it).
The scenario that always gives me problems is weather vs whether vs wether.
I think I always get “weather” right but my fingers never want to type an “h” after the “w”. They just aren’t used to that sequence of keys.
So I end up talking about castrated rams much more than I ever thought I would.
I think that sums up the moment that keeps me excited about slinging code. It probably fits with any creative endeavor.
Oh my god, this is gonna work.
Adam Lisagor in How We Made “Slack WFH”
I couldn’t help but smile when he said that line.
For me everything worthwhile starts with “what if we try to …”. But the magic moment where that dopamine is flooding the brain coincides with that phrase: “Oh my god, this is gonna work.”
There will no doubt be a million more things to do, but that’s the moment the “how” starts falling into place.
Wear your masks.
Arrakis teaches the attitude of the knife – chopping off what’s incomplete and saying: ‘Now, it’s complete because it’s ended here.’
– from “Collected Sayings of Muad’Dib” by the Princess Irulan
Frank Herbert quotes on Goodreads
The second in a series of posts that investigates using strongly-typed first-class functions with WordPress WP-API to create a composable, testable, verifiable, and productive method of REST API development.
Previously: Strongly Typed WP-API.
Context switching is a productivity killer. What exactly constitutes a context switch though?
Moving to a ping in Slack away from a Vim window? Definitely a context switch.
Switching via cmd-tab
between a source code editor and browser window? Also a context switch. Yes, even when duck-duck-going the error from the console.
Everything that reduces context switching during development is a productivity win.
Time spent searching logs and reconstructing failure cases from production bugs is time not spent shipping.
It is also time that was not accounted for in the 100% accurate development estimate given to the project manager to complete the task.
Passing a string
value to a function that expects an int
: bug. Typing the incorrect string name of a function in WordPress’s add_filter
: another bug. Calling a method on a WP_Error
instance because it was assumed to be a WP_User
: bug.
All of these things are caught by static type analysis.
They may all seem like small bugs but they can quickly add up to a non-trivial amount of time debugging. Perhaps these bugs will be discovered quickly at runtime, but that requires the correct code paths to be executed at runtime. Is every code path in a project going to be executed between each source code change? No.
Static analysis will increase productivity by uncovering these bugs. But even with a 100% typed, fully analyzed codebase validating running code output is still necessary.
Automating runtime validation is another tool to increase productivity.
Psalm enforces correct types and API usage. Checking the correctness of the runtime code still requires some manual steps, like booting up an entire WordPress stack. Previously, wp-env
was used to verify that the endpoint actually worked.
wp-env start
curl http://localhost:8889/?rest_route=/totes/not-buggy
{"result": "not buggy"}
This isn’t going to scale well when the number of endpoints and the number of ways to call them increases. Jumping from an editor to a browser and back isn’t the best recipe for productive coding sessions either.
Time for automated tests.
In the world of PHP, that means PHPUnit.
The bare minimum code to test totes_not_buggy()
is a single implementation of PHPUnit\Framework\TestCase
with a single test method. It will live in tests/Totes/TotesTest.php
:
<?php
namespace Totes;
use WP_REST_Request;
use WP_REST_Server;
class TotesTest extends \PHPUnit\Framework\TestCase {
/**
* @return void
*/
function testTotesNotBuggy() {
$request = new WP_REST_Request( 'GET', '/totes/not-buggy' );
$response = totes_not_buggy( $request );
$this->assertEquals( [ 'status' => 'not buggy' ], $response->get_data() );
}
}
To run PHPUnit, the dependency needs to be installed.
composer require --dev phpunit/phpunit
Now run the test:
./vendor/bin/phpunit tests
// yadda yadda
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
The error shows that we don’t have WordPress APIs available to our test runtime:
1) Totes\TotesTest::testTotesNotBuggy
Error: Class 'WP_REST_Request' not found
WordPress is a dependency of this project. It won’t work without it. Time to install it:
composer require --dev johnpbloch/wordpress
The johnpbloch/wordpress
package by default will install the WordPress source code in ./wordpress
. Setting up a whole WordPress stack to work on some source code: productivity killer. “No install” is faster than any five minute install no matter how famous it is.
If WordPress were a PSR-4 compliant project there wouldn’t be anything left to do. But it isn’t. To illustrate, run the test again and observe the result is the same.
Since Composer doesn’t know how to autoload WordPress source code, PHPUnit needs to be taught how to find WordPress APIs during test execution. A perfect place for this is via PHPUnit’s "bootstrap"
system.
Generate a config and tell PHPUnit to use a custom "bootstrap":
:
./vendor/bin/phpunit --generate-config
PHPUnit 9.0.1 by Sebastian Bergmann and contributors.
Generating phpunit.xml in /Users/beau/code/wp-api-fun
Bootstrap script (relative to path shown above; default: vendor/autoload.php): tests/bootstrap.php
Tests directory (relative to path shown above; default: tests):
Source directory (relative to path shown above; default: src):
Generated phpunit.xml in /Users/beau/code/wp-api-fun
This generates ./phpunit.xml
and tells phpunit
to run tests/bootstrap.php
before executing tests.
Time to hunt down all of the WordPress dependencies for this test.
One way to find which PHP files need to be included is to keep running the tests and including the files that define the missing classes and functions.
For example, the current error is that WP_REST_Request
is not defined.
ack 'class WP_REST_Request' wordpress
wordpress/wp-includes/rest-api/class-wp-rest-request.php
29:class WP_REST_Request implements ArrayAccess {
Now add wordpress/wp-includes/rest-api/class-wp-rest-request.php
.
Keep going until it passes. This is the end result for now. Note that this is – at this time in our development – 100% of our plugin’s runtime dependencies.
<?php
define( 'ABSPATH', __DIR__ . '/../wordpress' );
define( 'WPINC', '/wp-includes' );
require_once __DIR__ . '/../wordpress/wp-includes/functions.php';
require_once __DIR__ . '/../wordpress/wp-includes/plugin.php';
require_once __DIR__ . '/../wordpress/wp-includes/class-wp-error.php';
require_once __DIR__ . '/../wordpress/wp-includes/pomo/translations.php';
require_once __DIR__ . '/../wordpress/wp-includes/l10n.php';
require_once __DIR__ . '/../wordpress/wp-includes/class-wp-http-response.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-request.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-response.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-server.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api.php';
require_once __DIR__ . '/../wordpress/wp-includes/load.php';
add_action( 'rest_api_init', 'totes_register_endpoints' );
/** @psalm-suppress InvalidGlobal */
global $wp_rest_server;
$wp_rest_server = new WP_REST_Server();
do_action( 'rest_api_init' );
Now that Composer can install WordPress and PHPUnit, the CI can run these tests too. Add it to the GitHub action:
+
+ - name: Unit Tests
+ run: vendor/bin/phpunit
Runtime verification of any new route can now be captured in a unit test. Once in a unit test it can be run in all sorts of ways.
Bonus, with XDebug configured PHPUnit will also report coverage analysis when proper @covers
annotations are added:
vendor/bin/phpunit tests --coverage-html coverage-report
PHPUnit 9.0.1 by Sebastian Bergmann and contributors.
. 1 / 1 (100%)
Time: 68 ms, Memory: 8.00 MB
OK (1 test, 1 assertion)
Generating code coverage report in HTML format ... done [12 ms]
68 millisecond execution time with 100% coverage of a one-line function assigned a CRAP score of 1. Gotta love that new project smell.
Between Psalm and PHPUnit we now have static analysis and automated runtime tests.
Next up we’ll dive into Higher-Order Kinds with Psalm and start using them with WP-API to create a declarative, composable API.
Quick context: Validator<T>
is a function that returns a Result<T>
:
type Validator<T> = (value: any) => Result<T>;
When sharing some of this with a coworker to help figure out some type
questions they quickly pointed out that this is in fact a Parser (thanks Dennis). These are things an informally trained developer (me) probably should have been able to identify at this point in their career.
Mapping the understanding of what a Parser is to what I had named it caused confusion. So all things Validator<T>
have become Parser<T>
. Naming: one of the two hard things.
In the Parser<T>
library the function oneOf
accepts two Parser<T>
types and returns the union of them:
function oneOf<A,B>(a: Parser<A>, b: Parser<B>): Parser<(A|B)> {
  return value => mapFailure(a(value), () => b(value));
}
A more complex Parser<T>
is now created out of simpler ones.
const isStringOrNumber = oneOf(isString, isNumber);
TypeScript can infer that isStringOrNumber
has the type of Parser<string|number>
.
This works great when combining two parsers, but when more than two are combined with oneOf
it requires nested calls:
const isThing = oneOf(isNull, oneOf(isPerson, isAnimal));
Assuming isPerson
is Parser<Person>
and isAnimal
is Parser<Animal>
, const isThing
is inferred by TypeScript to be:
type Parser<null | Person | Animal>
Each additional Parser<T>
requires another call of oneOf
. Writing a oneOf
that takes one or more Parser<T>
types is straightforward:
function oneOf(parser, ...parsers) {
  return value => parsers.reduce(
    (result, next) => mapFailure(result, () => next(value)),
    parser(value)
  );
}
However, writing the correct type signature for this function was beyond my grasp.
My first attempt I knew couldn’t work:
function oneOf<T>(parser: Parser<T>, ...parsers: Parser<T>[]): Parser<T> {
In use, TypeScript’s inference was not happy:
const example = oneOf(isString, isNumber, isBoolean);
Types of property 'value' are incompatible. Type 'number' is not assignable to type 'string'.
The T
was being captured as string
because the first argument to oneOf
is a Parser<string>
. However isNumber
is a Parser<number>
, so the two T
did not match and tsc
was not happy. Removing the first parser: Parser<T>
didn’t help.
If TypeScript is told what the union is, then everything is ok:
const example = oneOf<string|number|boolean>(isString, isNumber, isBoolean);
But the best API experience is one in which the correct type is inferred.
After various attempts at picking out similar cases in TypeScript’s Advanced Types I gave up and posed the question in the company’s #typescript Slack channel.
The magical internet people debated about Parser<T>
and Result<T>
so I tried to simplify things to the “base case” and got rid of Result<T>
:
type Machine<T> = () => T
Is it possible to create a function signature such that a list of Machine<*>
s of differing <T>
s via variadic type arguments could infer the union Machine<T1|T2|T3|...>
:
function oneOf(...machines: Array<Machine<?>>): Machine<(UNION of ?)> {
The magical internet people came up with a solution (thank you, Tal).
type MachineType<T> = T extends Machine<infer U> ? U : never;
function oneOf<M extends Machine<any>[]>(...machines: M): Machine<MachineType<M[number]>> {
After mapping it into the Parser
domain, it worked!
type ParserType<T> = T extends Parser<infer U> ? U : never;
function oneOf<P extends Parser<any>[]>(...parsers: P): Parser<ParserType<P[number]>> {
const example = oneOf(isNumber, isString, isBoolean);
Running tsc
passed, and the inferred type of const example
is:
const example: (value: any) => Result<string | number | boolean>
Now to understand why it works.
ParserType<T>
The first thing to understand is ParserType<T>
, which uses a Conditional Type:
type ParserType<T> = T extends Parser<infer U> ? U : never;
This is essentially a function within the type analysis stage of TypeScript (somewhat analogous to Flow’s $Call
utility type). My first understanding of this reads as:
Given a type
T
, if it extendsParser<infer U>
returnU
, otherwisenever
.
Using ParserType
with any Parser<T>
will give the type of T
. So given any function that is a Parser<T>
, the type of <T>
can be inferred.
Type inference in conditional types

Within the extends clause of a conditional type, it is now possible to have infer declarations that introduce a type variable to be inferred. Such inferred type variables may be referenced in the true branch of the conditional type. It is possible to have multiple infer locations for the same type variable.
Take an example parsePerson
parser which is defined using objectOf
:
const parsePerson = objectOf({
name: isString,
email: isString,
metInPerson: isBoolean
});
type Person = ParserType<typeof parsePerson>;
// This is ok!
const valid: Person = {
name: 'Nausicaa',
email: 'nausica@valleyofthewind.website',
metInPerson: false,
};
// This fails!
const invalid: Person = {}; // Type Error
type Person
is inferred to be:
type Person = {
name: string;
email: string;
metInPerson: boolean;
}
const invalid: Person
fails because:
Type '{}' is missing the following properties from type '{ name: string; email: string; metInPerson: boolean; }': name, email, metInPerson
So now the return value of oneOf
is almost understood:
: Parser<ParserType<P[number]>>
This says:
Returns a
Parser<T>
whoseT
is theParserType
ofP[number]
.
Well what is P[number]
?
In TypeScript, Mapped Types allow one to take the key and value types of one type, and transform them into another.
If you’ve used Partial<T>
or ReadOnly<T>
, you have used a Mapped Type. The example implementations of those are given as:
type Readonly<T> = {
readonly [P in keyof T]: T[P];
}
type Partial<T> = {
[P in keyof T]?: T[P];
}
Given a type with an index, the type that is used for the index’s value can be accessed using its key type:
type MyIndexedType = {[key: number]: (number|boolean|string)};
type ValueType = MyIndexedType[number];
In this example ValueType
will have the type (number|boolean|string)
.
In the return signature of oneOf
there is a P[number]
.
: Parser<ParserType<P[number]>>
Assuming P
is an indexed type with keys and values whose key type is a number
, this gives the type of the value stored in P
.
So what is P
?
function oneOf<P extends Parser<any>[]>(
P is an array of Parser<any>. Well, it extends Parser<any>[].
This is where the magic happens.
TypeScript captures the T
of each Parser<any>
and stores it in P
. Because an Array
is an indexed type whose key is number
, the type of P
can also be expressed like this:
type P = { [key: number]: Parser<number> | Parser<string> | Parser<boolean> };
There it is! The union is the value type at P[number]
.
ParserType
is a Conditional Type that given a Parser<T>
, returns T
.
What happens when ParserType is given a union of Parser<T> types?
type T = ParserType<(Parser<string> | Parser<number>)>
TypeScript infers the union for T
:
type T = string | number
Given a Mapped Type P
that extends Parser<T>[]
, the union of Parser<T>
types is available at P[number]
.
It follows then that passing the P[number]
into ParserType
will provide the union of T
types in Parser<T>
. That is exactly what the return type in oneOf
does.
Reading the new signature for oneOf
is now less cryptic:
function oneOf<P extends Parser<any>[]>(
...parsers: P
): Parser<ParserType<P[number]>> {
Now to wrap up the implementation.
Using oneOf
doesn’t work unless there is at least one Parser<T>
. The signature can be updated to require one:
function oneOf<T, P extends Parser<any>[]>(
parser: Parser<T>,
...parsers: P
): Parser<T|ParserType<P[number]>> {
// no additional parsers, return the single parser to be used as is
if (parsers.length === 0) {
return parser;
}
return value => mapFailure(
parsers.reduce(
// with each reduction, only try to parse when the previous result was a Failure
(result, next) => mapFailure(result, () => next(value)),
// seed the result with the first parser
parser(value)
),
// if all parsers fail, indicate that there were multiple parsers attempted
() => failure(value, `'${value}' did not match any of ${parsers.length + 1} parsers`)
);
}
oneOf
Using oneOf
now looks like this:
const parseStatus = oneOf(
isExactly('pending'),
isExactly('shipped'),
isExactly('delivered'),
);
This expresses a Parser<T>
that will fail if the string is not 'pending'
, 'shipped'
, or 'delivered'
.
With the new signature of oneOf
, TypeScript now infers parseStatus
to have the type:
const parseStatus: Parser<'pending'|'shipped'|'delivered'>;
Combined with mapSuccess
, the Success<T>
will guarantee that the value is one of those three exact strings.
mapSuccess(parseStatus('other'), status => {
switch(status) {
case 'something': return 'not valid';
}
});
This fails type checking:
Type '"something"' is not comparable to type '"shipped" | "pending" | "delivered"'.
This works with the most complex of Parser<T>
s:
const json: Parser<any> = value => {
try {
return success(JSON.parse(value));
} catch(error) {
return failure(value, error.description);
}
}
const employeesParser = mapParser(json, objectOf({
employees: arrayOf(objectOf({
role: oneOf(
isExactly('Vice President'),
isExactly('Manager'),
isExactly('Individual Contributor')
),
// This one is for you Dennis
// assuming ISO8601 Date strings and a modern browser
hireDate: mapParser(isString, (value) => success(new Date(value)))
}))
}));
mapSuccess(employeesParser("{...JSON HERE...}"), (valid) => {
valid.employees.forEach(employee => {
const employmentDurationInMS = (
Date.now() - employee.hireDate.getTime()
);
switch(employee.role) {
case "Not A Real Role": {
}
}
});
});
The case "Not A Real Role":
doesn’t exist for employee.role
:
Type '"Not A Real Role"' is not comparable to type '"Manager" | "Individual Contributor" | "Vice President"'
Lovely!
Here’s the inferred type of employeesParser
’s use of oneOf
:
function oneOf<"Vice President", [Parser<"Manager">, Parser<"Individual Contributor">]>(parser: Parser<"Vice President">, parsers_0: Parser< "Manager">, parsers_1: Parser<"Individual Contributor">): Parser<...>
We can see where:
- Parser<"Manager"> and Parser<"Individual Contributor"> types are captured in P.
- parsers_0 and parsers_1 are spread as arguments to oneOf with the correct parser types.

In my personal projects I have fallen in love with solving my problems via Type Driven Development.
Given a language with static types, generics, and first-class functions, it hits the sweet spot for this kind of development. The only real requirement is first-class functions, because it is an application of Lambda calculus principles.
any
Typed languages provide safety. If the developer uses an API incorrectly, the computer will yell at them.
type Product = {
readonly name: string
}
function createProduct(name: string): Product {
return { name };
}
createProduct(5);
When calling createProduct
with name
of something other than a string
the computer cries out:
Argument of type '5' is not assignable to parameter of type 'string'.
A problem I want to solve in one of my side-projects is JSON safety. Take Product
as an example. When serializing it with JSON.stringify
and then parsing it with JSON.parse
, the type is lost:
type User = {
readonly username: string
}
function renameUser(name: string, user: User): void {
// implementation left blank
}
const product = createProduct('some product');
renameUser('some user', product);
renameUser('some user', JSON.parse(JSON.stringify(product)));
The second call to renameUser
shows no error. The first call to renameUser
shows:
Argument of type 'Product' is not assignable to parameter of type 'User'. Property 'username' is missing in type 'Product' but required in type 'User'.
If we write the unit test I’m confident we can prove that product
and JSON.parse(JSON.stringify(product))
are deeply equal.
The problem is that JSON.parse()
returns any
(in TypeScript and Flow).
A similar problem exists in all of the languages I have come across:
- org.json.JSONObject and org.json.JSONArray
- JSONSerialization/NSJSONSerialization
- json_decode
Going from binary data to native object is inherently unsafe. When the JSON data comes in from an external system – like a REST API – the risk is real.
In a language like TypeScript or Flow the straightforward way to safely deal with JSON values is through type refinement.
This results in an increasing number of type guards as different members within the any
type are accessed. Assuming your chosen REST API layer does JSON marshaling for you:
const result = await api.get('http://example.dev/api/people');
if (result && result.people && Array.isArray(result.people)) {
  result.people.map(person => {
// more runtime type refining 💩
})
}
If both client and server are both under your control, or you feel somewhat confident enough in the REST API maintainers, one might feel brazen enough to force the situation:
type PeopleResponse = { people: Array<Person> };
const result: PeopleResponse = await api.get('http://example.dev/api/people');
// go along your merry way until your Runtime errors start popping up
This is madness. It assumes type safety when there isn’t any. Unfortunately, this is what I see most often in projects at work.
The prospect of writing lines and lines of type refinements for every possible JSON structure for every API response is a lot of work. In my “toy” project I already have 21 different REST API calls with varying shapes of responses and that’s only going to grow.
Can I write a JSON validation layer that’s as declarative as creating custom TypeScript types?
Let’s give it a shot.
Time to start practicing Type Driven Development.
What is Type Driven Development? Start with types, then write implementations to satisfy the type checker. It’s like Test Driven Development, but you don’t even have to write the tests.
Our current problem is pretty clear. We need a way to write functions that validate some JSON any
type. That means we need a function that accepts a single any
type as its input.
But which type does it return? That should be up to the implementation of the validation, and at this point, that implementation doesn’t exist. So we’ll use a generic type to stand in its place:
type Validator<T> = (value: any) => T;
This states that a Validator<T>
is a Function
that accepts a single any
and returns a T
.
This makes sense for success cases, but what about failure cases? What happens when validation fails?
At this point there are two options to deal with failure:
- throw an Error
- return a Union type to indicate success or failure modes

Common usage of a Validator<T>
expects failure. Using throw
might feel simpler at the implementation level, but it forces the user of the Validator<T>
to take on that complexity. TypeScript’s (or Flow’s) Union
types allow for safe handling of success/failure modes.
Here’s what a Union
type API looks like:
type Success<T> = {
readonly type: 'success'
readonly value: T
}
type Failure = {
readonly type: 'failure'
readonly value: any
readonly reason: string
}
type Result<T> = Success<T> | Failure;
type Validator<T> = (value: any) => Result<T>;
This looks like the complete set of types for a “validation” API. A function that accepts any
thing and returns Success<T>
or Failure
. The Success<T>
boxes the typed value with the refined type. The Failure
contains the original value
and the reason
that validation failed.
Let’s write our first validator:
const isString: Validator<string> = (value) => {
if (typeof value === 'string') {
return { type: 'success', value }
} else {
return {
type: 'failure',
value,
reason: 'typeof value is ' + (typeof value)
};
}
}
With tsc
and jest
we can confirm that both type checking and runtime behavior match our expectations:
describe('isString', () => {
it('succeeds', () => {
const validator: Validator<string> = isString;
const value: Result<string> = validator('yes');
expect(value).toEqual(success('yes'));
})
});
The remaining non-container types (Array and Object) are equally trivial. And to make things a little more convenient we can make Success<T>
and Failure
factories:
function success<T>(value: T): Success<T> {
return {
type: 'success',
value,
};
}
function failure(value: any, reason: string): Failure {
return {
type: 'failure',
value,
reason,
};
}
Now isString, isNumber, isNull, isUndefined, isObject, isArray, and isBoolean can all follow this pattern:
const isNull: Validator<null> = value =>
value === null
? success(null)
: failure(value, 'typeof value is ' + (typeof value));
With each basic case we can write the corresponding set of tests to confirm the runtime characteristics and the static type checker’s ability to infer types.
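For example, a minimal jest spec for isNull, mirroring the isString test above (the reason string assumes the implementation shown):

describe('isNull', () => {
  it('succeeds', () => {
    expect(isNull(null)).toEqual(success(null));
  });

  it('fails with a reason', () => {
    expect(isNull('nope')).toEqual(failure('nope', 'typeof value is string'));
  });
});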
But JSON is more complex than these base types, and our TypeScript types even more complicated with nullables and unions.
We need to be able to combine these base cases into something that can address our real world needs.
Optional types in TypeScript and Flow are a Union type of null
or some type T
.
type Optional<T> = null | T;
If we wanted to validate an optional type our validator’s type would be Validator<null|T>
.
An optional string
validator would have the type Validator<null|string>
. We have a Validator<string>
already, so perhaps we can utilize that.
const isOptionalString: Validator<null|string> = value => {
if (value === null) {
return success(null);
}
return isString(value);
}
This works fine, but the idea of writing each isOptionalX
sounds boring. And TypeScript types can be more complex than null|T
. They can be string | number
or any other set of unions.
Since we’re playing at leveraging Lambda calculus concepts, we can lift ourselves out of the minutiae of Validator<T>
implementations and start working with validators themselves.
Given two different validators Validator<A>
and Validator<B>
, can we use what we know about validators to create a Validator<A|B>
?
Using Type Driven Development, let’s stub out the function signature:
function oneOf<A,B>(a: Validator<A>, b: Validator<B>): Validator<A|B> {
}
At this point tsc
is upset:
A function whose declared type is neither 'void' nor 'any' must return a value.
What should we return? A Validator<A|B>
is like any other validator in that it accepts a single any
argument. In Type Driven Development style, let’s return a function since that’s what it wants:
function oneOf<A,B>(a: Validator<A>, b: Validator<B>): Validator<A|B> {
return value => {
}
}
Now tsc
says:
Type '(value: any) => void' is not assignable to type 'Validator<A | B>'. Type 'void' is not assignable to type 'Result<A | B>'.
Our function isn’t correct yet. It has no return value (void
) but a Validator<A | B>
needs to return a Result<A | B>
.
We now have all of the inputs we need to do that within the scope of this function. All we need to do is use them:
function oneOf<A,B>(a: Validator<A>, b: Validator<B>): Validator<A|B> {
return value => {
return a(value);
}
}
Now tsc
is happy, but does it have the runtime characteristics we want?
describe('oneOf', () => {
it('succeeds', () => {
const validator = oneOf(isNumber, isString);
expect(validator('a')).toEqual(success('a'));
expect(validator(1)).toEqual(success(1));
});
});
What does jest
think:
expect(received).toEqual(expected) // deep equality

- Expected  - 1
+ Received  + 2

  Object {
-   "type": "success",
+   "reason": "typeof value is number",
+   "type": "failure",
    "value": 1,
  }
It failed with the number value as it should have, because we didn’t use both Validator<T>s.
function oneOf<A,B>(a: Validator<A>, b: Validator<B>): Validator<A|B> {
return value => {
const result_a = a(value);
if (result_a.type === 'success') {
return result_a;
}
return b(value);
}
}
If Validator<A>
succeeds, we return a Success<A>
. Otherwise return the result of Validator<B>
which is Success<B> | Failure
.
We’ve written a function that accepts two Validator<T>
types and returns a new Validator<A|B> by combining them. We wrote a combinator.
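A quick sketch of the combinator in use, with the base validators from earlier:

const isStringOrNumber = oneOf(isString, isNumber);

isStringOrNumber('a');  // Success<string | number> boxing 'a'
isStringOrNumber(1);    // Success<string | number> boxing 1
isStringOrNumber(true); // Failure (neither validator succeeded)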
I have so far failed to create a variadic version of oneOf
that can take “n” Validator<T>
s and infer the union Validator<T1|T2|Tn>
type. This means we need to use multiple calls to oneOf
to build up inferred union types:
const validator: Validator<null|string|number> = oneOf(
isNull,
oneOf(isNumber, isString)
);
Since nullable types are so common – and because it’s so easy to do given our APIs – we can use oneOf
to make a convenient combinator that takes a Validator<T>
and turns it into a Validator<null | T>
. I’ll name it optional
.
Definition:
export const optional = <T>(validator: Validator<T>): Validator<null|T> =>
oneOf(isNull, validator);
And in use:
import { optional, isNumber } from './validator';
const validate = optional(isNumber);
validate(1); // returns Success<null | number>;
validate(null); // returns Success<null | number>;
validate('hi'); // returns Failure
Again, we’re using a combinator to build up a complex Validator<T>
without actually implementing any new Validator<T>
s.
We can do the same thing to build Object
and Array
validators.
The ideal API for validating should be as terse and declarative as a custom TypeScript type. Here’s a somewhat complex type:
type Record = {
readonly name: string
readonly owner: {
readonly id: number
readonly name: string
readonly role: 'admin' | 'member' | 'visitor'
}
}
This is my ideal API:
const validateRecord = objectOf({
name: isString,
owner: objectOf({
id: isNumber,
name: isString,
role: isValidRole,
}),
});
The combinator we want to make here is objectOf
. It will take a plain object whose keys point to Validator<T> values and returns a Validator<T> whose T matches the shape of the validators.
In TypeScript we can infer this type using Mapped types. One of the examples looks similar to what we want:
Now that you know how to wrap the properties of a type, the next thing you’ll want to do is unwrap them. Fortunately, that’s pretty easy:
type Proxify<T> = {
[P in keyof T]: Proxy<T[P]>;
};
function unproxify<T>(t: Proxify<T>): T {
let result = {} as T;
for (const k in t) {
result[k] = t[k].get();
}
return result;
}
In terms of our domain we want to map the keys K
of some generic object T
into validators that validate the type at key K
in T
.
export function objectOf<T extends {}>(
validators: {[K in keyof T]: Validator<T[K]>}
): Validator<T> {
}
So far what does tsc
think:
A function whose declared type is neither 'void' nor 'any' must return a value.
Time to implement the combinator:
- create an empty validated object of type T
- validate each value[key] with its corresponding validators[key]
- on Success<T[K]>, set validated[key] = result.value
- on Failure, return the Failure
- return success(validated)

export function objectOf<T extends {}>(
  validators: {[K in keyof T]: Validator<T[K]>}
): Validator<T> {
  return value => {
    let result = {} as T;
    for (const key in validators) {
      const validated = validators[key](value ? value[key] : undefined);
      if (validated.type === 'failure') {
        return validated;
      }
      result[key] = validated.value;
    }
    return success(result);
  };
}
Now for a test:
describe('objectOf', () => {
it('validates', () => {
const validate = objectOf({
name: isString,
child: objectOf({
id: isNumber
}),
});
const valid = {
name: 'Valid',
child: { id: 1 },
};
const invalid = {
name: 'Invalid',
child: { id: 'not-number' },
};
expect(validate(valid)).toEqual(success(valid));
expect(validate(invalid)).toEqual(failure(invalid, 'typeof value is string' ));
});
});
And both tsc
and jest
are happy. Not only does it validate as expected, but it also infers the shape of the value:
validate
.It knows that this particular use of objectOf
creates a:
Validator<{name: string, child: {id: number}}>
Which returns a Result<T>
type of:
Result<{name: string, child: {id: number}}>
An example in action:
const validate = objectOf({
id: isNumber,
name: oneOf(isString, isNull),
role: oneOf(isNull, objectOf({
type: isString,
groupId: isNumber
}))
});
let result = validate(JSON.parse('{"name": "sam", "id": 5}'));
if (result.type === 'success') {
/**
* result is Success<{
* id: number,
* name: string | null,
* role: null | {type: string, groupId: number }
* }>
*/
result.value.name // null | string
result.value.role // null | {type: string, groupId: number}
} else {
// Failure
throw new Error(result.reason);
}
If you already have a type
you know you need to validate for, you can use it as the generic argument to objectOf
and tsc
will enforce that all of the keys are present:
type Record = { id: number, name: string };
const validate = objectOf<Record>({});
The tsc
error shows:
Argument of type '{}' is not assignable to parameter of type '{ id: Validator<number>; name: Validator<string>; }'. Type '{}' is missing the following properties from type '{ id: Validator<number>; name: Validator<string>; }': id, name
It knows a validator for the Record
type needs an id
validator and a name
validator.
It even knows which type of Validator<T>
it needs:
const validate = objectOf<Record>({
id: isString,
name: isString,
});
id in Record has a type of number, but isString cannot validate to number:

(property) id: Validator<number>
Type '(value: any) => Result<string>' is not assignable to type 'Validator<number>'. Type 'Result<string>' is not assignable to type 'Result<number>'. Type 'Readonly<{ type: "success"; value: string; }>' is not assignable to type 'Result<number>'. Type 'Readonly<{ type: "success"; value: string; }>' is not assignable to type 'Readonly<{ type: "success"; value: number; }>'. Types of property 'value' are incompatible. Type 'string' is not assignable to type 'number'.
You can see how it worked out that the id
validator of isString
does not return a Result<T>
that is compatible with number
which is the type of Record['id']
.
One last thing to make objectOf a little nicer to use. When it iterates through the keys of the validators and reaches a Failure
type, it returns the Failure
as is. This resulted in a somewhat opaque failure reason:
const invalid = {
name: 'Invalid',
child: { id: 'not-number' },
};
expect(validate(invalid)).toEqual(failure(invalid, 'typeof value is string' ));
The "typeof value is string"
message failed because invalid.child.id
was a string
, not a number
. Given we know which key
was being validated when the Failure
was returned, we can improve the error message:
function keyedFailure(value: any, key: string | number, failure: Failure): Failure {
return {
...failure,
value,
reason: `Failed at '${key}': ${failure.reason}`,
};
}
Now the failure in objectOf
can be passed through keyedFailure
before returning:
for (const key in validators) {
  const validated = validators[key](value ? value[key] : undefined);
  if (validated.type === 'failure') {
    return keyedFailure(value, key, validated);
  }
}
The improved error message is now:
"Failed at 'child': Failed at 'id': typeof value is string"
The value at .child.id
was a string
, and that’s why there’s a failure. Much clearer.
We’re an arrayOf
implementation away from a fully capable JSON validation library. But before we go there, we’re going to detour into more combinators.
In Lambda calculus a combinator is an abstraction (function) whose identifiers are all bound within that abstraction. In short, no “global” variables.
If we consider the behavior of Validator<T>
and how it returns one of two values Success<T>
or Failure
a natural branching control flow reveals itself.
In our example uses of Validator<T>
instances, to continue using it, the next step is to first refine it by checking result.type
for either success
or failure
.
Given how common this pattern is, we can write some combinators to make them slightly easier to work with.
In most uses of Validator<T>
we want to do something with the boxed value of the Success<T>
case of Result<T>
.
This looks like:
const result: Result<Thing> = validate(thing);
if (result.type === 'success') {
const value: Thing = result.value;
// do something interesting with value
}
The pattern here is refining to the success case, then using the success value in a new domain. So if the user of validate had a function of type:
(thing: Thing) => OtherThing
It would be nice if they could forego the extra refinement work. We can define that pattern in a combinator.
We want to map the success case into a new domain.
function mapSuccess<A, B>(result: Result<A>, map: (value: A) => B): B|Failure {
if (result.type === 'success') {
return map(result.value);
}
return result;
}
And in use:
function isAdmin(user: User): boolean {
// something interesting
return true;
}
const validate = objectOf<User>({ ... });
const isAdminResult: boolean | Failure = mapSuccess(validate(JSON.parse("{...}")), isAdmin);
And for the sake of completeness, the comparable mapFailure
:
function mapFailure<A,B>(result: Result<A>, map: (value: Failure) => B): Success<A>|B {
if (result.type === 'failure') {
return map(result);
}
return result;
}
Why would you want this? It allows you to write pure functions in your business domain, like isAdmin
above, and then combine them with the Validator<T>
domain, without using any glue code.
The fewer lines of code, the fewer variables to type. And we have tsc
there to let us know when the function signatures don’t match.
For instance trying to use a function that takes something other than a User
is going to fail type analysis when used with mapSuccess(Result<User>, ...)
.
The less often you need to cross domains within your APIs, the more decoupled they are.
A Validator<T>
returns a Result<T>
. What if we wanted to continue validating T
and turn it into another type? Let’s consider Array
.
The first step to turning an any
type into an Array<T>
is first checking if it is in fact an Array
.
This is similar to our other base validators:
const isArray: Validator<any[]> = value =>
Array.isArray(value) ? success(value) : failure(value, 'value is not an array');
The next step is iterating through each member in the Array<any>
and validating the member. Since we’re practicing Type Driven Development, we’ll start with the type signature.
function arrayOf<T>(validator: Validator<T>): Validator<Array<T>> {
}
And just like before tsc
isn’t happy:
A function whose declared type is neither 'void' nor 'any' must return a value.
We just defined isArray
. It would be neat if we could use it here. Thinking about it, it would be nice to be able to take the success case of isArray
and then do more validation to it and return a mapped Result<Array<T>>
.
Let’s write one more combinator that maps a Validator<A>
into a Validator<B>
given a function of (value: A) => Result<B>
.
function mapValidator<A, B>(
validator: Validator<A>,
map: (value: A) => Result<B>
): Validator<B> {
}
If the Result<A>
case is a Failure
, it should be returned right away, but if it’s a Success<A>
we want to unbox it and give it to (value: A) => Result<B>
.
Does that sound familiar? We want to map the success result of Validator<A>
. That’s mapSuccess
. We can define mapValidator
in terms of mapSuccess
:
function mapValidator<A, B>(
validator: Validator<A>,
map: (value: A) => Result<B>
): Validator<B> {
return value => mapSuccess(validator(value), map);
}
Using mapValidator
allows us to define a validation in terms of another Validator<T>
.
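For instance, a refined string validator can be built from an existing one. This is a sketch: isNonEmptyString is hypothetical, reusing the isString validator from earlier and the single-argument failure helper used by isArray above.

const isNonEmptyString: Validator<string> = mapValidator(
  isString,
  // Refine the already-validated string: non-empty succeeds, empty fails.
  (value) => value.length > 0 ? success(value) : failure('string is empty')
);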
So now we can define Validator<Array<T>>
in terms of Validator<any[]>
:
function arrayOf<T>(validate: Validator<T>): Validator<Array<T>> {
return mapValidator(isArray, (value) => {
});
}
At this point tsc
can determine that value
is type any[]
. But to satisfy Validator<Array<T>>
we need to validate each member of any[]
with Validator<T>
.
If any item fails validation, the whole Array
fails validation. So not only are we validating each member, but potentially returning a Failure
case. We need to reduce any[]
to Result<Array<T>>
.
We can seed the reduce call with an empty success case:
return mapValidator(isArray, (value) =>
value.reduce<Result<T[]>>(
(result, member) => undefined,
success([])
)
);
But what to use for our reduce function? We’re declaring to Array.prototype.reduce
that the first argument and return value is a Result<T[]>
. That means the type of our reduce function needs to be of type:
(result: Result<T[]>, member: any, index: number) => Result<T[]>
If result
is ever the Failure
case, we don’t want to do anything, we only want to handle the Success<T[]>
case. That’s another case for mapSuccess
:
(result, member, index) => mapSuccess(
result,
(items) =>
)
Now that we are within an iteration of the array, we have enough context to use our Validator<T>
on the member
. If it’s successful, we want to concat it with the rest of items
, if a failure, we’ll just return it (for now).
Another case for mapSuccess
:
(result, member, index) => mapSuccess(
result,
(items) => mapSuccess(
validate(member),
valid => success(items.concat([valid])),
)
)
And here’s the complete arrayOf
:
function arrayOf<T>(validate: Validator<T>): Validator<Array<T>> {
return mapValidator(isArray, (value) =>
value.reduce<Result<T[]>>(
(result, member, index) => mapSuccess(
result,
items => mapSuccess(
validate(member),
valid => success(items.concat([valid]))
)
),
success([])
)
);
}
In a test:
describe('arrayOf', () => {
const validate = arrayOf(objectOf({ name: isString }));
it('succeeds', () => {
const values = [{name: 'Rumpleteazer'}];
expect(validate(values)).toEqual(success(values));
});
it('fails', () => {
const values = [{name: 1}];
expect(validate(values)).toEqual(failure(values, 'Failed at \'name\': typeof value is number'));
});
});
One last thing before we tie a ribbon on Validator<T>
. The Failure
case reason
says:
"Failed at 'name': typeof value is number"
In the context of .reduce
we know which index we are currently on while iterating. So when we validate the member, we can use mapFailure
to enhance the Failure
case. Here’s the new reducer:
(result, member, index) => mapSuccess(
result,
items => mapSuccess(
mapFailure(
validate(member),
failure => keyedFailure(items, index, failure)
),
valid => success(items.concat([valid]))
)
),
And now the Failure
reason
is:
"Failed at '0': Failed at 'name': typeof value is string"
I have now used this library to create type safety for all of my project’s JSON based REST APIs.
Functions that once used half of their lines for type refinements are now one mapSuccess
away from type safe response values.
Making my API responses type safe was a matter of mapping my JSON decoders to Validator<T>
instances.
Before:
export const v3SubmitOrders = jsonEncodedRequest(
fw(build.post('/v3/submit_orders')),
({options}: SubmitOrders) => ({orders: options.orders, validate_only: options.validate_only !== false}),
response.decodeJson
);
After:
export const v3SubmitOrders = jsonEncodedRequest(
fw(build.post('/v3/submit_orders')),
({options}: SubmitOrders) => ({orders: options.orders, validate_only: options.validate_only !== false}),
response.mapHandler(response.decodeJson, objectOf({
status: validateStatus,
orders: arrayOf(objectOf({
order_po: isString,
order_id: isNumber,
order_confirmation_id: isNumber,
order_confirmation_datetime: isString,
})),
debug: isAnyValue,
misc: isAnyValue,
}))
);
One Promise
resolver later, and I have type safe JSON responses:
const result = await v3SubmitOrders({orders: [123]}).then(requireValidResponse);
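requireValidResponse is the resolver doing that unboxing. A minimal sketch of such a function, assuming the Result<T> shapes above, could look like:

function requireValidResponse<T>(result: Result<T>): T {
  if (result.type === 'success') {
    return result.value;
  }
  // Surface the validation failure to the caller.
  throw new Error(result.reason);
}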
Implementing a Validator<T>
not only provides type safety, it also provides better documentation.
Without fail, every time I approach an API using lambda calculus principles I end up with an API that is declarative and easy to combine.
The first in a series of posts exploring WP-API with statically typed PHP and Functional Programming patterns.
To expose a resource as an endpoint via WordPress’ WP-API interface one must use register_rest_route
.
/**
* Registers a REST API route.
*
* Note: Do not use before the {@see 'rest_api_init'} hook.
*
* @since 4.4.0
* @since 5.1.0 Added a _doing_it_wrong() notice when not called on or after the rest_api_init hook.
*
* @param string $namespace The first URL segment after core prefix. Should be unique to your package/plugin.
* @param string $route The base URL for route you are adding.
* @param array $args Optional. Either an array of options for the endpoint, or an array of arrays for
* multiple methods. Default empty array.
* @param bool $override Optional. If the route already exists, should we override it? True overrides,
* false merges (with newer overriding if duplicate keys exist). Default false.
* @return bool True on success, false on error.
*/
function register_rest_route( $namespace, $route, $args = array(), $override = false ) {
The documentation here is incredibly opaque, so it’s probably a good idea to have the handbook page open until the API is internalized in your brain.
The $namespace
and $route
arguments are somewhat clear, however in typical WordPress PHP fashion the bulk of the magic is provided through an opaquely documented @param array $args
.
The bare minimum are the keys methods
and callback
, and for our purposes those are all that we need. WP_REST_Server
provides some handy constants (READABLE
, CREATABLE
, DELETABLE
, EDITABLE
) for the methods
key so that leaves callback
.
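For example, a sketch of an $args array using one of those constants:

$args = [
    'methods'  => WP_REST_Server::READABLE, // i.e. 'GET'
    'callback' => 'my_callable',
];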
What is callback
? In PHP terms it’s a callable
. Many things in PHP can be a callable
. The most commonly used callable
for WordPress tends to be a string value that is the name of a function:
function my_callable() {
}
register_rest_route( 'some-namespace', '/some/path', [ 'callback' => 'my_callable' ] );
This would call my_callable
, and as is would probably return a 200 response with an empty body.
What would be more useful than just callable
would be a callable
that can define its argument types and return types.
The ability to verify the correctness of software with strongly typed languages is an obvious benefit of using them.
However, an additional benefit is how the types themselves become the natural documentation to the code.
PHP has supported type hinting for a while:
function totes_not_buggy( WP_REST_Request $request ): WP_REST_Response {
}
With type hints the expectations for totes_not_buggy()
are much clearer.
Adding these type hints means at runtime PHP will enforce that only instances of WP_REST_Request
will be able to be used with totes_not_buggy()
, and that totes_not_buggy()
can only return instances of WP_REST_Response
.
This sounds good, except that this is enforced at runtime. For true type safety we want something better: static type analysis. Types should be enforced without running the code.
For this exercise, Psalm will provide static type analysis via PHPDoc
annotations.
/**
* Responds to a REST request with text/plain "You did it!"
*
* @param WP_REST_Request $request
* @return WP_REST_Response
*/
function totes_not_buggy($request) {
return new WP_REST_Response( 'You did it!', 200, ['content-type' => 'text/plain'] );
}
Ok this all sounds nice in theory, how do we check this with Psalm?
To the terminal!
mkdir -p ~/code/wp-api-fun
cd ~/code/wp-api-fun
composer init
Accept all the defaults and say “no” to the dependencies:
Package name (<vendor>/<name>) [beaucollins/wp-api-fun]:
Description []:
Author [Beau Collins <beau@collins.pub>, n to skip]:
Minimum Stability []:
Package Type (e.g. library, project, metapackage, composer-plugin) []:
License []:
Define your dependencies.
Would you like to define your dependencies (require) interactively [yes]? no
Would you like to define your dev dependencies (require-dev) interactively [yes]? no
{
"name": "beaucollins/wp-api-fun",
"authors": [
{
"name": "Beau Collins",
"email": "beau@collins.pub"
}
],
"require": {}
}
Do you confirm generation [yes]?
Now install two dependencies:
- vimeo/psalm to run type checking
- php-stubs/wordpress-stubs to type check against WordPress APIs

composer require --dev vimeo/psalm php-stubs/wordpress-stubs
Assuming success, try to run Psalm:
./vendor/bin/psalm
Could not locate a config XML file in path /Users/beau/code/wp-api-fun/. Have you run 'psalm --init' ?
To keep things simple with composer, define a single PHP file to be loaded for our project at the path ./src/fun.php
:
mkdir src
touch src/fun.php
Now inform composer.json
where this file is via the "autoload"
key:
{
"name": "beaucollins/wp-api-fun",
"authors": [
{
"name": "Beau Collins",
"email": "beau@collins.pub"
}
],
"require": {},
"require-dev": {
"vimeo/psalm": "^3.9",
"php-stubs/wordpress-stubs": "^5.3"
},
"autoload": {
"files": ["src/fun.php"]
}
}
Generate Psalm’s config file and run it to verify our empty PHP file has zero errors:
./vendor/bin/psalm --init
Calculating best config level based on project files
Scanning files...
Analyzing files...
░
Detected level 1 as a suitable initial default
Config file created successfully. Please re-run psalm.
./vendor/bin/psalm
Scanning files...
Analyzing files...
░
------------------------------
No errors found!
------------------------------
Checks took 0.12 seconds and used 37.515MB of memory
Psalm was unable to infer types in the codebase
For a quick gut-check define totes_not_buggy()
in ./src/fun.php
:
<?php
// in ./src/fun.php
/**
* Responds to a REST request with text/plain "You did it!"
*
* @param WP_REST_Request $request
* @return WP_REST_Response
*/
function totes_not_buggy($request) {
return new WP_REST_Response( 'You did it!', 200, ['content-type' => 'text/plain'] );
}
Now analyze with Psalm:
./vendor/bin/psalm
Scanning files...
Analyzing files...
E
ERROR: UndefinedDocblockClass - src/fun.php:6:11 - Docblock-defined class or interface WP_REST_Request does not exist
* @param WP_REST_Request $request
ERROR: UndefinedDocblockClass - src/fun.php:7:12 - Docblock-defined class or interface WP_REST_Response does not exist
* @return WP_REST_Response
ERROR: MixedInferredReturnType - src/fun.php:7:12 - Could not verify return type 'WP_REST_Response' for totes_not_buggy
* @return WP_REST_Response
------------------------------
3 errors found
------------------------------
Checks took 0.15 seconds and used 40.758MB of memory
Psalm was unable to infer types in the codebase
Psalm doesn’t know about WordPress APIs yet. Time to teach it where those are by adding the stubs to ./psalm.xml
:
<stubs>
<file name="vendor/php-stubs/wordpress-stubs/wordpress-stubs.php" />
</stubs>
</psalm>
One more run of Psalm:
./vendor/bin/psalm
Scanning files...
Analyzing files...
░
------------------------------
No errors found!
------------------------------
Checks took 5.10 seconds and used 356.681MB of memory
Psalm was able to infer types for 100% of the codebase
No errors! It knows about WP_REST_Request
and WP_REST_Response
now.
What happens if they’re used incorrectly, like a string for the status code in the WP_REST_Response
constructor:
ERROR: InvalidScalarArgument - src/fun.php:10:48 - Argument 2 of WP_REST_Response::__construct expects int, string(200) provided
return new WP_REST_Response( 'You did it!', '200', ['content-type' => 'text/plain'] );
Nice! Before running the PHP source, Psalm can tell us if it is correct or not. IDEs that have Psalm integrations show the errors in-place:
[ IDE showing the InvalidScalarArgument error. ]
Now to answer the question “which type of callable
is the register_rest_route()
callback
option?”
With PHP’s type hinting, the best type it can offer for the callback
parameter is callable
.
This gives no insight into which arguments the callable
requires nor what it returns.
With Psalm integrated into the project there are more tools available to better describe this callable
type.
callable(Type1, OptionalType2=, SpreadType3...):ReturnType
Using this syntax, the callback
option of $args
can be described as:
callable(WP_REST_Request):(WP_REST_Response|WP_Error|JSONSerializable)
This line defines a callable
that accepts a WP_REST_Request
and can return one of WP_REST_Response
, WP_Error
or JSONSerializable
.
Once returned, WP_REST_Server
will do what is required to correctly deliver an HTTP response. Anything that conforms to this can be a callback
for WP-API. The WP-API world is now more clearly defined:
callable(WP_REST_Request):(WP_REST_Response|WP_Error|JSONSerializable)
To illustrate this type at work, define a function that accepts a callable
that will be used with register_rest_route()
.
Following WordPress conventions, each function name will be prefixed with totes_
as an ad-hoc namespace
of sorts (yes, this is completely ignoring PHP namespaces).
/**
* @param string $path
* @param (callable(WP_REST_Request):(WP_REST_Response|WP_Error|JSONSerializable)) $handler
* @return void
*/
function totes_register_api_endpoint( $path, $handler ) {
register_rest_route( 'totes', $path, [
'callback' => $handler
] );
}
add_action( 'rest_api_init', function() {
totes_register_api_endpoint('not-buggy', 'totes_not_buggy');
} );
A quick check with Psalm shows no errors:
------------------------------
No errors found!
------------------------------
What happens if the developer has a typo in the string name of the callback totes_not_buggy
? Perhaps they accidentally typed totes_not_bugy
?
ERROR: UndefinedFunction - src/fun.php:24:45 - Function totes_not_bugy does not exist
totes_register_api_endpoint('not-buggy', 'totes_not_bugy');
Fantastic!
What happens if the totes_not_buggy
function does not conform to the callable(WP_REST_Request):(...)
type? Perhaps it returns an int
instead:
/**
* Responds to a REST request with text/plain "You did it!"
*
* @param WP_REST_Request $request
* @return int
*/
function totes_not_buggy( $request ) {
return new WP_REST_Response("not buggy", 200, ['content-type' => 'text/plain']);
}
ERROR: InvalidArgument - src/fun.php:24:45 - Argument 2 of totes_register_api_endpoint expects callable(WP_REST_Request):(JSONSerializable|WP_Error|WP_REST_Response), string(totes_not_buggy) provided
totes_register_api_endpoint('not-buggy', 'totes_not_buggy');
The callable
string 'totes_not_buggy'
no longer conforms to the API. Psalm is catching these bugs before anything is even executed.
Psalm says this code is correct, but does this code work? Well, there’s only one way to find out.
First, turn ./src/fun.php
into a WordPress plugin with the minimal amount of header comments:
<?php
/**
* Plugin Name: Totes
*/
And boot WordPress via wp-env
:
npm install -g @wordpress/env
echo '{"plugins": ["./src/fun.php"]}' > .wp-env.json
wp-env start
curl http://localhost:8889/?rest_route=/ | jq '.routes|keys' | grep totes
There are the endpoints:
curl --silent http://localhost:8889/\?rest_route\=/ | \
jq '.routes|keys' | \
grep totes
"/totes",
"/totes/not-buggy",
curl http://localhost:8889/\?rest_route\=/totes/not-buggy
"not buggy"
Well it works, but there’s a small problem. It looks like WordPress decided to json_encode()
the string literal not buggy
so it arrived in quotes as "not buggy"
(not very not buggy).
Changing the return of totes_not_buggy
to something more JSON compatible works as expected:
- return new WP_REST_Response("not buggy", 200, ['content-type' => 'text/plain']);
+ return new WP_REST_Response( [ 'status' => 'not-buggy' ] );
curl http://localhost:8889/\?rest_route\=/totes/not-buggy
{"status":"not-buggy"}
Reproducing the steps to run psalm
on this codebase is trivial.
With a concise GitHub Action definition this project can get static analysis on every push. Throw in an annotation service and Pull Request changes are marked with Psalm warnings and exceptions.
The GitHub workflow definition defines how to:
- Set up composer.
- Install composer dependencies (with caching).
- Run composer check.
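A minimal sketch of such a workflow (the file name and exact steps here are hypothetical, and the caching step is elided for brevity):

# .github/workflows/check.yml
name: Check
on: push
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: composer install --prefer-dist --no-progress
      - name: Run checks
        run: composer check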
This sets up the foundation for a highly productive development environment:
- wp-env allows for fast verification of running code.

Coming up: exploring functional programming patterns for WP-API with the help of Psalm.