
Dodge Caravan Hacking

If you’re really cool like me and own a decade-old Dodge Grand Caravan, you may find that the Totally Integrated Power Module (TIPM) will start failing.

In common cases (like mine) it will stop activating the fuel pump relay. This means your car will turn over but not start. This is not a great feature but does potentially help with the climate change crisis.

Fortunately for me I have a mechanic just down the street, and I’d like to not tow this beast. To diagnose all of this, the internet mechanics mention opening your TIPM and hot-wiring the fuel relay circuit. I took a look at the relay’s fuse with a multimeter. No voltage. Spot-checked some others and was reading the expected volts.

I shoved a jumper wire from my battery-wired cigarette port fuse (not the key activated one) over to the fuel pump relay fuse and heard what sounded like an electric motor activate from beneath the car.

I turned the ignition and the car started right up. Off to the mechanic.

Don’t Trust the Crazy Car Owner

I explained my morning’s adventures to the mechanic. He looked skeptically at my patch wire and said he’d run a test to diagnose things. Could be the TIPM, maybe just the fuel pump.

We both knew it was the TIPM. Turns out it was the TIPM. Shocker.

The fix: new TIPM. The problem: since these things fail so often, they’re year/make/model specific, and there’s a worldwide computer chip shortage, I get a refurbed one. Oh yeah, and it’s going to take a week to ship it.

Turns out no car for a week is fine. Thanks to COVID most necessities are all delivered now.

Bugs in the System

After some delays and fakeouts from TIPM dealers I got the call that the van was ready to go.

Walked up to the shop and it started right up. Settled the bill (ouch) and brought it home.

Next day the right turn indicator started flipping out: “front right turn signal out”. Guess I get to make a stop at the auto supply joint. Looked up the bulb number but before making a purchase decided to physically check all the lights first. The front right fog light has been out forever so I might as well fix that while I’m at it.

I activate the left turn signal: the left fog light starts flashing. What?

I activate the right turn signal: no lights flashing. Ok, expected.

I push the fog light button: both turn signals turn on. What?

Quick call to the mechanic to describe the situation. Basically got the “wasn’t my fault” spiel, which is fine; I wasn’t casting blame, just trying to problem-solve here.

There’s a lot of downtime at my kid’s baseball games, so between innings I start asking the internet what it thinks of all of this. Eventually I put in the correct series of search terms and land on someone having the same problem. I searched the document number in the images: “k6855837”.

It lands me on a YouTube video that takes me step-by-step through the process of performing this fix.

Apparently my new-to-me TIPM has a firmware update that changed the behavior of some circuits. Just gotta flip some wires. Since it turned into a car maintenance day I took the opportunity to pick up some new H11 headlight sockets and wire them in; Dodge seems to use janky wiring that melts every few years.

And hooray, a car that starts with correctly functioning lights.

This is my last American car, I swear.


Search terms:

  • TIPM
  • 2011 Dodge Grand Caravan
  • Fog lights and turn signals switched
  • K6855837
  • fuel pump relay


Declaring Sides in the Flame Wars

Not a complete list, but these have not changed, even when I was forced into environments actively hostile to them (WordPress PHP/JS code style is hideous).

  • GIF: team soft “g”
  • Tabs vs Spaces: spaces (but inserted with the tab key; nobody presses the space key)
  • Pineapple: Excellent on a pizza when it also has Canadian bacon.


How to be a 10X Developer

I’ve been around a little while now, so I’m imparting this wisdom to you.

The guaranteed way to become a 10x developer:

Hire ten developers whose mean productivity matches yours.


React and Pipes in TypeScript

In the day job I recently recommended using Ramda to help clean up the readability of our UI code.

Ramda is a collection of pure functions designed to fit together using functional programming patterns.
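As a tiny illustration of that style (my own example, not from our codebase): every Ramda function is curried and takes its data last, so small functions assemble directly into pipelines.

import { filter, map, pipe } from 'ramda';

// Supplying only the callback returns a new function waiting for the data.
const evensOnly = filter((n: number) => n % 2 === 0);
const doubleAll = map((n: number) => n * 2);

// Data-last functions compose directly into a pipeline.
const doubleEvens = pipe(evensOnly, doubleAll);

doubleEvens([1, 2, 3, 4]); // => [4, 8]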

The Context

We had a piece of TypeScript code landing that processed some data and rendered a React component.

const filteredLabels =
  data.community.labels.filter((label) => {

    if (label.type === LabelType.AutoLabel &&
        (isGroupPage === true ?
          ['system_a', 'special'] :
          ['system_a']
        ).includes(label.name)
    ) {
      return false;
    }

    if (label.type === LabelType.UserDefined &&
          label.stats.timesUsed === 0) {
      return false;
    }
    
    return true;
  });

filteredLabels.sort((labelA, labelB) => {
  if (labelA.type === LabelType.AutoLabel) {
    if (labelB.type === LabelType.UserDefined) {
      return -1;
    }
    return labelA.name.toLowerCase() < labelB.name.toLowerCase() ? -1 : 1;
  }

  if (labelB.type === LabelType.AutoLabel) {
     return 1;
  }
  return labelA.name.toLowerCase() < labelB.name.toLowerCase() ? -1 : 1;
});

return filteredLabels.map((label) => <>{/* React UI */}</>);

Three distinct things are happening here:

  1. data.community.labels.filter(/* ... */) is removing certain Label instances from the list.
  2. filteredLabels.sort(/* ... */) is sorting the filtered items first by their .type then by their .name (case-insensitive)
  3. filteredLabels.map(/* ... */) is turning the list of Label instances into a JSX.Element.

The hardest part for me to decipher as a reader of the code was step two: given two labels what was the intended sort order?

After spending a few moments internalizing those if statements I came to the conclusion the two properties being used for comparison were label.type and label.name.

A label of .type === LabelType.AutoLabel should appear before a label of .type === LabelType.UserDefined.

Labels with the same .type should then be sorted by their .name case-insensitively.
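Restated as a single vanilla comparator (my paraphrase of the original logic, before reaching for Ramda):

// AutoLabel sorts before UserDefined; ties break on case-insensitive name.
const compareLabels = (a: Label, b: Label): number => {
  if (a.type !== b.type) {
    return a.type === LabelType.AutoLabel ? -1 : 1;
  }
  return a.name.toLowerCase() < b.name.toLowerCase() ? -1 : 1;
};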

Ramda’s sortWith

The problem I was encountering with this bit of code is that my human brain works this way:

Given a list of Labels:
- Sort them by their .type with .AutoLabel preceding .UserDefined
- Sort labels of the some .type by their .name case-insensitively

Ramda’s sortWith gives us an API that sounds similar in theory:

Sorts a list according to a list of comparators.

A “comparator” is typed with (a, a) => Number. My list of comparators will be one for the label.type and one for the label.name.

import { sortWith } from 'ramda';

const sortLabels = sortWith<Label>([
  // 1. compare label types
  // 2. compare label names
]);

A comparator‘s return value here is a bit ambiguous, with the documentation declaring just Number. But their code example for sortWith points to some more handy functions: ascend and descend.

Here’s the description for ascend:

Makes an ascending comparator function out of a function that returns a value that can be compared with < and >.

To sort by label.type I need to map the LabelType to a value that will sort .AutoLabel to precede .UserDefined:

const sortLabels = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  // 2. compare label names
]);

To sort by the .name I can ascend with a case-insensitive value for label.name:

const sortLabels = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  ascend((label) => label.name.toLowerCase()),
]);

Ramda’s API is curried. By leaving out the second argument (the list of labels), sortLabels now has the TypeScript signature of:

type LabelSort = (labels: Label[]) => Label[]

Since we hinted the generic type on sortWith<Label>() TypeScript has also inferred that the functions we give to ascend receive a Label type as their single argument (see on TS Playground).

Screen capture of TS Playground tooltip showing Label as the inferred type.

Given Ramda’s curried interface, we can extract that sorting business logic into a reusable constant.

/**
 * Sort a list of Labels such that
 *  - AutoLabels appear before UserDefined
 *  - Labels are sorted by name case-insensitively
 */
export const sortLabelsByTypeAndName = sortWith<Label>(
  [
    ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
    ascend((label) => label.name.toLowerCase()),
  ]
);

Using this to replace the original code’s sorting we now have:

const filteredLabels =
  data.community.labels.filter((label) => {

    if (label.type === LabelType.AutoLabel &&
        (isGroupPage === true ?
          ['system_a', 'special'] :
          ['system_a']
        ).includes(label.name)
    ) {
      return false;
    }

    if (label.type === LabelType.UserDefined &&
          label.stats.timesUsed === 0) {
      return false;
    }
    return true;
  });

const sortedLabels = sortLabelsByTypeAndName(filteredLabels);

return sortedLabels.map((label) => <>{/* React UI */}</>);

Now let’s see what Ramda’s filter can do for us.

Declarative Filtering with filter

Ramda’s filter looks similar to Array.prototype.filter:

Filterable f => (a → Boolean) → f a → f a

Takes a predicate and a Filterable, and returns a new filterable of the same type containing the members of the given filterable which satisfy the given predicate. Filterable objects include plain objects or any object that has a filter method such as Array.

The first change will be conforming to this interface:

import { filter } from 'ramda';

const filteredLabels = filter<Label>((label) => {
  // boolean logic here
}, data.community.labels);

There are two if statements in our original filter code that both have early returns. This indicates there are two different conditions that we test for.

  • Remove Label if
    • .type is AutoLabel and
    • .name is in a list of predefined label names
  • Remove Label if
    • .type is UserDefined and
    • .stats.timesUsed is zero (or fewer)

To clear things up we can turn these into their own independent functions that capture the business logic they represent.

The AutoLabel scenario has one complication. The isGroupPage variable changes the behavior by changing the names the label is allowed to have.

In lambda calculus this is called a free variable. We can solve this now by creating our own closure that accepts the string[] of names and returns the Label filter.

const isAutoLabelWithName = (names: string[]) =>
  (label: Label) =>
    label.type === LabelType.AutoLabel
    && names.includes(label.name);

Now isAutoLabelWithName can be used without needing to know anything about isGroupPage.

We can now use this with filter:

const filteredLabels = filter<Label>(
  isAutoLabelWithName(
    isGroupPage
      ? ['system_a', 'special']
      : ['system_a']
  ),
  data.community.labels
);

But there’s a problem here. In the original code, we wanted to remove the labels that evaluated to true. This is the opposite of that.

In set theory, this is called the complement. Ramda has a complement function for this exact purpose.

const filteredLabels = filter<Label>(
  complement(
    isAutoLabelWithName(
      isGroupPage
        ? ['system_a', 'special']
        : ['system_a']
    )
  ),
  data.community.labels
);

The second condition is simpler given it uses no free variables.

const isUnusedUserDefinedLabel = (label: Label) =>
  label.type === LabelType.UserDefined
  && label.stats.timesUsed <= 0;

Similar to isAutoLabelWithName any Label that is true for isUnusedUserDefinedLabel should be removed from the list.

Since either being true should remove the Label from the collection, Ramda’s anyPass can combine the two conditions:

const filteredLabels = filter<Label>(
  complement(
    anyPass([
      isAutoLabelWithName(
        isGroupPage
          ? ['system_a', 'special']
          : ['system_a']
      ),
      isUnusedUserDefinedLabel
    ])
  ),
  data.community.labels
);

Addressing the free variable this can be extracted into its own globally declared function that describes its purpose:

const filterLabelsForMenu = (isGroupPage: boolean) =>
  filter<Label>(
    complement(
      anyPass([
        isAutoLabelWithName(
          isGroupPage
            ? ['system_a', 'special']
            : ['system_a']
        ),
        isUnusedUserDefinedLabel
      ])
    )
  );

The <LabelMenu> component cleans up to:

import { anyPass, ascend, complement, filter, sortWith } from 'ramda';
import { Label, LabelType } from '../generated/graphql';

type Props = { isGroupPage: boolean, labels: Label[] };

const isAutoLabelWithName = (names: string[]) =>
  (label: Label) =>
    label.type === LabelType.AutoLabel
    && names.includes(label.name);

const isUnusedUserDefinedLabel = (label: Label) =>
  label.type === LabelType.UserDefined
  && label.stats.timesUsed <= 0;

const filterLabelsForMenu = (isGroupPage: boolean): ((labels: Label[]) => Label[]) =>
  filter<Label>(
    complement(
      anyPass([
        isAutoLabelWithName(
          isGroupPage
            ? ['system_a', 'special']
            : ['system_a']
        ),
        isUnusedUserDefinedLabel
      ])
    )
  );

export const sortLabelsByTypeAndName = sortWith<Label>(
  [
    ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
    ascend((label) => label.name.toLowerCase()),
  ]
);

const LabelMenu = ({ isGroupPage, labels }: Props): JSX.Element => {
  const filterForGroup = filterLabelsForMenu(isGroupPage);
  const filteredLabels = filterForGroup(labels);
  const sortedLabels = sortLabelsByTypeAndName(filteredLabels);

  return (
    <>{
      sortedLabels.map((label) => <>{/* React UI */}</>)
    }</>
  );
};

The example above is very close to what we ended up landing.

However, since I like to get a little too ridiculous with functional programming patterns I decided to take it a little further in my own time.

Going too far with pipe

The <LabelMenu /> component has one more step that can be converted over to Ramda using map.

Ramda’s map is similar to Array.prototype.map but using Ramda’s curried, data-as-final-argument style of API.

const labelOptions = map<Label, JSX.Element>(
  (label) => <>{/* React UI */}</>
);

return <>{labelOptions(sortedLabels)}</>;

labelOptions is now a function that takes a list of labels (Label[]) and returns a list of React nodes (JSX.Element[]).

The <LabelMenu /> component now has a very interesting implementation.

  • filterLabelsForMenu returns a function of type (labels: Label[]) => Label[]
  • sortLabelByTypeAndName is a function of type (labels: Label[]) => Label[].
  • labelOptions is a function of type (labels: Label[]) => JSX.Element[].

The output of each of those functions is given as the input of the next.

Taking away all of the variable assignments this looks like:

const LabelMenu = ({ isGroupPage, labels }: Props): JSX.Element => {

  const labelOptions = map(
    (label) => <>{/* React UI */}</>,
    sortLabelsByTypeAndName(
      filterLabelsForMenu(isGroupPage)(
        labels
      )
    )
  );

  return <>{labelOptions}</>;
};

To understand how labelOptions becomes JSX.Element[] we are required to read from the innermost parentheses to the outermost.

  • filterLabelsForMenu is applied with props.isGroupPage
  • the returned filter function is applied with props.labels
  • the returned filtered labels are applied to sortLabelsByTypeAndName
  • the returned sorted labels are applied to map(<></>)
  • the result is JSX.Element[]

We can take advantage of Ramda’s pipe to express these operations in list form.

Performs left-to-right function composition. The first argument may have any arity; the remaining arguments must be unary.

We’re in luck, all of our functions are unary. We can line them up:

const LabelMenu = ({ isGroupPage, labels }: Props) => {
  const createLabelOptions = pipe(
    filterLabelsForMenu(isGroupPage),
    sortLabelsByTypeAndName,
    map((label) => <li key={label.id}>{label.name}</li>)
  );

  return <>{createLabelOptions(labels)}</>;
};

The application of pipe assigned to createLabelOptions produces a function with the type signature:

const createLabelOptions: (labels: Label[]) => JSX.Element[];

But wait, there’s more!

React’s functional components are also plain functions. Ramda can use those too!

The type signature of <LabelMenu /> is:

type LabelMenu = ({isGroupPage: boolean, labels: Label[]}) => JSX.Element;

We can update our pipe to wrap the list in a single element as its final operation:

export const LabelMenu = ({isGroupPage, labels}: Props): JSX.Element => {
  const createLabelOptions = pipe(
    filterLabelsForMenu(isGroupPage),
    sortLabelsByTypeAndName,
    map(label =>
      <li key={label.id}>
        {label.name}
      </li>
    ),
    (elements): JSX.Element =>
       <ul>{elements}</ul>
  );

  return createLabelOptions(labels);
}

The type signature of our pipe application (createLabelOptions) is now:

const createLabelOptions: (x: Label[]) => JSX.Element

Wait a second, that looks very close to a React.VFC compatible signature.

Our pipe expects input of a single argument of Label[]. But what if we changed it to accept an instance of Props?

export const LabelMenu = (props: Props): JSX.Element => {
  const createLabelOptions = pipe(
    (props: Props) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    map((label: Label) =>
      <li key={label.id}>{label.name}</li>
    ),
    
    (elements): JSX.Element =>
      <ul>{elements}</ul>
  );

  return createLabelOptions(props);
}

Now the type signature of createLabelOptions is:

const createLabelOptions: (x: Props) => JSX.Element

So if our application of Ramda’s pipe produces the exact signature of a React.FunctionComponent then it stands to reason we can get rid of the function body completely:

type Props = { isGroupPage: boolean, labels: Label[] };

export const LabelMenu: React.VFC<Props> = pipe(

    (props: Props) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    map(label => <li key={label.id}>{label.name}</li>),

    (elements) => <ul>{elements}</ul>
  );

The ergonomics of code like this are debatable. I personally like it for my own projects. I find the more I think and write in terms of data pipelines, the clearer the code becomes.

Here’s an interesting problem. What happens if we need to use a React hook in a component like this? We’ll need a valid place to call something like React.useState(), which means we’ll need to create a closure for the component implementation.

This makes sense though! A functionally pure component like this is not able to have side-effects. React hooks are side-effects.
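For illustration, here’s a sketch of what that closure might look like (the collapsed state is hypothetical, just to give useState something to do):

export const LabelMenu = (props: Props): JSX.Element => {
  // The component body gives the hook a valid place to run on every render.
  const [collapsed, setCollapsed] = React.useState(false);

  const createLabelOptions = pipe(
    (p: Props) => filterLabelsForMenu(p.isGroupPage)(p.labels),
    sortLabelsByTypeAndName,
    map((label: Label) => <li key={label.id}>{label.name}</li>),
    (elements): JSX.Element => (
      <ul hidden={collapsed} onClick={() => setCollapsed(!collapsed)}>
        {elements}
      </ul>
    )
  );

  return createLabelOptions(props);
};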

Designing at the Type Level

The <LabelMenu /> component has a type signature of

type Props = {isGroupPage: boolean, labels: Label[]};
type LabelMenu = React.VFC<Props>

It renders a list of the labels it is given, sorting and filtering them according to some business logic.

We extracted much of this business logic into pure functions that encoded our business rules into plain functions that operated on our types.

When I use <LabelMenu /> I know that I must give it isGroupPage and labels props. The labels property seems pretty self-explanatory, but isGroupPage doesn’t make it obvious what it actually does.

I could go into the <LabelMenu /> code and discover that isGroupPage changes which LabelType.AutoLabel labels are displayed.

But what if I wanted another <LabelMenu /> that looked exactly the same but behaved slightly differently?

I could add some more props to <LabelMenu /> that changed how it internally filtered and sorted the labels I give it, but adding more property flags to its interface feels like the wrong kind of complexity.

How about disconnecting the labels from the filtering and sorting completely?

Start by Simplifying

I’ll first simplify the <LabelMenu /> implementation:

type Props = { labels: Label[] };

const LabelMenu = ({ labels }: Props) => (
  <ul>
    {labels.map(
      (label) => <li key={label.id}>{label.name}</li>
    )}
  </ul>
);

This implementation contains everything about how these elements should look, and it renders every label it gets.

But what about our filtering and sorting logic?

We had a component with this type signature:

type Props = { isGroupPage: boolean, labels: Label[] };
type LabelMenu = React.VFC<Props>;

Can we express the original component’s interface without changing <LabelMenu />‘s implementation?

If we can write a function that maps from one set of props to the other, then we should also be able to write a function that maps from one React component to the other.

First, write a function that takes our original Props interface as its input and returns the new Props interface as its return value.

type LabelMenuProps = { labels: Label[] }; 

type FilterPageLabelMenuProps = {
  isGroupPage: boolean,
  labels: Label[]
};

const propertiesForFilterPage = pipe(
    (props: FilterPageLabelMenuProps) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    (labels) => ({ labels })
  );

There are our Ramda implementations again. We took out all of the React bits. It’s the same business logic but without the React element rendering. The only difference is instead of mapping the labels into JSX.Elements, the labels are returned in the form of LabelMenuProps.

We’ve encoded our business logic into a function that maps from FilterPageLabelMenuProps to LabelMenuProps.

That means the output of propertiesForFilterPage can be used as the input to <LabelMenu />, which is itself a function that returns a JSX.Element.

Piping one function’s output into a compatible function’s input, that sounds familiar, doesn’t it?

export const FilterPageMenuLabel: React.VFC<FilterPageLabelMenuProps> =
  pipe(
    (props: FilterPageLabelMenuProps) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    (labels) => ({ labels }),

    LabelMenu
  );

We’ve leveraged our existing view specific code, but changed its behavior at the Prop level.

import { FilterPageMenuLabel, LabelMenu } from './components/LabelMenu';

const Foo = () => {
  const { data } = useQuery(LabelsQuery);

  return (
    <FilterPageMenuLabel
      isGroupPage={isGroupPage}
      labels={data?.labels ?? []}
    />
  );
}

const Bar = () => {
  const { data } = useQuery(LabelsQuery);

  return (
    <LabelMenu labels={data?.labels ?? []} />
  );
}

When hovering over the implementation of <FilterPageMenuLabel /> the tooltip shows exactly how it’s implemented.


Feynman on Trees

Whenever I’m hiking I think of this talk by Feynman and it brings a sense of awe as I walk through the trees.

People look at trees and think it comes out of the ground … they come out of the air!


Spelling Some Words

In a real-time chat workplace spelling and grammar tend to take a back seat to speed.

I typed QWERTY proficiently for many years. After switching to Dvorak I have found that my fingers tend to translate the words I type phonetically.

I don’t know how to explain it. In my mind I’m using the word “their” but then I read back the sentence I just typed: “I don’t know there thoughts on …”. I’m always surprised. It’s not the word I had visualized but it’s the word I typed.

Sometimes I catch it, but usually I hit enter before I read what I typed, then quickly press up-arrow then e so I can edit the grammatical error before too many coworkers have read it. (I just did it there. I know the word is “read” but my fingers type “red” and then I go back and fix it.)

The scenario that always gives me problems is weather vs whether vs wether.

  • weather: the state of the atmosphere at a place and time as regards heat, dryness, sunshine, wind, rain, etc.: if the weather’s good we can go for a walk
  • whether: expressing a doubt or choice between alternatives: he seemed undecided whether to go or stay | it is still not clear whether or not he realizes.
  • wether: a castrated ram.

I think I always get “weather” right but my fingers never want to type an “h” after the “w”. They just aren’t used to that sequence of keys.

So I end up talking about castrated rams much more than I ever thought I would.


Oh my god, this is gonna work.

I think that sums up the moment that keeps me excited about slinging code. It probably fits with any creative endeavor.

Oh my god, this is gonna work.

Adam Lisagor in How We Made “Slack WFH”

I couldn’t help but smile when he said that line.

For me everything worthwhile starts with “what if we try to …”. But the magic moment where that dopamine is flooding the brain coincides with that phrase: “Oh my god, this is gonna work.”

There will no doubt be a million more things to do, but that’s the moment the “how” starts falling into place.

Wear your masks.


Deal with Scope Creep Like Muad’Dib

Arrakis teaches the attitude of the knife – chopping off what’s incomplete and saying: ‘Now, it’s complete because it’s ended here.’

– from “Collected Sayings of Muad’Dib” by the Princess Irulan

Frank Herbert quotes on Goodreads


Runtime Verification and WP-API

The second in a series of posts that investigates using strongly-typed first-class functions with WordPress WP-API to create a composable, testable, verifiable, and productive method of REST API development.

Previously: Strongly Typed WP-API.

Productivity

Context switching is a productivity killer. What exactly constitutes a context switch though?

Moving to a ping in Slack away from a Vim window? Definitely a context switch.

Switching via cmd-tab between a source code editor and browser window? Also a context switch. Yes, even when duck-duck-going the error from the console.

Everything that reduces context switching during development is a productivity win.

Debugging is a Productivity Killer

Time spent searching logs and reconstructing failure cases from production bugs is time not spent shipping.

It is also time that was not accounted for in the 100% accurate development estimate given to the project manager to complete the task.

Passing a string value to a function that expects an int: bug. Typing the incorrect string name of a function in WordPress’s add_filter: another bug. Calling a method on a WP_Error instance because it was assumed to be a WP_User: bug.

All of these things are caught by static type analysis.

They may all seem like small bugs but they can quickly add up to a non-trivial amount of time debugging. Perhaps these bugs will be discovered quickly at runtime, but that requires the correct code paths to actually be executed. Is every code path in a project going to be executed between each source code change? No.

Static analysis will increase productivity by uncovering these bugs. But even with a 100% typed, fully analyzed codebase, validating the running code’s output is still necessary.

Automating runtime validation is another tool to increase productivity.

Runtime Verification

Psalm enforces correct types and API usage. Checking the correctness of the runtime code still requires some manual steps, like booting up an entire WordPress stack. Previously, wp-env was used to verify that the endpoint actually worked.

wp-env start
curl http://localhost:8889/?rest_route=/totes/not-buggy
{"result": "not buggy"}

This isn’t going to scale well when the number of endpoints and the number of ways to call them increases. Jumping from an editor to a browser and back isn’t the best recipe for productive coding sessions either.

Time for automated tests.

In the world of PHP, that means PHPUnit.

The bare minimum code to test totes_not_buggy() is a single implementation of PHPUnit\Framework\TestCase with a single test method. It will live in tests/Totes/TotesTest.php:

<?php
namespace Totes;

use WP_REST_Request;
use WP_REST_Server;

class TotesTest extends \PHPUnit\Framework\TestCase {

    /**
     * @return void
     */
    function testTotesNotBuggy() {
        $request = new WP_REST_Request( 'GET', '/totes/not-buggy' );
        $response = totes_not_buggy( $request );
        $this->assertEquals( [ 'status' => 'not buggy' ], $response->get_data() );
    }
}

To run PHPUnit, the dependency needs to be installed.

composer require --dev phpunit/phpunit

Now run the test:

./vendor/bin/phpunit tests

// yadda yadda

ERRORS!
Tests: 1, Assertions: 0, Errors: 1.

The error shows that we don’t have WordPress APIs available in our test runtime:

1) Totes\TotesTest::testTotesNotBuggy
Error: Class 'WP_REST_Request' not found

WordPress is a dependency of this project. It won’t work without it. Time to install it:

composer require --dev johnpbloch/wordpress

The johnpbloch/wordpress package by default will install the WordPress source code in ./wordpress. Setting up a whole WordPress stack to work on some source code: productivity killer. “No install” is faster than any five-minute install, no matter how famous it is.

If WordPress were a PSR-4 compliant project there wouldn’t be anything left to do. But it isn’t. To illustrate, run the test again and observe the result is the same.

Since Composer doesn’t know how to autoload WordPress source code, PHPUnit needs to be taught how to find WordPress APIs during test execution. A perfect place for this is via PHPUnit’s "bootstrap" system.

Generate a config and tell PHPUnit to use a custom "bootstrap":

./vendor/bin/phpunit --generate-config
PHPUnit 9.0.1 by Sebastian Bergmann and contributors.

Generating phpunit.xml in /Users/beau/code/wp-api-fun

Bootstrap script (relative to path shown above; default: vendor/autoload.php): tests/bootstrap.php
Tests directory (relative to path shown above; default: tests): 
Source directory (relative to path shown above; default: src): 

Generated phpunit.xml in /Users/beau/code/wp-api-fun

This generates ./phpunit.xml and tells PHPUnit to run tests/bootstrap.php before executing tests.

Time to hunt down all of the WordPress dependencies for this test.

One way to find which PHP files need to be included is to keep running the tests and including the files that define the missing classes and functions.

For example, the current error is that WP_REST_Request is not defined.

ack 'class WP_REST_Request' wordpress
wordpress/wp-includes/rest-api/class-wp-rest-request.php
29:class WP_REST_Request implements ArrayAccess {

Now add wordpress/wp-includes/rest-api/class-wp-rest-request.php.

Keep going until it passes. This is the end result for now. Note that this is, at this point in development, 100% of our plugin’s runtime dependencies.

<?php

// The minimum constants WordPress expects before its includes load.
define( 'ABSPATH', __DIR__ . '/../wordpress' );
define( 'WPINC', '/wp-includes' );

// Core function and plugin/hook APIs.
require_once __DIR__ . '/../wordpress/wp-includes/functions.php';
require_once __DIR__ . '/../wordpress/wp-includes/plugin.php';

require_once __DIR__ . '/../wordpress/wp-includes/class-wp-error.php';
require_once __DIR__ . '/../wordpress/wp-includes/pomo/translations.php';
require_once __DIR__ . '/../wordpress/wp-includes/l10n.php';
require_once __DIR__ . '/../wordpress/wp-includes/class-wp-http-response.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-request.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-response.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api/class-wp-rest-server.php';
require_once __DIR__ . '/../wordpress/wp-includes/rest-api.php';
require_once __DIR__ . '/../wordpress/wp-includes/load.php';

add_action( 'rest_api_init', 'totes_register_endpoints' );

/** @psalm-suppress InvalidGlobal */
global $wp_rest_server;

$wp_rest_server = new WP_REST_Server();

do_action( 'rest_api_init' );

Now that Composer can install WordPress and PHPUnit, the CI can run these tests too. Add it to the GitHub action:

+
+    - name: Unit Tests
+      run: vendor/bin/phpunit

Runtime verification of any new route can now be captured in a unit test. Once in a unit test it can be run in all sorts of ways.

Bonus: with Xdebug configured, PHPUnit will also report coverage analysis when proper @covers annotations are added:

vendor/bin/phpunit tests --coverage-html coverage-report
PHPUnit 9.0.1 by Sebastian Bergmann and contributors.

.                                                                   1 / 1 (100%)

Time: 68 ms, Memory: 8.00 MB

OK (1 test, 1 assertion)

Generating code coverage report in HTML format ... done [12 ms]

68 millisecond execution time with 100% coverage of a one-line function assigned a CRAP score of 1. Gotta love that new project smell.

Screen capture of a PHPUnit coverage report.

Safety Nets Engaged

Between Psalm and PHPUnit we now have static analysis and automated runtime tests.

Next up we’ll dive into Higher-Order Kinds with Psalm and start using them with WP-API to create a declarative, composable API.


Parser and Getting Complicated with Types

Quick context: Validator<T> is a function that returns a Result<T>:

type Validator<T> = (value: any) => Result<T>;
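Result<T>, success, failure, mapSuccess, and mapFailure appear throughout this post but aren’t defined in it. A minimal sketch of the shapes assumed here (my reconstruction, not the actual library):

type Success<T> = { type: 'success'; value: T };
type Failure = { type: 'failure'; value: any; message: string };
type Result<T> = Success<T> | Failure;

const success = <T>(value: T): Success<T> => ({ type: 'success', value });
const failure = (value: any, message: string): Failure =>
  ({ type: 'failure', value, message });

// Apply fn only when the result is a Failure; pass a Success through untouched.
const mapFailure = <T, U>(result: Result<T>, fn: () => Result<U>): Result<T | U> =>
  result.type === 'failure' ? fn() : result;

// Apply fn only when the result is a Success.
const mapSuccess = <T, U>(result: Result<T>, fn: (value: T) => U): U | Failure =>
  result.type === 'success' ? fn(result.value) : result;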

When I shared some of this with a coworker to help figure out some type questions, they quickly pointed out that this is in fact a Parser (thanks, Dennis). These are things an informally trained developer (me) probably should have been able to identify at this point in their career.

Mapping the understanding of what a Parser is to what I had named it caused confusion. So all things Validator<T> have become Parser<T>. Naming: one of the two hard things.

Combining more than Two Parsers

In the Parser<T> library the function oneOf accepts two Parser<T> types and returns the union of them:

function oneOf<A, B>(a: Parser<A>, b: Parser<B>): Parser<A | B> {
  return value => mapFailure(a(value), () => b(value));
}

A more complex Parser<T> is now created out of simpler ones.

const isStringOrNumber = oneOf(isString, isNumber);

TypeScript can infer that isStringOrNumber has the type of Parser<string|number>.

This works great when combining two parsers, but when more than two are combined with oneOf it requires nested calls:

const isThing = oneOf(isNull, oneOf(isPerson, isAnimal));

Assuming isPerson is Parser<Person> and isAnimal is Parser<Animal>, const isThing is inferred by TypeScript to be:

type Parser<null | Person | Animal>

Each additional Parser<T> requires another call of oneOf. Writing a oneOf that takes one or more Parser<T> types is straightforward:

function oneOf(parser, ...parsers) {
  return value => parsers.reduce(
    (result, next) => mapFailure(result, () => next(value)),
    parser(value)
  );
}

However, writing the correct type signature for this function was beyond my grasp.

My first attempt I knew couldn’t work:

function oneOf<T>(parser: Parser<T>, ...parsers: Parser<T>[]): Parser<T> {

In use, TypeScript’s inference was not happy:

const example = oneOf(isString, isNumber, isBoolean);
Types of property 'value' are incompatible.
          Type 'number' is not assignable to type 'string'.

The T was being captured as string because the first argument to oneOf is a Parser<string>. However isNumber is a Parser<number>, so the two Ts did not match and tsc was not happy. Removing the first parser: Parser<T> argument didn’t help.

If TypeScript is told what the union is, then everything is ok:

const example = oneOf<string|number|boolean>(isString, isNumber, isBoolean);

But the best API experience is one in which the correct type is inferred.

After varying attempts of picking out similar cases in TypeScript’s Advanced Types I gave up and posed the question in the company’s #typescript Slack channel.

The magical internet people debated about Parser<T> and Result<T> so I tried to simplify things to the “base case” and got rid of Result<T>:

type Machine<T> = () => T

Is it possible to create a function signature where a list of Machine<*>s of differing <T>s, supplied via variadic type arguments, infers the union Machine<T1|T2|T3|...>?

function oneOf(...machines: Array<Machine<?>>): Machine<(UNION of ?)> {

The magical internet people came up with a solution (thank you, Tal).

type MachineType<T> = T extends Machine<infer U> ? U : never;

function oneOf<M extends Machine<any>[]>(...machines: M): Machine<MachineType<M[number]>> {

After mapping it into the Parser domain, it worked!

type ParserType<T> = T extends Parser<infer U> ? U : never;

function oneOf<P extends Parser<any>[]>(...parsers: P): Parser<ParserType<P[number]>> {
  // ...
}
const example = oneOf(isNumber, isString, isBoolean);

Running tsc passed, and the inferred type of const example is:

const example: (value: any) => Result<string | number | boolean>

Now to understand why it works.

Conditional Types: ParserType<T>

The first thing to understand is ParserType<T>, which uses a Conditional Type:

type ParserType<T> = T extends Parser<infer U> ? U : never;

This is essentially a function within the type analysis stage of TypeScript (somewhat analogous to Flow’s $Call utility type). My first understanding of this reads as:

Given a type T, if it extends Parser<infer U> return U, otherwise never.

Using ParserType with any Parser<T> will give the type of T. So given any function that is a Parser<T>, the type of <T> can be inferred.

Within the extends clause of a conditional type, it is now possible to have infer declarations that introduce a type variable to be inferred. Such inferred type variables may be referenced in the true branch of the conditional type. It is possible to have multiple infer locations for the same type variable.

Type inference in conditional types

Take an example parsePerson parser which is defined using objectOf:

const parsePerson = objectOf({
  name: isString,
  email: isString,
  metInPerson: isBoolean
});

type Person = ParserType<typeof parsePerson>;

// This is ok!
const valid: Person = {
  name: 'Nausicaa',
  email: 'nausica@valleyofthewind.website',
  metInPerson: false,
};

// This fails!
const invalid: Person = {}; // Type Error

type Person is inferred to be:

type Person = {
    name: string;
    email: string;
    metInPerson: boolean;
}

const invalid: Person fails because:

Type '{}' is missing the following properties from type '{ name: string; email: string; metInPerson: boolean; }': name, email, metInPerson

So now the return value of oneOf is almost understood:

: Parser<ParserType<P[number]>>

This says:

Returns a Parser<T> whose T is the ParserType of P[number].

Well what is P[number]?

Mapped Types

In TypeScript, Mapped Types allow one to take the key and value types of one type, and transform them into another.

If you’ve used Partial<T> or Readonly<T>, you have used a Mapped Type. The example implementations of those are given as:

type Readonly<T> = {
    readonly [P in keyof T]: T[P];
}
type Partial<T> = {
    [P in keyof T]?: T[P];
}

Given a type with an index, the type that is used for the index’s value can be accessed using its key type:

type MyIndexedType = {[key: number]: (number|boolean|string)};
type ValueType = MyIndexedType[number];

In this example ValueType will have the type (number|boolean|string).

In the return signature of oneOf there is a P[number].

: Parser<ParserType<P[number]>>

Assuming P is an indexed type with keys and values whose key type is a number, this gives the type of the value stored in P.

So what is P?

function oneOf<P extends Parser<any>[]>(

P is an array of Parser<any>. Well, it extends Parser<any>[].

This is where the magic happens.

TypeScript captures the T of each Parser<any> and stores it in P. Because an Array is an indexed type whose key is number, the type of P can also be expressed like this:

type P = { [key: number]: Parser<number> | Parser<string> | Parser<boolean> };

There it is! The union is the value type at P[number].

Putting the Pieces Together

ParserType is a Conditional Type that given a Parser<T>, returns T.

What happens when ParserType is given a union of Parser<T> types?

type T = ParserType<(Parser<string> | Parser<number>)>

TypeScript infers the union for T:

type T = string | number

Given a Mapped Type P that extends Parser<T>[], the union of Parser<T> types is available at P[number].

It follows then that passing the P[number] into ParserType will provide the union of T types in Parser<T>. That is exactly what the return type in oneOf does.

Reading the new signature for oneOf is now less cryptic:

function oneOf<P extends Parser<any>[]>(
  ...parsers: P
): Parser<ParserType<P[number]>> {

Now to wrap up the implementation.

Using oneOf doesn’t work unless there is at least one Parser<T>. The signature can be updated to require one:

function oneOf<T, P extends Parser<any>[]>(
  parser: Parser<T>,
  ...parsers: P
): Parser<T|ParserType<P[number]>> {
    // no additional parsers, return the single parser to be used as is
    if (parsers.length === 0) {
        return parser;
    }

    return value => mapFailure(
        parsers.reduce(
            // with each reduction, only try to parse when the previous result was a Failure
            (result, next) => mapFailure(result, () => next(value)),
            // seed the result with the first parser
            parser(value)
        ),
        // if all parsers fail, indicate that there were multiple parsers attempted
        () => failure(value, `'${value}' did not match any of ${parsers.length+1} validators`)
    );
}

Using oneOf

Using oneOf now looks like this:

const parseStatus = oneOf(
    isExactly('pending'),
    isExactly('shipped'),
    isExactly('delivered'),
);

This expresses a Parser<T> that will fail if the string is not 'pending', 'shipped', or 'delivered'.
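isExactly isn’t defined in this post; here’s a plausible sketch using literal types (mine, not necessarily the original):

// Succeeds only when the input is exactly `expected`,
// narrowing the parsed type to that literal string.
const isExactly = <T extends string>(expected: T): Parser<T> =>
  (value) =>
    value === expected
      ? success(expected)
      : failure(value, `'${value}' is not '${expected}'`);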

With the new signature of oneOf, TypeScript now infers parseStatus to have the type:

const parseStatus: Parser<'pending'|'shipped'|'delivered'>;

Combined with mapSuccess, the Success<T> will guarantee that the value is one of those three exact strings.

mapSuccess(parseStatus('other'), status => {
  switch(status) {
    case 'something': return 'not valid';
  }
});

This fails type checking:

Type '"something"' is not comparable to type '"shipped" | "pending" | "delivered"'.

This works with the most complex of Parser<T>s:

const json: Parser<any> = (value) => {
    try {
        return success(JSON.parse(value));
    } catch (error) {
        return failure(value, error.message);
    }
};

const employeesParser = mapParser(json, objectOf({
    employees: arrayOf(objectOf({
        role: oneOf(
             isExactly('Vice President'),
             isExactly('Manager'),
             isExactly('Individual Contributor')
        ),
        // This one is for you Dennis
        // assuming ISO8601 Date strings and a modern browser
        hireDate: mapParser(isString, (value) => success(new Date(value)))
    }))
}));

mapSuccess(employeesParser("{...JSON HERE...}"), (valid) => {
    valid.employees.forEach(employee => {
        const employmentDurationInMS = (
            Date.now() - employee.hireDate.getTime()
        );

        switch(employee.role) {
            case "Not A Real Role": {
            }
        }
    });
});

The case "Not A Real Role": doesn’t exist for employee.role:

Type '"Not A Real Role"' is not comparable to type '"Manager" | "Individual Contributor" | "Vice President"'

Lovely!

Here’s the inferred type of employeesParser’s use of oneOf:

function oneOf<"Vice President", [Parser<"Manager">, Parser<"Individual Contributor">]>(parser: Parser<"Vice President">, parsers_0: Parser< "Manager">, parsers_1: Parser<"Individual Contributor">): Parser<...>

We can see where:

  1. The Parser<"Manager"> and Parser<"Individual Contributor"> types are captured in P.
  2. The parsers_0 and parsers_1 are spread as arguments to oneOf with the correct parser types.