Categories
Programming

TypeScript: Reduce over a Tuple

I have a knack for solving TypeScript riddles posed by my coworkers. It’s my “stupid human trick”.

A coworker posed this today:

is there a way to take something like

type Foo = [{name: 'one', value: boolean}, {name: 'two', value: number}]

And turn it into

type Bar = {one: boolean, two: number}

I like to challenge myself to respond to these types of questions with a link to https://www.typescriptlang.org/play containing a working solution.

In this case I responded 12 minutes later with this solution. (I’m not sure exactly when I saw the message, though. So it was definitely under 12 minutes. I saw it when I came back from lunch. Yes, I’m trying to brag here, but this is really my one differentiating skill so I’ve got to milk it. Please forgive me.)

type Foo = [
  {name: 'one', value: boolean},
  {name: 'two', value: number}
]

type NameIndexedTypes<T, Result extends Record<never, never> = {}> =
  T extends [infer Head, ...infer Rest]
   ? Head extends { value: infer Value, name: infer Name extends string}
     ? NameIndexedTypes<Rest, Record<Name, Value> & Result>
     : never
   : Result

type Bar = NameIndexedTypes<Foo>;

const good: Bar = {
    one: false,
    two: 2
}

const bad: Bar = {
    one: 'one',
    two: 'two'
}

Let’s break this down.

The source type Foo is a tuple type. The expectation appears to be that each member of the tuple is a Record type with a name that extends string and a value of anything else.

Given these constraints let’s dive into a solution.

Reduce a Tuple

When you take something enumerable (in this case a tuple) and you want to transform it into something else entirely (in this case a Record<?, ?>), it’s time to perform a reduce.

In functional programming, a reduce iterates over a collection and, for each item, aggregates it into something else.

An equivalent concept in JavaScript is Array.prototype.reduce().
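Before building the type-level version, it helps to see the runtime shape of the transformation. Here is a quick sketch with Array.prototype.reduce (the entries array is a made-up runtime analogue of Foo):

```typescript
// Runtime analogue of the type-level problem: fold an array of
// { name, value } objects into a single object keyed by name.
const entries = [
  { name: 'one', value: false },
  { name: 'two', value: 2 },
];

const indexed = entries.reduce<Record<string, unknown>>(
  (result, { name, value }) => ({ ...result, [name]: value }),
  {}
);
// indexed is { one: false, two: 2 }
```

The type-level solution below follows exactly this pattern: an accumulator, an iteration step, and a base case.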

TypeScript does not have a built-in reduce operation. But the pattern can be reproduced.

Reducing over the tuple and returning a Record requires a type with two generics. One for the tuple and one for the transformed Record that gets aggregated and returned.

type IndexedByName<
  T,
  Result extends Record<never, never> = {}
>;

If you’re unfamiliar with generics or they seem indecipherable, in TypeScript’s world this is really no different than defining a function that takes in two arguments.

In this case the first argument is named T. This will expect the tuple.

The second argument is Result. This constrains the Result to extend Record<never, never> and assigns a default to {}. (note: a better default could be Record<never, never> because {} is pretty much the same as any).

Why Record<never, never>? Usage of any is banned in this codebase. (Using an any in the extends clause isn’t really a safety risk, but those are the rules).

The domain (or key) type of a Record is constrained to string | number | symbol. This means unknown, which is usually the safer alternative to any, won’t work. Record<never, never> here indicates to the type system that it needs to be a Record, but the domain and value types are not specified.

Since the default of {} is provided, the type can be used without specifying the initial Result:

type MyResult = IndexedByName<MyTuple>;

Starting the Reduce

The first thing to do is extract the first item (the “head”) from the tuple. In TypeScript this is done with a conditional type.

T extends [infer Head, ...infer Rest]
  ? // do something with Head and Rest
  : // the else case

In TypeScript you’re writing ternary statements. (The place I used to work banned ternaries because they aren’t readable. I’m a big fan of ternaries, so this always made me sad.)

In the true case of the ternary (the code after the ?), Head and Rest will be available as type aliases. Rest has — well — the rest of the tuple, while Head is the member of the tuple that was the first item.
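To see what infer extracts in isolation, here are two hypothetical helper types (illustration only, not part of the solution):

```typescript
// Hypothetical helpers showing what infer captures from a tuple.
type HeadOf<T extends unknown[]> = T extends [infer Head, ...infer Rest] ? Head : never;
type RestOf<T extends unknown[]> = T extends [infer Head, ...infer Rest] ? Rest : never;

type Example = [{ name: 'one', value: boolean }, { name: 'two', value: number }];

// HeadOf<Example> is { name: 'one', value: boolean }
// RestOf<Example> is [{ name: 'two', value: number }]
const head: HeadOf<Example> = { name: 'one', value: true };
```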

Now it’s time to handle the true branch.

A Record with one key/value pair

Given the type Foo from the original question, in the first iteration over the tuple the Head will be aliased to

type Head = { name: 'one', value: boolean };

To solve the next stage of this problem, this map type with keys of name and value needs to become a Record type with a key of one and a value of boolean.

Time for another conditional type.

// The first true branch of the first ternary
Head extends { name: infer Name extends string, value: infer Value }
  ? // next true branch
  : // next false branch

This now checks if Head extends the expected shape and captures the two types into two aliases. Using the first member of Foo as the example, the aliases are now:

  • Name aliased to 'one'
  • Value aliased to boolean

Usually when defining a type with explicit members an interface or type alias is used with an explicit key name:

// interface example
interface Whatever {
 one: boolean;
}

// type alias example
type Whatever = {
 one: boolean;
}

This can also be defined with a Record type with a string subtype:

type Whatever = Record<'one', boolean>;

So if you wanted to build up a bunch of key/value pairs and merge them into a single type they can be intersected (&) together.

The usual way:

type Whatever = {
  one: boolean;
  two: 'a' | 'b' | 'c';
}

The intersection way:

type Whatever =
  Record<'one', boolean>
  & Record<'two', 'a' | 'b' | 'c'>;

To be clear: never define a plain type alias this way. Your coworkers will hate you. But if you need to reduce a tuple and merge the results into a single type, this is the tool you have to reach for.

So back to the solution. Now that Name and Value have been inferred, Record<Name, Value> can be intersected with the current Result to produce a merged Record type.

Result & Record<Name, Value>

Tail call recursion

And thus we reach the meat of the solution.

// The second ternary after infer Name/Value
? IndexedByName<Rest, Result & Record<Name, Value>>

All in a single line:

  • Recursively call the IndexedByName type
  • Use Rest which contains the tail of the tuple type
  • Intersect (&)
    • The carried Result type (the second generic input to IndexedByName)
    • And the single key/value pair Record<Name, Value> described above

In the case of type Foo this means it’s going to make a recursive call that looks like this:

IndexedByName<
  [{ name: 'two', value: number }],
  Record<'one', boolean>
>

Another thing to point out here is that the conditional type requires Name extends string.

A Record’s key type has to be string | number | symbol, so Name is constrained here so it can be used with Record. Using infer with an extends constraint was introduced in TypeScript 4.8. Prior to 4.8 an extra conditional type would be required.
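For reference, the pre-4.8 version of that constraint would look something like this sketch, with a second conditional doing the narrowing:

```typescript
// Pre-4.8 pattern: infer first, then constrain Name in a second conditional.
type NameOf<Head> = Head extends { name: infer Name, value: infer Value }
  ? Name extends string
    ? Name
    : never
  : never;

// NameOf<{ name: 'one', value: boolean }> is 'one'
const nameOfOne: NameOf<{ name: 'one', value: boolean }> = 'one';
```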

In the original ask I assumed that members of Foo will have a string subtype for name, but a more liberal solution could use extends number | string | symbol, which means Foo could have a member of:

type MemberWithNumericalName = {
  name: 123;
  value: string;
};

Exiting the recursion

So far the example handles both true branches in the two conditional types used in this solution.

The first ternary will branch into the “else” portion if T cannot infer a Head type, which means T is now an empty tuple ([]). This means the recursion is done, so for the false branch of the first ternary the Result alias can be returned as-is:

  : Result;

In the second nested ternary, the solution exits with never. This branch is reached if the member in the tuple does not match an expected type of:

{ name: string, value: unknown }

With the false branches of the ternaries handled, the solution is complete.

Extra credit

I wasn’t completely happy with the solution. The inferred type that comes out of IndexedByName isn’t the most readable:

An example with four entries in the incoming tuple will produce an inferred type with the intersecting Record types:

Screenshot of a VS Code tooltip with the inferred result of `IndexedByName`.
Gross.

This tries to communicate that the type needs to have valid key/value pairs for 'four', 'three', 'two', 'one' keys. But the type you’d expect to use would be something more like:

type Better = {
  one: boolean;
  two: number;
  three: boolean;
  four: boolean;
}

TypeScript can be forced into using this type using a mapped type on Result:

  : {[K in keyof Result]: Result[K]}

The intersecting Record types are now merged into a single type alias:

Screenshot of a VSCode tooltip containing the inferred type using the mapped Result type.
Nice.
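Assembled with the mapped-type return, the whole solution reads:

```typescript
// The full reduce: recurse over the tuple, accumulate intersected Records,
// then flatten the intersection with a mapped type at the base case.
// (Requires TypeScript 4.8+ for `infer Name extends string`.)
type IndexedByName<T, Result extends Record<never, never> = {}> =
  T extends [infer Head, ...infer Rest]
    ? Head extends { value: infer Value, name: infer Name extends string }
      ? IndexedByName<Rest, Result & Record<Name, Value>>
      : never
    : { [K in keyof Result]: Result[K] };

type Bar = IndexedByName<[
  { name: 'one', value: boolean },
  { name: 'two', value: number }
]>;

const bar: Bar = { one: true, two: 2 };
```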

Tests

When introducing types like this into the codebase I like to unit test my types.

How do you unit test types? @ts-expect-error.

This codebase uses jest. In this case there really is no need for a runtime test, but a stubbed-out one can also host some type assertions.

// IndexedByName.test.ts

describe('IndexedByName', () => {
  type Foo = [
    { name: 'one', value: boolean },
    { name: 'two', value: number },
  ];

  type Bar = IndexedByName<Foo>;

  // @ts-expect-error Bar can't be a string
  const badRecord: Bar = 'hi';

  const badKey: Bar = {
    // @ts-expect-error one must be a boolean
    one: 'hi',
    two: 1,
  };

  // @ts-expect-error missing key of one
  const missingKey: Bar = {
    two: 1,
  };

  it('works', () => {
    const value: Bar = {
      one: true,
      two: 2
    }
    expect(value.one).toBe(true);
  });
});

If the IndexedByName type were to stop working the @ts-expect-error statements would fail tsc.

The only thing worse than no types is types that give you a false sense of safety.


React and Pipes in TypeScript

In the day job I recently recommended using Ramda to help clean up the readability of our UI code.

Ramda is a collection of pure functions designed to fit together using functional programming patterns.

The Context

We had a piece of TypeScript code landing that processed some data and rendered a React component.

const filteredLabels =
  data.community.labels.filter((label) => {

    if (label.type === LabelType.AutoLabel &&
        (isGroupPage === true ?
          ['system_a', 'special'] :
          ['system_a']
        ).includes(label.name)
    ) {
      return false;
    }

    if (label.type === LabelType.UserDefined &&
          label.stats.timesUsed === 0) {
      return false;
    }
    
    return true;
  });

filteredLabels.sort((labelA, labelB) => {
  if (labelA.type === LabelType.AutoLabel) {
    if (labelB.type === LabelType.UserDefined) {
      return -1;
    }
    return labelA.name.toLowerCase() < labelB.name.toLowerCase() ? -1 : 1;
  }

  if (labelB.type === LabelType.AutoLabel) {
     return 1;
  }
  return labelA.name.toLowerCase() < labelB.name.toLowerCase() ? -1 : 1;
});

return filteredLabels.map((label) => <>{/* React UI */}</>);

Three distinct things are happening here:

  1. data.community.labels.filter(/* ... */) is removing certain Label instances from the list.
  2. filteredLabels.sort(/* ... */) is sorting the filtered items first by their .type then by their .name (case-insensitive)
  3. filteredLabels.map(/* ... */) is turning the list of Label instances into a JSX.Element.

The hardest part for me to decipher as a reader of the code was step two: given two labels what was the intended sort order?

After spending a few moments internalizing those if statements I came to the conclusion the two properties being used for comparison were label.type and label.name.

A label of .type === LabelType.AutoLabel should appear before a label of .type === LabelType.UserDefined.

Labels with the same .type should then be sorted by their .name case-insensitively.

Ramda’s sortWith

The problem I was encountering with this bit of code is that my human brain works this way:

Given a list of Labels:
- Sort them by their .type with .AutoLabel preceding .UserDefined
- Sort labels of the same .type by their .name case-insensitively

Ramda’s sortWith gives us an API that sounds similar in theory:

Sorts a list according to a list of comparators.

A “comparator” is typed with (a, a) => Number. My list of comparators will be one for the label.type and one for the label.name.

import { sortWith } from 'ramda';

const sortLabels = sortWith<Label>([
  // 1. compare label types
  // 2. compare label names
]);

A comparator’s return value here is a bit ambiguous, with the documentation declaring just Number. But the code example for sortWith points to some more handy functions: ascend and descend.

Here’s the description for ascend:

Makes an ascending comparator function out of a function that returns a value that can be compared with < and >.

To sort by label.type I need to map the LabelType to a value that will sort .AutoLabel to precede .UserDefined:

const sortLabels = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  // 2. compare label names
]);

To sort by the .name I can ascend with a case-insensitive value for label.name:

const sortLabels = sortWith<Label>([
  ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
  ascend((label) => label.name.toLowerCase()),
]);

Ramda has a curried API. This means that by leaving out the second argument, sortLabels now has the TypeScript signature of:

type LabelSort = (labels: Label[]) => Label[]

Since we hinted the generic type on sortWith<Label>() TypeScript has also inferred that the functions we give to ascend receive a Label type as their single argument (see on TS Playground).

Screen capture of TS Playground tooltip showing Label as the inferred type.
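Data-last currying is the thing doing the work here. A minimal sketch of the idea in plain TypeScript (this is not Ramda’s implementation, just the concept):

```typescript
type Comparator<T> = (a: T, b: T) => number;

// Comparators first, data last: supplying only the comparators
// returns a function waiting for the list.
const sortWith = <T>(comparators: Comparator<T>[]) =>
  (list: T[]): T[] =>
    [...list].sort((a, b) => {
      for (const comparator of comparators) {
        const order = comparator(a, b);
        if (order !== 0) return order;
      }
      return 0;
    });

const byLength = sortWith<string>([(a, b) => a.length - b.length]);
// byLength(['three', 'a', 'to']) is ['a', 'to', 'three']
```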

Given Ramda’s curried interface, we can extract that sorting business logic into a reusable constant.

/**
 * Sort a list of Labels such that
 *  - AutoLabels appear before UserDefined
 *  - Labels are sorted by name case-insensitively
 */
export const sortLabelsByTypeAndName = sortWith<Label>(
  [
    ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
    ascend((label) => label.name.toLowerCase()),
  ]
);

Using this to replace the original code’s sorting we now have:

const filteredLabels =
  data.community.labels.filter((label) => {

    if (label.type === LabelType.AutoLabel &&
        (isGroupPage === true ?
          ['system_a', 'special'] :
          ['system_a']
        ).includes(label.name)
    ) {
      return false;
    }

    if (label.type === LabelType.UserDefined &&
          label.stats.timesUsed === 0) {
      return false;
    }
    return true;
  });

const sortedLabels = sortLabelsByTypeAndName(filteredLabels);

return sortedLabels.map((label) => <>{/* React UI */}</>);

Now let’s see what Ramda’s filter can do for us.

Declarative Filtering with filter

Ramda’s filter looks similar to Array.prototype.filter:

Filterable f => (a → Boolean) → f a → f a

Takes a predicate and a Filterable, and returns a new filterable of the same type containing the members of the given filterable which satisfy the given predicate. Filterable objects include plain objects or any object that has a filter method such as Array.

The first change will be conforming to this interface:

import { filter } from 'ramda';

const filteredLabels = filter<Label>((label) => {
  // boolean logic here
}, data.community.labels);

There are two if statements in our original filter code that both have early returns. This indicates there are two different conditions that we test for.

  • Remove Label if
    • .type is AutoLabel and
    • .name is in a list of predefined label names
  • Remove Label if
    • .type is UserDefined and
    • .stats.timesUsed is zero (or fewer)

To clear things up we can turn these into their own independent functions that capture the business logic they represent.

The AutoLabel scenario has one complication. The isGroupPage variable changes the behavior by changing the names the label is allowed to have.

In Lambda calculus this is called a free variable. We can solve this now by creating our own closure that accepts the string[] of names and returns the Label filter.

const isAutoLabelWithName = (names: string[]) =>
  (label: Label) =>
    label.type === LabelType.AutoLabel
    && names.includes(label.name);

Now isAutoLabelWithName can be used without needing to know anything about isGroupPage.

We can now use this with filter:

const filteredLabels = filter<Label>(
  isAutoLabelWithName(
     isGroupPage
       ? ['system_a', 'special']
       : ['system_a']
  ),
  data.community.labels
);

But there’s a problem here. In the original code, we wanted to remove the labels that evaluated to true. This is the opposite of that.

In set theory, this is called the complement. Ramda has a complement function for this exact purpose.

const filteredLabels = filter<Label>(
  complement(
    isAutoLabelWithName(
      isGroupPage
        ? ['system_a', 'special']
        : ['system_a']
    )
  ),
  data.community.labels
);
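complement itself is tiny: it wraps a predicate and flips the answer. A simplified sketch of what it does:

```typescript
// A simplified complement: same arguments, negated result.
const complement = <Args extends unknown[]>(
  predicate: (...args: Args) => boolean
) =>
  (...args: Args): boolean => !predicate(...args);

const isEven = (n: number) => n % 2 === 0;
const isOdd = complement(isEven);
// isOdd(3) is true, isOdd(4) is false
```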

The second condition is simpler given it uses no free variables.

const isUnusedUserDefinedLabel = (label: Label) =>
  label.type === LabelType.UserDefined
  && label.stats.timesUsed <= 0;

Similar to isAutoLabelWithName any Label that is true for isUnusedUserDefinedLabel should be removed from the list.

Since either being true should remove the Label from the collection, Ramda’s anyPass can combine the two conditions:

const filteredLabels = filter<Label>(
  complement(
    anyPass([
      isAutoLabelWithName(
        isGroupPage
          ? ['system_a', 'special']
          : ['system_a']
      ),
      isUnusedUserDefinedLabel
    ])
  ),
  data.community.labels
);
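anyPass is a logical OR over a list of predicates; conceptually it’s Array.prototype.some. A rough equivalent:

```typescript
// A simplified anyPass: true when any predicate returns true.
const anyPass = <T>(predicates: Array<(value: T) => boolean>) =>
  (value: T): boolean => predicates.some((predicate) => predicate(value));

const isNegative = (n: number) => n < 0;
const isHuge = (n: number) => n > 1000;
const isOutOfRange = anyPass([isNegative, isHuge]);
// isOutOfRange(-1) is true, isOutOfRange(500) is false
```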

Addressing the free variable this can be extracted into its own globally declared function that describes its purpose:

const filterLabelsForMenu = (isGroupPage: boolean) =>
  filter<Label>(
    complement(
      anyPass([
        isAutoLabelWithName(
          isGroupPage
            ? ['system_a', 'special']
            : ['system_a']
        ),
        isUnusedUserDefinedLabel
      ])
    )
  );

The <LabelMenu> component cleans up to:

import { anyPass, ascend, complement, filter, sortWith } from 'ramda';
import { Label, LabelType } from '../generated/graphql';

type Props = { isGroupPage: boolean, labels: Label[] };

const isAutoLabelWithName = (names: string[]) =>
  (label: Label) =>
    label.type === LabelType.AutoLabel
    && names.includes(label.name);

const isUnusedUserDefinedLabel = (label: Label) =>
  label.type === LabelType.UserDefined
  && label.stats.timesUsed <= 0;

const filterLabelsForMenu = (isGroupPage: boolean): ((labels: Label[]) => Label[]) =>
  filter<Label>(
    complement(
      anyPass([
        isAutoLabelWithName(
          isGroupPage
            ? ['system_a', 'special']
            : ['system_a']
        ),
        isUnusedUserDefinedLabel
      ])
    )
  );

export const sortLabelsByTypeAndName = sortWith<Label>(
  [
    ascend((label) => label.type === LabelType.AutoLabel ? -1 : 1),
    ascend((label) => label.name.toLowerCase()),
  ]
);

const LabelMenu = ({ isGroupPage, labels }: Props): JSX.Element => {
  const filterForGroup = filterLabelsForMenu(isGroupPage);
  const filteredLabels = filterForGroup(labels);
  const sortedLabels = sortLabelsByTypeAndName(filteredLabels);


  return (
    <>{
      sortedLabels.map((label) => <>{/* React UI */}</>)
    }</>
  );
};

The example above is very close to what we ended up landing.

However, since I like to get a little too ridiculous with functional programming patterns I decided to take it a little further in my own time.

Going too far with pipe

The <LabelMenu /> component has one more step that can be converted over to Ramda using map.

Ramda’s map is similar to Array.prototype.map but using Ramda’s curried, data-as-final-argument style of API.

const labelOptions = map(
  (label: Label) => <>{/* React UI */}</>
);

return <>{labelOptions(sortedLabels)}</>;

labelOptions is now a function that takes a list of labels (Label[]) and returns a list of React nodes (JSX.Element[]).

The <LabelMenu /> component now has a very interesting implementation.

  • filterLabelsForMenu returns a function of type (labels: Label[]) => Label[]
  • sortLabelsByTypeAndName is a function of type (labels: Label[]) => Label[].
  • labelOptions is a function of type (labels: Label[]) => JSX.Element[].

The output of each of those functions is given as the input of the next.

Taking away all of the variable assignments this looks like:

const LabelMenu = ({ isGroupPage, labels }: Props): JSX.Element => {

  const labelOptions = map(
    (label) => <>{/* React UI */}</>,
    sortLabelsByTypeAndName(
      filterLabelsForMenu(isGroupPage)(
        labels
      )
    )
  );

  return <>{labelOptions}</>;
};

To understand how labelOptions becomes JSX.Element[] we are required to read from the innermost parentheses to the outermost.

  • filterLabelsForMenu is applied with props.isGroupPage
  • the returned filter function is applied with props.labels
  • the returned filtered labels are applied to sortLabelsByTypeAndName
  • the returned sorted labels are applied to map(<></>)
  • the result is JSX.Element[]

We can take advantage of Ramda’s pipe to express these operations in list form.

Performs left-to-right function composition. The first argument may have any arity; the remaining arguments must be unary.

We’re in luck, all of our functions are unary. We can line them up:

const LabelMenu = ({ isGroupPage, labels }: Props) => {
  const createLabelOptions = pipe(
    filterLabelsForMenu(isGroupPage),
    sortLabelsByTypeAndName,
    map((label) => <li key={label.id}>{label.name}</li>)
  );

  return <>{createLabelOptions(labels)}</>;
};

The application of pipe assigned to createLabelOptions produces a function with the type signature:

const createLabelOptions: (labels: Label[]) => JSX.Element[]
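Under the hood, a unary-only pipe is just a reduce over functions. A loosely typed sketch (Ramda’s real pipe has full overloads for each arity):

```typescript
// Simplified pipe: feed each function's output into the next, left to right.
const pipe = (...fns: Array<(x: any) => any>) =>
  (input: any) => fns.reduce((value, fn) => fn(value), input);

const shout = pipe(
  (s: string) => s.toUpperCase(),
  (s: string) => `${s}!`
);
// shout('hello') is 'HELLO!'
```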

But wait, there’s more!

React’s functional components are also plain functions. Ramda can use those too!

The type signature of <LabelMenu /> is:

type LabelMenu = (props: { isGroupPage: boolean, labels: Label[] }) => JSX.Element;

We can update our pipe to wrap the list in a single element as its final operation:

export const LabelMenu = ({isGroupPage, labels}: Props): JSX.Element => {
  const createLabelOptions = pipe(
    filterLabelsForMenu(isGroupPage),
    sortLabelsByTypeAndName,
    map(label =>
      <li key={label.id}>
        {label.name}
      </li>
    ),
    (elements): JSX.Element =>
       <ul>{elements}</ul>
  );

  return createLabelOptions(labels);
}

The type signature of our pipe application (createLabelOptions) is now:

const createLabelOptions: (x: Label[]) => JSX.Element

Wait a second, that looks very close to a React.VFC compatible signature.

Our pipe expects input of a single argument of Label[]. But what if we changed it to accept an instance of Props?

export const LabelMenu = (props: Props): JSX.Element => {
  const createLabelOptions = pipe(
    (props: Props) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    map((label: Label) =>
      <li key={label.id}>{label.name}</li>
    ),
    
    (elements): JSX.Element =>
      <ul>{elements}</ul>
  );

  return createLabelOptions(props);
}

Now the type signature of createLabelOptions is:

const createLabelOptions: (x: Props) => JSX.Element

So if our application of Ramda’s pipe produces the exact signature of a React.FunctionComponent, then it stands to reason we can get rid of the function body completely:

type Props = { isGroupPage: boolean, labels: Label[] };

export const LabelMenu: React.VFC<Props> = pipe(

    (props: Props) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    map(label => <li key={label.id}>{label.name}</li>),

    (elements) => <ul>{elements}</ul>
  );

The ergonomics of code like this is debatable. I personally like it for my own projects. I find the more I think and write in terms of data pipelines, the clearer the code becomes.

Here’s an interesting problem. What happens if we need to use a React hook in a component like this? We’ll need a valid place to call something like React.useState(), which means we’ll need to create a closure for the component implementation.

This makes sense though! A functionally pure component like this is not able to have side-effects. React hooks are side-effects.

Designing at the Type Level

The <LabelMenu /> component has a type signature of

type Props = { isGroupPage: boolean, labels: Label[] };
type LabelMenu = React.VFC<Props>

It renders a list of the labels it is given while also sorting and filtering them due to some business logic.

We extracted much of this business logic into pure functions that encoded our business rules into plain functions that operated on our types.

When I use <LabelMenu /> I know that I must give it isGroupPage and labels props. The labels property seems pretty self-explanatory, but isGroupPage doesn’t really make it obvious what it does.

I could go into the <LabelMenu /> code and discover that isGroupPage changes which LabelType.AutoLabel labels are displayed.

But what if I wanted another <LabelMenu /> that looked exactly the same but behaved slightly differently?

I could add some more props to <LabelMenu /> that changed how it internally filtered and sorted the labels I give it, but adding more property flags to its interface feels like the wrong kind of complexity.

How about disconnecting the labels from the filtering and sorting completely?

Start by Simplifying

I’ll first simplify the <LabelMenu /> implementation:

type Props = { labels: Label[] };

const LabelMenu = ({ labels }: Props) => (
  <ul>
    {labels.map(
      (label) => <li key={label.id}>{label.name}</li>
    )}
  </ul>
);

This implementation should contain everything about how these elements should look and render every label it gets.

But what about our filtering and sorting logic?

We had a component with this type signature:

type Props = { isGroupPage: boolean, labels: Label[] };
type LabelMenu = React.VFC<Props>;

Can we express the original component’s interface without changing <LabelMenu />‘s implementation?

If we can write a function that maps from one set of props to the other, then we should also be able to write a function that maps from one React component to the other.

First write the function that uses our original Props interface as its input, and then returns the new Props interface as its return value.

type LabelMenuProps = { labels: Label[] }; 

type FilterPageLabelMenuProps = {
  isGroupPage: boolean,
  labels: Label[]
};

const propertiesForFilterPage = pipe(
    (props: FilterPageLabelMenuProps) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    (labels) => ({ labels })
  );

There are our Ramda implementations again. We took out all of the React bits. It’s the same business logic but without the React element rendering. The only difference is that instead of mapping the labels into JSX.Elements, the labels are returned in the form of LabelMenuProps.

We’ve encoded our business logic into a function that maps from FilterPageLabelMenuProps to LabelMenuProps.

That means the output of propertiesForFilterPage can be used as the input to <LabelMenu />, which is itself a function that returns a JSX.Element.

Piping one function’s output into a compatible function’s input, that sounds familiar, doesn’t it?

export const FilterPageMenuLabel: React.VFC<FilterPageLabelMenuProps> =
  pipe(
    (props: FilterPageLabelMenuProps) =>
      filterLabelsForMenu(props.isGroupPage)(props.labels),

    sortLabelsByTypeAndName,

    (labels) => ({ labels }),

    LabelMenu
  );

We’ve leveraged our existing view specific code, but changed its behavior at the Prop level.

import { FilterPageMenuLabel, LabelMenu } from './components/LabelMenu';

const Foo = () => {
  const { data } = useQuery(LabelsQuery);

  return (
    <FilterPageMenuLabel
      isGroupPage={isGroupPage}
      labels={data?.labels ?? []}
    />
  );
}

const Bar = () => {
  const { data } = useQuery(LabelsQuery);

  return (
    <LabelMenu labels={data?.labels ?? []} />
  );
}

When hovering over the implementation of <FilterPageMenuLabel> the tooltip shows exactly how it’s implemented.


Strongly Typed WP-API

The first in a series of posts exploring WP-API with statically typed PHP and Functional Programming patterns.

The Context

To expose a resource as an endpoint via WordPress’ WP-API interface one must use register_rest_route.

/**
 * Registers a REST API route.
 *
 * Note: Do not use before the {@see 'rest_api_init'} hook.
 *
 * @since 4.4.0
 * @since 5.1.0 Added a _doing_it_wrong() notice when not called on or after the rest_api_init hook.
 *
 * @param string $namespace The first URL segment after core prefix. Should be unique to your package/plugin.
 * @param string $route     The base URL for route you are adding.
 * @param array  $args      Optional. Either an array of options for the endpoint, or an array of arrays for
 *                          multiple methods. Default empty array.
 * @param bool   $override  Optional. If the route already exists, should we override it? True overrides,
 *                          false merges (with newer overriding if duplicate keys exist). Default false.
 * @return bool True on success, false on error.
 */
function register_rest_route( $namespace, $route, $args = array(), $override = false ) {

The documentation here is incredibly opaque so it’s probably a good idea to have the handbook page open until the API is internalized in your brain.

The $namespace and $route arguments are somewhat clear, however in typical WordPress PHP fashion the bulk of the magic is provided through an opaquely documented @param array $args.

The bare minimum is the methods and callback keys, and for our purposes that will be all that we need. WP_REST_Server provides some handy constants (READABLE, CREATABLE, DELETABLE, EDITABLE) for the methods key, so that leaves callback.

What is callback? In PHP terms it’s a callable. Many things in PHP can be a callable. The most commonly used callable for WordPress tends to be a string value that is the name of a function:

function my_callable() {
}
register_rest_route( 'some-namespace', '/some/path', [ 'callback' => 'my_callable' ] );

This would call my_callable, and as is would probably return a 200 response with an empty body.

What would me more useful than just callable would be a callable that can define its argument types and return types.

Types and PHP

The ability to verify the correctness of software is an obvious benefit of strongly typed languages.

However, an additional benefit is how the types themselves become the natural documentation to the code.

PHP has supported type hinting for a while:

function totes_not_buggy( WP_REST_Request $request ): WP_REST_Response {
}

With type hints the expectations for totes_not_buggy() are much clearer.

Adding these type hints means that at runtime PHP will enforce that only instances of WP_REST_Request can be passed to totes_not_buggy(), and that totes_not_buggy() can only return instances of WP_REST_Response.

This sounds good, except that it is enforced at runtime. For true type safety we want something better: static type analysis. Types should be enforced without running the code.

For this exercise, Psalm will provide static type analysis via PHPDoc annotations.

/**
 * Responds to a REST request with text/plain "You did it!"
 *
 * @param WP_REST_Request $request
 * @return WP_REST_Response
 */
function totes_not_buggy($request) {
   return new WP_REST_Response( 'You did it!', 200, [ 'content-type' => 'text/plain' ] );
}

OK, this all sounds nice in theory. How do we check it with Psalm?

To the terminal!

mkdir -p ~/code/wp-api-fun
cd ~/code/wp-api-fun
composer init

Accept all the defaults and say “no” to the dependencies:

Package name (<vendor>/<name>) [beaucollins/wp-api-fun]: 
Description []: 
Author [Beau Collins <beau@collins.pub>, n to skip]: 
Minimum Stability []: 
Package Type (e.g. library, project, metapackage, composer-plugin) []: 
License []: 
Define your dependencies.
Would you like to define your dependencies (require) interactively [yes]? no
Would you like to define your dev dependencies (require-dev) interactively [yes]? no
{
    "name": "beaucollins/wp-api-fun",
    "authors": [
        {
            "name": "Beau Collins",
            "email": "beau@collins.pub"
        }
    ],
    "require": {}
}
Do you confirm generation [yes]? 

Now install two dependencies:

  • vimeo/psalm to run type checking
  • php-stubs/wordpress-stubs to type check against WordPress APIs
composer require --dev vimeo/psalm php-stubs/wordpress-stubs

Assuming success try to run Psalm:

./vendor/bin/psalm
Could not locate a config XML file in path /Users/beau/code/wp-api-fun/. Have you run 'psalm --init' ?

To keep things simple with composer, define a single PHP file to be loaded for our project at the path ./src/fun.php:

mkdir src
touch src/fun.php

Now inform composer.json where this file is via the "autoload" key:

{
    "name": "beaucollins/wp-api-fun",
    "authors": [
        {
            "name": "Beau Collins",
            "email": "beau@collins.pub"
        }
    ],
    "require": {},
    "require-dev": {
        "vimeo/psalm": "^3.9",
        "php-stubs/wordpress-stubs": "^5.3"
    },
    "autoload": {
        "files": ["src/fun.php"]
    }
}

Generate Psalm’s config file and run it to verify our empty PHP file has zero errors:

./vendor/bin/psalm --init
Calculating best config level based on project files
Scanning files...
Analyzing files...
░
Detected level 1 as a suitable initial default
Config file created successfully. Please re-run psalm.
./vendor/bin/psalm
Scanning files...
Analyzing files...
░
------------------------------
No errors found!
------------------------------
Checks took 0.12 seconds and used 37.515MB of memory
Psalm was unable to infer types in the codebase

For a quick gut-check define totes_not_buggy() in ./src/fun.php:

<?php
// in ./src/fun.php
/**
 * Responds to a REST request with text/plain "You did it!"
 *
 * @param WP_REST_Request $request
 * @return WP_REST_Response
 */
function totes_not_buggy($request) {
   return new WP_REST_Response( 'You did it!', 200, [ 'content-type' => 'text/plain' ] );
}

Now analyze with Psalm:

./vendor/bin/psalm
Scanning files...
Analyzing files...
E
ERROR: UndefinedDocblockClass - src/fun.php:6:11 - Docblock-defined class or interface WP_REST_Request does not exist
 * @param WP_REST_Request $request
ERROR: UndefinedDocblockClass - src/fun.php:7:12 - Docblock-defined class or interface WP_REST_Response does not exist
 * @return WP_REST_Response
ERROR: MixedInferredReturnType - src/fun.php:7:12 - Could not verify return type 'WP_REST_Response' for totes_not_buggy
 * @return WP_REST_Response
------------------------------
3 errors found
------------------------------
Checks took 0.15 seconds and used 40.758MB of memory
Psalm was unable to infer types in the codebase

Psalm doesn’t know about WordPress APIs yet. Time to teach it where those are by adding a stubs entry inside the root element of ./psalm.xml:

    <stubs>
        <file name="vendor/php-stubs/wordpress-stubs/wordpress-stubs.php" />
    </stubs>
</psalm>
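For context, the full psalm.xml ends up looking roughly like this. This is a sketch: the attributes and projectFiles section that --init generates vary by Psalm version, so treat everything outside the stubs element as illustrative:

```xml
<?xml version="1.0"?>
<psalm
    errorLevel="1"
    resolveFromConfigFile="true"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="https://getpsalm.org/schema/config"
    xsi:schemaLocation="https://getpsalm.org/schema/config vendor/vimeo/psalm/config.xsd"
>
    <projectFiles>
        <directory name="src" />
        <ignoreFiles>
            <directory name="vendor" />
        </ignoreFiles>
    </projectFiles>
    <stubs>
        <file name="vendor/php-stubs/wordpress-stubs/wordpress-stubs.php" />
    </stubs>
</psalm>
```

The only part added by hand is the stubs element; everything else came from psalm --init.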

One more run of Psalm:

./vendor/bin/psalm     
Scanning files...
Analyzing files...
░
------------------------------
No errors found!
------------------------------
Checks took 5.10 seconds and used 356.681MB of memory
Psalm was able to infer types for 100% of the codebase

No errors! It knows about WP_REST_Request and WP_REST_Response now.

What happens if they’re used incorrectly, like passing a string for the status code to the WP_REST_Response constructor:

ERROR: InvalidScalarArgument - src/fun.php:10:48 - Argument 2 of WP_REST_Response::__construct expects int, string(200) provided
   return new WP_REST_Response( 'You did it!', '200', ['content-type' => 'text/plain'] );

Nice! Before running the PHP source, Psalm can tell us whether it is correct. IDEs with Psalm integrations show the errors in-place:

Visual Studio Code with the Psalm extension enabled, showing the InvalidScalarArgument error in a tool tip.

Now to answer the question “which type of callable is the register_rest_route() callback option?”

First-Class Functions

With PHP’s type hinting, the best type it can offer for the callback parameter is callable.

This gives no insight into which arguments the callable requires nor what it returns.

With Psalm integrated into the project there are more tools available to better describe this callable type.

callable(Type1, OptionalType2=, SpreadType3...):ReturnType

Using this syntax, the callback option of $args can be described as:

callable(WP_REST_Request):(WP_REST_Response|WP_Error|JSONSerializable)

This line defines a callable that accepts a WP_REST_Request and can return one of WP_REST_Response, WP_Error or JSONSerializable.

Once returned, WP_REST_Server will do what is required to correctly deliver an HTTP response. Anything that conforms to this can be a callback for WP-API. The WP-API world is now more clearly defined:

callable(WP_REST_Request):(WP_REST_Response|WP_Error|JSONSerializable)

To illustrate this type at work define a function that accepts a callable that will be used with register_rest_route().

Following WordPress conventions, each function name will be prefixed with totes_ as an ad-hoc namespace of sorts (yes, this is completely ignoring PHP namespaces).

/**
 * @param string $path
 * @param (callable(WP_REST_Request):(WP_REST_Response|WP_Error|JSONSerializable)) $handler
 * @return void
 */
function totes_register_api_endpoint( $path, $handler ) {
   register_rest_route( 'totes', $path, [
      'callback' => $handler
   ] );
}
add_action( 'rest_api_init', function() {
   totes_register_api_endpoint('not-buggy', 'totes_not_buggy');
} );

A quick check with Psalm shows no errors:

------------------------------
No errors found!
------------------------------

What happens if the developer has a typo in the string name of the callback totes_not_buggy? Perhaps they accidentally typed totes_not_bugy?

ERROR: UndefinedFunction - src/fun.php:24:45 - Function totes_not_bugy does not exist
   totes_register_api_endpoint('not-buggy', 'totes_not_bugy');

Fantastic!

What happens if the totes_not_buggy function does not conform to the callable(WP_REST_Request):(...) type? Perhaps it returns an int instead:

/**
 * Responds to a REST request with text/plain "You did it!"
 *
 * @param WP_REST_Request $request
 * @return int
 */
function totes_not_buggy( $request ) {
   return new WP_REST_Response("not buggy", 200, ['content-type' => 'text/plain']);
}
ERROR: InvalidArgument - src/fun.php:24:45 - Argument 2 of totes_register_api_endpoint expects callable(WP_REST_Request):(JSONSerializable|WP_Error|WP_REST_Response), string(totes_not_buggy) provided
   totes_register_api_endpoint('not-buggy', 'totes_not_buggy');

The callable string 'totes_not_buggy' no longer conforms to the API. Psalm is catching these bugs before anything is even executed.

But Does it Work?

Psalm says this code is correct, but does this code work? Well, there’s only one way to find out.

First, turn ./src/fun.php into a WordPress plugin with the minimal plugin header comment:

<?php
/**
 * Plugin Name: Totes
 */

And boot WordPress via wp-env:

npm install -g @wordpress/env
echo '{"plugins": ["./src/fun.php"]}' > .wp-env.json
wp-env start
curl http://localhost:8889/?rest_route=/ | jq '.routes|keys' | grep totes

There are the endpoints:

curl --silent http://localhost:8889/\?rest_route\=/ | \
  jq '.routes|keys' | \
  grep totes
  "/totes",
  "/totes/not-buggy",
curl http://localhost:8889/\?rest_route\=/totes/not-buggy
"not buggy"

Well it works, but there’s a small problem. It looks like WordPress decided to json_encode() the string literal not buggy so it arrived in quotes as "not buggy" (not very not buggy).

Changing the return of totes_not_buggy to something more JSON compatible works as expected:

-    return new WP_REST_Response("not buggy", 200, ['content-type' => 'text/plain']);
+    return new WP_REST_Response( [ 'status' => 'not-buggy' ] );
curl http://localhost:8889/\?rest_route\=/totes/not-buggy          
{"status":"not-buggy"}

Automate It

Reproducing the steps to run Psalm on this codebase is trivial.

With a concise GitHub Action definition this project can get static analysis on every push. Throw in an annotation service and pull request changes are marked with Psalm warnings and errors.

Screenshot of an annotated Pull Request on GitHub.

The GitHub workflow definition covers how to:

  1. Install composer.
  2. Install composer dependencies (with caching).
  3. Run composer check.
  4. Report the Psalm errors.
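The steps above can be sketched as a workflow file. This is a hypothetical sketch, not the project's actual definition: the file path, action versions, and the composer check script name are assumptions:

```yaml
# Hypothetical .github/workflows/check.yml
name: Check
on: push
jobs:
  psalm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Cache composer dependencies
        uses: actions/cache@v1
        with:
          path: vendor
          key: composer-${{ hashFiles('composer.lock') }}
      - name: Install dependencies
        run: composer install --no-interaction --prefer-dist
      - name: Run static analysis
        run: composer check
```

The composer check step assumes a "check" script in composer.json that runs ./vendor/bin/psalm.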

The Fun Part

This sets up the foundation for a highly productive development environment:

  • Psalm static analysis provides instant feedback on correctness of code.
  • wp-env allows for fast verification of running code.
  • GitHub Actions automates type checking as an ongoing concern.

Coming up: exploring functional programming patterns for WP-API with the help of Psalm.

Categories
Programming

Learning to Like Exceptions

If you had told me two years ago that I would be writing Java for my livelihood, I would have punched you.

Transitioning from more dynamically typed environments to Java felt like being bossed around by javac, and I hated it. The most tedious example of this was exception handling. A few projects and libraries later I’ve learned to love them.

Your Methods Lie

Look at any Android project’s source code and you’re going to see it riddled with null checks like this:

Object thing = mWidget.getThing();

if (thing != null) {
  thing.doSomething();
}

The problem is mWidget.getThing() lied to us. It says it returns Object but in fact it can return nothing, or in Java terms: null1.

Usually the null check exists because at some point calling thing.doSomething() caused a NullPointerException and some poor user experienced a crashing app.

In Java you can’t completely avoid null checks but you can do things to avoid exacerbating the problem.

Don’t be Afraid to Throw

In my efforts to make things easier I find that many times I create a utility method that wraps another “exception happy” method (like working with I/O).

public class UploadUtil {

  public static Uri uploadImage(Bitmap image) {
    try {
        String filename = BitmapUtil.generateUniqueJpgFilename("upload");
        File file = BitmapUtil.saveBitmapAsJpg(image, Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_PICTURES), filename);

        return Uri.fromFile(file);
    } catch (IOException exception) {
      return null;
    }
  }

}

Pretty simple to follow: provide a Bitmap get back a Uri and do something with the Uri:

Uri uri = UploadUtil.uploadImage(myBitmap);

Request request = new StreamFileUploadRequest(uri);

But now whenever I use the UploadUtil.uploadImage(Bitmap) method I have to remember to do a null check every time.

If another developer (e.g. my future self) were to use this method they probably won’t know to do a null check unless:

  1. They have access to the source code and read it
  2. The null case is documented
  3. They read the documentation

I thought I was saving myself some effort by handling the exception in one place, but it’s now even worse because instead of a compile time error this has a high chance of not being discovered until a NullPointerException crash.

Nothing is Wrong

The method signature of UploadUtil.uploadImage(Bitmap) says it returns a Uri, so let’s make sure we’re not lying anymore by also returning nothing:

public class UploadUtil {

  public static Uri uploadImage(Bitmap image)
  throws IOException {
    String filename = BitmapUtil.generateUniqueJpgFilename("upload");
    File file = BitmapUtil.saveBitmapAsJpg(image, Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES), filename);

    return Uri.fromFile(file);
  }

}

That’s easy, just throw the exception. Now the user of uploadImage(Bitmap) will have an explicit code path for handling this:

try {
  Uri uri = UploadUtil.uploadImage(myBitmap);
  Request request = new StreamFileUploadRequest(uri);
} catch (IOException exception) {
  Log.e(TAG, "Failed to upload", exception);
  notifyUser(exception);
}

Another benefit is that the compiler will now point out all the places where the failure case needs to be handled, the same places that previously needed a null check.

Be Nice to Others

I used to hate using methods that threw exceptions because of the try/catch dance but I have learned that a thrown Exception is another developer looking out for me.

I have some general guidelines for myself now concerning exceptions:

  1. Instead of returning null throw an Exception if it makes sense.
  2. Wait to catch exceptions at the highest level possible. Ideally when the software is directly interacting with the user.

Exceptions can feel heavy handed and don’t make sense in every case so I’m not dogmatic about these guidelines. If you’re doing Android development using Android Studio then take a look at Nullable annotations.
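Related to those annotations, on Java 8 and later (a hypothetical sketch, not something the original post uses, and not available on early Android) absence can be encoded in the return type itself with Optional:

```java
import java.net.URI;
import java.util.Optional;

public class UploadExample {
    // byte[] stands in for Android's Bitmap so this example is self-contained.
    // Instead of returning null, the return type itself says "maybe nothing".
    static Optional<URI> uploadImage(byte[] image) {
        if (image == null || image.length == 0) {
            // The failure case is now visible in the method signature.
            return Optional.empty();
        }
        // A stand-in for the real save-to-disk logic.
        return Optional.of(URI.create("file:///tmp/upload.jpg"));
    }

    public static void main(String[] args) {
        // Callers are forced to deal with the empty case explicitly:
        String result = uploadImage(new byte[] {1, 2, 3})
                .map(URI::toString)
                .orElse("no upload");
        System.out.println(result);
    }
}
```

Like the Swift optionals mentioned in the footnote, this makes the nothing case part of the type instead of a surprise at runtime.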


  1. One exciting thing about Swift is that it solves this problem with optionals.
Categories
Programming Tools

Android Debugging with JDB and TextMate

For Android development I do my best to avoid Eclipse by using TextMate and the command line. The biggest missing piece in this setup was an easy way to get a debugger up and running. A quick trip to Google landed me on Command Line Android Development: Debugging, which outlines how to get jdb attached to a running Android app instance.

I quickly grew tired of typing all of the breakpoints out and invoking a handful of commands, so I hacked together a TextMate bundle I named Android Debug to automate the process.

I have never found a use for TextMate’s bookmarking feature so it seemed like a great place to identify breakpoints. When you invoke the debugging command from TextMate it will find all of the bookmarked lines in the *.java files in your src folder and dump them into a .jdbrc file.
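The generated .jdbrc is just a list of jdb commands, one stop per bookmarked line. The class names below are hypothetical; the real file depends on which lines you bookmarked:

```
stop at org.example.app.MainActivity:42
stop at org.example.app.net.Uploader:118
```

When jdb starts it reads these commands and sets the breakpoints before the app resumes.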


To figure out which app to launch I parse the AndroidManifest.xml file for the package id and main Activity then launch the app in Waiting For Debugger mode.


Once the app is up and waiting a jdb instance is launched and reads the breakpoints in from the .jdbrc file. After getting familiar with all the jdb commands I feel pretty comfortable debugging this way.


Now that I finally have a quick way to debug I can go easy on adb logcat. I’m going to try to automate more parts of my Android development workflow in this bundle. There’s probably some good stuff to steal from the abandoned Android TextMate Bundle.

The only remaining pain point for me in this whole setup is ant. I’d really love it if someone could show me how to get ant debug to compile faster. Currently changing a single *.java file requires 30 seconds to get an apk compiled. It looks like dx is taking a long time to merge all of the pre-dexed libraries the WordPress for Android project is using.

Categories
Programming

Are you responsive?

My work has had me focused on making websites more responsive. Part of taking a non-responsive design and back-porting some media queries into the CSS is identifying where the breakpoints for a particular design exist.

To aid in identifying where these breakpoints are I built a page with an iframe in it that would tell me how wide it is at any given time. More features were added and eventually we had a useful little tool. So here it is for your pleasure, the elegantly named:

HTTP://ICANHAZ.RESPONSIV.ES/

One particularly useful feature is the bookmarklet. Drag that thing to your browser’s bookmark bar and then click it when you want to load up whatever page you happen to be looking at.

If you’d like to check out the source code, it’s on GitHub. It’s mostly client side Javascript but with a little Node.js and CoffeeScript to help determine the X-FRAME-OPTIONS header for the site you are loading.

Categories
Programming Software

Source Control

If a piece of software claims to be able to manage my source code, I want it to manage all of my source code. Let me describe a tool that I use daily for my job whose purpose is to manage source code.

The Tool

This tool is quite simple to use. It’s been around for a while now. It has a straightforward interface and an easy-to-understand model of use. I check out code. I update code. I change code. I update code. I commit code. Pretty dead easy. Maybe a conflict happens, but it’s not usually a big deal.

Collaborating

Sometimes I need to share some of these changes with a collaborator so I use The Tool to make a patch file. I then email/upload/transfer it in some way to my collaborator who, if her code is in the same working state and she knows how to use a tool to apply my patch file to her working copy, can then apply the changes represented in my patch file. Meanwhile, if I make any changes to my working copy, my patch may no longer even apply to my own working copy. At this point it could be useful to note what revision my working copy was at when I made the patch, you know, just for sanity’s sake.

Let me reiterate what just happened there. If I want to share some changes that I have made to my source code, I have to use tools other than The Tool (the one responsible for source control). Does that strike anyone else as a little odd? I need to use a tool other than my source control tool to manage my source code.

The patch file that I make has no context attached to it. It knows not which repository it came from nor the state of the repository when it was created. Very quickly the repository is going to change because there are fifty (maybe even more) people committing to it all the time.

Experimenting

Sometimes I get an idea — or even less — an inkling of an idea. I want to test it out in my own little sandbox and experiment with it and see if it can go anywhere. This idea consists mostly of new files but it also requires me to modify some existing ones. Time has passed and the idea is somewhat working but I need to get to something else more pressing. Ok, so what do I do with this experiment? I certainly don’t want to lose it because even though it’s not fully baked, there is some value in it.

Guess what? You’re screwed. You can create a patch and save it, but inevitably the files you modified will be changed. Maybe you make a branch1, but no, branches are for important things not your little experiments. Imagine how messy the branches folder would be if everyone used it to dump their little experiments.

Identifying the Problem

I started explaining my woes to one of my coworkers. I try my best to be diplomatic because I am not one to get involved with flame wars. The problem was identified and alas the problem is not The Tool. Apparently it’s my workflow. So perhaps I’m The Tool.

Did I mention how I love Git?

  1. Yes, the “b” word. Please excuse my language.
Categories
Programming

Explore the WordPress.com REST API

WordPress.com has unveiled a new REST API and I wrote a tool to help debug and explore it.

In fact, the documentation for the REST API is built by the API itself! With this information we were able to build a console to help debug and explore the various resources that are now available through the new API. So let me introduce you to the new REST console for WordPress.com.

Categories
Programming

I write code

I have never labeled myself a programmer. As one who has learned the craft via the omnipotent Google search box and the sharing of open source code, I have always felt that I have yet to venture through the initiation rite that I am told consists of reading The Dragon Book and K&R, and building a compiler. Or maybe it’s because I prefer the dynamic, loosely-typed, “toy”, scripting languages to the ones real programmers use.

Regardless of how I am identified I do have a ferocious appetite to learn new things, try new tools, and challenge anything that becomes a little too precious. So in order to prevent myself from stagnating I have intentionally not self-identified what it is I do for a living other than “write code”. I write a fair amount of PHP but I am not a PHP Developer1. I have recently spent most of my time writing Javascript in the DOM but I do not identify as a Javascript Developer2. I spent a solid 2-3 years writing Ruby for eight hours a day learning how to “meta-program” as well as craft a gem and do “test driven development” but I would not consider myself a Ruby Developer3.

All of these different languages brought with them different ways of doing things and, more importantly, different people and cultures for me to learn from.

To put it as plainly as possible: I enjoy solving problems using computers. My solutions tend to involve a web browser and a web server. When this no longer interests me or — more depressingly — I can no longer maintain the skills necessary to make a living doing it, I will stop.

Until then I write code.

  1. Sometimes I want to kick PHP right between the “H”
  2. Sorry, I mean “Javascript Ninja”
  3. Sorry, I mean “Ruby Rockstar”