Hop is a tool for automating decisions in production software stacks. These include things like routing, assignment, and pricing. Hop lets you encode your business rules and objectives in Go. You then build them into simple, atomic binaries that read and write JSON data, and ship them to just about any environment, from an edge device to AWS Lambda.

Hop handles the search part of finding solutions to business problems. Under the hood it uses decision diagram techniques. This post provides a basic introduction to these mechanics. It is not exhaustive; it aims to impart a working knowledge of some of the concepts underlying Hop's algorithms. For the theoretical foundations of optimization using decision diagrams, see the excellent book on the subject.

Hop provides various solvers for maximizing or minimizing an objective and satisfying a set of constraints. All of these require the modeler to provide a root state.

Think of the root as the starting point for search. It encodes the state of a system right now. In a delivery network, for example, the root state might include the current location and backlog for every driver, where unassigned orders go, and recent observations of travel speeds.

A Hop state must implement two methods: `Feasible` and `Next`.

```
package model

// State is a problem state in a decision diagram.
type State interface {
	Feasible() bool // can this state be implemented in the real world?
	Next() []State  // all states reachable from this one
}
```

`Feasible` should return true if a state is actionable, meaning it can be implemented in the real world. Most decision models are feasible once they have considered and assigned some value to every choice they need to make.

`Next` should return every state that can be reached from a given state. Think of it as a transition function that projects the state of a system once a choice is made.

If we are maximizing or minimizing some objective, our model also needs a `Value` method to tell Hop the value of each state.

```
// Valuer reports the objective value of a state.
type Valuer interface {
	Value() int
}
```

An example should make this clear.

Consider the 0-1 knapsack problem. We have a set of items 1 through n. Each item has a value and a weight. We can carry a total weight of 50. We would like to carry the maximum possible value within our weight limit.

Let's say our input data looks like this.

Item | Value | Weight |
---|---|---|
1 | 200 | 25 |
2 | 150 | 10 |
3 | 225 | 20 |

Let's figure out how to model it as a decision diagram.

To get started, we need a root state. We aren't carrying anything to begin with, so our root state `R` is an empty set of items with a weight and value of 0. We also call this the root "layer" and say it has a depth of 0. It is infeasible because we haven't made decisions about any items yet.

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R` | 0 | - | 0 | 0 | No |

Let's consider packing each item in order, 1 through n. For each item, we have two options: leave it out or pack it. States `R0` and `R1` correspond to those two options for item 1. Together they form layer 1 of our diagram. Note that we have more items to decide on, so we continue to label our states as infeasible.

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R0` | 1 | - | 0 | 0 | No |
`R1` | 1 | 1 | 200 | 25 | No |

Now we decide what to do about item 2 in the same manner. Isn't it nice having computers to do this sort of work for us?

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R00` | 2 | - | 0 | 0 | No |
`R01` | 2 | 2 | 150 | 10 | No |
`R10` | 2 | 1 | 200 | 25 | No |
`R11` | 2 | 1,2 | 350 | 35 | No |

Deciding whether or not to include item 3 generates a layer with 8 states in it. Each layer has twice the number of states (the "width") of the layer before it. This is called "expanding" the next layer. If you're thinking that our layers are going to get hopelessly big with even a modest number of items, you're right!

State | Depth | Items | Value | Weight | Feasible |
---|---|---|---|---|---|
`R000` | 3 | - | 0 | 0 | Yes |
`R001` | 3 | 3 | 225 | 20 | Yes |
`R010` | 3 | 2 | 150 | 10 | Yes |
`R011` | 3 | 2,3 | 375 | 30 | Yes |
`R100` | 3 | 1 | 200 | 25 | Yes |
`R101` | 3 | 1,3 | 425 | 45 | Yes |
`R110` | 3 | 1,2 | 350 | 35 | Yes |
`R111` | 3 | 1,2,3 | 575 | 55 | No |

Something interesting happened constructing layer 3. State `R111` exceeded our weight limit, so we remove it from consideration as an infeasible state.

There aren't any more states to generate, so we choose `R101` as our optimal state. It provides the most value without exceeding our weight limit.

The exercise we just went through constructed what is called an "exact" decision diagram. That means it represents the entire state space of our problem, as shown in the following graph. Note that we leave `R111` off the graph since it isn't feasible.

```
___________ R __________
/ \
___ R0 ___ _ R1 _
/ \ / \
R00 R01 R10 R11
/ \ / \ / \ |
R000 R001 R010 R011 R100 R101 R110
```

It quickly becomes impractical to generate exact diagrams for problems of any reasonable size: a model with n binary decisions can have up to 2^n states in its final layer. Luckily, we have a technique called restriction. Instead of generating all possible states, we keep only a subset of them, up to a configurable diagram width, and defer the remaining states for later exploration. This is exactly what Hop does.

Say we use a "value" restrictor that keeps the best states of each layer and defers the rest. Let's set our maximum diagram width to 2. We want to maximize total value, so the value restrictor keeps the two states from each layer with the highest values. Our restricted decision diagram looks like this.

```
R
/ \
R0 R1
/ \
R10 R11
| |
R101 R110
```

This diagram is much smaller, and in this case it still finds the optimal state for our problem. The other states aren't actually gone; we'll look at them later. We focused our search on a promising section of the state space, and it paid off.
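The restriction step itself can be very simple. The following Go sketch (with hypothetical names, assuming all we need from each state is its value) sorts a layer by value and splits it at the diagram width.

```go
package main

import (
	"fmt"
	"sort"
)

// layerState is a hypothetical stand-in for a state with a value,
// like the knapsack states in the tables above.
type layerState struct {
	name  string
	value int
}

// restrictByValue keeps the width highest-value states of a layer and
// returns the rest as deferred states for later exploration.
func restrictByValue(layer []layerState, width int) (kept, deferred []layerState) {
	sorted := append([]layerState{}, layer...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].value > sorted[j].value })
	if width > len(sorted) {
		width = len(sorted)
	}
	return sorted[:width], sorted[width:]
}

func main() {
	// Layer 2 of the knapsack diagram, restricted to width 2.
	layer := []layerState{{"R00", 0}, {"R01", 150}, {"R10", 200}, {"R11", 350}}
	kept, deferred := restrictByValue(layer, 2)
	fmt.Println("kept:", kept)         // R11 and R10
	fmt.Println("deferred:", deferred) // R01 and R00
}
```

Applied to layer 2 of our knapsack diagram, it keeps `R11` and `R10` and defers `R01` and `R00`, just like the restricted diagram above.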

Once we have this restricted diagram, we have two feasible solutions to consider: `R101` and `R110`. `R101` is the better of the two, so we label it our "incumbent" (best known solution). Now we can continue searching the rest of the state space. If we find a better feasible state, we'll use it as a new incumbent.

We've covered two of the techniques Hop uses to search the state space of a decision model: expansion and restriction. There are many more pieces to Hop's search algorithm, and Hop makes them pluggable so the modeler can exploit domain knowledge about a problem's structure. The more Hop understands about a problem, the more efficient it can be at finding good solutions and reducing the size of the state space.

Briefly, Hop uses the following procedure to explore any state.

* *Expand* a deeper layer using each state's `Next` method.
* *Filter* out states that should not be considered for some reason.
* *Reduce* states that are dominated by other states.
* *Restrict* the states for further exploration and defer the rest.
* *Merge* deferred and previously merged states to generate bounds.

Hop repeats this procedure for each layer. By default, Hop uses a small restriction width to find feasible solutions quickly. Settings like width can be easily overridden using command line options or environment variables. Search components like restrictors can be plugged directly into the solver options. Hop provides interfaces to follow for custom implementations, if so desired.
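Putting the expand, filter, and restrict steps together (leaving out reduce and merge), a layer-by-layer search with deferral might look like the following sketch. This is not Hop's implementation; all names are hypothetical, and the knapsack data is hard-coded from the example above. Deferred states are queued and explored in later passes, so the search finds an incumbent quickly while still covering the whole state space.

```go
package main

import (
	"fmt"
	"sort"
)

// A compact knapsack state for this sketch: idx is the next item to
// decide on; value and weight are running totals.
type state struct {
	idx, value, weight int
}

// Hard-coded example data from the tables above.
var (
	values   = []int{200, 150, 225}
	weights  = []int{25, 10, 20}
	capacity = 50
)

func (s state) feasible() bool { return s.idx == len(values) && s.weight <= capacity }

// next expands a state, filtering out branches that exceed capacity.
func (s state) next() []state {
	if s.idx == len(values) {
		return nil
	}
	out := state{s.idx + 1, s.value, s.weight}
	in := state{s.idx + 1, s.value + values[s.idx], s.weight + weights[s.idx]}
	if in.weight > capacity {
		return []state{out}
	}
	return []state{out, in}
}

// searchRestricted expands layer by layer, keeps the width best states,
// and defers the rest. Deferred states seed later passes of the search.
func searchRestricted(root state, width int) (incumbent state, found bool) {
	queue := []state{root}
	for len(queue) > 0 {
		layer := []state{queue[0]}
		queue = queue[1:]
		for len(layer) > 0 {
			var next []state
			for _, s := range layer {
				if s.feasible() && (!found || s.value > incumbent.value) {
					incumbent, found = s, true
				}
				next = append(next, s.next()...)
			}
			// Restrict: keep the best states, defer the rest.
			sort.Slice(next, func(i, j int) bool { return next[i].value > next[j].value })
			if len(next) > width {
				queue = append(queue, next[width:]...)
				next = next[:width]
			}
			layer = next
		}
	}
	return incumbent, found
}

func main() {
	best, _ := searchRestricted(state{}, 2)
	fmt.Println("incumbent value:", best.value)
}
```

With a width of 2, the first pass already reaches the optimal value of 425; the deferred passes then confirm no better state exists.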
