SOCS 2018, second day

IJCAI 2018 and SOCS 2018 are being held in Stockholm from July 13th to July 19th as part of the Federated AI Meeting. In this and other posts I will give some short comments on interesting things I’ve heard. If anything is wrong or misrepresented, please don’t hesitate to get in touch with me.

On the third day I attended the second day of SOCS 2018, the 11th Symposium on Combinatorial Search. The proceedings for SOCS 2018 are available here.

Continuing the topic and time-line from the first day’s invited talk, Richard E. Korf talked about the history of A*-related algorithms from 1983 onwards.

The main source for the history of science is publications, but that is not the whole story. For example, the reasons for moving from one research area to another are typically not visible in the publications.

Korf’s first interest in search was solving Rubik’s Cube, where bidirectional search was needed.

He claims that DFS can be faster than BFS since it is possible to use trailing for undoing changes, instead of copying. This is interesting to compare with my Gecode research experience. The main difference, in my view, is probably that the perspective here is of moves that make very small changes to the state, in which case the claim is true.
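
To make the trailing idea concrete, here is a toy sketch (my own example, not from the talk) where a move is a single adjacent swap in a list, so undoing it is as cheap as doing it:

```python
# DFS that applies and undoes moves in place on one mutable state,
# instead of copying the state at every node. The "puzzle" here is
# sorting a list by adjacent swaps; undoing a swap is swapping back.

def dfs(state, depth):
    """Return a list of swap positions reaching the goal, or None."""
    if state == sorted(state):
        return []                  # goal: the list is sorted
    if depth == 0:
        return None
    for i in range(len(state) - 1):
        state[i], state[i + 1] = state[i + 1], state[i]  # do (in place)
        plan = dfs(state, depth - 1)
        if plan is not None:
            return [i] + plan
        state[i], state[i + 1] = state[i + 1], state[i]  # undo (trailing)
    return None

print(dfs([2, 1, 3], 2))           # [0]: one swap sorts the list
```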

Interest in iterative deepening DFS (IDFS) led to limiting on cost instead of depth, giving Iterative Deepening A* (IDA*). This was first used to solve 15-puzzles and Rubik’s Cube.
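
As a reminder of the algorithm, here is a minimal IDA* sketch (my own, assuming a generic successor function and an admissible heuristic h): the depth-first search is limited on f = g + h instead of depth, and the limit grows to the smallest f-value that exceeded it.

```python
import math

def ida_star(start, h, successors, is_goal):
    """Return the cost of an optimal solution, or None if there is none."""
    limit = h(start)
    while limit < math.inf:
        result = _bounded_dfs(start, 0, limit, h, successors, is_goal)
        if result is True:
            return limit                   # found a solution of cost <= limit
        limit = result                     # smallest f that exceeded the old limit
    return None

def _bounded_dfs(node, g, limit, h, successors, is_goal):
    f = g + h(node)
    if f > limit:
        return f                           # report the overshoot for the next round
    if is_goal(node):
        return True
    smallest = math.inf
    for succ, cost in successors(node):
        result = _bounded_dfs(succ, g + cost, limit, h, successors, is_goal)
        if result is True:
            return True
        smallest = min(smallest, result)
    return smallest
```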

A quote Korf quoted:

It is not always important to be the first person, it is important to be the last person to discover something.

Cross-fertilization is important. For example, pattern databases were adapted from end-game databases. Similarly, planning and search have seen lots of cross-fertilization. One instance of using ideas from another field is minimin lookahead search, which is inspired by the dichotomy between searching for a move and making a move in two-player games, adapted to single-agent search. This inspiration further led to Real-Time A* (RTA*) and Learning Real-Time A* (LRTA*).

An important lesson from RTA*, which introduces an additional constraint (a limit on the time for decisions), was that it could solve larger problems.

Weighted A* is an interesting algorithm. It uses a factor W to adjust the weight of the heuristic in the evaluation f = g + W·h. The algorithm is often very effective: setting W to 0 gives Dijkstra’s algorithm, setting it to 1 gives A*, and larger values make the search greedier, trading solution quality for speed (with an admissible heuristic, solutions are within a factor W of optimal).
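
A sketch of the algorithm (my own illustration, not from the talk), where the only change from A* is the evaluation function:

```python
import heapq, itertools

def weighted_a_star(start, goal, neighbors, h, W=1.0):
    """Best-first search on f = g + W*h. W=0: Dijkstra, W=1: A*, W>1: greedier."""
    tie = itertools.count()                   # tie-breaker, avoids comparing nodes
    open_heap = [(W * h(start), next(tie), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                          # stale entry, a better path is known
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_heap, (g2 + W * h(succ), next(tie), g2, succ))
    return None
```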

Investigation into how the heuristic is used and exploited led to Recursive Best-First Search (RBFS), a linear-space best-first search that can handle a non-monotonic cost function. It dominates IDA* but is harder to implement. The paper also included a version called Simple RBFS, which was the first variant invented and was important for understanding the algorithm. Reviewers wanted to remove it, but Korf insisted on keeping it.

If people don’t understand your work, then they won’t use it

Graph search (A*, best-first, …) takes much more memory than tree search algorithms (DFS, IDA*, …), which is often very important. Even though memories have gotten larger, computers have also gotten faster and can fill them up faster.

Frontier search tries to alleviate this by only keeping the open set of nodes. In Euclidean spaces this reduces the number of nodes to keep, lowering the degree of the polynomial by one. Reconstructing the solution becomes much more complicated; the simplest approach is to do bidirectional search, find the middle node, and recurse by divide and conquer.

Regarding time complexity: standard brute-force search takes b^d. The original idea was that using a heuristic lowers the value of b. However, looking into the actual behavior on puzzles, the heuristic instead lowers d.
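
In formula form (my paraphrase of the point, not from the slides):

```latex
% Brute-force search examines on the order of b^d nodes
% (branching factor b, solution depth d). The expected effect of a
% heuristic was a smaller b; the observed effect on puzzles is closer to
\[
  O(b^{d}) \;\longrightarrow\; O(b^{\,d-k}),
\]
% i.e. a heuristic of strength k effectively reduces the depth,
% not the branching factor.
```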

The conclusion was that a lot of work has gone into handling memory issues, and that one of the best sources of inspiration is other fields.

Ira Pohl

Ira Pohl (weighted A*, bidirectional search) gave an improvised extra invited talk related to both of the other invited talks.

Pohl was a graduate student in 1966, just before A*. This was when the department of computer science was started, mostly by numerical analysts, plus John McCarthy doing AI, Niklaus Wirth doing programming languages, and John Miller, a computational physicist. Pohl felt that non-numerical problems were more interesting. McCarthy wanted to show that everything was a first-order logic problem, and Wirth was doing programming languages. Miller, on the other hand, had lots of money and let his graduate students do mostly what they were interested in. The main assignment was to help the physicists use the new super-computer, the IBM Model 90, and the language PL/I.

Pohl’s initial interest was finding Knight’s tours using Warnsdorff’s rule, and that could be done. This led to an interest in graph theory, and applying the same ideas to Hamiltonian paths led to a publication.
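
Warnsdorff’s rule is the classic greedy heuristic for this: always move the knight to the square with the fewest onward moves. A small sketch (my own implementation):

```python
# Warnsdorff's rule for Knight's tours: greedily move to the square
# from which the fewest onward moves are possible. On many board sizes
# this finds a full tour without any backtracking.

JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(n, start=(0, 0)):
    visited = {start}

    def onward(sq):
        r, c = sq
        return [(r + dr, c + dc) for dr, dc in JUMPS
                if 0 <= r + dr < n and 0 <= c + dc < n
                and (r + dr, c + dc) not in visited]

    tour = [start]
    while len(tour) < n * n:
        options = onward(tour[-1])
        if not options:
            return None                                  # the greedy rule got stuck
        nxt = min(options, key=lambda sq: len(onward(sq)))  # fewest onward moves
        visited.add(nxt)
        tour.append(nxt)
    return tour

tour = knights_tour(8)
print(len(tour) if tour else "stuck")                    # 64 means a complete tour
```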

The next idea was to implement a set of graph algorithms in PL/I. This included Dijkstra’s algorithm, BFS (even though it was not yet a standard idea), and so on. He was then asked for help by operations research researchers who had the basic ideas and sketches for bidirectional Dijkstra search. Learning about A* led to trying the idea of bidirectionality there as well.

Search Algorithms and Frameworks

The Heuristic Search Framework

A short presentation of a framework using C++ templates to construct search algorithms combinatorially from components. Some nice visualizations for showing the progress of algorithms were also shown.
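
I don’t know the framework’s actual interface, but the composition idea can be sketched in Python (hypothetical, with the frontier data structure as a plug-in component):

```python
from collections import deque
import heapq, itertools

class Stack:                                   # LIFO frontier: DFS
    def __init__(self): self.items = []
    def push(self, prio, x): self.items.append(x)
    def pop(self): return self.items.pop()
    def __bool__(self): return bool(self.items)

class Queue:                                   # FIFO frontier: BFS
    def __init__(self): self.items = deque()
    def push(self, prio, x): self.items.append(x)
    def pop(self): return self.items.popleft()
    def __bool__(self): return bool(self.items)

class Heap:                                    # priority frontier: best-first
    def __init__(self): self.items, self.tie = [], itertools.count()
    def push(self, prio, x): heapq.heappush(self.items, (prio, next(self.tie), x))
    def pop(self): return heapq.heappop(self.items)[-1]
    def __bool__(self): return bool(self.items)

def search(start, is_goal, successors, priority, Frontier):
    """One generic loop; the Frontier component decides the algorithm."""
    frontier, seen = Frontier(), {start}
    frontier.push(priority(start), start)
    while frontier:
        node = frontier.pop()
        if is_goal(node):
            return node
        for succ in successors(node):
            if succ not in seen:
                seen.add(succ)
                frontier.push(priority(succ), succ)
    return None

# search(s, goal, succ, lambda n: 0, Stack)  behaves like DFS
# search(s, goal, succ, lambda n: 0, Queue)  behaves like BFS
# search(s, goal, succ, h,           Heap)   behaves like greedy best-first
```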

Multi-Agent Path Finding

Multi-agent path finding (MAPF) introduces many agents in a path finding problem where the agents should not collide.

Robust Multi-Agent Path Finding

In this paper, the idea is to also introduce robustness, so that a delay for one agent does not invalidate the plans of the other agents.

Two solvers were presented: one extending Conflict-Based Search (CBS) to robustness, and one based on Picat. The base rules in the Picat model are that each agent is in exactly one position at each time step, that each location holds at most one agent at each time step, and that between two time steps an agent only moves between adjacent locations; robustness is then added on top.
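
A minimal Python rendering of the base rules as a plan checker (my own sketch; the actual model is in Picat):

```python
def valid_plan(plan, edges):
    """plan: {agent: [location at t=0, t=1, ...]}, all paths equally long,
    so "one position per agent per step" is implicit in the representation.
    edges: set of pairs (u, v) of adjacent locations (undirected)."""
    steps = len(next(iter(plan.values())))
    for t in range(steps):
        # Each location holds at most one agent at each time step.
        locations = [path[t] for path in plan.values()]
        if len(locations) != len(set(locations)):
            return False
    for path in plan.values():
        # Between two time steps an agent waits or moves along an edge.
        for a, b in zip(path, path[1:]):
            if a != b and (a, b) not in edges and (b, a) not in edges:
                return False
    return True
```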

For small maps the Picat solver was more efficient, while for larger maps CBS was better.

Sub-optimal SAT-based Approach to Multi-agent Path-finding Problem

The idea here is to create sub-optimal solutions to a MAPF problem. There are two different cost measures: makespan (when the last agent arrives) and the total number of moves over all agents. Optimizing for one may make the other worse.

The solution approach is to create a SAT problem that limits the makespan and the cost of a solution, so that the instance is satisfiable iff there exists a solution that satisfies the limits.

The encoding uses time expansion, which makes one copy of the graph for each time step. Variants of the encoding can be used to get versions that either find optimal solutions or not. This gives a lot of flexibility.
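
A sketch of the construction (my own illustration); the SAT encoding then has, roughly, one variable per agent and time-expanded vertex:

```python
def time_expand(vertices, edges, makespan):
    """Copy each vertex once per time step; arcs are 'wait' and 'move'."""
    nodes = [(v, t) for v in vertices for t in range(makespan + 1)]
    arcs = []
    for t in range(makespan):
        for v in vertices:
            arcs.append(((v, t), (v, t + 1)))        # wait in place
        for u, v in edges:
            arcs.append(((u, t), (v, t + 1)))        # move along the edge
            arcs.append(((v, t), (u, t + 1)))        # or in the other direction
    return nodes, arcs
```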

The performance is comparable with other state-of-the-art solutions.

Rapid Randomized Restarts for Multi-Agent Path Finding

The high-level overview:

  • Add randomization to MAPF solvers
  • See that the runtime distributions are heavy-tailed
  • Add parallel runs and rapid randomized restarts.

The point is to show that this works well in MAPF too, as it does in other fields, and indeed it does.
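
The restart mechanism itself is simple; a sketch (my own, with a hypothetical solve function that takes a seed and a time budget):

```python
import random

def solve_with_restarts(solve, instance, budget, max_restarts=100):
    """Kill runs that exceed a short budget and retry with a fresh seed.
    With heavy-tailed runtimes this usually beats waiting out one run.
    (The paper also runs several searches in parallel; this sketch
    restarts sequentially.)"""
    for _ in range(max_restarts):
        seed = random.randrange(2**32)
        solution = solve(instance, seed=seed, budget=budget)
        if solution is not None:               # None signals a timeout
            return solution
    return None
```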

Experiments were run in warehouse-like settings (4-connected grids) and in an aviation domain (much more general connections, lattice-like 63-way branching).

A* and Path Planning (chair: Daniel Harabor)

GBFHS: A Generalized Breadth-First Heuristic Search Algorithm

Some history:

  • 1966: bidirectional blind search
  • 1968: A* for heuristic unidirectional search
  • 1969: bidirectional heuristic search, which tried to combine both but failed
  • Lots and lots of research
  • 2015: Barker & Korf argued that bidirectional heuristic search is probably unlikely to ever be useful
  • 2016: MM/MMe, a bidirectional heuristic search that always meets in the middle
  • 2017: NBS (NBSe), an improvement that automatically balances the search between the two directions

In 2018, do we now know that Barker & Korf were wrong? Some experiments using the 10-pancake problem with varying heuristic strength show that there are still problems that depend on the heuristic. In particular, if the heuristic becomes stronger, the behavior of the algorithm should not become worse.

A formal definition of this intuition is given. Informally, an algorithm is well-behaved if, when given two heuristics where one is stronger than the other, its behavior with the stronger heuristic is no worse.

The new algorithm takes a problem/domain, the least edge cost, a split function, and a heuristic function. It assumes that the costs are non-negative and that the heuristic is admissible.

GBFHS is well-behaved. The idea is to start with a cost limit of 0 and, when no solution exists within the limit, increase the allowed cost and thereby the amount of search to do.

It behaves better than all of MMe, NBS, and NBSe on the 15-puzzle and 16-pancake domains.

Fast, Near-Optimal Path-Planning on State Lattices Using Subgoal Graphs

Subgoal graphs (SGG) are used to create an overlay graph on an underlying graph that can be used to speed up path planning. The SGG must satisfy certain conditions to work as intended.

SGG is a framework that can be applied to many different domains.

For state-lattices it is hard to construct effective SGGs. Sacrificing optimality and constructing the SGG heuristically leads to much better performance. In particular, the SGG is much smaller.

Searching through a full game tree (even using alpha-beta pruning) to find the minimax value is often impractical. One way to tackle this is to use something like MCTS, which does a statistical evaluation.
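
For reference, a textbook minimax with alpha-beta pruning (my own sketch, not the paper’s algorithm):

```python
import math

def alphabeta(node, children, value, maximizing, alpha=-math.inf, beta=math.inf):
    """Exact minimax value of node; children(n) lists successors,
    value(n) is the static evaluation at leaves."""
    kids = children(node)
    if not kids:
        return value(node)                  # leaf: static evaluation
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, children, value, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                       # beta cutoff
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, children, value, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break                       # alpha cutoff
        return best
```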

The paper presents an approximation algorithm for the minimax value that, given a parameter ε, returns a value that is at most ε away from the true minimax value.