Pathfinding Algorithms
Pathfinding algorithms are a set of instructions used to find the best route between two points on a graph or map. These algorithms are fundamental in computer science and are used to identify the shortest or most efficient path, considering various constraints like distance, cost, or obstacles. They work by traversing the graph, which is a collection of nodes (locations) and edges (paths between locations), to determine the optimal route.
Common Pathfinding Algorithms:
Several algorithms have been developed for pathfinding, each with its own strengths and applications. Some of the most well-known include:
- Dijkstra’s Algorithm: This classic algorithm finds the shortest path between a starting node and all other nodes in a graph with non-negative edge weights. It works by maintaining a set of visited nodes and a priority queue of nodes to visit, always selecting the node with the smallest known distance; a minimal sketch of this procedure appears after this list.
- A* (A-star) Search Algorithm: A* is an extension of Dijkstra’s algorithm that incorporates a heuristic function to guide the search towards the target node more efficiently. This heuristic estimates the cost to reach the goal from a given node, which helps in making more informed decisions about which path to explore next. A* is widely used due to its performance and accuracy.
- Breadth-First Search (BFS): BFS explores all the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. It is guaranteed to find the shortest path in an unweighted graph.
- Depth-First Search (DFS): DFS explores as far as possible along each branch before backtracking. While it’s not always optimal for finding the shortest path, it is useful for tasks like cycle detection in a graph.
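For readers who want to see the bookkeeping in code, here is a minimal Python sketch of Dijkstra’s algorithm under the assumptions above (non-negative weights, graph given as an adjacency dictionary); the graph data at the bottom is purely illustrative.

```python
import heapq

def dijkstra(graph, source):
    """Return the shortest distance from source to every reachable node.

    `graph` is assumed to be a dict mapping each node to a list of
    (neighbor, weight) pairs with non-negative weights.
    """
    distances = {source: 0}
    visited = set()
    # Priority queue of (distance-so-far, node), smallest distance first.
    queue = [(0, source)]
    while queue:
        dist, node = heapq.heappop(queue)
        if node in visited:
            continue  # A shorter path to this node was already processed.
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            new_dist = dist + weight
            if new_dist < distances.get(neighbor, float("inf")):
                distances[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return distances

# Example: a small weighted graph (illustrative values).
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```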
How They Work:
In essence, pathfinding algorithms systematically explore a graph. For instance, the A* algorithm uses two lists: an “open list” of potential best-path nodes and a “closed list” of visited nodes. It calculates a score (the “F score”) for each node, which is the sum of the cost to get to that node from the start (the “G score”) and the estimated cost to get from that node to the goal (the “H score”). The algorithm repeatedly selects the node with the lowest F score from the open list to explore, moving nodes to the closed list as they are processed, until the goal is reached.
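The following is a minimal Python sketch of that open-list/closed-list loop, assuming a 4-connected grid where each step costs 1 and using the Manhattan distance as the H score; the grid and coordinates are illustrative, not taken from the text.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid of 0s (free cells) and 1s (walls).

    G score: steps taken from the start; H score: Manhattan-distance
    estimate to the goal; F = G + H decides which node to expand next.
    Returns the length of the shortest path, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_list = [(h(start), 0, start)]  # entries are (F score, G score, node)
    closed = set()                      # nodes already processed
    best_g = {start: 0}

    while open_list:
        f, g, node = heapq.heappop(open_list)
        if node == goal:
            return g
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1  # each grid step costs 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_list, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# Example: 3x3 grid with a wall (illustrative).
grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(a_star(grid, (0, 0), (0, 2)))  # 6
```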
Applications:
Pathfinding algorithms are crucial in a wide range of real-world applications, including:
- Navigation and Transportation: They are used in GPS systems and mapping services like Google Maps to find the best routes for vehicles, pedestrians, and public transit, often considering factors like traffic and road closures.
- Video Games: In gaming, these algorithms enable non-player characters (NPCs) to navigate game worlds realistically, finding paths around obstacles to reach their destinations.
- Robotics: Autonomous robots use pathfinding to navigate their environment, avoiding collisions and finding the most efficient way to complete tasks.
- Computer Networks: They are used to find the most efficient paths for data to travel through a network, a process known as routing.
- Logistics and Supply Chain Management: Pathfinding helps in optimizing routes for delivery trucks and managing the flow of goods in a supply chain.
New Method Is the Fastest Way To Find the Best Routes
https://www.quantamagazine.org/new-method-is-the-fastest-way-to-find-the-best-routes-20250806/
Ben Brubaker
Staff Writer
August 6, 2025
A canonical problem in computer science is to find the shortest route to every point in a network. A new approach beats the classic algorithm taught in textbooks.
If you want to solve a tricky problem, it often helps to get organized. You might, for example, break the problem into pieces and tackle the easiest pieces first. But this kind of sorting has a cost. You may end up spending too much time putting the pieces in order.
This dilemma is especially relevant to one of the most iconic problems in computer science: finding the shortest path from a specific starting point in a network to every other point. It’s like a souped-up version of a problem you need to solve each time you move: learning the best route from your new home to work, the gym and the supermarket.
“Shortest-paths is a beautiful problem that anyone in the world can relate to,” said Mikkel Thorup, a computer scientist at the University of Copenhagen.
Intuitively, it should be easiest to find the shortest path to nearby destinations. So if you want to design the fastest possible algorithm for the shortest-paths problem, it seems reasonable to start by finding the closest point, then the next-closest, and so on. But to do that, you need to repeatedly figure out which point is closest. You’ll sort the points by distance as you go. There’s a fundamental speed limit for any algorithm that follows this approach: You can’t go any faster than the time it takes to sort.
Forty years ago, researchers designing shortest-paths algorithms ran up against this “sorting barrier.” Now, a team of researchers has devised a new algorithm that breaks it. It doesn’t sort, and it runs faster than any algorithm that does.
“The authors were audacious in thinking they could break this barrier,” said Robert Tarjan, a computer scientist at Princeton University. “It’s an amazing result.”
The Frontier of Knowledge
To analyze the shortest-paths problem mathematically, researchers use the language of graphs — networks of points, or nodes, connected by lines. Each link between nodes is labeled with a number called its weight, which can represent the length of that segment or the time needed to traverse it. There are usually many routes between any two nodes, and the shortest is the one whose weights add up to the smallest number. Given a graph and a specific “source” node, an algorithm’s goal is to find the shortest path to every other node.
The most famous shortest-paths algorithm, devised by the pioneering computer scientist Edsger Dijkstra in 1956, starts at the source and works outward step by step. It’s an effective approach because knowing the shortest path to nearby nodes can help you find the shortest paths to more distant ones. But because the end result is a sorted list of shortest paths, the sorting barrier sets a fundamental limit on how fast the algorithm can run.
In 1984, Tarjan and another researcher improved Dijkstra’s original algorithm so that it hit this speed limit. Any further improvement would have to come from an algorithm that avoids sorting.
In the late 1990s and early 2000s, Thorup and other researchers devised algorithms that broke the sorting barrier, but they needed to make certain assumptions about weights. Nobody knew how to extend their techniques to arbitrary weights. It seemed they’d hit the end of the road.
“The research stopped for a very long time,” said Ran Duan, a computer scientist at Tsinghua University in Beijing. “Many people believed that there’s no better way.”
Duan wasn’t one of them. He’d long dreamed of building a shortest-paths algorithm that could break through the sorting barrier on all graphs. Last fall, he finally succeeded.
Out of Sorts
Duan’s interest in the sorting barrier dates back nearly 20 years to his time in graduate school at the University of Michigan, where his adviser was one of the researchers who worked out how to break the barrier in specific cases. But it wasn’t until 2021 that Duan devised a more promising approach.
The key was to focus on where the algorithm goes next at each step. Dijkstra’s algorithm keeps track of the region that it has already explored in previous steps. It decides where to go next by scanning this region’s “frontier” — that is, all the nodes connected to its boundary. This doesn’t take much time at first, but it gets slower as the algorithm progresses.
Duan instead envisioned grouping neighboring nodes on the frontier into clusters. He would then only consider one node from each cluster. With fewer nodes to sift through, the search could be faster at each step. The algorithm also might end up going somewhere other than the closest node, so the sorting barrier wouldn’t apply. But ensuring that this clustering-based approach actually made the algorithm faster rather than slower would be a challenge.
Duan fleshed out this basic idea over the following year, and by fall 2022, he was optimistic that he could surmount the technical hurdles. He roped in three graduate students to help work out the details, and a few months later they arrived at a partial solution — an algorithm that broke the sorting barrier for any weights, but only on so-called undirected graphs.
In undirected graphs, every link can be traversed in both directions. Computer scientists are usually more interested in the broader class of graphs that feature one-way paths, but these “directed” graphs are often trickier to navigate.
“There could be a case that A can reach B very easily, but B cannot reach A very easily,” said Xiao Mao, a computer science graduate student at Stanford University. “That’s going to give you a lot of trouble.”
Promising Paths
In the summer of 2023, Mao heard Duan give a talk about the undirected-graph algorithm at a conference in California. He struck up a conversation with Duan, whose work he’d long admired.
“I met him for the first time in real life,” Mao recalled. “It was very exciting.”
After the conference, Mao began thinking about the problem in his spare time. Meanwhile, Duan and his colleagues were exploring new approaches that could work on directed graphs. They took inspiration from another venerable algorithm for the shortest-paths problem, called the Bellman-Ford algorithm, that doesn’t produce a sorted list. At first glance, it seemed like an unwise strategy, since the Bellman-Ford algorithm is much slower than Dijkstra’s.
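For reference, here is a sketch of the classic Bellman-Ford algorithm the team drew on, not their new method: it simply relaxes every edge over and over and never sorts nodes by distance, which is why it is generally slower than Dijkstra’s algorithm. The edge list and node names are illustrative.

```python
def bellman_ford(edges, nodes, source):
    """Classic Bellman-Ford: relax every edge up to |V| - 1 times.

    `edges` is a list of (u, v, weight) triples for a directed graph.
    Unlike Dijkstra's algorithm, no priority queue is used and the
    distances are not produced in sorted order.
    """
    dist = {node: float("inf") for node in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:  # no change in a full pass: distances are final
            break
    return dist

# Example: a small directed graph (illustrative).
edges = [("S", "A", 4), ("S", "B", 1), ("B", "A", 2), ("A", "T", 3), ("B", "T", 7)]
print(bellman_ford(edges, ["S", "A", "B", "T"], "S"))
# {'S': 0, 'A': 3, 'B': 1, 'T': 6}
```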
“Whenever you do research, you try to take a promising path,” Thorup said. “I would almost call it anti-promising to take Bellman-Ford, because it looks completely like the stupidest thing you could possibly do.”
Duan’s team avoided the slowness of the Bellman-Ford algorithm by running it for just a few steps at a time. This selective use of Bellman-Ford enabled their algorithm to scout ahead for the most valuable nodes to explore in later steps. These nodes are like intersections of major thoroughfares in a road network.
“You have to pass through [them] to get the shortest path to a lot of other stuff,” Thorup said.
In March 2024, Mao thought of another promising approach. Some key steps in the team’s original approach had used randomness. Randomized algorithms can efficiently solve many problems, but researchers still prefer nonrandom approaches. Mao devised a new way to solve the shortest-paths problem without randomness. He joined the team, and they worked together over the following months via group chats and video calls to merge their ideas. Finally, in the fall, Duan realized they could adapt a technique from an algorithm he’d devised in 2018 that broke the sorting barrier for a different graph problem. That technique was the last piece they needed for an algorithm that ran faster than Dijkstra’s on both directed and undirected graphs.
The finished algorithm slices the graph into layers, moving outward from the source like Dijkstra’s. But rather than deal with the whole frontier at each step, it uses the Bellman-Ford algorithm to pinpoint influential nodes, moves forward from these nodes to find the shortest paths to others, and later comes back to other frontier nodes. It doesn’t always find the nodes within each layer in order of increasing distance, so the sorting barrier doesn’t apply. And if you chop up the graph in the right way, it runs slightly faster than the best version of Dijkstra’s algorithm. It’s considerably more intricate, relying on many pieces that need to fit together just right. But curiously, none of the pieces use fancy mathematics.
“This thing might as well have been discovered 50 years ago, but it wasn’t,” Thorup said. “That makes it that much more impressive.”
Duan and his team plan to explore whether the algorithm can be streamlined to make it even faster. With the sorting barrier vanquished, the new algorithm’s runtime isn’t close to any fundamental limit that computer scientists know of.
“Being an optimist, I would not be surprised if you could take it down even further,” Tarjan said. “I certainly don’t think this is the last step in the process.”
Computer Scientists Establish the Best Way to Traverse a Graph
Ben Brubaker
Staff Writer
October 25, 2024
Dijkstra’s algorithm was long thought to be the most efficient way to find a graph’s best routes. Researchers have now proved that it’s “universally optimal.”
If you’ve been making the same commute for a long time, you’ve probably settled on what seems like the best route. But “best” is a slippery concept. Perhaps one day there’s an accident or road closure, and your fastest route becomes the slowest.
Scenarios like this are also a challenge for researchers who develop algorithms, the step-by-step procedures that computers use to solve problems. Many different algorithms can solve any given problem, and the question of which is best can be frustratingly ambiguous.
For example, imagine an algorithm that’s designed to find the fastest route between two points. There are lots of possible ways to design such an algorithm so that it doesn’t fail. A successful algorithm will always return the fastest route, whether you use it in London or Los Angeles, and whether it’s rush hour or the middle of the night.
But those algorithms aren’t all the same. The time each one takes to find the right answer will vary depending on where and when it’s used, and cases that are hard for one algorithm may be easy for another. Ideally, you’d want an algorithm that always runs faster than the others.
For most problems, it’s simply not possible to find such a unicorn. But a new proof shows that for the quintessential path-finding problem, one algorithm is close to ideal: Assuming worst-case traffic patterns, it’s the best approach on every possible street grid. What’s more, the algorithm is nearly 70 years old and a staple of the undergraduate computer science curriculum. The new work will be presented with a best-paper award at the 2024 Symposium on Foundations of Computer Science next week.
“It’s amazing,” said Tim Roughgarden, a computer scientist at Columbia University. “I can’t imagine a more compelling research paper about a problem we teach students in undergrad algorithms.”
Heaps and Bounds
The story of this iconic path-finding algorithm began with a detour. In 1956, the 26-year-old Dutch computer scientist Edsger Dijkstra wanted to write a program that would show off the capabilities of a brand-new computer called the ARMAC. While shopping with his fiancée in Amsterdam, he stopped in at a café for a break. That’s when he hit on the idea for the algorithm that now bears his name. He didn’t have writing materials on hand, so over the course of 20 minutes he worked out the details in his head.
In an interview toward the end of his life, Dijkstra credited his algorithm’s enduring appeal in part to its unusual origin story. “Without pencil and paper you are almost forced to avoid all avoidable complexities,” he said.
Dijkstra’s algorithm doesn’t just tell you the fastest route to one destination. Instead, it gives you an ordered list of travel times from your current location to every other point that you might want to visit — a solution to what researchers call the single-source shortest-paths problem. The algorithm works in an abstracted road map called a graph: a network of interconnected points (called vertices) in which the links between vertices are labeled with numbers (called weights). These weights might represent the time required to traverse each road in a network, and they can change depending on traffic patterns. The larger a weight, the longer it takes to traverse that path.
To get a sense of Dijkstra’s algorithm, imagine yourself wandering through a graph, writing down the travel time from your starting point to each new vertex on a piece of scratch paper. Whenever you have a choice about which direction to explore next, head toward the closest vertex you haven’t visited yet. If you discover a faster route to any vertex, jot down the new time and cross out the old one. When you’re sure that you’ve found the fastest path, move the travel time from your notes to a separate, more presentable list.
“It’s a great algorithm,” said Erik Demaine, a computer scientist at the Massachusetts Institute of Technology. “It’s very fast, simple and easy to implement.”
To put this procedure into practice, you’d need to decide on a system for organizing your notes — a data structure, in the lingo of computer science. That may sound like a minor technical detail, but time spent searching through your notes whenever you need to edit or remove an entry can have a big effect on the overall runtime of the algorithm.
Dijkstra’s paper used a simple data structure that left room for improvement. In the following decades, researchers developed better ones, affectionately dubbed “heaps,” in which certain items are easier to find than others. They take advantage of the fact that Dijkstra’s algorithm only ever needs to remove the entry for the closest remaining vertex. “A heap is basically a data structure that allows you to do this very quickly,” said Václav Rozhoň, a researcher at the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) in Sofia, Bulgaria.
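As a small illustration of why the choice of “notes” matters, the sketch below uses Python’s built-in binary heap (heapq) to play the role of the scratch paper: the closest remaining vertex is always at the top, and crossed-out entries are simply skipped when they surface. The lazy-skip trick and the vertex names are assumptions for the example, not details from the article.

```python
import heapq

# The "notes": a min-heap of (tentative travel time, vertex) entries.
notes = []
heapq.heappush(notes, (7, "library"))
heapq.heappush(notes, (3, "bakery"))

# Discovering a faster route to the library: push the new time and
# record the best known value, leaving the stale entry in the heap.
best = {"library": 5, "bakery": 3}
heapq.heappush(notes, (5, "library"))

# Removing the closest remaining vertex skips entries that were "crossed out".
while notes:
    time, vertex = heapq.heappop(notes)
    if best.get(vertex) != time:
        continue  # stale entry: a faster route was found later
    print(vertex, time)  # prints "bakery 3", then "library 5"
```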
In 1984, two computer scientists developed a clever heap design that enabled Dijkstra’s algorithm to reach a theoretical limit, or “lower bound,” on the time required to solve the single-source shortest-paths problem. In one specific sense, this version of Dijkstra’s algorithm is the best possible. That was the last word on the standard version of the problem for nearly 40 years. Things only changed when a few researchers took a closer look at what it means to be “best.”
Best Behavior
Researchers typically compare algorithms by studying how they fare in worst-case scenarios. Imagine the world’s most confusing street grid, then add some especially perplexing traffic patterns. If you insist on finding the fastest routes in these extreme circumstances, the 1984 version of Dijkstra’s algorithm is provably unbeatable.
But hopefully, your city doesn’t have the world’s worst street grid. And so you may ask: Is there an algorithm that’s unbeatable on every road network? The first step to answering this question is to make the conservative assumption that each network has worst-case traffic patterns. Then you want your algorithm to find the fastest paths through any possible graph layout, assuming the worst possible weights. Researchers call this condition “universal optimality.” If you had a universally optimal algorithm for the simpler problem of just getting from one point on a graph to another, it could help you beat rush hour traffic in every city in the world.
“This sounds too good to be true,” said Bernhard Haeupler(opens a new tab), a computer scientist affiliated with INSAIT and the Swiss Federal Institute of Technology Zurich (ETH Zurich).
Haeupler became fascinated with the promise of universal optimality while writing a grant proposal in the mid-2010s. Many researchers find that part of the job tedious, but Haeupler saw it as an opportunity. “It allows you to throw your skepticism out and just dream big,” he said.
Those dreams came to fruition in 2021, when Haeupler and two graduate students proved that it was possible to build universally optimal algorithms for several important graph problems. He didn’t think to ask whether the same condition was achievable for the classic single-source shortest-paths problem. That would have to wait until a different graduate student dared to dream big.
The Shortest Path to Victory
In early 2023, Rozhoň was at the tail end of his graduate program at ETH Zurich. He had just finished a paper about going beyond worst-case analysis in a different context, and he was brainstorming new ideas to pursue with his co-author Jakub Tětek, then a graduate student at the University of Copenhagen. Rozhoň suggested they try to devise a universally optimal algorithm for the single-source shortest-paths problem.
“I said, ‘No, but that’s not possible; that just cannot be done,’” Tětek recalled. But Rozhoň convinced him to give it a try. In the spring, the team grew to three with the addition of Richard Hladík, a graduate student at ETH Zurich whom Rozhoň and Tětek had met when all three were high schoolers in the Czech Republic.
The trio tinkered with many different aspects of Dijkstra’s algorithm and the heap that it used, and they managed to kludge together a universally optimal variant. But the resulting algorithm was complicated, and they couldn’t pinpoint what conditions were actually necessary for universal optimality. In a field that thrives on comprehensive and rigorous proofs, this wasn’t enough.
The three students would turn from mathematical networks to social ones. Rozhoň had begun discussing the problem with Haeupler while both were visiting colleagues in New York. From there, Haeupler flew to Panama for a holiday, but he wasn’t quite ready to set the problem aside.
“It really was vacation,” he said. “But then also, thinking doesn’t stop.”
During a Zoom call a few days into Haeupler’s trip, the team of four settled on a new approach. They decided to focus mainly on the choice of data structure. Soon, they began to suspect that that would be enough — they could just leave the rest of Dijkstra’s algorithm intact. Within a month, they’d proved it.
The key ingredient turned out to be a special property of some data structures that lets them quickly access recently added items. Heaps with this property were first constructed over 20 years ago, but in all the years that followed, nobody made full use of it. The four researchers proved that they only needed to construct a data structure with this new property and all the other features of the 1984 heap. All they needed to do now was design it.
The last person to join the team was Robert Tarjan, a computer scientist at Princeton University who was one of the inventors of that special 1984 heap. Tarjan had won the Turing Award, considered the highest honor in the field, and had also been a mentor to Haeupler in the late 2000s. When Tarjan visited Zurich in May, Haeupler invited him over for fondue — his specialty — and mentioned the new shortest-paths project. Tarjan was immediately in.
The five researchers set to work developing a heap data structure with all the properties they needed. They started with an unwieldy design and improved it bit by bit until they were finally satisfied. “Every time we looked at it, we were able to simplify a little bit,” Rozhoň said. “I was actually surprised how simple it was in the end.”
Some variants of Dijkstra’s algorithm have seen real-world use in software like Google Maps. The new result probably won’t have such practical applications, for which there are many considerations beyond theoretical optimality guarantees. But it may change how researchers study optimality, prompting them to look beyond the usual worst-case analysis. Often, algorithms only achieve stronger guarantees at the cost of added complexity. The new result suggests that simple algorithms with these stronger guarantees might be more widespread than researchers previously thought — the team has already identified two other examples.
“The general concept is very compelling,” Tarjan said. “It remains to be seen how far one can take this.”