Electronic computers have revolutionised our world through their capacity to perform, automatically and very rapidly, tasks that until recently were unthinkable. At their core, computers of any kind are machines for performing mathematical operations. Their spectacular and growing contribution to human society therefore stems from their ever-increasing ability to perform these operations quickly and reliably. This, in turn, has been made possible by two trends over the last 50 years.

The first of these has been an exponential decrease in the size and cost of the electronic parts (especially transistors, the basic number-crunching units) that can be manufactured *en masse*. As a result, we can now pack thousands of times more transistors onto a given area than just a few years ago, at a fraction of the cost. This means we can perform many more mathematical operations per second than before and preserve the answers on ever-denser memory storage devices (hard disks and RAM), so that a problem that would once have taken months or years of supercomputer time can now be solved in seconds by the microchip in a mobile phone.

The second, and often less appreciated, development that has allowed computers to become increasingly useful is the discovery of efficient “algorithms”, or recipes, for solving the mathematical problems underlying all computer programs. Electronic computing power, no matter how large, is of no practical use if, to find a solution to a problem, the computer has to try all the possibilities blindly, because the number of possibilities can rapidly grow beyond the number-crunching power of even a huge computer. Happily, for many common problems, such as solving algebraic equations or a GPS finding the shortest route from the store to your home, mathematicians have been able to find “shortcuts”, so that the computer only has to perform a relatively small number of calculations to determine the best answer.
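A well-known instance of such a shortcut is Dijkstra's algorithm for shortest routes, which settles each junction of a road network once instead of examining every possible path. The following is a minimal sketch in Python; the network and travel times are invented for illustration:

```python
import heapq

# A tiny road network: edge weights are travel times (hypothetical numbers).
graph = {
    "store": {"a": 4, "b": 1},
    "a": {"home": 1},
    "b": {"a": 2, "home": 6},
    "home": {},
}

def shortest_time(graph, start, goal):
    """Dijkstra's algorithm: a 'shortcut' for shortest routes.
    It settles each node once instead of trying every possible path."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter way was already found
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")

print(shortest_time(graph, "store", "home"))  # 4 (store -> b -> a -> home)
```

For a network with a handful of junctions the difference is invisible, but on a country-sized road map this shortcut is what makes the route appear in a fraction of a second rather than after an astronomical enumeration of paths.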

The combination of efficient algorithms and rapidly increasing number-crunching power means that, over the last few decades, ever-larger problems have become solvable in real time, or at least very quickly, so that computers can help humans with many common tasks. However, we have also begun to encounter problems for which nobody has been able to find efficient shortcuts. These include many of the problems we care most about, such as designing new drugs, scheduling activities, checking that engineering systems work as they are designed to, and many others. For these problems, computers simply have to go through all the possible guesses before finding the actual answer, which means that if the problem size increases even modestly, the computer can no longer solve it quickly enough to be useful.

It is not known whether these so-called “NP-complete” problems actually have efficient algorithms that we have not yet been able to find, or whether they have the built-in property of not being solvable via shortcuts. This question, called “P vs NP”, is the most famous open problem in Computer Science, and despite a million-dollar prize for its solution, very little progress has been made. What we do know is that these problems are all just versions of one another (in effect, the world has exactly one super-problem disguising itself in various cloaks), and that a computer, while unable to solve such a problem efficiently on its own, can efficiently check whether a given guess at an answer is correct or not. This and other (very specialised) mathematical evidence has convinced most expert mathematicians that in fact no such efficient algorithms can ever be found for NP-complete problems, i.e. that “P is not equal to NP”.
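The asymmetry between finding and checking can be made concrete with the subset-sum problem, a standard NP-complete example: given a list of numbers, find a subset adding up to a target. The sketch below (with invented numbers, and simplified to ignore repeated elements) shows that verifying a proposed answer takes one quick pass, while solving without a shortcut means trying every one of the 2^n subsets:

```python
from itertools import combinations

def verify(numbers, target, guess):
    """Checking a proposed answer is fast: one pass over the guess.
    (Simplified: does not check for repeated use of the same element.)"""
    return all(x in numbers for x in guess) and sum(guess) == target

def solve_by_brute_force(numbers, target):
    """Finding an answer without a shortcut: try every subset, 2^n in all."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

numbers, target = [3, 9, 8, 4, 5, 7], 15
print(verify(numbers, target, (8, 7)))          # True: instant to check
print(solve_by_brute_force(numbers, target))    # found only by exhaustive search
```

With six numbers the brute-force search is trivial; with sixty it already means up to 2^60 subsets, far beyond what any electronic computer can enumerate in a useful time.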

There are also other, more philosophical reasons to suspect that NP-complete problems cannot be solved efficiently. If the brain is a giant computer and if P were equal to NP, then checking that something is right would be as easy as finding the answer to a problem from scratch. That does not fit most of our intuitions. If being able to appreciate an opera or a painting were the same as composing or painting one, we would all be brilliant musicians or painters; if checking that a friend's puzzle solution is correct were as easy as figuring it out on your own, we would all be brilliant mathematicians. In such a world there would be no talent and no genius, and creativity in general would either not exist or be an illusion, which does not fit our general experience.

**Aims and motivation for the ABACUS project**

If NP-complete problems don't have shortcuts, as most mathematicians believe, then to solve them in practice we need to check most of the possible solutions before finding the answer. Electronic computers can't help much, because although they crunch numbers very fast, they essentially perform one operation at a time, and are rapidly overwhelmed by the number of possibilities. This has led scientists to begin to think of new kinds of computers that can do many things at once (as our brains and bodies do) and can therefore find the answer efficiently by attacking the problem in massively parallel fashion.

Several ideas are being developed. Two famous directions of work are quantum computers and DNA computers. Quantum computing attempts to harness quantum effects (the strange nature of physics at small scales) to carry out operations simultaneously, while in DNA computing, many billions of individual DNA molecules are each used as a micro-computer, able to perform simple calculations and store data, all at the same time. These and other technologies are very promising, but they also face major technological hurdles and may never be developed into practically useful computing machines.

We will use biological agents to explore physical structures (networks) that encode a mathematical problem. One example of such agents is actin filaments (long, thin biomolecules) propelled by molecular motors (myosin) that cover the network floor.

In the ABACUS project, we are working on a different but conceptually similar idea: using a population of individual self-propelled microscopic biological “bugs”, such as bacteria, fungi or motile biomolecules, to explore specially designed physical structures (networks) that encode specific mathematical problems, such that each path through the structure corresponds to one possible solution to the problem. At a high level, the principle is like finding the way out of a complicated maze by releasing thousands of little mice into it: they distribute themselves throughout the maze until one or more eventually find the way out. Our mazes are microscopic networks of channels and halls that encode a problem of interest, and our “mice” (we call them agents) are equally tiny molecules or bacteria that explore the “maze”, all at the same time.

The general approach is shown in the figure on the right. Motile agents (“bugs”) enter the network from a source and explore the network, which encodes a mathematical problem. The exit channels from which many agents emerge represent the solutions of the problem. Because very many agents can explore a network at the same time, even big problems can be solved in a relatively short time.
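A toy software simulation can convey the idea (it is only an illustration, not a model of the actual devices). Assume, hypothetically, a network encoding subset sum: at the junction for each number, an agent either turns one way and adds the number to its running total, or turns the other way and skips it, so each exit is labelled by one achievable sum:

```python
import random
from collections import Counter

def explore(numbers, n_agents=10_000, seed=0):
    """Simulate many agents random-walking through a subset-sum network.
    At each junction an agent either adds the number (one branch) or
    skips it (the other branch); its exit is labelled by its total."""
    rng = random.Random(seed)
    exits = Counter()
    for _ in range(n_agents):
        total = 0
        for x in numbers:
            if rng.random() < 0.5:  # agents split roughly evenly at junctions
                total += x
        exits[total] += 1
    return exits

exits = explore([2, 5, 9])
# Exits that collect agents correspond to achievable sums, e.g. 7 (= 2 + 5);
# sums no path can produce (e.g. 1 or 3) receive no agents at all.
```

In software, of course, nothing is gained: the computer still simulates the agents one by one. The point of the physical device is that all the agents really do move through the network simultaneously, so the exploration happens in parallel.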

The project has two broad aims. The first is to develop a mathematical method for taking any problem we are interested in and designing a network such that each path through the network is a guess at an answer to the problem, so that in travelling such a path an agent checks whether that guess works or not. The second aim is to demonstrate how the technology could work by physically building simple prototypes of such devices and using specially selected “agents” to explore them until the solution is found.

This approach may turn out to have great advantages over other approaches to solving complex problems: the technique is very energy-efficient and can in principle be scaled up to solve large problems that electronic computers take a very long time to solve.