
Spaceships are doing it for themselves

by Marianne Freiberger

It requires only a little processing power, but it's a great leap for robotkind: engineers at the University of Southampton have developed a way of equipping spacecraft and satellites with human-like reasoning capabilities, which will enable them to make important decisions for themselves. Using a new control system called sysbrain, engineers will be able to programme these space vehicles to avoid accidents, fix their own faults, and maybe even save the Earth from asteroid impact, all without step-by-step guidance from humans.

Sandor Veres with his model satellites.

The new system is currently being tested in the lab using a fleet of model satellites operating in an environment that simulates conditions in space. "[The models] are spatially aware of their environment, they can foresee the future, plan and execute," says Sandor Veres, who leads the research. "Essentially this is similar to human reasoning."

Using sensors, the satellites can observe their environment, for example the position of other satellites. They then project their view of the current state of the world into the near future and derive statements about what's happening, for example "this satellite is going to collide with me". Using a set of pre-programmed rules of behaviour, for example "avoid collisions", they then use logical inference to decide what action to take.
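To make that cycle concrete, here is a minimal sketch in Python of a sense-project-decide loop of the kind described above. It is not sysbrain code: the class names, the constant-velocity projection and the thresholds are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    position: float   # separation along the approach axis, in metres
    velocity: float   # metres per second (negative means closing in)

def project(obj: TrackedObject, dt: float) -> TrackedObject:
    """Project the observed state a short time into the future (constant velocity assumed)."""
    return TrackedObject(obj.position + obj.velocity * dt, obj.velocity)

def derive_statements(obj: TrackedObject, horizon: float, safe_distance: float) -> set:
    """Turn the projected state into simple statements about the world."""
    future = project(obj, horizon)
    statements = set()
    if obj.velocity < 0:
        statements.add("something is moving towards me")
    if future.position < safe_distance:
        statements.add("there will be a collision")
    return statements

def decide(statements: set) -> str:
    """Apply a pre-programmed rule of behaviour: avoid collisions."""
    if "there will be a collision" in statements:
        return "fire thrusters to change orbit"
    return "continue current plan"

# Example: another satellite 500 m away, closing at 2 m/s, looking 120 s ahead.
facts = derive_statements(TrackedObject(500.0, -2.0), horizon=120.0, safe_distance=300.0)
print(decide(facts))   # -> fire thrusters to change orbit
```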

The machines' ability to think logically comes from a mathematical system called temporal logic, whose roots can be traced back to the 10th century Persian philosopher Ibn Sina. Like other systems of logic, it gives a way of representing statements about the world in a formal language and sets out rules of logical inference that can be implemented on a computer. For example, if we know that a statement P (eg "something is moving towards me") implies a statement Q (eg "there will be a collision"), then if we observe that statement P is actually true, we can immediately deduce that statement Q is also true. Temporal logic has the added ability to deal with statements that can change over time: while in more basic systems a statement like "something is moving towards me" is either true or false, temporal logic allows it to change its truth value over time, depending on other factors. This enables a machine to explore sequences of events and the implications of any course of action it decides to take.
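The following toy Python sketch illustrates both ideas: an ordinary inference step (from "P implies Q" and P, conclude Q) and temporal operators that evaluate a statement's truth across a sequence of time steps. Again, this is only an illustration, not the logic engine used by sysbrain.

```python
def always(p, trace):
    """'p holds at every time step' (the temporal operator usually written G p)."""
    return all(state[p] for state in trace)

def eventually(p, trace):
    """'p holds at some time step' (usually written F p)."""
    return any(state[p] for state in trace)

def modus_ponens(p_implies_q, p):
    """Ordinary inference: if we know P implies Q and observe P, we may conclude Q."""
    return p_implies_q and p

# A trace is a list of {statement: truth value} dicts, one per time step.
# "something is moving towards me" starts out false and becomes true at t = 2.
trace = [
    {"approaching": False},
    {"approaching": False},
    {"approaching": True},
    {"approaching": True},
]

print(always("approaching", trace))      # False: not true at every time step
print(eventually("approaching", trace))  # True: it becomes true later on
print(modus_ponens(True, trace[-1]["approaching"]))  # True: conclude Q holds now
```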

Collision prevention is just a simple example of what sysbrain can enable machines to do. "We are looking at situations that involve much higher logical complexity," says Veres. "For example, we have simulated a situation where one of the thrusters of an agent fails. The agent can detect the problem and reconfigure its controls without much delay before something goes badly wrong. It does all this very fast, much faster than any human would be able to. Its processing power is staggering, in fact it's hugely underused."

But speed isn't the only advantage of a spacecraft that can make its own decisions. When a spacecraft is far away from the Earth, remote communication with ground control comes with a delay, which a complex mission can't afford. As an example scenario, think about an asteroid heading for Earth, big enough to cause serious damage but small enough to be nudged off its collision course by a spacecraft attaching to it and using its thrusters. "The asteroid could be near to Mars at the time it's being dealt with," says Veres. "One-way communication from Mars to Earth takes 8 to 23 minutes - you can't control operations with that kind of delay." A mission to prevent asteroid impact with Earth would be complex, involving tracking the asteroid, making contact and executing a new orbit, but as the work of one of Veres' students has shown, it is not beyond the capabilities of their system.
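The delay itself is just the distance divided by the speed of light, and the Earth-Mars distance changes as the two planets move around their orbits. A quick back-of-the-envelope calculation with some illustrative separations:

```python
# Illustrative one-way light travel times; the separations chosen are examples only.
SPEED_OF_LIGHT_KM_S = 299_792.458
AU_KM = 149_597_870.7   # one astronomical unit in kilometres

def one_way_delay_minutes(distance_au: float) -> float:
    """One-way light travel time for a given Earth-Mars separation."""
    return distance_au * AU_KM / SPEED_OF_LIGHT_KM_S / 60

for d in (1.0, 1.5, 2.5):
    print(f"{d} AU -> {one_way_delay_minutes(d):.1f} minutes one way")
# 1.0 AU -> ~8.3 min, 1.5 AU -> ~12.5 min, 2.5 AU -> ~20.8 min
```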

Intelligent spacecraft may save us from asteroid impact. Image courtesy NASA.

In principle there's no reason why sysbrain should only be used in space missions. Autonomous underwater vehicles, which also operate a long way from human control stations, provide another area of application - and an important one at that, considering what intelligent underwater vehicles might have done to mitigate last year's disaster in the Gulf of Mexico.

What makes the new technology particularly easy to use is the fact that it uses natural language programming. "We want engineers to be able to programme these intelligent systems without too much requirement for programming skills," says Veres. "So we have created a programming environment where [everything] can be written down in English sentences." These sentences, as long as they adhere to a certain structure and vocabulary, are automatically translated into high-level programming code, without the engineers having to learn difficult programming languages.
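To give a flavour of the idea, a natural language programming tool essentially pattern-matches restricted English sentences and emits the corresponding code. The sentence pattern, vocabulary and generated code below are invented for illustration; they are not sysbrain's actual notation.

```python
import re

# Accepted pattern: "If <condition> then <action>."  (an assumed toy grammar)
RULE = re.compile(r"^If (?P<condition>.+) then (?P<action>.+)\.$")

# Tiny vocabulary mapping English phrases to code fragments (also assumed).
CONDITIONS = {"a collision is predicted": "world.collision_predicted()"}
ACTIONS = {"change orbit": "craft.change_orbit()"}

def translate(sentence: str) -> str:
    """Translate one restricted-English rule into a line of high-level code."""
    match = RULE.match(sentence)
    if not match:
        raise ValueError("sentence does not follow the required structure")
    condition = CONDITIONS[match["condition"]]
    action = ACTIONS[match["action"]]
    return f"if {condition}: {action}"

print(translate("If a collision is predicted then change orbit."))
# -> if world.collision_predicted(): craft.change_orbit()
```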

So far, sysbrain is only operating model satellites in the lab. Veres and his team want to test it in more complex environments before implementing it on actual space missions. But they are already in contact with NASA and ESA, to see how it might be used in the future.


But where does all this put us in our quest for fully-fledged artificial intelligence? Veres points out that everyday human actions like driving down the road in a car involve much more than just physical awareness. "If something complex happens, like a fire engine approaching or a group of people demonstrating on the road, then you can only decide what to do if you understand the social context. I think that very soon we will have robots who understand the physical environment, but creating robots that understand the social aspects of our environment - or the physical and the social together - that's something that won't happen very soon."