In an article published online in Nautilus titled "Robots Show Us Who We Are," robotics engineer Alan Winfield explains research being done by the laboratory he oversees, specifically on how robots imitate humans:
Elon Musk has described Tesla as the largest robotics company because their cars are essentially robots on wheels. What do you make of Tesla’s efforts to achieve autonomous driving?
There’s no doubt they make very nice motor cars. I’m much more skeptical about the autopilot technology. We rely on the manufacturers’ assurances that they’re safe. I do quite a lot of work with both the British Standards Institution and the Institute of Electrical and Electronics Engineers Standards Association. The standards for driverless-car autopilots have not yet been written, and without standards it’s quite hard to test the safety of such a system. For that reason, I’m very critical of the fact that you can essentially download the autopilot at your own risk. If you’re not paying attention and the autopilot fails, you may, if you’re very unlucky, pay with your life.
Have you spoken with any Tesla owners?
I know several people who have Teslas. Several years ago, I was discussing with one of them how very lucky he was to be paying attention when something happened on the motorway in England and he had to make an evasive maneuver to avoid a serious crash. That’s the paradox of driverless vehicles—insurance companies require drivers to be alert and paying attention, yet the amount of time that they’ve got to react is unreasonably short. Autonomous vehicles only make sense when they are sufficiently advanced and sophisticated that you don’t even have a steering wheel. And that’s a long way into the future. A long way.
How do you view the way Tesla trains its autopilot technology?
They’re using human beings essentially as test subjects as part of their development program. And other road users are, in a sense, unwittingly part of the process. It sounds reasonable in principle, but given the safety implications I think it’s very unwise.
Fair enough. Tell us how you got interested in experimenting with robot culture.
My friend and coauthor Susan Blackmore wrote a book some years ago called The Meme Machine. You’re familiar with the idea of memes. The word meme was coined by Richard Dawkins in his even more famous book, The Selfish Gene, where he defined a meme as a unit of cultural transmission, a cultural analog, if you like, of the gene. Hence the similarity between the two words. But memes are quite hard to pin down, in the sense that a gene typically has some concrete coding associated with it as part of the DNA, and a meme does not. That’s one of the criticisms of memetics. But let’s put those criticisms aside. The fact is that ideas and behaviors spread in human culture, and in fact in animal culture, by imitation. Imitation is a fundamental mechanism for the spread of behaviors. Humans are by far the best imitators of all the animals that we know; we seem to be born with the ability to imitate as infants. What we are interested in doing is modeling that process of behavioral imitation.
You started by creating what you call copybots. I love that the idea for them was once just a thought experiment Blackmore came up with.
Yes. We were able to build the copybots for real, with physical robots. They’re small, slightly larger than a salt shaker, but they’re sophisticated. Each one has a Linux computer with WiFi, can see with a camera, and has something like a sense of touch by virtue of a ring of eight infrared sensors. We seeded some of the robots with a dance. The pattern of movement would describe a triangle or a square, and other robots would observe that movement pattern with their own cameras. Imitation was embodied. We don’t allow telepathy between robots, even though it would be perfectly easy to arrange. It’s a process of inference, like watching your dance teacher and trying to imitate their moves.
What is significant about the copybots’ ability to imitate one another?
The fundamentally important part of our work is that the robots, even though they’re in a relatively clean and uncluttered environment, still imitate badly. The fidelity of imitation tends to vary wildly, even in a single experiment. That allows you to see the emergence, the evolution, of new variations on those behaviors. New dances tend to emerge as a result of that less-than-perfect fidelity. The wonderful thing about these real physical robots is that you get the noisy imitation for free.
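To make this noisy-imitation mechanism concrete, here is a toy Python sketch. It is our illustration, not the lab’s code: we assume a dance is a sequence of (x, y) waypoints and that each observation adds Gaussian sensor noise, so every copy drifts a little from the seeded pattern.

```python
import random

# A "dance" represented as a sequence of (x, y) waypoints; the square
# pattern and the Gaussian noise model are illustrative assumptions.
SQUARE = [(0, 0), (1, 0), (1, 1), (0, 1)]

def observe_and_imitate(dance, sensor_noise=0.1):
    """Return an imperfect copy, as if the dance were inferred via camera.

    Each waypoint is re-estimated with additive noise, so fidelity varies
    from one imitation to the next: the "noisy imitation for free" that
    physical embodiment provides.
    """
    return [(x + random.gauss(0, sensor_noise),
             y + random.gauss(0, sensor_noise)) for x, y in dance]

def drift(original, copy):
    """Mean waypoint-to-waypoint distance from the original (0 = perfect)."""
    return sum(((ox - cx) ** 2 + (oy - cy) ** 2) ** 0.5
               for (ox, oy), (cx, cy) in zip(original, copy)) / len(original)

# Pass the seeded dance down a chain of imitators: new variants accumulate.
dance = SQUARE
for generation in range(1, 6):
    dance = observe_and_imitate(dance)
    print(f"generation {generation}: drift from seed = {drift(SQUARE, dance):.3f}")
```

Each run produces a different lineage of variant dances, which is exactly the raw material that cultural evolution needs.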
What are some signs you see that imitation can lead to the emergence of culture?
We see heredity, because robot behaviors have parent and grandparent behaviors. You also have selection. Suppose your memory holds, say, 10 dances, and five of them are very similar while the other five are all quite different from one another. Even if you choose at random with equal probability, you are more likely to pick one of the five similar dances, simply because that variant dominates the memory. So you see the emergence of simple traditions, if you like: a new dance emerges and becomes dominant in the collective memories of all of the robots. That really is evidence of the emergence of artificial traditions—i.e., culture. It’s a demonstration that these very simple robots can model something of profound importance.
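His 10-dance example is easy to check numerically. In this toy model of ours, dances are reduced to bare variant labels: five near-identical copies of one variant share a memory with five distinct variants, and a uniformly random choice still favors the dominant variant.

```python
import random
from collections import Counter

# One robot's memory: five near-identical copies of variant "A" plus five
# distinct variants. The labels are an illustrative simplification.
memory = ["A", "A", "A", "A", "A", "B", "C", "D", "E", "F"]

# Choose uniformly at random many times and tally what gets picked.
picks = Counter(random.choice(memory) for _ in range(10_000))
for variant, count in picks.most_common():
    print(f"{variant}: chosen {count / 10_000:.1%} of the time")
```

Variant "A" is picked roughly half the time, while each rival is picked only about a tenth of the time. Iterate that perform-observe-remember loop across a group of robots and the dominant variant spreads through every memory, which is the simple tradition Winfield describes.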
What else did you find experimenting with copybots?
We found that the memes that emerge over time, the dances, evolve to be easier to transmit. They evolve to match the physiology, the sensorium, of the robots. I believe we’re the first to model cultural evolution with real physical robots.
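One plausible way to picture that result is a transmission-bias toy model; this is our hypothetical mechanism, not the lab’s analysis. Suppose some moves suit the robots’ sensors and some don’t, and hard-to-see moves are occasionally mis-read as easy ones. Dances then drift toward the robots’ sensorium over generations:

```python
import random

EASY, HARD = "E", "H"  # moves the sensors resolve well vs. poorly (assumed)

def imitate(dance, misread_rate=0.2):
    """Copy a dance; hard moves are sometimes mis-read as easy ones."""
    return [EASY if m == HARD and random.random() < misread_rate else m
            for m in dance]

# Start from a dance with a random mix of easy and hard moves.
dance = [random.choice([EASY, HARD]) for _ in range(20)]
for _ in range(30):  # thirty rounds of imitation
    dance = imitate(dance)
print(f"sensor-friendly moves after 30 generations: {dance.count(EASY) / len(dance):.0%}")
```

The share of sensor-friendly moves climbs toward 100 percent: variants that are easier to transmit are the ones that persist, the same trend the copybot experiments revealed.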
The storytelling robots you’re working with take imitation and cultural communication to the next level. Can you tell us about that?
It was only recently, in the last couple of years, that Sue Blackmore and I realized that we could extend the story of artificial culture, the work of the copybots, to the storybots, where the storybots would be literally telling each other stories. That’s the next step. We are very excited by that. We would have had some results by now if it were not for the pandemic, which closed the lab for the best part of a year or more. The storybots build on another thread of work that I’ve been doing for around five or six years: robots that carry a simulation of themselves inside themselves. It’s technically difficult to do, especially if you want to run the robots in real time and update their behavior on the basis of what the robot imagines.
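The flavor of that internal-model work can be sketched in a few lines. The sketch below is our simplification, with a one-dimensional world and made-up scores, not Winfield’s implementation: the robot runs each candidate action through a copy of its own world model and acts on the best imagined outcome.

```python
def world_model(position, action):
    """The robot's internal simulation of itself: predict the next state."""
    return position + {"left": -1, "stay": 0, "right": 1}[action]

def outcome_score(position, hazard_at=2):
    """Score an imagined state: being at the hazard is heavily penalized,
    otherwise prefer staying near home at 0. Numbers are illustrative."""
    return -10 if position == hazard_at else -abs(position)

def choose_action(position, actions=("left", "stay", "right")):
    # "Imagine" every candidate action before committing to any of them.
    return max(actions, key=lambda a: outcome_score(world_model(position, a)))

print(choose_action(1))  # "left": imagining "right" predicts hitting the hazard
```

The hard part Winfield alludes to is doing this in real time with a physics-grade simulation running onboard, rather than a toy lookup like this one.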
How does robot imagination relate to storytelling?
It is in a sense still the imitation of behavior, but through a much more sophisticated mechanism: you tell me a story and I then repeat it, but only after I’ve re-imagined it and reinterpreted it as if it were my own. That’s exactly what happens with storytelling, particularly oral storytelling. If you tell your daughter a story and she tells it back to you, it’s probably going to change; it’s probably going to be a slightly different story. The listener robot will be hearing a speech sequence from another robot with its microphones and then re-imagining that in its own, inbuilt functional imagination. But because oral transmission is noisy, we are probably going to get the same drift that happens in a game of telephone. Language is an extraordinarily powerful medium of cultural transmission. Being able to model that would really take us a huge step forward.
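A game of telephone is easy to simulate. In this sketch of ours, a story is reduced to a word list with a flat vocabulary and a made-up mishearing rate; each storybot retells what it heard with occasional substitutions, and the story drifts down the chain:

```python
import random

# Illustrative vocabulary; real storybots would work with speech, not tokens.
VOCAB = ["robot", "dance", "forest", "storm", "friend", "river", "machine"]

def retell(story, mishear_rate=0.1):
    """One robot re-imagines a heard story, occasionally swapping a word."""
    return [random.choice(VOCAB) if random.random() < mishear_rate else w
            for w in story]

story = ["robot", "dance", "friend", "river"]
print("original: ", " ".join(story))
for teller in range(1, 6):  # pass the story down a chain of five storybots
    story = retell(story)
    print(f"storybot {teller}:", " ".join(story))
```

Each retelling preserves most words but not all, so the story that reaches the end of the chain is recognizably descended from the original yet different from it.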
Do you one day want to see humanoid robots having their own culture?
This is purely a science project. I’m not particularly interested in literally making robots that have a culture. This is simply modeling interesting questions about the emergence of culture in animals and humans. I don’t deny that, at some future time, robots might have some emergent culture. You could imagine some future generation of robot anthropologists studying this, trying to make sense of it.
What makes robots such a useful tool in understanding ourselves?
Robots have physical bodies like we do. Robots see the world from their own first-person perspective. And their perception of the world that they find themselves in is flawed, imperfect. So there are a sufficient number of similarities that the model, in a sense, is plausible—providing, of course, you don’t ask questions that are way beyond the capabilities of the robots. Designing experiments, and coming up with research hypotheses that can be reasonably tested, given the limitations of robots, is part of the fun of this work.
Do you think robots can be built with consciousness, or is it something unique to biological beings like us?
Although it’s deeply mysterious and puzzling, I don’t think there’s anything magical about consciousness. I certainly don’t agree with those who think there is some unique stuff required for consciousness to emerge. I’m a materialist. We humans are made of physical stuff and we apparently are conscious, and so are many animals. That’s why I think we should be able to make artificially conscious machines. I’d like to think that the work we’re doing on simulation-based internal models in robots and in artificial theory of mind is a step in the direction of machine consciousness.
Are you worried that we might stumble into creating robots that can suffer, that can feel their own wants and desires are being ignored or thwarted?
I do have those worries. In fact, a German philosopher friend of mine, Thomas Metzinger, has argued that, as responsible roboticists, we should worry. One of the arguments that Thomas makes is that the AI might be suffering without you actually being aware that it’s suffering at all.
AI are moral subjects, but only in the limited sense in which animals are: I don’t believe that animals should suffer. Animal cruelty is definitely something that we should absolutely stop and avoid. For the same reason, if and when we build more sophisticated machines, I think it’s appropriate not to be cruel to them, either.
Do you think that robots and AI will be key in understanding consciousness?
I think they will. You may know Richard Feynman’s line, “What I cannot create, I do not understand.” I’m very committed to what’s called the synthetic method, essentially doing science by modeling things. A robot is a fantastic microscope for studying questions in the life sciences.
How would we know that we had built a conscious machine?
I remember asking an old friend of mine, Owen Holland, who I think held one of the first grants ever in the UK, if not the world, to investigate machine consciousness, “Well, how will you know if you’ve built it?” His answer was, “Well, we don’t, but we might learn something interesting on the way.” That’s always true.
Given the role that robots and automation already play in our society, there is no doubt that the future holds more roles for robots. Presumably, this will result in a more efficient society. But what does that mean? And what roles will robots play that humans cannot?
These questions and more remain open. Stay tuned!