As you may guess from the domain of this site, software souls, one of my principal interests in robotics is creating behaviors: making robots act as if they were alive. And that is where software comes in. But I separate two aspects, because I think that creating a behavior purely by coding is too complex, even for seasoned developers, and, worse, too rigid and difficult to change:
– The software used to create, define, or configure the behavior (in the same way animations are created with dedicated software, and even hardware, tools)
– The software embedded in the robot that controls it by “executing” the behavior
In addition, if we can create a really easy-to-use tool to define and configure behaviors, then not only developers could create behaviors, but anyone who can use a computer.
This post focuses on the software embedded in the robot and the two inputs it will receive from the behavior creation tool: the behavior configuration and the task specification. The next diagram tries to show the main parts:
The embedded robot software has four main components and two interfaces:
– A configurable behavior event generator that raises, internally or externally triggered, needs, wishes, and other action-driving “behavior events”
– A sensor perception component that receives and processes information from the robot's environment
– A motion component that controls the robot's motion
– A coordination component that organizes all this information to generate the movements that show the robot's behavior
The two interfaces decouple the event generator from perception and motion.
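To make the idea more concrete, the four components and the two decoupling interfaces could be sketched roughly as below. This is only an illustration under my own assumptions: every class, method, and configuration key here is invented for the example, none of them comes from an actual implementation.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of the four components and two interfaces.
# All names (BehaviorEvent, Coordinator, "drives", ...) are assumptions.

@dataclass
class BehaviorEvent:
    """An internally or externally triggered need, wish, or drive."""
    name: str
    intensity: float

class PerceptionInterface:
    """Interface that decouples the event generator from perception."""
    def percepts(self) -> List[str]:
        raise NotImplementedError

class MotionInterface:
    """Interface that decouples the event generator from motion."""
    def move(self, action: str) -> None:
        raise NotImplementedError

class EventGenerator:
    """Raises behavior events according to its configuration."""
    def __init__(self, config: dict):
        self.config = config

    def next_events(self) -> List[BehaviorEvent]:
        # Here the "drives" entry of the configuration simply maps
        # event names to intensities; a real generator would be richer.
        return [BehaviorEvent(name, intensity)
                for name, intensity in self.config.get("drives", {}).items()]

class Coordinator:
    """Combines events and percepts, then commands motion."""
    def __init__(self, perception: PerceptionInterface, motion: MotionInterface):
        self.perception = perception
        self.motion = motion

    def step(self, events: List[BehaviorEvent]) -> None:
        percepts = self.perception.percepts()
        # Handle the most intense events first.
        for event in sorted(events, key=lambda e: -e.intensity):
            self.motion.move(f"react to {event.name} given {percepts}")
```

The point of the two interface classes is exactly the decoupling mentioned above: the coordinator and event generator never touch concrete sensors or actuators, so perception and motion implementations can be swapped without changing the behavior logic.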
The user could use the default configuration or adjust it to change different aspects of the robot's behavior, perception, or motion. With just this configuration the robot will act stand-alone, driven by the configuration and its environment. If we want it to perform any specific task or tasks, we must supply a DSL-based specification of those tasks, which will be executed but conditioned by the configured behavior.
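As a sketch of what "executed but conditioned by the configured behavior" could mean, here is a minimal line-oriented task DSL invented purely for illustration: each line is a command plus an argument, and the behavior configuration can veto individual steps (here, via a hypothetical "forbidden" list).

```python
# Minimal sketch of a line-oriented task DSL, invented for illustration.
# Each line of the spec is "<command> <argument>"; the behavior
# configuration may veto a step via a hypothetical "forbidden" list.

def run_tasks(spec: str, config: dict) -> list:
    """Execute a DSL task spec, conditioned by the behavior config."""
    executed = []
    forbidden = set(config.get("forbidden", []))
    for line in spec.strip().splitlines():
        command, _, argument = line.strip().partition(" ")
        if command in forbidden:
            continue  # the configured behavior overrides this task step
        executed.append((command, argument))
    return executed

task_spec = """
goto kitchen
grab cup
goto charger
"""

# With "grab" forbidden by the behavior configuration, only the
# two "goto" steps are executed.
steps = run_tasks(task_spec, {"forbidden": ["grab"]})
```

A real DSL would of course need conditions, loops, and richer interaction with the behavior events, but the shape is the same: the task list says what to do, and the configured behavior decides how, and whether, each step actually happens.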