Adaptive Behavior Variations of a Robot using its own Emotions

DOI: 10.17577/IJERTV2IS110913


P.M. Manoj, M.Tech Embedded Systems, Karunya University, Coimbatore, Tamilnadu, India.

K.S. Ganesh Kumar, M.Tech VLSI Design, Karunya University, Coimbatore, Tamilnadu, India.

Abstract

Artificial emotion is the basic prerequisite for intelligent navigation in our robot. Five basic emotions, Fear, Anger, Disgust, Happiness and Sadness, are used as the key tools for modifying the robot's navigation. These emotions are generated from the robot's sensor inputs. Fear arises when the robot faces obstacles in its path; the robot limits its speed in such crowded environments and produces an alternative path plan to avoid collisions. Anger arises when the robot faces a large number of obstacles; the robot limits its speed and tries to push the obstacles away. Disgust helps the robot find alternate paths rather than using the same path for repeated travels. The robot experiences sadness when it reaches a goal after a long time due to the obstacles in its path, so it finds an alternate path on its next travel; this in turn produces happiness, which improves the robot's speed. All of these emotions are thus used to improve the robot's performance in terms of speed and a reduced number of collisions.

Keywords: Artificial emotions, goals, adaptive performance.

  1. Introduction

    Emotions and reason have historically been regarded as independent processes competing for control of the brain. This view largely stems from philosophies such as Cartesian dualism that have influenced Western thinking for centuries [1]. Such philosophies tend to view many emotions in a negative light, regarding them as base instincts that the rational mind should strive to overcome. Recent years have seen increasing acceptance of philosophies that challenge the strong division between mind and body [2], [13]. Psychological and neurobiological evidence links emotions to functions that were once considered purely cognitive, such as problem solving, learning, memory and perception [3]. This has led some authors to conclude that emotions are a prerequisite for intelligence [1]-[5], [12]. Other researchers argue that, even if emotions are not necessary for intelligence, they can be beneficial to adaptive behavior, so the implementation of artificial emotions in robots is worth consideration [14]. As we define them, artificial emotions are not real emotions subjectively experienced by robots, nor are they superficial external responses intended to mimic human emotions. Rather, they are software mechanisms, inspired by theories of biological emotions, that enable a robot to respond appropriately to certain situations that arise during its interactions with a dynamic environment. While artificial emotions can be applied to models that facilitate human-machine interaction [6], [9], we largely focus on the effects of emotions on general performance. In particular, artificial emotions can motivate a robot to reprioritize its goals, modulate its behavior parameters, and provide learning rewards. These interactions should improve a robot's ability to adapt to conditions that exceed its original design constraints [2]. First, we outline an emotion-based hybrid reactive/deliberative robotic architecture that supports these functions. Next, we describe its implementation as a mobile robot navigation system [7], [8], [16]. Finally, we present the results of a series of experiments that quantitatively compare its performance with an emotionless counterpart.

  2. Biological Emotions

    Architectures for autonomous agents and robots bear little resemblance to the cognitive architectures of biological brains. Therefore, it is generally not advantageous to attempt to model the full range of biological emotions, or for artificial emotions to have as broad an influence over cognitive processes as their real-world counterparts. Nevertheless, certain insights can be gleaned from studies of biological emotions that may improve the performance of computational models.

    In the biological world, emotions are linked to both learned and innate processes. For example, humans and many animals have evolved an innate predisposition to fear certain objects such as snakes [2]. While learning is required to associate a particular object with the fear response, organisms that can more readily learn to fear relevant environmental threats possess a survival advantage and are therefore more likely to reproduce.

    Emotions interact with cognition at multiple architectural levels. This is confirmed by the existence of two distinct fear circuits in the brain. One passes fast reactive signals from the thalamus to the amygdala, allowing a human or animal to respond quickly to immediate threats. The other consists of slower, more refined signals received by the amygdala from the cortex, enabling an organism to act appropriately to facilitate long-term survival.

  3. Robotic Emotions

    Research into robotic emotions can be divided into two domains [2]:

    1. Social interaction: emotions can enable robots to behave in a socially appropriate manner when interacting with humans.

    2. Adaptation: emotions are adaptive behaviours that can potentially improve a robot's general performance.

    Setting aside the functional benefits of emotions to robots, humans are accustomed to emotional interaction. Thus, there is significant demand for robots that can portray emotions when interacting with humans or respond appropriately to detected human emotions. Although some social robots utilize biologically inspired models of emotion, many researchers focus more on the appearance of robotic emotions or on the subjective evaluations of the humans who interact with the robots. While some authors have argued that artificial emotions can serve a useful role in robotics outside the social domain, few completed implementations have been demonstrated. The majority of robots that utilize emotions as adaptive behaviours are nevertheless applied in a social context, interacting with humans or other robots. Artificial emotions are commonly represented as discrete states that drive a robot's actions [3]. The underlying control architecture is often purely reactive [3], so the interactions between emotions and deliberative planning have received little attention. Few authors attempt to model the influence of emotions on learning [3], or the ability to learn to experience emotions appropriate to the situation. Quantitative results are scarce [2], and those that exist tend to analyze the performance of an entire system; individual mechanisms or emotions are generally not decoupled. It is therefore likely that many reported performance differences are due to the actions of a small subset of the emotions or mechanisms implemented, with the others serving no useful purpose. We represent emotions not as discrete logic states but as continuous modulations of the robot's decisions and actions. These modulations are applied to a hybrid architecture that incorporates reactive control, deliberative planning and exploration capabilities. We utilize emotions to improve the adaptive capabilities of a single robot for a nonsocial task: navigation within unknown indoor environments.

  4. Architectural Description

    Biological emotions contain both innate and learned components, and they interact with multiple levels of cognitive processes. We model these varied interactions within a hybrid reactive/deliberative architecture. Higher levels of the architecture have a supervisory role, providing loose goals that can be obeyed or ignored by lower-level processes as the situation dictates. Emotions interact with all levels of the hierarchy. Reactive emotions are fast, hardcoded stimulus/response patterns tightly coupled to perception and action systems. Deliberative emotions are slow, learned associations that affect decisions made by high-level planning systems [2].

    Not all decisions and actions in the biological world are influenced by emotions. Furthermore, cognitive functions that can benefit from emotions do not necessarily require emotions to perform at a basic level. Similarly, our architecture is not driven solely by emotions. Each of its major functions can operate in the absence of an emotional influence. In our model, artificial emotions are second-order processes that bias decisions and actions to better suit the context of a situation.

    Five basic emotions are represented in our architecture as distinct stimulus/response patterns (Table 1). They are not intended to be exact facsimiles of human emotions with the same names. Rather, they simply approximate some of the functions of biological emotions that are useful in the context of robotics.

    Table 1. Stimulus and response for the emotions used

    | Emotion   | Stimulus                                                                      | Response                                                        |
    | Fear      | An obstacle is found in the path                                              | Reduce speed and avoid collisions                               |
    | Anger     | Many obstacles in the path                                                    | Reduce the obstacle radius and find a path to avoid them        |
    | Disgust   | The environment should be explored to improve world knowledge                 | Find the shortest path to reach the target                      |
    | Sadness   | Goal is reached with low satisfaction due to many obstacles/collisions in the path | Use an alternate path during later travels rather than the same path |
    | Happiness | Goal is reached in time                                                       | Provide positive reinforcement to successful behaviours         |

    1. Reactive Emotions

      The intensity value of a reactive emotion depends on the probability that its associated stimulus will occur. Each emotion employs a different appraisal function to estimate this probability from local sensor data. Reactive appraisals are also subject to localized biases from deliberative emotions. The resulting intensity value is damped, allowing an emotion to persist for some time after its stimulus has abated. Our architecture does not represent competing drives as discrete behaviors. Instead, multiple drives are integrated into each control layer, enabling the robot to favor one response without completely disregarding another. Reactive emotions are expressed as control parameter modulations. These modulations smoothly change the robot's bias toward certain drives without explicitly controlling its behaviors.
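      To make this concrete, here is a minimal Python sketch of a damped reactive appraisal, assuming a simple first-order decay; the class name, damping constant, and the example fear appraisal (an exponential falloff with obstacle distance) are our own illustrative assumptions, not the paper's implementation.

```python
import math

class ReactiveEmotion:
    """Minimal sketch (assumed form): intensity tracks the estimated
    probability of the emotion's stimulus, with damping so the emotion
    persists briefly after the stimulus abates."""

    def __init__(self, appraisal, damping=0.85):
        self.appraisal = appraisal   # maps sensor data -> stimulus probability in [0, 1]
        self.damping = damping       # closer to 1.0 means slower decay
        self.intensity = 0.0

    def update(self, sensor_data, deliberative_bias=0.0):
        # Raw appraisal plus any localized bias from deliberative emotions,
        # clamped to the unit interval.
        raw = min(1.0, max(0.0, self.appraisal(sensor_data) + deliberative_bias))
        # Damped update: intensity rises immediately but decays gradually.
        self.intensity = max(raw, self.damping * self.intensity)
        return self.intensity

# Example appraisal for fear: collision probability rises as the nearest
# obstacle gets closer (the 0.5 m scale is an assumed constant).
fear = ReactiveEmotion(lambda d_nearest: math.exp(-d_nearest / 0.5))
```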

    2. Deliberative Emotions

      Deliberative emotions are also derived from stimulus probability estimates. However, deliberative appraisal functions utilize local features and global representations to associate emotional intensities with specific objects in the environment. The deliberative emotions associated with an object bias the robot's plans regarding that object. Like control, planning is generally a continuous process, and multiple goals can be pursued simultaneously. Certain decisions are binary, however; biasing such a decision means increasing or decreasing the probability that the robot will perform a particular action.
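      A minimal sketch of what biasing such a binary decision could look like, assuming the bias is a signed shift applied to the action's base probability; the function and the numeric values are hypothetical.

```python
import random

def biased_decision(base_probability, emotion_bias):
    """Assumed form: a binary planning decision whose probability is
    shifted by a signed deliberative-emotion bias. Positive bias makes
    the action more likely, negative bias less likely."""
    p = min(1.0, max(0.0, base_probability + emotion_bias))
    return random.random() < p

# e.g. whether to replan through a doorway the robot "fears";
# the -0.4 fear bias is illustrative.
take_doorway = biased_decision(base_probability=0.7, emotion_bias=-0.4)
```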

      A task-specific implementation of our architecture is shown in Figure 1.

      Figure 1. Basic architecture.

  5. Emotion-Modulated Reactive Control

    The reactive controller employs a two-stage optimization to loosely follow planned paths while avoiding obstacles. First, a heading angle is selected for the robot, using an obstacle avoidance approach similar to the vector field histogram algorithm [2]. This involves solving the following optimization problem over a discrete list of candidate directions θc:

    θ* = arg max_{θc} W1 [ a1(θc), d1(θc), i1(θc) ]ᵀ

    The angular error function a1(θc) favors directions that are closer to the goal direction. The obstacle avoidance function d1(θc) prefers directions with more distant obstacles. The angular inertia function i1(θc) gives preference to smaller changes in direction, preventing the robot from oscillating between multiple directions that are otherwise equally favorable. The directional weight vector W1 is a row vector comprising three unit-interval elements that control the relative strength of each competing objective. The robot is roughly circular, so for simplicity it is represented as a point object, and each obstacle is enlarged by a radius r0. The obstacle aversion drive thus becomes increasingly dominant as r0 increases.
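    The sketch below shows how this heading selection could be coded, assuming plausible unit-interval forms for a1, d1 and i1 (angular terms normalized by π); the paper's exact objective functions may differ.

```python
import numpy as np

def select_heading(candidates, goal_dir, prev_dir, clearance, W1):
    # theta* = argmax_c W1 . [a1(c), d1(c), i1(c)]^T over discrete candidates.
    def ang_diff(a, b):
        return abs((a - b + np.pi) % (2 * np.pi) - np.pi)

    scores = []
    for c in candidates:
        a1 = 1.0 - ang_diff(c, goal_dir) / np.pi   # angular error: favor the goal direction
        d1 = clearance(c)                          # obstacle avoidance: normalized clearance along c
        i1 = 1.0 - ang_diff(c, prev_dir) / np.pi   # angular inertia: favor small heading changes
        scores.append(float(W1 @ np.array([a1, d1, i1])))
    return candidates[int(np.argmax(scores))]

# Illustrative call: 72 candidate directions, weights biased toward goal seeking.
candidates = np.linspace(-np.pi, np.pi, 72, endpoint=False)
theta_star = select_heading(candidates, goal_dir=0.3, prev_dir=0.1,
                            clearance=lambda c: 1.0, W1=np.array([0.5, 0.3, 0.2]))
```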

    Next, the linear velocity and angular velocity are selected in a way that moves the robot in the intended direction at an appropriate speed [2]. The method employed is similar to velocity-space approaches such as the curvature-velocity and dynamic window algorithms [16].

      1. Reactive Fear

        The robot's maximum speed can be modulated in a context-dependent manner. When the robot detects no nearby obstacles in an open area, its reactive fear intensity is close to zero. The fear increases when it travels through a narrow passage or near obstacles; as a result, the robot's velocity is decreased, reducing the probability of collision.
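        A sketch of this modulation, assuming a linear scaling of the speed limit with fear intensity and an assumed minimum creep speed:

```python
def fear_modulated_speed(v_max, fear_intensity, v_min=0.05):
    """Assumed linear form of reactive fear's response: scale the speed
    limit down as fear rises, never below a small creep speed."""
    return max(v_min, v_max * (1.0 - fear_intensity))

# Open area: fear ~ 0 gives full speed; narrow passage: fear ~ 0.8 slows the robot.
print(fear_modulated_speed(0.5, 0.0))   # 0.5 m/s
print(fear_modulated_speed(0.5, 0.8))   # 0.1 m/s
```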

      2. Reactive Anger

    Reactive anger intensity increases when the robot's progress toward the goal is likely to be obstructed by frequent obstacles. Detection of an obstructed state involves summing the robot's velocity vectors over a time window t. The obstacle radius decreases linearly, down to a certain limit, as the reactive anger intensity increases.

    With the obstacle radius modulated by reactive anger, the robot initially avoids contact with the obstacles, but the resulting repetitive motion is soon recognized as an obstructed state. This causes the reactive anger intensity to increase quickly, reducing the obstacle radius until the robot can push the obstacles out of the way. Once free of the obstacles, the reactive anger value rapidly decays, restoring the robot's obstacle aversion to a safer level and allowing it to navigate normally [2].
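    The following sketch shows one plausible form of the obstructed-state test and the anger-modulated obstacle radius; the window length, nominal-speed normalization, and linear radius law are assumptions consistent with the description above, not the paper's exact formulation.

```python
import numpy as np

def obstruction_level(velocity_history, v_nominal, window=50):
    """Assumed obstructed-state test: sum the recent velocity vectors; if
    the net motion is small relative to what the nominal speed would cover,
    progress is being obstructed. Returns 0 (free) to 1 (fully obstructed)."""
    recent = np.array(velocity_history[-window:])
    if len(recent) == 0:
        return 0.0
    net = np.linalg.norm(recent.sum(axis=0))   # magnitude of summed velocity vectors
    expected = v_nominal * len(recent)
    return 1.0 - min(1.0, net / expected)

def anger_modulated_radius(r0, anger_intensity, r_min=0.0):
    """The obstacle enlargement radius shrinks linearly with anger, letting
    the robot approach (and push) obstacles it would otherwise avoid."""
    return max(r_min, r0 * (1.0 - anger_intensity))
```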

  6. Emotion-Modulated Path Planning

    Path planning is performed using the A* heuristic search, a best-first graph search method that prioritizes nodes by the estimated quality of their associated paths. In standard A* path planning, nodes are assigned either an occupied or unoccupied status, and all unoccupied nodes are equally weighted. If no unobstructed paths exist, our planning algorithm fails gracefully by choosing the best of the unfavorable options available. The reactive controller generally prevents the robot from colliding with obstacles, even if it is instructed to pass through them, and it can sometimes reactively navigate through nodes that the planner regards as occupied. Replanning is triggered when the robot's reactive fear and anger intensities exceed predefined thresholds [3]. Another variable that can trigger replanning is the robot's pseudoreactive disgust intensity, obtained from the mean sensor-map mismatch.
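    A compact sketch of such an emotion-modulated A* planner follows; the emotion_bias interface (a signed per-node cost shift) is our assumed formulation of the biases described in this and the following subsections, not the paper's exact API.

```python
import heapq
import itertools

def astar(start, goal, neighbors, base_cost, emotion_bias, heuristic):
    """Standard A* where each node's traversal cost is shifted by a signed
    emotional bias: fear/disgust/sadness raise a node's cost, happiness
    lowers it. All function arguments are assumed interfaces."""
    counter = itertools.count()   # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start, goal), next(counter), 0.0, start, None)]
    came_from = {}
    g_score = {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue              # already settled via a better path
        came_from[node] = parent
        if node == goal:
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):
            # Emotional modulation, clamped so edge costs stay non-negative
            # (a requirement for A* optimality).
            step = max(0.0, base_cost(node, nxt) + emotion_bias(nxt))
            g2 = g + step
            if g2 < g_score.get(nxt, float("inf")):
                g_score[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt, goal),
                                          next(counter), g2, nxt, node))
    return None   # no unobstructed path: caller picks the best unfavorable option
```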

    1. Deliberative Fear and Anger

      If a collision sensor is triggered, deliberative fear intensities grow in the nodes within a rectilinear distance of the node where the collision occurred; otherwise, they decay at a fixed rate. Deliberative fear thus increases in the nodes surrounding the point of collision. This strongly biases the planner against those nodes, causing the robot to immediately plan a path elsewhere. As a result, the robot sustains fewer collisions each time it encounters a dynamic obstacle, and it completes the task more quickly.
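      A sketch of this fear-map update, assuming a grid map keyed by (x, y) nodes and illustrative growth, decay, and radius constants:

```python
def update_fear_map(fear, collision_node, radius=3, growth=0.5, decay=0.98):
    """Assumed form of the deliberative fear update: on a collision, fear
    grows in all nodes within a rectilinear (Manhattan) distance of the
    collision node; otherwise every node's fear decays."""
    for node in fear:
        if collision_node is not None:
            dist = abs(node[0] - collision_node[0]) + abs(node[1] - collision_node[1])
            if dist <= radius:
                # Stronger growth closer to the collision point.
                fear[node] = min(1.0, fear[node] + growth * (1 - dist / (radius + 1)))
                continue
        fear[node] *= decay
    return fear

# fear maps grid nodes (x, y) -> intensity; a collision at, say, (4, 2)
# raises fear nearby, biasing those nodes' planning cost upward.
```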

      Fear is clearly advantageous in the example presented. However, there are situations in which it can become a hindrance. For example, if the robot incurs a collision in a doorway that is the only exit from a room, it will become afraid of the doorway, preventing it from leaving the room. This problem is addressed by utilizing deliberative anger to suppress emotions such as fear when they obstruct the robot's progress toward a goal. Overall, these results demonstrate that deliberative fear can improve the robot's performance in certain situations and that deliberative anger can counteract some of its adverse effects [2].

    2. Deliberative Disgust

      The robot's task is to explore the environment, dynamically updating its internal map as it navigates to a specified location and then returns. When deliberative disgust is disabled, the robot travels back and forth along the same path. However, if deliberative disgust is activated, a negative bias is applied to nodes that the robot has already explored, increasing the probability that the robot will instead plan a path through unexplored nodes. It therefore covers a large portion of the map before arriving at its goal. If performance is judged in terms of the quantity of world knowledge obtained, disgust greatly improves the robot's performance.
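      In terms of the A* sketch above, deliberative disgust could be realized as a positive cost bias on already-visited nodes; the weight constant below is an assumed tuning parameter.

```python
def disgust_bias(visit_counts, weight=0.3):
    """Returns an emotion_bias function for the A* sketch: nodes the robot
    has already traversed cost more, pushing plans into unexplored
    territory. Assumed form, not the paper's exact bias."""
    def bias(node):
        return weight * visit_counts.get(node, 0)
    return bias

# usage: path = astar(start, goal, neighbors, base_cost,
#                     disgust_bias(visit_counts), heuristic)
```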

    3. Happiness and Sadness

      For simplicity, sadness is modeled as the negative state of happiness. Upon completion (or timeout) of a navigation instruction, the instruction is assigned a success rating. The success rating is currently dependent on the normalized ratio of path time to path distance. The utility of happiness/sadness as a learning mechanism is demonstrated in a known static environment where the robot is instructed to travel repeatedly between two points. After one passage through a narrow corridor, the robot determines that the time taken was unacceptably long for the distance covered. The happiness/sadness values of the nodes traversed during the instruction are therefore largely negative (indicating sadness), strongly biasing the planner against those nodes during future planning [2]. Thus, if performance is judged in terms of task completion speed, happiness and sadness do improve the robot's performance.
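      A sketch of the success rating and the resulting per-node reinforcement; the paper states only that the rating depends on the normalized ratio of path time to path distance, so the comparison against a nominal speed and the blending rate below are hypothetical.

```python
def success_rating(path_time, path_distance, v_nominal):
    """Assumed rating: compare achieved average speed against the robot's
    nominal speed. Returns a value in [-1, 1]; negative indicates sadness,
    positive indicates happiness."""
    if path_time <= 0 or path_distance <= 0:
        return -1.0   # timeout or failure
    achieved = path_distance / path_time
    return max(-1.0, min(1.0, 2.0 * achieved / v_nominal - 1.0))

def reinforce_path(happiness, path_nodes, rating, rate=0.5):
    """Blend each traversed node's happiness/sadness value toward the
    instruction's rating; negative values raise future planning cost."""
    for node in path_nodes:
        happiness[node] = (1 - rate) * happiness.get(node, 0.0) + rate * rating
    return happiness
```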

  7. Implementation

    An octagonal robot with IR sensors placed on all of its sides is used to implement the concept. A PIC microcontroller runs the control program.

    Figure 2. Implementation

  8. Conclusion

When all of these emotions are combined, the robot's overall performance increases significantly compared to a robot without them. The collision count decreases considerably due to the fear and anger emotions, while exploration coverage and mean velocity increase significantly due to disgust, happiness and sadness. The probability of the robot reaching its destination also increases significantly.

These emotions can also be used in remotely operated robots employed for military purposes such as bomb disposal, and in other robots that navigate autonomously. In the future, the approach could also be applied to smart cars as part of a driverless system.

References

[1] C. P. Lee-Johnson and D. A. Carnegie, "Mobile Robot Navigation Modulated by Artificial Emotions," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 2, pp. 469-480, April 2010.

[2] J. Broekens, "Emotion and Reinforcement: Affective Facial Expressions Facilitate Robot Learning," in Human Computing, LNAI 4451, pp. 113-132, 2007.

[3] J.-M. Fellous, "From Human Emotions to Robot Emotions," in AAAI Spring Symposium, Stanford University, March 2004.

[4] D. A. Carnegie, A. Prakash, C. Chitty and B. Guy, "A Human-Like Semi-Autonomous Mobile Security Robot," in 2nd International Conference on Autonomous Robots and Agents, December 2004.

[5] M. Neal and J. Timmis, "Timidity: A Useful Emotional Mechanism for Robot Control," Informatica, vol. 27, no. 2, pp. 197-204, 2003.

[6] T. Fong, I. Nourbakhsh and K. Dautenhahn, "A Survey of Socially Interactive Robots," Robotics and Autonomous Systems, pp. 143-166, 2003.

[7] S. Thrun, "Robotic Mapping: A Survey," Carnegie Mellon University, February 2002.

[8] R. R. Murphy, C. L. Lisetti, R. Tardif, L. Irish and A. Gage, "Emotion-Based Control of Cooperating Heterogeneous Mobile Robots," IEEE Transactions on Robotics and Automation, vol. 18, no. 5, pp. 744-757, October 2002.

[9] F. Michaud, J. Audet, D. Letourneau, L. Lussier, C. Theberge-Turmel and S. Caron, "Experiences with an Autonomous Robot Attending the AAAI Conference," IEEE Intelligent Systems, vol. 16, no. 5, pp. 23-29, Sep./Oct. 2001.

[10] R. Plutchik, "The Nature of Emotions," American Scientist, vol. 89, pp. 344-350, 2001.

[11] J. E. LeDoux, "Emotion Circuits in the Brain," Annual Review of Neuroscience, vol. 23, pp. 155-184, 2000.

[12] J. D. Velasquez, "An Emotion-Based Approach to Robotics," in IROS, pp. 235-240, 1999.

[13] P. Ekman, "Basic Emotions," in Handbook of Cognition and Emotion, Sussex, U.K.: John Wiley & Sons Ltd., 1999.

[14] J. D. Velasquez and P. Maes, "Cathexis: A Computational Model of Emotions," in 1st International Conference on Autonomous Agents, pp. 518-519, 1997.

[15] S. C. Gadanho and J. Hallam, "Robot Learning Driven by Emotions," N. L. Stein and K. Oatley, Eds. Hove, U.K.: Lawrence Erlbaum, pp. 169-200, 1992.

[16] R. Simmons, "The Curvature-Velocity Method for Local Obstacle Avoidance," Carnegie Mellon University, 1995.
