[1] M. Rolínek, D. Zietlow, and G. Martius. Variational autoencoders pursue PCA directions (by accident). In Conference on Computer Vision and Pattern Recognition (CVPR'19), pages 12406-12415, June 2019. [ bib ]
[2] S.-C. Lin, G. Martius, and M. Oettel. Analytical classical density functionals from an equation learning network, 2019. arXiv preprint https://arxiv.org/abs/1910.12752. [ bib ]
[3] S. Blaes, M. Vlastelica, J.-J. Zhu, and G. Martius. Control What You Can: Intrinsically motivated task-planning agent. In Advances in Neural Information Processing Systems 32 (NeurIPS'19). Curran Associates, Inc., 2019. [ bib ]
[4] S. Bogomolov, G. Frehse, A. Gurung, D. Li, G. Martius, and R. Ray. Falsification of hybrid systems using symbolic reachability and trajectory splicing. In 22nd ACM Intl. Conf. on Hybrid Systems: Computation and Control (HSCC 2019), pages 1-10, 2019. [ bib ]
[5] H. Sun and G. Martius. Machine learning for haptics: Inferring multi-contact stimulation from sparse sensor configuration. Frontiers in Neurorobotics, 13:51, 2019. [ bib | DOI | http ]
Robust haptic sensation systems are essential for building dexterous robots. Currently, we have solutions for small surface areas, such as fingers, but affordable and robust techniques for covering large areas of an arbitrary 3D surface are still missing. Here, we introduce a general machine learning framework to infer multi-contact haptic forces on a 3D robot's limb surface from internal deformation measured by only a few physical sensors. The general idea of this framework is to first predict the whole surface deformation pattern from the sparsely placed sensors and then to infer the number, locations, and force magnitudes of unknown contact points. Using a modified limb of the Poppy robot as an example, we show how this can be done via transfer learning even when training data are available only for single-contact points. With only 10 strain-gauge sensors we obtain high accuracy even for multiple contact points. The method can be applied to arbitrarily shaped surfaces and physical sensor types, as long as training data can be obtained.

[6] D. Baumann, J.-J. Zhu, G. Martius, and S. Trimpe. Deep reinforcement learning for event-triggered control. In 2018 IEEE Conference on Decision and Control (CDC), pages 943-950, 2018. [ bib | DOI | http ]
Keywords: control engineering computing; learning (artificial intelligence); multi-agent systems; networked control systems; nonlinear control systems; multiple control tasks; model-based ETC designs; event-triggered control methods; high-performance control; time-triggered methods; mathematical models; deep reinforcement learning; heuristic algorithms; aerospace electronics; sensors; task analysis; numerical models
[7] M. Rolínek and G. Martius. L4: Practical loss-based stepsize adaptation for deep learning. In Advances in Neural Information Processing Systems 31 (NeurIPS'18), pages 6434-6444. Curran Associates, Inc., 2018. [ bib | Supplement | .pdf ]
[8] H. Sun and G. Martius. Robust affordable 3D haptic sensation via learning deformation patterns. In Proceedings International Conference on Humanoid Robots (IEEE Humanoids), pages 846-853, New York, NY, USA, 2018. IEEE. Oral Presentation. [ bib | http ]
[9] C. Pinneri and G. Martius. Systematic self-exploration of behaviors for robots in a dynamical systems framework. In Proc. Artificial Life XI, pages 319-326. MIT Press, Cambridge, MA, 2018. [ bib | DOI | http ]
One of the challenges of this century is to understand the neural mechanisms behind cognitive control and learning. Recent investigations propose biologically plausible synaptic mechanisms for self-organizing controllers, in the spirit of Hebbian learning. In particular, differential extrinsic plasticity (DEP) has proven to enable embodied agents to self-organize their individual sensorimotor development, and generate highly coordinated behaviors during their interaction with the environment. These behaviors are attractors of a dynamical system. In this paper, we use the DEP rule to generate attractors and we combine it with a “repelling potential” which allows the system to actively explore all its attractor behaviors in a systematic way. With a view to a self-determined exploration of goal-free behaviors, our framework enables switching between different motion patterns in an autonomous and sequential fashion. Our algorithm is able to recover all the attractor behaviors in a toy system and it is also effective in two simulated environments. A spherical robot discovers all its major rolling modes and a hexapod robot learns to locomote in 50 different ways in 30 min.

[10] S. S. Sahoo, C. H. Lampert, and G. Martius. Learning equations for extrapolation and control. In J. Dy and A. Krause, editors, Proc. 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, volume 80, pages 4442-4450. PMLR, 2018. [ bib | .html | .pdf ]
We present an approach to identify concise equations from data using a shallow neural network. In contrast to ordinary black-box regression, this approach allows understanding functional relations and generalizing them from observed data to unseen parts of the parameter space. We show how to extend the class of learnable equations for a recently proposed equation learning network to include divisions, and we improve the learning and model selection strategy to be useful for challenging real-world data. For systems governed by analytical expressions, our method can in many cases identify the true underlying equation and extrapolate to unseen domains. We demonstrate its effectiveness by experiments on a cart-pendulum system, where only 2 random rollouts are required to learn the forward dynamics and successfully achieve the swing-up task.

[11] V. Botella-Soler, S. Deny, G. Martius, O. Marre, and G. Tkacik. Nonlinear decoding of a complex movie from the mammalian retina. PLOS Computational Biology, 14(5):1-27, May 2018. [ bib | DOI | http ]
Author summary: Neurons in the retina transform patterns of incoming light into sequences of neural spikes. We recorded from ∼100 neurons in the rat retina while it was stimulated with a complex movie. Using machine learning regression methods, we fit decoders to reconstruct the movie from the retinal output. We demonstrate that the retinal code can only be read out with a low error if decoders make use of correlations between successive spikes emitted by individual neurons. These correlations can be used to ignore spontaneous spiking that would otherwise cause even the best linear decoders to “hallucinate” nonexistent stimuli. This work represents the first high-resolution single-trial full-movie reconstruction and suggests a new paradigm for separating spontaneous from stimulus-driven neural activity.

[12] R. Der and G. Martius. Self-organized behavior generation for musculoskeletal robots. Frontiers in Neurorobotics, 11:8, 2017. arXiv preprint http://arxiv.org/abs/1602.02990. [ bib | DOI | Supplement | http ]
[13] G. Martius and C. H. Lampert. Extrapolation and learning equations, 2016. arXiv preprint https://arxiv.org/abs/1610.02995. [ bib ]
[14] G. Martius, R. Hostettler, A. Knoll, and R. Der. Compliant control for soft robots: emergent behavior of a tendon driven anthropomorphic arm. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 767-773, 2016. [ bib | DOI | Supplement | .pdf ]
Keywords: Control systems;Muscles;Robot kinematics;Robot sensing systems;Springs;Tendons
[15] R. Der. In search for the neural mechanisms of individual development: behavior-driven differential Hebbian learning. Frontiers in Robotics and AI, 2(37), 2016. [ bib | DOI | http ]
[16] R. Der and G. Martius. Dynamical self-consistency leads to behavioral development and emergent social interactions in robots. In 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 49-56. IEEE, Sept 2016. [ bib | DOI | Supplement | .pdf ]
distinguished oral paper award
[17] R. Der and G. Martius. Novel plasticity rule can explain the development of sensorimotor intelligence. Proceedings of the National Academy of Sciences, 112(45):E6224-E6232, 2015. [ bib | DOI | Supplement | http ]
Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive rhythmic behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule. They rather arise from the underlying mechanism of spontaneous symmetry breaking, which is due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution.

[18] G. Martius and E. Olbrich. Quantifying emergent behavior of autonomous robots. Entropy, 17(10):7266, 2015. [ bib | DOI | Supplement | http ]
[19] R. Der. On the role of embodiment for self-organizing robots: behavior as broken symmetry. In M. Prokopenko, editor, Guided Self-Organization: Inception, volume 9 of Emergence, Complexity and Computation, pages 193-221. Springer, 2014. [ bib | Supplement | .pdf ]
[20] G. Martius, R. Der, and J. M. Herrmann. Robot learning by guided self-organization. In M. Prokopenko, editor, Guided Self-Organization: Inception, volume 9 of Emergence, Complexity and Computation, pages 223-260. Springer Berlin Heidelberg, 2014. [ bib | DOI | http ]
[21] G. Martius, L. Jahn, H. Hauser, and V. V. Hafner. Self-exploration of the stumpy robot with predictive information maximization. In A. del Pobil, E. Chinellato, E. Martinez-Martin, J. Hallam, E. Cervera, and A. Morales, editors, Proc. From Animals to Animats, SAB 2014, volume 8575 of LNCS, pages 32-42. Springer, 2014. [ bib | Supplement | .pdf ]
best paper award
Keywords: Self-exploration; intrinsic motivation; robot control; information theory; dynamical systems; learning
[22] R. Der and G. Martius. Behavior as broken symmetry in embodied self-organizing robots. In Advances in Artificial Life, ECAL 2013, pages 601-608. MIT Press, 2013. [ bib | Supplement | .pdf ]
[23] G. Martius. Robustness of guided self-organization against sensorimotor disruptions. Advances in Complex Systems, 16(02n03):1350001, 2013. [ bib | DOI | .pdf ]
Self-organizing processes are crucial for the development of living beings. Practical applications in robots may benefit from the self-organization of behavior, e.g. to increase fault tolerance and enhance flexibility, provided that external goals can also be achieved. We present results on the guidance of self-organizing control by visual target stimuli and show a remarkable robustness to sensorimotor disruptions. In a proof-of-concept study, an autonomous wheeled robot learns an object-finding and ball-pushing task from scratch within a few minutes in continuous domains. The robustness is demonstrated by the rapid recovery of the performance after severe changes of the sensor configuration.

[24] G. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013. [ bib | DOI | Supplement | http ]
Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality, which prevents learning systems from scaling well.

[25] K. Zahedi, G. Martius, and N. Ay. Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis. Frontiers in Psychology, 4(801), 2013. [ bib | DOI | http ]
One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviours. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information of the past and future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviours, because a maximisation of the PI corresponds to an exploration of morphology- and environment-dependent behavioural regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a large speed-up be achieved, at the cost of a loss in asymptotic performance.

[26] N. Ay, H. Bernigau, R. Der, and M. Prokopenko. Information-driven self-organization: the dynamical system approach to autonomous robot behavior. Theory in Biosciences, 131(3):161-179, 2012. [ bib ]
[27] R. Der and G. Martius. The Playful Machine - Theoretical Foundation and Practical Realization of Self-Organizing Robots. Springer, Berlin Heidelberg, 2012. [ bib | http | .pdf ]
Autonomous robots may become our closest companions in the near future. While the technology for physically building such machines is already available today, a problem lies in the generation of the behavior for such complex machines. Nature proposes a solution: young children and higher animals learn to master their complex brain-body systems by playing. Can this be an option for robots? How can a machine be playful? The book provides answers by developing a general principle, homeokinesis, the dynamical symbiosis between brain, body, and environment, which is shown to drive robots to self-determined, individual development in a playful and obviously embodiment-related way: a dog-like robot starts playing with a barrier, eventually jumping or climbing over it; a snakebot develops coiling and jumping modes; humanoids develop climbing behaviors when fallen into a pit, or engage in wrestling-like scenarios when encountering an opponent. The book also develops guided self-organization, a new method that helps make the playful machines fit for fulfilling tasks in the real world.

[28] G. Martius and J. M. Herrmann. Variants of guided self-organization for robot control. Theory in Biosciences, 131(3):129-137, 2012. [ bib | DOI | http | .pdf ]
Autonomous robots can generate exploratory behavior by self-organization of the sensorimotor loop. We show that the behavioral manifold that is covered in this way can be modified in a goal-dependent way without reducing the self-induced activity of the robot. We present three strategies for guided self-organization, namely by using external rewards, a problem-specific error function or assumptions about the symmetries of the desired behavior. The strategies are analyzed for two different robots in a physically realistic simulation.

[29] K. Zahedi, G. Martius, and N. Ay. Predictive information in reinforcement learning of embodied agents. In Int. Workshop on Guided Self-Organization 5, 2012. Abstract. [ bib ]
[30] G. Martius and J. M. Herrmann. Tipping the scales: Guidance and intrinsically motivated behavior. In Advances in Artificial Life, ECAL 2011, pages 506-513. MIT Press, 2011. [ bib | Supplement | .pdf ]
[31] K. Zahedi, N. Ay, and R. Der. Higher coordination with less control - A result of information maximization in the sensorimotor loop. Adaptive Behavior, 18(3-4):338-355, 2010. [ bib | DOI | http ]
This work presents a novel learning method in the context of embodied artificial intelligence and self-organization, which has as few assumptions and restrictions as possible about the world and the underlying model. The learning rule is derived from the principle of maximizing the predictive information in the sensorimotor loop. It is evaluated on robot chains of varying length with individually controlled, noncommunicating segments. The comparison of the results shows that maximizing the predictive information per wheel leads to a higher coordinated behavior of the physically connected robots compared with a maximization per robot. Another focus of this article is the analysis of the effect of the robot chain length on the overall behavior of the robots. It will be shown that longer chains with less capable controllers outperform those of shorter length and more complex controllers. The reason is found and discussed in the information-geometric interpretation of the learning process.

[32] R. Der and G. Martius. Playful Machines: Tutorial. http://robot.informatik.uni-leipzig.de/tutorial?lang=en, 2010. [ bib ]
[33] G. Martius. Goal-Oriented Control of Self-Organizing Behavior in Autonomous Robots. PhD thesis, Georg-August-Universität Göttingen, 2010. [ bib | http ]
[34] G. Martius and J. M. Herrmann. Taming the beast: Guided self-organization of behavior in autonomous robots. In S. Doncieux, B. Girard, A. Guillot, J. Hallam, J.-A. Meyer, and J.-B. Mouret, editors, From Animals to Animats 11, volume 6226 of LNCS, pages 50-61. Springer, 2010. [ bib | DOI | http ]
best paper award
[35] G. Martius, F. Hesse, F. Güttler, and R. Der. LpzRobots: A free and powerful robot simulator. http://robot.informatik.uni-leipzig.de/software, 2010. [ bib ]
[36] F. Hesse. Self-Organizing Control for Autonomous Robots. PhD thesis, University of Göttingen, Institute for Nonlinear Dynamics, 2009. [ bib ]
[37] F. Hesse, R. Der, and J. M. Herrmann. Modulated Exploratory Dynamics Can Shape Self-Organized Behavior. Advances in Complex Systems, 12(03):273, 2009. [ bib | DOI | .html | .pdf ]
We study an adaptive controller that adjusts its internal parameters by self-organization of its interaction with the environment. We show that the parameter changes that occur in this low-level learning process can themselves provide a source of information to a higher-level context-sensitive learning mechanism. In this way the context is interpreted in terms of the concurrent low-level learning mechanism. The dual learning architecture is studied in realistic simulations of a foraging robot and of a humanoid hand that manipulates an object. Both systems are driven by the same low-level scheme, but use the second-order information in different ways. While the low-level adaptation continues to follow a set of rigid learning rules, the second-order learning modulates the elementary behaviors and affects the distribution of the sensory inputs via the environment.

[38] F. Hesse, G. Martius, R. Der, and J. M. Herrmann. A sensor-based learning algorithm for the self-organization of robot behavior. Algorithms, 2(1):398-409, 2009. [ bib | http ]
Ideally, sensory information forms the only source of information to a robot. We consider an algorithm for the self-organization of a controller. At short timescales the controller is merely reactive but the parameter dynamics and the acquisition of knowledge by an internal model lead to seemingly purposeful behavior on longer timescales. As a paradigmatic example, we study the simulation of an underactuated snake-like robot. By interacting with the real physical system formed by the robotic hardware and the environment, the controller achieves a sensitive and body-specific actuation of the robot.

[39] N. Ay, N. Bertschinger, R. Der, F. Güttler, and E. Olbrich. Predictive information and explorative behavior of autonomous robots. The European Physical Journal B, 63(3):329-339, 2008. [ bib | DOI | http | .pdf ]
[40] R. Der, F. Güttler, and N. Ay. Predictive information and emergent cooperativity in a chain of mobile robots. In S. Bullock, J. Noble, R. Watson, and M. A. Bedau, editors, Proc. Artificial Life XI, pages 166-172. MIT Press, Cambridge, MA, 2008. [ bib | .pdf ]
[42] G. Martius, K. Fiedler, and J. M. Herrmann. Structure from Behavior in Autonomous Agents. In Proc. IEEE Intl. Conf. Intelligent Robots and Systems (IROS 2008), pages 858-862, 2008. [ bib | DOI ]
[43] G. Martius, S. Nolfi, and J. M. Herrmann. Emergence of interaction among adaptive agents. In M. Asada, J. C. T. Hallam, J.-A. Meyer, and J. Tani, editors, Proc. From Animals to Animats 10 (SAB 2008), volume 5040 of LNCS, pages 457-466. Springer, 2008. [ bib | DOI ]
Robotic agents can self-organize their interaction with the environment by an adaptive homeokinetic controller that simultaneously maximizes sensitivity of the behavior and predictability of sensory inputs. Based on previous work with single robots, we study the interaction of two homeokinetic agents. We show that this paradigm also produces quasi-social interactions among artificial agents. The results suggest that homeokinetic learning generates social behavior only in the context of an actual encounter of the interaction partner, while this does not happen for an identical stimulus pattern that is only replayed. This is in agreement with earlier experiments with human subjects.

[44] F. Güttler. Realitätsnahe Simulationsumgebung einer selbstorganisierenden Roboterwelt [Realistic simulation environment for a self-organizing robot world]. Master's thesis, University of Leipzig, 2007. [ bib | www ]
[45] N. Hamed. Self-Referential Dynamical Systems and Developmental Robotics. PhD thesis, University of Leipzig, 2007. [ bib ]
[46] F. Hesse, R. Der, and J. M. Herrmann. Reflexes from self-organizing control in autonomous robots. In L. Berthouze, C. G. Prince, M. Littman, H. Kozima, and C. Balkenius, editors, Proc. 7th Intl. Conf. on Epigenetic Robotics, volume 134 of Cognitive Studies, pages 37-44. Lund University, 2007. [ bib ]
Homeokinetic learning provides a route to the self-organization of elementary behaviors in autonomous robots by establishing low-level sensorimotor loops. Strength and duration of the internal parameter changes caused by the homeokinetic adaptation provide a natural evaluation of external states, which can be used to incorporate information from additional sensory inputs and to extend the function of the low-level behavior to more general situations. We illustrate the approach by two examples, a mobile robot and a human-like hand, which are driven by the same low-level scheme but use the second-order information in different ways to achieve either risk avoidance and unconstrained movement or constrained movement. While the low-level adaptation follows a set of rigid learning rules, the second-order learning exerts a modulatory effect on the elementary behaviors and on the distribution of their inputs.

[47] G. Martius, J. M. Herrmann, and R. Der. Guided self-organisation for autonomous robot development. In F. Almeida e Costa, L. Rocha, E. Costa, I. Harvey, and A. Coutinho, editors, Advances in Artificial Life 9th European Conference, ECAL 2007, volume 4648 of LNCS, pages 766-775. Springer, 2007. [ bib ]
The paper presents a method to guide the self-organised development of behaviours of autonomous robots. In earlier publications we demonstrated how to use the homeokinesis principle and dynamical systems theory to obtain self-organised, playful but goal-free behaviour. Now we extend this framework with reinforcement signals. We validate the mechanisms with two experiments with a spherical robot. The first experiment aims at fast motion, where the robot reaches on average about twice the speed of a robot without reinforcement. In the second experiment spinning motion is rewarded, and we demonstrate that the robot successfully develops pirouettes and curved motion, which only rarely occur among the natural behaviours of the robot.

[48] R. Der, G. Martius, and F. Hesse. Let it roll - emerging sensorimotor coordination in a spherical robot. In L. M. Rocha, L. S. Yaeger, M. A. Bedau, D. Floreano, R. L. Goldstone, and A. Vespignani, editors, Proc. Artificial Life X, pages 192-198. Intl. Society for Artificial Life, MIT Press, August 2006. [ bib ]
Self-organization and the phenomenon of emergence play an essential role in living systems and form a challenge to artificial life systems. This is not only because systems become more lifelike but also because self-organization may help in reducing the design effort in creating complex behavior systems. The present paper exemplifies a general approach to the self-organization of behavior which has been developed and tested in various examples in recent years. We apply this approach to a spherical robot driven by shifting internal masses. The complex physics of this robotic object is completely unknown to the controller. Nevertheless, after a short time the robot develops systematic rolling movements covering large distances with high velocity. In a hilly landscape it is capable of manoeuvring out of the basins, and in landscapes with a fixed rotational geometry the robot more or less adapts its movements to this geometry; the controller, so to speak, develops a kind of feeling for its environment although there are no sensors for measuring the position or velocity of the robot. We argue that this behavior is a result of the spontaneous symmetry-breaking effects which are responsible for the emergence of behavior in our approach.

[49] R. Der and G. Martius. From motor babbling to purposive actions: Emerging self-exploration in a dynamical systems approach to early robot development. In S. Nolfi, G. Baldassarre, R. Calabretta, J. C. T. Hallam, D. Marocco, J.-A. Meyer, O. Miglino, and D. Parisi, editors, Proc. From Animals to Animats 9, SAB 2006, volume 4095 of LNCS, pages 406-421. Springer, 2006. [ bib ]
Self-organization and the phenomenon of emergence play an essential role in living systems and form a challenge to artificial life systems. This is not only because systems become more lifelike, but also because self-organization may help in reducing the design efforts in creating complex behavior systems. The present paper studies self-exploration based on a general approach to the self-organization of behavior, which has been developed and tested in various examples in recent years. This is a step towards autonomous early robot development. We consider agents under the close sensorimotor coupling paradigm with a certain cognitive ability realized by an internal forward model. Starting from tabula rasa initial conditions we overcome the bootstrapping problem and show emerging self-exploration. Apart from that, we analyze the effect of limited actions, which lead to deprivation of the world model. We show that our paradigm explicitly avoids this by producing purposive actions in a natural way. Examples are given using a simulated simple wheeled robot and a spherical robot driven by shifting internal masses.

[50] R. Der, F. Hesse, and G. Martius. Rocking stamper and jumping snake from a dynamical system approach to artificial life. Adaptive Behavior, 14(2):105-115, 2006. [ bib | DOI | .pdf ]
Dynamical systems offer intriguing possibilities as a substrate for the generation of behavior because of their rich behavioral complexity. However, this complexity, together with the largely covert relation between the parameters and the behavior of the agent, is also the main hindrance in the goal-oriented design of a behavior system. This paper presents a general approach to the self-regulation of dynamical systems so that the design problem is circumvented. We consider the controller (a neural network) as the mediator for changes in the sensor values over time and define a dynamics for the parameters of the controller by maximizing the dynamical complexity of the sensorimotor loop under the condition that the consequences of the actions taken are still predictable. This very general principle is given a concrete mathematical formulation and is implemented in an extremely robust and versatile algorithm for the parameter dynamics of the controller. We consider two different applications, a mechanical device called the rocking stamper and the ODE simulations of a "snake" with five degrees of freedom. In these and many other examples studied, we observed various behavior modes of high dynamical complexity.

Keywords: autonomous robots, self-organization, homeostasis, homeokinesis, dynamical systems, learning
[51] R. Der, F. Hesse, and R. Liebscher. Contingent robot behavior generated by self-referential dynamical systems. Technical report, University of Leipzig, 2005. [ bib ]
[52] R. Der, F. Hesse, and G. Martius. Learning to feel the physics of a body. In Computational Intelligence for Modelling, Control and Automation, CIMCA 2005, volume 2, pages 252-257, Washington, DC, USA, 2005. [ bib ]
Despite the tremendous progress in robotic hardware and in both sensorial and computing efficiencies, the performance of contemporary autonomous robots is still far below that of simple animals. This has triggered an intensive search for alternative approaches to the control of robots. The present paper exemplifies a general approach to the self-organization of behavior which has been developed and tested in various examples in recent years. We apply this approach to an underactuated snake-like artifact with complex physical behavior which is not known to the controller. Due to the weak forces available, the controller has, so to speak, to develop a kind of feeling for the body, which is seen to emerge from our approach in a natural way, with meandering and rotational collective modes being observed in computer simulation experiments.

[53] M. Herrmann, M. Holicki, and R. Der. On Ashby's homeostat: A formal model of adaptive regulation. In S. Schaal, editor, Proc. From Animals to Animats 8 (SAB 2004), pages 324-333. MIT Press, 2004. [ bib ]
[54] R. Der, M. Herrmann, and M. Holicki. Self-organization in sensor-motor loops by the homeokinetic principle. Verhandlungen der Deutschen Physikalischen Gesellschaft, page 510, Jan. 2002. [ bib ]
[55] R. Der and R. Liebscher. True autonomy from self-organized adaptivity. In Proc. Workshop Biologically Inspired Robotics, Bristol, 2002. [ bib ]
[56] R. Der. Self-organized acquisition of situated behaviors. Theory in Biosciences, 120:179-187, 2001. [ bib ]
[57] J. M. Herrmann. Dynamical systems for predictive control of autonomous robots. Theory in Biosciences, 120:241-252, 2001. [ bib ]
[58] R. Der. Self-organized robot behavior from the principle of homeokinesis, 1999. [ bib ]
[59] R. Der, U. Steinmetz, and F. Pasemann. Homeokinesis - a new principle to back up evolution with learning. In Proc. Intl. Conf. on Computational Intelligence for Modelling, Control and Automation (CIMCA 99), volume 55 of Concurrent Systems Engineering Series, pages 43-47, Amsterdam, 1999. IOS Press. [ bib | .html ]
[60] R. Der and M. Herrmann. Self-adjusting reinforcement learning. In Nonlinear Theory and Applications - NOLTA 96, pages 441-444, 1996. [ bib ]
[61] R. Der and M. Herrmann. Efficient Q-learning by division of labour. In Proc. Intl. Conf. on Artificial Neural Networks - ICANN95, pages 129-134, 1995. [ bib ]