B-Human Team Description for RoboCup 2013

Thomas Röfer¹, Tim Laue¹, Judith Müller², Michel Bartsch², Arne Böckmann², Florian Maaß², Thomas Münder², Marcel Steinbeck², Simon Taddiken², Alexis Tsogias², Felix Wenk²

¹ Deutsches Forschungszentrum für Künstliche Intelligenz, Cyber-Physical Systems, Enrique-Schmidt-Str. 5, 28359 Bremen, Germany
² Universität Bremen, Fachbereich 3 - Mathematik und Informatik, Postfach 330 440, 28334 Bremen, Germany

1 Introduction

B-Human is a joint RoboCup team of the Universität Bremen and the German Research Center for Artificial Intelligence (DFKI). The team consists of numerous students as well as three researchers. The latter have already been active in a number of RoboCup teams such as the GermanTeam and the Bremen Byters (both Four-Legged League), B-Human and the BreDoBrothers (Humanoid Kid-Size League), and B-Smart (Small Size League).

We entered the Standard Platform League in 2008 as part of the BreDoBrothers, a joint team of the Universität Bremen and the Technische Universität Dortmund, providing the software framework, state estimation modules, and the get-up and kick motions for the NAO. For RoboCup 2009, we discontinued our Humanoid Kid-Size League activities and shifted all resources to the SPL, starting as a single-location team after the split-up of the BreDoBrothers. Since then, the team B-Human has won every official game it played except for the final of RoboCup 2012. We have won all five RoboCup German Open competitions since 2009. In 2009, 2010, and 2011, we also won the RoboCup. This year, we hope to be able to replicate previous successes by winning the RoboCup again in Eindhoven.

This team description paper is organized as follows: Section 1.1 presents all current team members, followed by short descriptions of our publications since the last RoboCup in Sect. 1.2. New developments after RoboCup 2012 that make the vision system more tolerant to changing lighting conditions are the topic of Sect. 2. Our new self-localization approach is described in Sect. 3. Section 4 presents our re-implementation of the ultrasound-based obstacle detection. B-Human's new signature move, i.e. taking an arm out of the way when necessary, is described in Sect. 5. Changes in our software infrastructure are outlined in Sect. 6. We are also working on the league's new GameController (cf. Sect. 7). The paper concludes in Sect. 8.
Fig. 1: The team B-Human at the RoboCup German Open 2013 award ceremony

1.1 Team Members

B-Human currently consists of the following people, who are shown in Fig. 1:

Students. Alexis Tsogias, Andreas Stolpmann, Arne Böckmann, Florian Maaß, Malte Jonas Batram, Marcel Steinbeck, Martin Böschen, Martin Kroker, Michel Bartsch, Robin Wieschendorf (not on the picture), Simon Taddiken, Thomas Münder.

Staff. Judith Müller, Thomas Röfer (team leader), Tim Laue.

1.2 Publications since RoboCup 2012

As in previous years, we released our code to the public on our website after RoboCup 2012, this time only accompanied by an update report [1]. To date, we know of 15 teams that based their RoboCup systems on one of our code releases (AUTMan Nao Team, Austrian Kangaroos, BURST, Edinferno, NimbRo SPL, NTU Robot PAL, SPQR) or used at least parts of it (Austin Villa, Cerberus, MRL SPL, Nao Devils, Northern Bites, RoboCanes, RoboEireann, UChile). In fact, it seems as if the majority of the teams participating in RoboCup 2013 uses B-Human's walking engine.
At the RoboCup Symposium, we will present a method to generate and execute kick motions [2]. The method comprises calculating the trajectory of the kick foot online and moving the rest of the robot's body such that it is dynamically balanced. The latter is achieved by moving the Zero Moment Point of the NAO, which is determined using inverse dynamics.

We will also present a new object recognition system, in which the ball, the goals, and the field lines are found based on color similarities, with a detection rate that is comparable to color-table-based object recognition under static lighting conditions, but that is substantially better under changing illumination [3]. It still runs in real time (< 6 ms per image) even on a NAO V3.2.

At this year's RoboCup Symposium's open source track, a topic will be presented that has not been covered in detail by previous publications or code releases: the software architecture underlying the B-Human system [4]. The publication points out design choices and strengths of our current architecture, in particular in comparison to the currently popular Robot Operating System (ROS) [5].

For many years, our institute also had a team that participated in the RoboCup Small Size League: B-Smart. Former members of this team contributed to that league's standard vision system SSL-Vision [6]. The paper by Zickler et al. [7] describes the history and recent applications and developments of this system, such as its use as a source of ground truth data in the Standard Platform League [8].

An ongoing project outside the current RoboCup domain, but still in the context of sport robotics, is the development of the entertainment robot Piggy [9], which is able to play a simple ball game with humans. This robot is stationary and does not play in a team, but it has some capabilities that are beyond the current RoboCup state of the art: on the one hand, its vision system is almost independent of (changing) lighting conditions and colors and is thus capable of perceiving and tracking balls even outdoors; on the other hand, Piggy is able to perform dynamic and safe human-robot interaction.

2 Vision

2.1 Color Classification

Due to the more varied lighting conditions at the German Open 2013, we implemented a color classification that is more robust and easier to set up. The main idea is based on using the HSV (hue, saturation, value) colorspace, because in that space, mainly the V channel will vary significantly if the lighting conditions change. Therefore, color classes are defined as ranges of the channels H and S, while there are only implicit minimum thresholds for the V channel. Since the conversion from the YCbCr colorspace, in which the images are delivered by NAO's cameras, to the HSV model is rather expensive, additional thresholds for the channels Cb and Cr limit the number of pixels that actually have to be converted to the HSV colorspace.
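To make this rule concrete, the following is a minimal sketch of such a range-based classifier in C++. All names and thresholds are illustrative assumptions rather than B-Human's actual code; the conversion uses the ITU-R BT.601 coefficients.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Hsv { float h, s, v; };  // h in [0, 360), s and v in [0, 1]

// YCbCr (as delivered by NAO's cameras) -> HSV via RGB, BT.601 coefficients.
Hsv yCbCrToHsv(uint8_t y8, uint8_t cb8, uint8_t cr8) {
  const float y = y8, cb = cb8 - 128.f, cr = cr8 - 128.f;
  const float r = std::clamp(y + 1.402f * cr, 0.f, 255.f) / 255.f;
  const float g = std::clamp(y - 0.344f * cb - 0.714f * cr, 0.f, 255.f) / 255.f;
  const float b = std::clamp(y + 1.772f * cb, 0.f, 255.f) / 255.f;
  const float maxC = std::max({r, g, b}), minC = std::min({r, g, b});
  const float delta = maxC - minC;
  float h = 0.f;
  if (delta > 0.f) {
    if (maxC == r)      h = 60.f * std::fmod((g - b) / delta + 6.f, 6.f);
    else if (maxC == g) h = 60.f * ((b - r) / delta + 2.f);
    else                h = 60.f * ((r - g) / delta + 4.f);
  }
  return {h, maxC > 0.f ? delta / maxC : 0.f, maxC};
}

// One color class, e.g. "orange" or "green"; all thresholds are calibrated.
struct ColorRange {
  float hMin, hMax;        // accepted hue interval
  float sMin, sMax;        // accepted saturation interval
  float vMin;              // implicit minimum brightness
  uint8_t cbMin, cbMax;    // cheap YCbCr prefilter: only pixels inside
  uint8_t crMin, crMax;    // this window are converted to HSV at all
};

bool isColor(uint8_t y, uint8_t cb, uint8_t cr, const ColorRange& c) {
  if (cb < c.cbMin || cb > c.cbMax || cr < c.crMin || cr > c.crMax)
    return false;          // rejected without the expensive conversion
  const Hsv p = yCbCrToHsv(y, cb, cr);
  return p.h >= c.hMin && p.h <= c.hMax &&
         p.s >= c.sMin && p.s <= c.sMax && p.v >= c.vMin;
}
```

A hue interval that wraps around 0° would need an extra case; the Cb/Cr window is what keeps the expensive conversion rare in practice.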
All colors are defined separately, which allows for defining ambiguous color classifications that are only resolved by the context in which they are used. For instance, in Fig. 2, a part of the ball is classified both as orange and white.

Fig. 2: An image (overlaid by the perceptions) and its color classification

One of the main problems with this approach is the classification of the color white. Based on the description above, a simple maximum threshold for the channel S should be sufficient to classify a pixel as white. However, due to the noise level of the camera, white pixels, in particular at the border between green and white, are slightly more saturated, making it very hard to define such a threshold. Therefore, we decided to use an alternative approach to classify white. For our vision system to work properly, white only needs to be distinguished from green and orange. Hence, we first check whether a pixel is green in the HSV colorspace. If it is not, the pixel is transformed into the RGB colorspace and validated against minimum blue and red thresholds.
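Continuing the sketch above (and reusing its includes, ColorRange, and isColor), the two-stage white test could look as follows; the minimum red and blue values are again illustrative assumptions:

```cpp
struct Rgb { float r, g, b; };  // components in [0, 255]

// Same BT.601 conversion as above, but stopping at RGB.
Rgb yCbCrToRgb(uint8_t y8, uint8_t cb8, uint8_t cr8) {
  const float y = y8, cb = cb8 - 128.f, cr = cr8 - 128.f;
  return {std::clamp(y + 1.402f * cr, 0.f, 255.f),
          std::clamp(y - 0.344f * cb - 0.714f * cr, 0.f, 255.f),
          std::clamp(y + 1.772f * cb, 0.f, 255.f)};
}

// White only has to be separated from green and orange: a pixel that is
// not green in HSV and is bright in both red and blue is accepted as white.
bool isWhite(uint8_t y, uint8_t cb, uint8_t cr, const ColorRange& green,
             float minRed = 160.f, float minBlue = 160.f) {  // assumed values
  if (isColor(y, cb, cr, green))
    return false;                      // a green pixel can never be white
  const Rgb p = yCbCrToRgb(y, cb, cr);
  return p.r >= minRed && p.b >= minBlue;
}
```

The minimum blue value is what separates white from orange, which is bright in red but not in blue.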
2.2 Goal Detection

To achieve a more persistent detection of goals, the corresponding perceptor was reimplemented. A scan-based approach now detects yellow regions intersecting with the horizon, based on the new color classification. These potential goal posts are then evaluated for their match with ideal goal post models (cf. Fig. 3). This process checks against a maximum distance and a minimum height, and it also verifies that a post stands on the own field by using the calculated field boundary (cf. Sect. 2.4). If these criteria do not exclude a post, a constant width, the ratio between width and height, and the expected width and height for the measured distance are checked for further validation. In addition, the relation between different potential posts influences their ratings. In the end, the two best-rated potential posts above a minimum rating threshold are accepted as actual goal post percepts.

Fig. 3: Validation of potential goal posts

2.3 Obstacle Detection

This year's rule changes introduced a bigger field and an additional robot. The bigger field allows for more and different tactics such as passing or man-to-man marking. However, to successfully deploy such tactics, the robots need a good model of the surrounding environment. Therefore, we have developed a new visual obstacle detection method for this year's RoboCup.

First, an SSE-based implementation of a Sobel filter is used to generate an edge image of the Y channel. Afterwards, the algorithm of Otsu [10] is used to calculate the optimal threshold between edges and non-edges in this image. Each edge pixel above the threshold belongs either to a field line, an obstacle, or the background. On the SPL field, each field line is completely surrounded by the green carpet. Since edges only occur at the sides of field lines, we can assume that edges that do not separate a green from a non-green area are obstacles or belong to the background. Since robots cannot leave the field, the background can be treated as an obstacle as well. Therefore, every edge pixel above the threshold that does not separate a green from a non-green region is classified as belonging to a potential obstacle, i.e. an obstacle spot. For each of these obstacle spots, a height is calculated by searching the image upwards and downwards for the beginning of a green region. An obstacle spot is only accepted if this height is above a certain threshold. In a post-processing step, obstacle spots that might result from the arms of other robots are excluded, because the distance computation assumes that obstacles touch the ground, which is not the case for arms. The remaining obstacle spots are clustered according to the distance between them and entered into a radial model of the environment. The model is similar to the one described by Hoffmann et al. [11].
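As an illustration of the edge classification step, here is a simplified sketch that probes only vertically across each edge pixel; the names, the probe distance, and the image layout are assumptions, and the Otsu threshold tOtsu is computed beforehand from the edge histogram:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct GrayImage {
  int width = 0, height = 0;
  std::vector<uint8_t> data;  // row-major, one byte per pixel
  uint8_t at(int x, int y) const {
    return data[static_cast<std::size_t>(y) * width + x];
  }
};

// `edge` is the Sobel magnitude of the Y channel, `green` a 0/1 image from
// the color classification (Sect. 2.1), `tOtsu` the threshold from [10].
bool isObstacleSpot(const GrayImage& edge, const GrayImage& green,
                    int x, int y, uint8_t tOtsu) {
  if (edge.at(x, y) < tOtsu)
    return false;                      // not a sufficiently strong edge
  const int d = 3;                     // probe distance across the edge (assumed)
  if (y - d < 0 || y + d >= edge.height)
    return false;
  const bool greenAbove = green.at(x, y - d) != 0;
  const bool greenBelow = green.at(x, y + d) != 0;
  // A field-line edge separates green from non-green; every other strong
  // edge belongs to an obstacle or to the background, which is treated as
  // an obstacle as well since robots cannot leave the field.
  return greenAbove == greenBelow;
}
```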
2.4 Field Boundary Detection

The new rules state that fields may be located close to one another, with no barrier between them. This required the implementation of a new field boundary detector, since the old one scanned the image from top to bottom, which resulted in wrong boundaries located on other fields. The new approach uses both cameras: it scans upwards, beginning in the image of the lower camera and continuing in the image of the upper camera. This way, it is possible to detect the boundary of the own field more accurately. The field boundary is shown in both lower images of Fig. 2.

3 Self-Localization

A robust and precise self-localization has always been an important requirement for successfully participating in the Standard Platform League. B-Human has always based its self-localization solutions on probabilistic approaches [12], as this paradigm has proven to provide robust and precise results in a variety of robot state estimation tasks. For many years, B-Human's self-localization has been realized by a particle filter implementation [13, 14], as this approach enables a smooth resetting of robot pose hypotheses. However, in recent years, different additional components have been added to complement the particle filter: for achieving a higher precision, an Unscented Kalman Filter [15] locally refined the particle filter's estimate; a validation module compared recent landmark observations to the estimated robot pose; and, finally, a side disambiguator enabled the self-localization to deal with two yellow goals [16].

Despite the very good performance in previous years, we are currently aiming for a new, closed solution that integrates all components in a methodically sound way and avoids redundancies, e.g. regarding data association and measurement updates. Hence, the current solution, which has already been used successfully during the RoboCup German Open 2013, has been designed as a particle filter with each particle carrying an Unscented Kalman Filter. While the UKF instances perform the actual state estimation (including data association and the consequent hypothesis validation), the particle filter carries out the hypothesis management and the sensor resetting. Currently, only the side disambiguation (based on game states and a team-wide ball model) still remains a separate component. However, work is in progress to include this side assignment in the main state estimation process by adding it to each particle's state.
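The division of labor described above can be summarized in a structural sketch; the interfaces are hypothetical simplifications of our system, and the UKF internals and the resampling scheme are stubbed out:

```cpp
#include <vector>

struct Pose { float x = 0, y = 0, rotation = 0; };
struct Landmark { /* observed goal post, line, center circle, ... */ };

class PoseUkf {                        // per-hypothesis state estimation
public:
  void predict(const Pose& odometryOffset) { /* UKF motion update */ }
  void update(const Landmark& observation) { /* data association +
                                                UKF measurement update */ }
  Pose mean() const { return mean_; }
  float validity() const { return validity_; }  // match of recent observations
private:
  Pose mean_;
  float validity_ = 1.f;
};

struct Particle { PoseUkf ukf; float weight = 1.f; };

class SelfLocator {
public:
  void motionUpdate(const Pose& odometryOffset) {
    for (Particle& p : particles)
      p.ukf.predict(odometryOffset);
  }
  void sensorUpdate(const std::vector<Landmark>& observations) {
    for (Particle& p : particles) {
      for (const Landmark& o : observations)
        p.ukf.update(o);               // the UKFs do the actual estimation
      p.weight = p.ukf.validity();     // hypothesis validation feeds the filter
    }
    resampleAndReset();                // particle filter: hypothesis management
  }
  Pose bestPose() const {              // pose of the best-rated hypothesis
    const Particle* best = nullptr;
    for (const Particle& p : particles)
      if (!best || p.weight > best->weight) best = &p;
    return best ? best->ukf.mean() : Pose{};
  }
private:
  std::vector<Particle> particles;
  void resampleAndReset() { /* resample by weight; replace the worst
                               particles with poses sampled from
                               recent observations (sensor resetting) */ }
};
```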
4 Ultrasound Obstacle Detection

Since we were not very satisfied with the ultrasound-based obstacle detection system implemented for RoboCup 2012 [1], we reimplemented it for this year. The NAO is equipped with two ultrasound transmitters (one on each side) and two receivers (again, one on each side), which results in four possible combinations of transmitter and receiver used for a measurement. We model the areas covered by these different measurement types as cones with an opening angle of 90° and an origin at the position between the transmitter and the receiver used (cf. Fig. 4a). Since NAO's torso is upright in all situations in which we rely on ultrasound measurements, the whole modeling process is done in 2-D. We also limit the distances measured to 1.2 m, because experiments indicated that larger measurements sometimes result from the floor.

Fig. 4: Ultrasound obstacle detection. a) Overlapping measuring cones. b) Obstacle grid showing that sometimes even three robots can be distinguished; clustered cells are shown in red, resulting obstacle positions are depicted as crosses.

The environment of the robot is modeled as a 2.4 m × 2.4 m grid of cells with a size of 30 mm × 30 mm each. The robot is always centered in the grid, i.e. the contents of the grid are shifted when the robot moves. Instead of rotating the grid, the rotation of the robot relative to the grid is maintained. The measuring cones are not only very wide, they also largely overlap. To exploit the information that, e.g., one sensor sees a certain obstacle but another sensor with a partially overlapping measuring area does not, the cells in the grid are ring buffers that store the last 16 measurements concerning each particular cell, i.e. whether the cell was measured as free or as occupied. With this approach, the state of a cell always reflects recent measurements. Experiments have shown that the best results are achieved when cells are considered as part of an obstacle if at least 10 of the last 16 measurements have measured them as occupied. All cells above this threshold are clustered. For each cluster, an estimated obstacle position is calculated as the average position of all cells, weighted by the number of times each cell was measured as occupied.
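A cell of such a grid can be sketched as a small ring buffer; the 16-slot size and the 10-of-16 rule are the values given above, while the class and member names are illustrative:

```cpp
#include <bitset>
#include <cstddef>

// One 30 mm x 30 mm grid cell: a ring buffer of the last 16 free/occupied
// measurements that concerned this cell (a fresh cell counts as all free).
class UsGridCell {
public:
  void add(bool occupied) {            // record one measurement
    history[next] = occupied;
    next = (next + 1) % history.size();
  }
  int occupiedCount() const { return static_cast<int>(history.count()); }
  bool isObstacle() const { return occupiedCount() >= 10; }  // 10 of 16
private:
  std::bitset<16> history;
  std::size_t next = 0;
};
```

occupiedCount() also provides the weight used when averaging the cells of a cluster into an obstacle position.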
Since the ultrasound sensors have a minimum distance that they can measure, we distinguish between two kinds of measurements: close measurements and normal measurements. Close measurements are entered into the grid by strengthening all obstacles in the measuring cone that are closer than the measured distance, i.e. cells that are already above the obstacle threshold are considered to have been measured again as occupied. For normal measurements, the area up to the measured distance is entered into the grid as free. For both kinds of measurements, an arc with a thickness of 100 mm is marked as occupied at the measured distance. If the sensor received additional echoes, the area up to the next measurement is again assumed to be free. This sometimes allows narrowing down the position of an obstacle on one side, even if a second, closer obstacle is detected on the other side or even on the same side (cf. Fig. 4b).

Since the regular sensor updates only change cells that are currently in the measuring area of a sensor, an empty measurement is added to all cells of the grid every two seconds. Thereby, the robot forgets old obstacles. However, they stay in the grid long enough to allow the robot to get around them even when it cannot measure them anymore, because the sensors point away from them.

5 Arm Motions

In [1], we introduced arm contact detection to recognize when a robot's arm collides with an obstacle. In this year's system, we integrated dynamic arm motions on top of that feature. If a robot detects an obstacle with an arm, it may decide to move that arm out of the way to be able to pass the obstacle without much interference. In addition, arm motions can be triggered by the behavior control, allowing the robot to move an arm aside even before actually touching an obstacle.

An arm motion is defined by a set of states, which consist of target angles for the elbow joint and the shoulder joint of each arm. Upon executing an arm motion, the motion engine interpolates intermediate angles between two states to provide a smooth motion. When reaching the last state, the arm remains there until the engine receives a new request or a certain time runs out. While the arm motion engine is active, its output overrides the default arm motions that are normally generated during walking.

A development version of this feature had already been used during RoboCup 2012, and it was improved for the German Open 2013. It has since proven to be very valuable in many game situations, because the arms of our robots are out of the way if they need to be, but they still function as obstacles to the opponent team in defensive situations.
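A minimal sketch of the interpolation between two such states might look as follows; the joint layout and the timing scheme are simplified assumptions, not the actual motion engine interface:

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct ArmState {
  std::array<float, 4> angles;   // e.g. 2 shoulder + 2 elbow joint angles
  float duration;                // seconds to reach this state
};

// Linear interpolation towards `to`, with t in [0, 1] within its duration.
std::array<float, 4> interpolate(const ArmState& from, const ArmState& to,
                                 float t) {
  std::array<float, 4> out;
  for (std::size_t i = 0; i < out.size(); ++i)
    out[i] = from.angles[i] + t * (to.angles[i] - from.angles[i]);
  return out;
}

// An arm motion is a sequence of such states; after the last one, the arm
// holds its position until a new request arrives or a timeout expires.
using ArmMotion = std::vector<ArmState>;
```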
6 Infrastructural Changes

This year, we have spent a lot of time on improving our infrastructure. Among the more noteworthy features are a custom kernel, a new behavior description language, nearly complete C++11 support due to the use of LLVM clang as compiler and cross-compiler, a streamlined installation procedure, logging of camera images at full frame rate during games, and some extensions to the module framework.

6.1 Operating System

We have reconfigured the official Aldebaran kernel to include several new drivers, and we enabled hyper-threading. The Wi-Fi driver has been replaced by the rt5370sta driver to improve the link quality. In order to enable hyper-threading, the LAN driver had to be replaced by the Realtek r8169 driver, because the original driver crashed. In addition, we improved the official camera driver. It now supports the manual setting of white balance, hue, and fade-to-black. The modified kernel can be downloaded from our GitHub repository.
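The driver changes themselves are not shown here, but controls like these are typically exercised from user space through the standard V4L2 interface. The following hedged example assumes a device path and picks two standard control IDs purely for illustration; the controls the NAO driver actually exposes may differ:

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>

int main() {
  const int fd = open("/dev/video0", O_RDWR);  // camera device (assumed path)
  if (fd < 0) { perror("open"); return 1; }

  v4l2_control ctrl{};
  ctrl.id = V4L2_CID_HUE;                      // set hue manually
  ctrl.value = 0;
  if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
    perror("VIDIOC_S_CTRL hue");

  ctrl.id = V4L2_CID_AUTO_WHITE_BALANCE;       // disable auto white balance
  ctrl.value = 0;                              // so it can be set manually
  if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
    perror("VIDIOC_S_CTRL awb");

  close(fd);
  return 0;
}
```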
6.2 Logging

Evaluating the actions of the robots during games is an important but difficult task. For this purpose, logging has been used and improved over the past years. We could either log the percepts during games or save logs including full-resolution images with a robot connected to a PC. However, both methods have significant disadvantages. The former offers high frame rates due to a very low computational load and can be used on all robots during games, but it lacks the possibility of comparing the percepts with the actual images, which makes the evaluation difficult. With the second method, it is not possible to store the logs on the robots, due to the full-resolution images, which makes a connection to an external PC necessary. Sending the images to a computer takes a lot of time, so the frame rates drop significantly, and it would not be allowed during actual games.

These problems have now been solved by computing thumbnails of the images and storing them in the log files. The thumbnails are computed by shrinking the images so that each resulting pixel is the average of an 8×8 block of the original image, i.e. both image dimensions shrink by a factor of eight. Although the resolution is thus highly reduced, it is still possible to recognize the important parts of the field, as one can see in Fig. 5.

Fig. 5: Images taken by the upper camera of a NAO (original resolution on the left, corresponding thumbnails on the right)
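The thumbnail computation itself is straightforward; here is a single-channel sketch (the real images also carry color channels, and the function name is illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Each output pixel is the average of an 8x8 block of the input image,
// shrinking both dimensions by a factor of eight.
std::vector<uint8_t> shrink8x8(const std::vector<uint8_t>& in,
                               int width, int height) {
  const int w = width / 8, h = height / 8;
  std::vector<uint8_t> out(static_cast<std::size_t>(w) * h);
  for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
      unsigned sum = 0;
      for (int dy = 0; dy < 8; ++dy)
        for (int dx = 0; dx < 8; ++dx)
          sum += in[static_cast<std::size_t>(y * 8 + dy) * width + x * 8 + dx];
      out[static_cast<std::size_t>(y) * w + x] =
          static_cast<uint8_t>(sum / 64);
    }
  return out;
}
```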
6.3 C-Based Behavior

Last year, we already replaced the Extensible Agent Behavior Specification Language (XABSL) [17] by an extension of C++ [1]. Like XABSL, it followed the paradigm of hierarchical state machines. It avoided most of the overhead of coupling two different programming languages, but it still required a pre-compiler to translate the behavior language into plain C++. This year, we replaced the pre-compiler by a set of C++ macros that still support a very similar syntax but no longer need a separate compiler run. In contrast to the previous approach, the new language supports options with formal parameters in regular C++ notation, for which even default values can be specified. This results in easy-to-read behavior descriptions. Since the behaviors are actually modeled in C++, IDEs fully support the language. In addition, a new view allows visualizing the option graph directly in our simulator SimRobot.

7 GameController

At last year's RoboCup, we presented a new GameController for the Standard Platform League (cf. Fig. 6), which has become the new official game controller. It was successfully used at the RoboCup German Open as well as at the RoboCup U.S. Open. For this year, we are adding support for the Humanoid League. We will also develop a NAOqi module that can be used to handle the communication with the GameController on the NAO and that will implement the official button and LED interface. We also decided to reimplement the GameStateVisualizer, which is used to display the score to the audience, to keep the software integrated in a single code base.

Fig. 6: Screenshot of the new GameController during a game

8 Conclusions

B-Human has developed several major features since RoboCup 2012. The new robust vision approach reduces the need to calibrate colors before each game. The new self-locator integrates functionality previously distributed over separate modules into a single, streamlined implementation. The obstacle detection using both vision and ultrasound eases the realization of advanced behaviors such as passing or man-to-man marking, as well as actively controlling the arms to avoid obstacles. Our custom kernel greatly improves the network connectivity, delivers camera images as they are supposed to be, and improves the overall system performance by nearly 30%. The new logging capabilities have considerably improved the overall debugging experience, leading to higher code quality. Besides our code releases, the new GameController is another contribution of B-Human to the development of the Standard Platform League.

References

1. Röfer, T., Laue, T., Müller, J., Bartsch, M., Batram, M.J., Böckmann, A., Lehmann, N., Maaß, F., Münder, T., Steinbeck, M., Stolpmann, A., Taddiken, S., Wieschendorf, R., Zitzmann, D.: B-Human team report and code release 2012 (2012). Only available online: uploads/2012/11/coderelease2012.pdf

2. Wenk, F., Röfer, T.: Online generated kick motions for the NAO balanced using inverse dynamics. In: RoboCup 2013: Robot Soccer World Cup XVII. Lecture Notes in Artificial Intelligence, Springer (2014), to appear

3. Härtl, A., Visser, U., Röfer, T.: Robust and efficient object recognition for a humanoid soccer robot. In: RoboCup 2013: Robot Soccer World Cup XVII. Lecture Notes in Artificial Intelligence, Springer (2014), to appear
4. Röfer, T., Laue, T.: On B-Human's code releases in the Standard Platform League - software architecture and impact. In: RoboCup 2013: Robot Soccer World Cup XVII. Lecture Notes in Artificial Intelligence, Springer (2014), to appear

5. Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: Proceedings of the Open-Source Software Workshop of the International Conference on Robotics and Automation (ICRA), Kobe, Japan (2009)

6. Zickler, S., Laue, T., Birbach, O., Wongphati, M., Veloso, M.: SSL-Vision: The shared vision system for the RoboCup Small Size League. In: Baltes, J., Lagoudakis, M.G., Naruse, T., Shiry, S. (eds.): RoboCup 2009: Robot Soccer World Cup XIII. Volume 5949 of Lecture Notes in Artificial Intelligence, Springer (2010)

7. Zickler, S., Laue, T., Gurzoni Jr., J., Birbach, O., Biswas, J., Veloso, M.: Five years of SSL-Vision - impact and development. In: RoboCup 2013: Robot Soccer World Cup XVII. Lecture Notes in Artificial Intelligence, Springer (2014), to appear

8. Burchardt, A., Laue, T., Röfer, T.: Optimizing particle filter parameters for self-localization. In: del Solar, J.R., Chown, E., Ploeger, P.G. (eds.): RoboCup 2010: Robot Soccer World Cup XIV. Volume 6556 of Lecture Notes in Artificial Intelligence, Springer (2011)

9. Laue, T., Birbach, O., Hammer, T., Frese, U.: An entertainment robot for playing interactive ball games. In: RoboCup 2013: Robot Soccer World Cup XVII. Lecture Notes in Artificial Intelligence, Springer (2014), to appear

10. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics 9 (1979)

11. Hoffmann, J., Jüngel, M., Lötzsch, M.: A vision based system for goal-directed obstacle avoidance used in the RC'03 obstacle avoidance challenge. In: 8th International Workshop on RoboCup 2004 (Robot World Cup Soccer Games and Conferences). Lecture Notes in Artificial Intelligence, Springer (2004)

12. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press, Cambridge (2005)

13. Fox, D., Burgard, W., Dellaert, F., Thrun, S.: Monte Carlo localization: Efficient position estimation for mobile robots. In: Proceedings of the Sixteenth National Conference on Artificial Intelligence, Orlando, FL, USA (1999)

14. Laue, T., Röfer, T.: Particle filter-based state estimation in a competitive and uncertain environment. In: Proceedings of the 6th International Workshop on Embedded Systems, Vaasa, Finland (2007)

15. Julier, S.J., Uhlmann, J.K.: A new extension of the Kalman filter to nonlinear systems. In: Proceedings of AeroSense: The 11th International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, USA (1997)

16. Röfer, T., Laue, T., Müller, J., Graf, C., Böckmann, A., Münder, T.: B-Human team description for RoboCup 2012. In: Chen, X., Stone, P., Sucar, L.E., van der Zant, T. (eds.): RoboCup 2012: Robot Soccer World Cup XV Preproceedings, RoboCup Federation (2012)

17. Loetzsch, M., Risler, M., Jüngel, M.: XABSL - a pragmatic approach to behavior engineering. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China (2006)