Publications of
Professor Jill D. Crisman

crisman@ece.neu.edu


Jill D. Crisman - 1994

CDSP-JDC-113
Computing Optical Flow in Color Image Sequences.   J. Lai, J. Gauch, and J. D. Crisman, 1994. Innovation and Technology in Biology and Medicine, 15(3).

Abstract:   In order to study the motion of non-rigid objects in biomedical image sequences, it is often necessary to compute a dense optical flow field. Gradient-based techniques have been relatively successful at computing dense optical flow fields for grey-scale images when additional constraints are added. The fundamental formula of gradient-based techniques is the so-called optical flow constraint equation, which expresses the linear relationship between the intensity gradients and the optical flow components. Since this provides only one equation with two unknowns, additional constraints have to be employed to compute the optical flow field. One popular approach is to add smoothness constraints while computing optical flow. This, however, often introduces error at the edges of moving objects. To avoid this a priori smoothness constraint, our approach is to use color images to obtain three optical flow constraint equations corresponding to the three color components. This gives us an over-determined linear system with three equations and two unknowns, which can be solved using a linear least-squares algorithm. No additional constraints are needed to compute the optical flow. After the optical flow field has been computed, we construct a confidence measure for each computed optical flow vector based on the projections of the color component gradients. This confidence measure is then employed in a relaxation algorithm to improve the quality of the computed optical flow field. This paper presents such a method of computing optical flow in color image sequences. Experimental results on synthetic and real motion images are supportive and promising. Further research directions are also discussed.
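
As a rough illustration of the over-determined system the abstract describes, the numpy sketch below stacks one constraint equation per color channel and solves for the two flow components by least squares. The finite-difference gradients and the singular-value confidence proxy are simplifying assumptions, not the paper's exact formulation.

    import numpy as np

    def color_optical_flow(frame0, frame1):
        """Per-pixel flow from a pair of color images: one constraint
        equation per channel gives an over-determined 3 x 2 system,
        solved here by least squares (unvectorized for clarity)."""
        f0 = frame0.astype(np.float64)
        f1 = frame1.astype(np.float64)
        ix = np.gradient(f0, axis=1)                 # spatial gradients, per channel
        iy = np.gradient(f0, axis=0)
        it = f1 - f0                                 # temporal gradient
        h, w, _ = f0.shape
        flow = np.zeros((h, w, 2))
        conf = np.zeros((h, w))
        for r in range(h):
            for c in range(w):
                jac = np.column_stack([ix[r, c], iy[r, c]])           # 3 x 2
                v, _, rank, sv = np.linalg.lstsq(jac, -it[r, c], rcond=None)
                flow[r, c] = v
                # Smallest singular value as a crude confidence stand-in
                # (the paper derives its measure from gradient projections).
                conf[r, c] = sv[-1] if rank == 2 else 0.0
        return flow, conf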

CDSP-JDC-112
Deictic Primitives for General Purpose Navigation.   J. D. Crisman and M. Cleary, March 21-24, 1994. AIAA Conf. on Intelligent Robots in Factory, Field, Space, and Service (CIRFFSS): Finding our Common Ground. Houston, TX. Invited by P. Bonasso.

Abstract:   We are investigating visually-based deictic primitives to be used as an elementary command set for general purpose navigation. Each deictic primitive specifies how the robot should move relative to a visually distinctive target. The system uses no prior information about target objects (e.g. shape and color), thereby ensuring general navigational capabilities, which are achieved by sequentially issuing these deictic primitives to a robot system.

Our architecture consists of five control loops, each independently controlling one of the five rotary joints of our robot. We show that these control loops can be merged into a stable navigational system if they have the proper delays. We have also developed a simulation that we are using to define a set of deictic primitives for achieving general purpose navigation. Encoded in the simulated environment are the positions of visually distinctive objects which we believe will make good visual targets. We discuss the current results of our simulation.
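
The command set and loop structure described above might be sketched as follows; the DeicticPrimitive encoding, the proportional gains, and the tick-based delays are all hypothetical stand-ins for the paper's actual design.

    import collections

    # Hypothetical encoding of a deictic primitive: a relative-motion verb
    # applied to a visually distinctive target, with no object model.
    DeicticPrimitive = collections.namedtuple("DeicticPrimitive", ["motion", "target_id"])

    class JointLoop:
        """One independent proportional loop for a rotary joint, with a
        configurable delay (in control ticks) as the abstract suggests."""
        def __init__(self, gain, delay_ticks):
            self.gain = gain
            self.pending = collections.deque([0.0] * delay_ticks)

        def step(self, error):
            # Queue the current error, act on the delayed value.
            self.pending.append(error)
            return self.gain * self.pending.popleft()

    # Five loops, one per rotary joint; gains and delays chosen arbitrarily here.
    loops = [JointLoop(gain=0.5, delay_ticks=k + 1) for k in range(5)]
    plan = [DeicticPrimitive("go_to", "doorway_7"), DeicticPrimitive("circle", "pillar_2")]
    commands = [loop.step(0.3) for loop in loops]    # one control tick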

Our deictic primitives offer an ideal solution for many types of partially supervised robotic applications. Scientists could remotely command a planetary rover to go to a particular rock formation that may be interesting. Similarly, an expert in plant maintenance could obtain diagnostic information remotely by using deictic primitives on a mobile platform. Moreover, since no object models are used in the deictic primitives, the same control software could be used for all of these applications.

Jill D. Crisman - 1993

CDSP-JDC-111
Multiple, Cooperating Simple Agents for the Area Coverage Problem.   J. D. Crisman, October 1993. Working Notes of the AAAI Fall Symposium Series: Instantiating Real World Agents, Raleigh, NC. Invited by P. Bonasso.

Abstract:   Many seemingly complex behaviors observed in animals are actually simple reactive behaviors to sensory stimuli. Many researchers believe that all motion in animals is activated by combinations of these low-level behaviors. Moreover, community activities, such as gathering in groups or cooperatively searching for food, have also been successfully demonstrated using simple primitive behaviors. Many robotic tasks can be accomplished using the same types of simple reactive primitives. However, the environment to which animals have adapted does not always map easily to 'structural' types of robot tasks. For example, in robot floor vacuuming, the goal is to ensure that the living room is completely swept while avoiding obstacles, pets, etc. While animals have adapted by evolution to robustly catch prey and avoid predators, they rarely exhibit behaviors that ensure complete coverage of a local area. (In fact, extracting all the prey from a local area may not be in the best evolutionary interests of the animal, since it would eliminate the replenishing capability of a nearby food source.) In this paper, we discuss our preliminary ideas about primitive behaviors, appended to animal-based behaviors, that may be used for floor vacuuming and other area coverage problems. The approach is to have a single sweeping robot that moves across the room from end to end. Two simple robots act as mobile beacons to find and demarcate the ends of a sweep path. We add a simple behavior to the beacons that causes them to move only a short distance to mark the end of the next path. The sweeping robot uses simple animal-like tracking behaviors. We show our initial experiments in multiagent floor vacuuming and compare these results with other types of solutions.
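
A minimal grid simulation can make the beacon-and-sweeper idea concrete; the rectangular room, lockstep timing, and one-row beacon steps below are assumptions for illustration.

    # Two beacons mark the endpoints of the current sweep path, the sweeper
    # tracks from one beacon to the other, and the beacons then step one
    # row over to mark the next path.
    def sweep_room(rows, cols):
        covered = [[False] * cols for _ in range(rows)]
        for row in range(rows):
            left, right = (row, 0), (row, cols - 1)              # beacon positions
            start, goal = (left, right) if row % 2 == 0 else (right, left)
            step = 1 if goal[1] > start[1] else -1
            for col in range(start[1], goal[1] + step, step):
                covered[row][col] = True                         # sweeper passes here
        return covered

    assert all(all(row) for row in sweep_room(4, 6))             # full coverage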

CDSP-JDC-110
A Robot Hand for Simply Controlled Grasping Virtual Reality Systems.   C. Kanojia, M. Cosgrove, J. D. Crisman, and I. Zeid, October 1993. Teleoperation '93 Conf., New York, NY. Invited by S. J. Goldenstein.

Abstract:   We have developed an articulated robot hand, based on the anthropomorphic model, which can grasp a wide variety of objects and is relatively simple to control. Each of the three fingers has four joints and two controllable degrees of freedom: one for curling and the other for rotation about the base of the finger. We demonstrate the robot hand's flexibility during grasping while it is teleoperated with a Power Glove.
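
One way to picture this reduced control mapping is to expand each finger's two inputs into its joint angles through fixed coupling ratios; the ratios below are illustrative guesses, not taken from the paper.

    # Per-joint share of the single "curl" command (hypothetical values).
    CURL_COUPLING = (1.0, 0.8, 0.6, 0.4)

    def finger_pose(curl, base_rotation):
        """Expand one finger's two control inputs into a full joint pose."""
        return {
            "base_rotation": base_rotation,
            "curl_joint_angles": [curl * k for k in CURL_COUPLING],
        }

    # Three identical fingers, as in the hand described above.
    hand = {f"finger_{i}": finger_pose(curl=0.7, base_rotation=0.1) for i in range(3)}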

CDSP-JDC-109
Generic Target Tracking Using Color.   J. D. Crisman and Y. Du, September 1993. SPIE Intelligent Robotics and Computer Vision XII: Active Vision and 3D Methods. Invited by D. J. Raviv.

Abstract:   We have designed a mobile robot system that uses moving stereo cameras to simplify the navigational task. The vision processing of this system is responsible for tracking a generic object through a sequence of images. Because of the real-time nature of the robot navigation task, the visual tracking algorithm must be fast and reliable. In this paper, we describe a method to analyze the performance of tracking algorithms for generic objects. For our analysis, we do not assume any a priori shape information about the object; therefore, our target can be a generic object. We do assume that the target is visually 'distinctive' and that we are processing images quickly enough that the appearance of the object changes only a small amount between consecutive image frames. In this paper, we compute the sensitivity of tracking algorithms to changes in visual scale and rotation. In performing the analysis, we first generate random color background images of varying spatial frequencies. Our target is a random color template having the same spatial frequencies as the background. We place scaled and rotated versions of the template into the background image and match the image with the original target. We form a match-quality space that records the quality of match at each location in the image. We then fit a model to the match-quality space and use the amplitude and variance of the fitted functions to quantify the quality of match. Examples and results are given to show the whole process.
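
The match-quality space can be sketched as an exhaustive template sweep; the negated sum-of-squared-differences score below is a stand-in for whatever matching measure the paper actually uses.

    import numpy as np

    def match_quality_space(image, template):
        """Slide the template over the image and record a score at
        each valid location (higher is a better match)."""
        ih, iw, _ = image.shape
        th, tw, _ = template.shape
        quality = np.zeros((ih - th + 1, iw - tw + 1))
        for r in range(quality.shape[0]):
            for c in range(quality.shape[1]):
                patch = image[r:r + th, c:c + tw]
                quality[r, c] = -np.sum((patch - template) ** 2)
        return quality

    rng = np.random.default_rng(0)
    background = rng.random((64, 64, 3))                 # random color background
    target = background[20:28, 30:38].copy()             # embedded target template
    quality = match_quality_space(background, target)
    peak = np.unravel_index(np.argmax(quality), quality.shape)
    assert peak == (20, 30)                              # best match at the true location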

CDSP-JDC-108
Adaptive Control of Camera Position for Stereo Vision.   J. D. Crisman, Y. Du, and M. Cleary, September 1993. SPIE Optics, Illumination, and Image Sensing for Machine Vision VIII. Invited by N. Wittels.

Abstract:   A major problem in using two-camera stereo machine vision to perform real-world tasks, such as visual object tracking, is deciding where to position the cameras. Humans accomplish the analogous task by positioning their heads and eyes for optimal stereo effects. This paper describes recent work toward developing automated control strategies for camera motion in stereo machine vision systems for mobile robot navigation. Our goal is to achieve fast, reliable pursuit of a target while avoiding obstacles. Our strategy results in smooth, stable camera motion despite robot and target motion. Our algorithm has been shown to be successful at navigating a mobile robot, mediating visual target tracking and ultrasonic obstacle detection. The architecture, hardware and simulation results are discussed.
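
A toy pan controller suggests how smooth pursuit might be obtained: a proportional term centers the target in the image while a low-pass filter on the command keeps the camera motion smooth despite robot and target motion. The gain, image width, and smoothing factor are assumed, not taken from the paper.

    def smooth_pan_controller(gain=0.4, smoothing=0.8, image_center_x=160):
        command = 0.0
        def step(target_x):
            nonlocal command
            error = target_x - image_center_x            # pixels off-center
            command = smoothing * command + (1 - smoothing) * gain * error
            return command                               # pan rate, arbitrary units
        return step

    pan = smooth_pan_controller()
    for x in (200, 190, 175, 165, 160):                  # target drifting toward center
        rate = pan(x)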

CDSP-JDC-107
The Lobster as a Model for an Omnidirectional Robotic Ambulation Control Architecture.   J. Ayers and J. D. Crisman. In Biological Neural Networks in Invertebrate Neuroethology and Robots, edited by R. Beer, R. Ritzmann, and T. McKenna, Academic Press, 287-316, 1993.

Abstract:   Lobsters can locomote in all walking directions while adapting to changing proprioceptive feedback. We have developed a model of the organization of the lobster walking system composed of central pattern generators, command systems, coordinating systems, and exteroceptive input, as well as phase-modulating and amplitude-modulating segmental feedback. The model is based on studies of electromyographic motor programs, reflexes, and known central circuitry. Each limb control center consists of a finite-state machine composed of a neuronal oscillator, an intra-leg pattern generator, and a recruiter. Walking command systems act on the oscillator, the pattern generator, and the recruiter of each limb, while coordinating systems operate on the neuronal oscillator. The model emulates motor programs for all four walking directions, which show adaptation to speed, ordered recruitment, and load compensation. Limb walking gaits are modeled by phase-resetting inputs to the segmental neuronal oscillators and show stable adaptation to speed. Segmental sensory feedback can reset the oscillator during load compensation as well as recruit additional motor units. Exteroceptive yaw-correcting optomotor reflexes are modeled as operating through the walking command systems.
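
The coordinating systems acting on segmental oscillators can be caricatured with coupled phase oscillators, in which each limb center is pulled toward a fixed phase lag with its neighbors. The frequencies, coupling strengths, and lags below are illustrative, not the chapter's fitted model.

    import math

    def step_oscillators(phases, freq=1.0, coupling=0.2, lag=math.pi / 2, dt=0.01):
        updated = []
        n = len(phases)
        for i, p in enumerate(phases):
            dp = 2 * math.pi * freq                      # intrinsic rhythm
            for j in (i - 1, i + 1):                     # neighboring segments
                if 0 <= j < n:
                    dp += coupling * math.sin(phases[j] - p - lag)
            updated.append((p + dp * dt) % (2 * math.pi))
        return updated

    phases = [0.0, 1.0, 2.0, 3.0]                        # four limb centers per side
    for _ in range(2000):                                # iterate the coupled system
        phases = step_oscillators(phases)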

CDSP-JDC-106
Color Region Tracking for Robot Navigation.   J. D. Crisman. In Active Vision, edited by A. Blake and A. Yuille, MIT Press, ch. 7, 107-120, 1993.

Abstract:   Combining perception with behavior is a central concept of active vision, and is essential to achieving robust, real-time interaction of a robot with a complex, dynamic world. These visual behaviors often do not require accurate reconstruction of the three-dimensional world, which is extremely expensive to compute. In this chapter, we explore a road detection system, SCARF (Supervised Classification Applied to Road Following), which actively tracks the road location in a sequence of color images. SCARF does not accurately reconstruct a three-dimensional road model, yet it still achieves robust, continuous steering of a mobile robot on roads.
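
In the spirit of SCARF's supervised classification, the minimal sketch below fits a Gaussian color model to labeled road and off-road samples and labels new pixels by the more likely class; SCARF's actual classifier and tracking machinery are richer than this.

    import numpy as np

    def fit_gaussian(pixels):                            # pixels: N x 3 color samples
        mean = pixels.mean(axis=0)
        cov = np.cov(pixels.T) + 1e-6 * np.eye(3)        # regularized covariance
        return mean, np.linalg.inv(cov), np.log(np.linalg.det(cov))

    def log_likelihood(pixels, model):
        mean, inv_cov, log_det = model
        d = pixels - mean
        return -0.5 * (np.einsum("ni,ij,nj->n", d, inv_cov, d) + log_det)

    rng = np.random.default_rng(1)
    road = rng.normal([0.5, 0.5, 0.5], 0.05, (500, 3))   # grayish road samples
    grass = rng.normal([0.2, 0.6, 0.2], 0.05, (500, 3))  # greenish off-road samples
    road_model, grass_model = fit_gaussian(road), fit_gaussian(grass)
    pixels = rng.normal([0.5, 0.5, 0.5], 0.05, (100, 3))
    is_road = log_likelihood(pixels, road_model) > log_likelihood(pixels, grass_model)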
