A Low-Cost Internet-Based TeleRobotic System for Access to Remote Laboratories
 

Abstract: This paper presents the design of a low-cost Internet-based teleoperation system implemented on China's Internet. Using a multimedia-rich human-computer interface that combines predictive displays and graphical overlays, a series of simple tasks was performed within a simulated space-environment scenario. Internet clients anywhere can monitor the robotic workspace, talk with technicians, and control the 15-DOF integrated Arm/Hand system located in our laboratory to perform tasks such as grasping a vessel, pouring a liquid, and peg-in-hole assembly. Our main contributions are to establish a foundation for teleoperated science and engineering research and to address some of the issues involving the time-delay associated with the Internet. We also developed several key software adaptation technologies and products for Internet-based teleoperation, compatible with the BH-III Dexterous Hand, the BH1 6-DOF Mechanical Arm, and the 5-finger 11-DOF Data Glove constructed in our laboratory. This system has been successfully tested and applied in a remote robotic education (Virtual Laboratories) system via China's Internet, using our Master/Slave architecture, which combines mixed modes of remote monitoring/manipulation and local autonomous control.

 
Keywords: Remote Virtual Laboratories, Telerobotics, Time-delay, Dexterous Hand, Data Glove, Shared Control, Predictive Simulation, Virtual Reality, Human-Machine Interaction, Data Fusion, Neural Network
I Introduction
The field of telerobotics embodies mechanics, electronics, computing, intelligent control, and network communication technology. It has applications in industry, exploration, public service, and medicine (e.g., robotic surgery). With the development and application of teleoperated robotic technology, humans are being freed from dangerous, unreachable, and uncertain environments. Such uncertainties make it infeasible to design robots that can perform complicated tasks autonomously. Robotic systems that have a high degree of remote-site autonomy can be teleoperated over a distance using `supervisory control' modes, whereupon high-level operator commands and low-level robotic system controllers work harmoniously in a coordinated control manner [1]. Moreover, with the rapid development of computer networks and multimedia information science, great interest has been generated by the Internet, as it provides world-wide connectivity to remote devices situated virtually anywhere in the world. A robotic system can be made to send useful sensor-based information collected from the work field to a remote teleoperation site via the network. By means of virtual reality technology and network communication, human experts can achieve virtual presence at a remote work site and make decisions in an interactive mode via the network to control the working robot. One issue that we will address is the degradation of teleoperation due to network time-delay.

Many countries have developed research programs on robotic teleoperation, with considerable recent success. Representative works include NASA's Sojourner rover of the Mars Pathfinder mission [2] in the USA, the ETS-VII space robot system in Japan, and the ROTEX space robot system [3] in Germany. There has been considerable interest in examining the implementation of telerobotic systems via the public Internet. Rovetta et al. [4] used a mix of communication media for performing tele-surgery in 1995, but the work was not based on the public-access Internet, and they did not focus on teleoperated robotic technology. Anderson [5] adopted the Internet as the medium for his SMART (Sequential Modular Architecture for Robotics and Tele-robotics) system, although without VR interfaces or human-machine interaction involving haptics. Wakita et al. [6] suggested a combination of intelligent visual monitoring and a canonical set of high-level commands as a means of "intelligently monitoring" a remote robot. In an August 1995 experiment, they performed an Internet-based experiment between the ETL laboratory in Tsukuba, Japan and the Jet Propulsion Laboratory (JPL) in Pasadena, USA. Predictive displays and simulation-based planning were not included as part of their experiments. T.J. Tarn, Ning Xi et al. [7][8] proposed an event-based approach to control remote robotic systems via a high-bandwidth Internet; very few applications have been developed over low-bandwidth, non-dedicated links. The web server of the Process Control and Automation Lab in the Department of Systems and Control Engineering at Case Western Reserve University has successfully provided remote access to laboratory experimental data via the Internet [9]. Those teleoperative tasks were planned on a user interface that presented static image scenes rather than multimedia information. In 1998, JPL also built an Internet-based web interface [10] for planning and teleoperating the Mars Pathfinder. This system was open to the web, so students could simulate robotic plans and become exposed to this `big science' project through public access to special servers; however, the tele-operator was not able to manipulate the actual object, so in fact this mode was restricted to tele-operative planning. Therefore, although scientists have achieved many useful results in this field, research and technology on public-access Internet-based telerobotic systems remain quite preliminary.

 
In China, such public-access teleoperation projects have not been widely available. One of the largest obstacles to multimedia-based communication modes is limited bandwidth. In fact, four fundamental issues need to be addressed in Internet-based teleoperation:
 
1. How to address issues surrounding system usability and safety.
2. How to address the problems associated with communication time-delays.
3. How to harmonize remote monitoring with local autonomous control.
4. How to improve the throughput of multimedia information over bandwidth-limited Internet links.
 
This paper describes the design and implementation of a teleoperated robotic Hand/Arm system [11] using the Internet, in order to serve as an experimental platform for addressing these issues [12]. The entire system comprises a number of components, including a dexterous-hand autonomous sub-system, software modules for planning and predictive displays, the teleoperated Internet-based client/server components, and a Virtual Reality display engine. All software components and equipment were developed to be supported over a distributed network. Furthermore, several key techniques were developed for interfacing to novel human interface devices such as a 3-finger 9-DOF dexterous hand, a 5-finger 11-DOF data glove, a 6-DOF teleoperated mechanical arm, and a visual/audio interactive module.
 
II Overview
The technical goal of this project was to create an experimental environment suitable for remote robotic teleoperation, and to make use of it for telerobotics research and training in China. This platform allows a number of remote users to conduct training exercises and perform research experiments involving supervisory robot control, multi-sensor data fusion, communication time-delay, predictive system modeling, and local autonomous control. At the same time, the system can be used in remote multimedia education and research via the Internet, because it provides a way to broaden high-technology education by building an Internet-based robotics classroom. Users can operate a simulated robotic teach-pendant to control the movement of the robot, observe the 3-D robotic simulation platform, and also write programs to control the remote robot's trajectory. Meanwhile, users can wear our remote-control equipment (such as the data glove) and experience first-hand the challenges associated with remote teleoperation, guided by graphics simulation and multimedia interaction.

In the graphics interface, users can perceive the effects of different operations through multiple channels, such as watching the movement of the robot, listening to the sound of the robot's movement and the explanations of the technicians, and so on. The experience is compelling by virtue of the live and interactive human-machine interface, allowing users to gain first-hand knowledge of the characteristics of particular robot models. In K-12 projects (such as science fairs), this project has received a great deal of exposure. It was a popular exhibit at China's Education Achievement Exhibits (Beijing) and at the Tangshan Science and Technology Exposition (Hebei Province). This exposure is just the beginning of the system's application and popularity in China (a set of experimental photos is shown in Fig. 1), and the system stands as a model for future implementations at other sites.

Fig.1 Typical user scenarios during experimental operations

III System Architecture

The overall system architecture was designed as a modular, hierarchical system, in order to promote software modularity, maintainability, and real-time performance. The human-machine interface is also a key component of the overall teleoperated system. In the design of human-in-the-loop systems, it is important to consider the capacities and limitations of both the natural and artificial systems, i.e., the human operator on one hand and the computer system and teleoperation technology on the other. The human-computer interface, to be sure, sets the overall usability of the system. The operator's performance can be severely impaired if the sensor data are not presented in a way that reduces the operator's cognitive load. In addition, the data must be organized and presented in a way that allows the human operator to perform tasks in a natural manner, making use of natural perceptual/motor skills. Our teleoperated system has 15 DOF in the Hand/Arm autonomous sub-system: 6 for the arm and 9 for the hand. The teleoperated sub-system has 17 DOF: 6 for the teleoperated mechanical arm and 11 for the data glove. Meanwhile, there are eight different kinds of sensors in the low-level control layer. Many modules need to communicate and exchange data in real time, so an effective architecture was needed to decompose the complex system into simple modules, using heuristics gathered from models of biological control systems. Because of environmental uncertainty and communication propagation delays, well-chosen reconstruction strategies were utilized; since these are modular, they can be interchanged with modules developed by students who use the system for pedagogical purposes. The topological structure of the system modules is shown in Fig. 2.
 
Fig.2 Physical topological structure of the system

The following subsystems comprise the main structure of the overall architecture: the telerobot controllers, predictive simulation and overlay, the Virtual Reality interface, autonomous multi-sensor integration, Internet-based visual/audio interaction, and a communications protocol robust to network time-delay and jitter. In terms of the practical manipulation platform and experimental objectives, this architecture reflects the need to perform multi-sensor-based autonomous control, VR-based teleoperated commands, distributed planning and predictive simulation, multi-agent fusion, and remote visual/audio monitoring control. We proposed a structure based on the task levels used in the teleoperated system and divided the system into seven levels, comprising seven task-oriented loops that achieve teleoperation, predictive simulation, real-time control, and interaction with the environment. The human operator is required to analyze the sensor data (typically images) from the operating scene, judge the work status, and send command and control signals by means of the user interface, based on virtual scenes from the simulation program or directly on live multimedia displays, both available on the human-machine interface. The low-level robotic system does not passively execute the teleoperated order; it verifies that the necessary constraints are in place, performs low-level sensory measurements, and implements the plan that was launched and guided via supervisory control. Meanwhile, it feeds useful information from the scene back to the operator. Thus the whole system, from the high level to the low level, can share the information-processing requirements of the sensori-motor task. Fig. 3 shows the system architecture.
 
Fig.3 Logical topological structure of the system

This figure outlines our general framework from the perspective of its system architecture. A task-based scheduler algorithm was adopted in order to mediate communications between different modules through the communication module. Optimized and partitioned modules ensure the system's generality and extensibility, and the allocation of subtasks facilitates a reliable control schedule. Moreover, we proposed a new task-based neural network model for multi-sensor data fusion, which proved to be a robust means of enhancing the system's autonomous perceptual abilities in a parallel system architecture.

IV Local Autonomous Sub-system
In most teleoperated systems, the local autonomous sub-system is the innermost loop of hierarchical control [13]. It obtains setpoint commands, movement constraints, and other directives via the network, and drives the low-level dynamical systems in an efficient and safe manner. In addition to fusing low-level sensor data, it also feeds the video, audio, and proximity signals back to the teleoperator and the simulation engine, which form a higher level for orchestrating the system functions. The physical structure of the autonomous sub-system is shown in Fig. 4.
Fig.4 Autonomous sub-system physical structure

Because safety and functionality in local autonomous control are critical design constraints in teleoperation, optimal local intelligent control is necessary for the whole system [14]. We have included provisions for a secondary level of intelligent control, which `wraps' the autonomous sub-system, as shown in Fig. 5. This layer is in charge of autonomous task planning, trajectory planning, multi-sensor data fusion, task decomposition/incorporation, and a fault-detection mechanism.

Fig.5 Intelligent control module in autonomous sub-system

V Multi-sensor Sub-system
Our system integrates sensory data from multiple sensory modalities. While much of this information can be utilized by the local autonomous subsystem, all of the sensory information is sent to the supervisory control layer so that teleoperation can proceed according to multiple sensing modalities. Stereovision cameras, a 6-DOF wrist force/torque sensor, gripper tactile sensors, joint angle potentiometers, etc., can all be integrated into the overarching teleoperation framework. Sensor-based data fusion and analysis are very important in manipulation and control because they allow the operator to perceive a rich workspace scene. Our multi-sensor sub-system forms the core of the system model which is displayed to the tele-operator. Furthermore, these functions involve sensing modalities which are typically covered in introductory robotics courses, and so the sense data can be used in laboratory exercises for courses in signal processing, robotics, and data visualization. Some of the kinds of sensors available in the multi-sensor sub-system are shown in Fig.4.

We have adopted a task-based neural network fusion model for integrating sensor information, estimating the state of the surrounding environment, triggering related tasks in a discrete-event fashion, and performing obstacle avoidance. A task-based selection mechanism, a data filter, and a pretreatment module are added as the inputs to the neural network, and a knowledge database and a fault-detection threshold mechanism aid the data fusion as a decision-making tool. In the teleoperated system's supervisory control mode, all tasks were decomposed into functional modules and sent as commands to the low-level multi-sensor control environment (for example, to monitor the image and perform visual servoing with one DOF). These parameters were also sent to the knowledge database and to the task-based selection module. The fault-detection thresholds represent the branch points for the task-based decision-making algorithm unit. The task-based neural network fusion model is shown in Fig. 6.
 
Fig.6 Task-based neural network fusion model

To explain this model, consider the process of a peg-in-hole task, which includes automated estimation of the positions recognized in the scene. Here, the neural network training input includes: a) 8 tracked object coordinates calculated from visual information, b) 3 finger-tip contact signals from digital optical fibers, c) 1 hand-approach distance from an analog optical fiber, d) 3 tactile forces from the dexterous hand's finger sensors, e) 9 finger joint angles from the dexterous hand's potentiometers, f) 6 robotic joint angles, g) 3 wrist forces from the force sensor, and h) 3 wrist torques from the torque sensor, amounting to 36 inputs in all. The task-based selection mechanism and knowledge database choose the inputs relevant to each task stage. The training output consists of the vectors [0, 0, 0, 0, 1], [0, 0, 0, 1, 0], [0, 0, 1, 0, 0], [0, 1, 0, 0, 0], and [1, 0, 0, 0, 0], representing five different peg/hole positions in the insertion recognition process shown in Fig. 7. The autonomous control program must be trained and validated before it is made available to the task allocation stage. In subsequent tasks, the algorithm dynamically drives the arm/hand movements until the network output matches the training output corresponding to position 1 (only position 1 is the ideal peg/hole insertion position in this training process).

Fig. 7 Five different peg/hole positions in the inserting recognition process
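To make the run-time use of this classifier concrete, the following minimal C++ sketch (hypothetical names throughout; the forward() placeholder stands in for our trained fusion network) assembles the 36 sensor inputs enumerated above and decodes the five-element output into a peg/hole position index:

#include <array>
#include <algorithm>
#include <cstddef>

using InputVec  = std::array<double, 36>; // 8+3+1+3+9+6+3+3 sensor values
using OutputVec = std::array<double, 5>;  // one unit per peg/hole position

// Placeholder for the trained fusion network's forward pass (weights omitted).
OutputVec forward(const InputVec& x)
{
    OutputVec y{};
    (void)x; // a real implementation applies the trained BP network here
    return y;
}

// Gather the 36 inputs in the order listed in the text, items a) through h).
InputVec assembleInputs(const double visual[8], const double contact[3],
                        double approachDist, const double tactile[3],
                        const double fingerAngles[9], const double armAngles[6],
                        const double wristForce[3], const double wristTorque[3])
{
    InputVec x{};
    std::size_t k = 0;
    for (int i = 0; i < 8; ++i) x[k++] = visual[i];        // a) tracked coordinates
    for (int i = 0; i < 3; ++i) x[k++] = contact[i];       // b) finger-tip contacts
    x[k++] = approachDist;                                 // c) approach distance
    for (int i = 0; i < 3; ++i) x[k++] = tactile[i];       // d) tactile forces
    for (int i = 0; i < 9; ++i) x[k++] = fingerAngles[i];  // e) finger joint angles
    for (int i = 0; i < 6; ++i) x[k++] = armAngles[i];     // f) arm joint angles
    for (int i = 0; i < 3; ++i) x[k++] = wristForce[i];    // g) wrist forces
    for (int i = 0; i < 3; ++i) x[k++] = wristTorque[i];   // h) wrist torques
    return x;
}

// The recognized position is the index of the maximal output unit;
// index 0 corresponds to the ideal inserting position 1.
int recognizePosition(const InputVec& x)
{
    OutputVec y = forward(x);
    return static_cast<int>(std::max_element(y.begin(), y.end()) - y.begin());
}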
We made use of the neural network backpropagation (BP) algorithm in the Neural Network Fusion Center, with measures to improve training quality and avoid local extrema. We optimized according to `mc', the momentum coefficient in the training process, where K is the number of training iterations. The momentum coefficient is adjusted during training according to the following condition:
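A standard momentum rule consistent with this description (assumed here, since the original expression is given only in outline) resets the momentum when the error grows by more than a small factor:

mc(K) = 0,     if E(K) > 1.04 * E(K-1)
mc(K) = 0.95,  otherwise

with the corresponding weight update

Δw(K) = mc * Δw(K-1) + (1 - mc) * η * (-∂E/∂w),

where E(K) is the training error at iteration K.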

Moreover, the training speed η is adjusted by a self-adaptive algorithm; the self-adaptive learning rate is based on the following regulation:
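A common self-adaptive rule of this kind (again an assumed formulation) increases the rate while the error falls and backs off when it grows:

η(K+1) = 1.05 * η(K),  if E(K+1) < E(K)
η(K+1) = 0.7 * η(K),   if E(K+1) > 1.04 * E(K)
η(K+1) = η(K),         otherwise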
 
In the training process, the initial rate η(0) is seeded randomly. Successful accuracy rates in the peg-in-hole assembly experiments were improved to 99% by using the task-based neural network fusion module and the optimized BP algorithm. The detailed multi-sensor fusion algorithms and experiments can be found in [15].

VI Predictive Simulation and Overlay

Predictive simulation is a valuable component of the human-machine interface, and is an interesting concept in teleoperation research. During long-distance teleoperation, time delays in telecommunication cannot be avoided, and they are an important factor causing system instability. In addition, the Internet time-delay T(t) is not constant: it is a function of network traffic and of time t, so it varies dramatically. Our approach is based on a network time-delay measurement model [15], which is fed into our formulation of China's Internet time-delay statistics:

T(t) = Tc + Tl(t) + Td(t) + Tb
where Tc is the constant time-delay determined by the hardware environment; here we provide its empirical formulation:
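A plausible form, reconstructed from the variable definitions below (an assumption: cable delay proportional to distance, plus a per-router cost), is:

Tc = h * D + N * B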

where h represents the physical cable time-delay per kilometer (mean value 5 ms/km); N represents a router-node constant (usually it spans from 8 to 20, with a typical value of 13); D represents the real distance from server to client; and B is a coefficient obtained from delay statistics based on millions of packet counts (for illustration, we set B = 10, a typical value).

Tl(t) is the time-varying delay related to the current Internet load; it depends on the Internet's condition. We adopt a probability analysis here in order to segment surfing into different time periods.
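In symbolic form (an assumed rendering of this segmentation; the boundaries t_i and coefficients a_i are empirical), the load-dependent delay is piecewise constant over the day:

Tl(t) = a_i,  for t in [t_i, t_(i+1)),  i = 1, ..., n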

Here t represents the time of day over a 24-hour period. The experimental coefficients used in our system were chosen from our measured statistics for each period.


Td(t) is the time-varying delay caused by transient Internet disturbances, and it varies from moment to moment. From millions of measurements, we found that it follows a Gaussian noise distribution, characterized by its mathematical expectation and variance. The empirical mean measured in our experiments is E = 0.05 (s), with the variance estimated from the same packet statistics.
Tb is the time-delay related to the output bandwidth of the Internet connection. For instance, with a 56K modem accessing the Internet, the transmission speed of the bit stream is at most 56 kbit/s:
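A reconstruction of the intended relation (an assumption based on the stated bandwidth) expresses the bandwidth delay for a packet of S bits sent over a link of rate R bit/s as:

Tb = S / R,  with R = 56,000 bit/s (i.e., 7 kB/s) for a 56K modem.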


In our experiments, the time-delay model is calibrated online using Matlab Simulink software [16]. In this fashion, we can estimate the mean time delays that will be experienced in the teleoperative loop. This parameter is used, for example, as an input parameter for the local autonomous control loops of the Arm/Hand system. The mean time delay is needed in order to derive a predictive model of the autonomous control loop filter functions, which we use to address the problem of network time delay. At the same time, predictive simulation in the client terminals is introduced to overcome the influence of time delay and to guide the teleoperator's actions, so that the operator can safely control the predictive 3-D virtual robot without time-delay in our system.

6.1 Concepts and Advantages of Predictive Simulation
The simulation environment is built in the client terminal to simulate the remote robotic workspace. The real robot's location and motion status are associated with the robot model in the simulation. By moving the model to finish working tasks and recording the resulting data, the teleoperator can use those data to control the remote robot and hand to finish the same task. Because the simulation environment is located on the local terminal, local operations incur no measurable time-delay. The operator can control the simulation model to finish the tasks accurately, verify the feasibility and safety of the tasks, transfer the simulation data to the remote robot after validating the tasks, and finally guide the robot to finish the tasks. In our applications, the movement of the simulation model forecasts the forthcoming movement in the remote robotic scene, so we call it "predictive simulation". Within the predictive simulation functions provided on the human-machine interface, sensor visibility is presented to the tele-operator as visual information, and contact forces are rendered using haptic force-feedback. We find that operators can effectively switch control modes between autonomous control of the remote robotic system and local teleoperation.

6.2 The Implementation of Predictive Simulation
In our experiments, we use WTK (WorldToolKit) to construct our simulation environment. WTK runs on the Windows platform, so it makes it possible to carry out graphics simulation on a PC with inexpensive hardware. A typical simulation implementation is illustrated in Fig. 8.

Fig.8 Simulation implementation process

6.3 Internet-Based Teleoperation

In one scenario, our system was used to control a mechanical arm using a data glove constructed in the laboratory. The 6-DOF manipulandum is used to control a PUMA robot, and the data glove is used to control the dexterous hand. From a software/communications standpoint, the application program uses Windows Sockets (Winsock) to connect with the remote robot control server, transferring control data and multimedia data via the Internet. The predictive manipulation interface is shown in Fig. 9. With combined visual and audio feedback from the robotic workspace, an Internet client anywhere on the world wide web can make use of the integrated teleoperation predictive simulation system.
 
Fig.9 Predictive simulation interface

6.4 Graphical Overlay on Video

In our system, the video overlay was implemented to address problems associated with network time-delay and to help the human operator attain more natural perceptual-motor control. We use the real images, which come from the remote camera, as the background; this image changes in real time, so it reflects the remote robot's movement status as seen in the video stream. The graphics images, which come from the predictive simulation, are displayed in the foreground. The two images are overlapped to allow the tele-operator to perceive and correct the commanded motions. In Fig. 10, where an opaque solid shape model is presented, the model is offset from the real object so that the real one remains visible. But if we use the wire-frame model, as in Fig. 11, we can overlap the simulation model and the real object at the same position, and it is easier for the operator to manipulate continuously by comparing the two coincident objects.

Fig.10 Overlay of predictive display and actual view
Fig.11 Overlay of wire-frame graphics and actual view

VII Virtual Reality Control Equipment
7.1 BH-III 3-Finger 9-DOF Dexterous Hand
In the process of remote teleoperation, systems equipped with a dexterous hand can perform very interesting tasks. Our lab has developed three dexterous hands since 1990. In this system, we use the newest one, the BH-III. It complements the dexterity of the PUMA robot and harmonizes the wide-range movement of the arm with the micro-range movement of the hand, making the autonomous robot system reliable and precise. The dexterous hand is illustrated in Fig. 12.

Fig.12 The dexterous hand's overview

We chose a set of small micro motors and centralized all of them in the middle of the palm. As a result, the BH-III dexterous hand is small (open status: 22 × 12 × 8 cm) and light (1.2 kg), is easy to integrate onto the end of the robot, and does not restrict the PUMA560 robot's workspace. It also meets the PUMA's load requirement, with a maximum grasping weight of 1.2 kg. Furthermore, it blends gear and steel-strand drives in order to guarantee grasping precision while giving the system good flexibility. The sensors on the hand (location sensors, tactile sensors, and so on) have high reliability, and their precision meets the requirements. In the design, each finger is independent, and each finger module is composed of the same parts. The construction is modular, so the initial configuration of the fingers can be rearranged to suit the task.
 
7.2 Mechanical Arm and Data Glove
In a typical laboratory scenario, a mechanical arm and data glove can be used in many exercises relevant to teleoperation and Virtual Reality. So far we have produced two types of mechanical arm and two types of data glove: the Robotarm-BH1 6-DOF mechanical arm, the Robotarm-BH2 5-DOF force-feedback mechanical arm, the BHG-I 3-finger 9-DOF data glove, and the BHG-II 5-finger 11-DOF tactile data glove.
To enhance the teleoperation capability and give operators a more visual and realistic feel, we use WTK to build a simulation system on the client, connecting the system with the mechanical arm and data glove. The arm and the glove can control the simulation system as well as the real robotic system. In this system, we integrate the Robotarm-BH1 robot arm and the BHG-II data glove as the teleoperation equipment.
 
7.2.1 BH1 6-DOF Mechanical Arm
The Robotarm-BH1 6-DOF mechanical arm is used as equipment for human-machine interaction, measuring the location of a point in space. We use the method of detecting minimal power to measure the point's location. The arm is anthropomorphic, composed of a forearm and elbow. The 2 DOF of the hand and the rotating joint of the forearm ensure that the arm can reach the target in different postures. The maximum location error of the arm is 1.83 mm, the average error is 1.28 mm, and the variance is 0.65800 mm². Fig. 13 shows the BH1 mechanical arm.

Fig.13 BH1 Mechanical Arm overview
7.2.2 BHG-II 5-Finger 11-DOF Data Glove
The BHG-II data glove is designed for VR programming and manufacturing. It is constructed from a mechanical main body, an A/D data collection card, and simulation software, and it fits different human hands. It detects the fine movements of the five fingers and uses graphics simulation software to realize man-machine interaction at the same time. The range of joint movement that it can detect is 20°-90°, the average resolution is 0.49°, and the average excursion is 0.045 V (4.8°). Fig. 14 shows the BHG-II data glove being worn and in working status. Fig. 15 and Fig. 16 show the BHG-II glove's sensor distribution and single-finger structure.

Fig.14 Data glove wearing overview and controlling the simulation graphics

Fig.15 BHG-II Glove Sensor Distribution
Fig.16 BHG-II Single Finger Structure
 
The BHG-II glove is inexpensive and reliable, suitable for the requirements of classroom-based teleoperation. The arm and hand connect to the teleoperation system via RS-232 communication, using a single-chip microcomputer (SCM) to control data collection and processing under the Windows 9x and Windows NT platforms.

VIII Real-time Internet Visual/Audio interaction
8.1 Socket-Based Communication Protocol
Real-time image, sound, and data stream communication via the Internet is very important in our teleoperated robotic system because it ties all subsystems together via the teleoperative link. To date, all of our Internet programs have been based on TCP/IP (Transmission Control Protocol/Internet Protocol). There are many advantages to this socket-based model of connectivity. First, it is transparent from the perspective of the application-level programmer, who deals with the complex underlying services and structures of the network environment through an API. Second, it is a convenient way to implement flexible data exchange, supporting multi-protocol, multi-process, and multi-application use. Moreover, it greatly simplifies complicated network programming by providing the Client/Server-oriented programming model, with varied APIs for client and server. If an application is compatible with the Windows Sockets standard, we call it a `Sockets' application, to distinguish it from applications built on higher-level protocols such as HTTP.
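As a minimal sketch of this socket model (the host address, port, and command string are hypothetical; error handling is abbreviated), a Winsock TCP client connecting to the robot control server looks like this:

#include <winsock2.h>  // link with ws2_32.lib

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;   // initialize Winsock

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);  // TCP stream socket
    if (s == INVALID_SOCKET) { WSACleanup(); return 1; }

    sockaddr_in server = {};
    server.sin_family      = AF_INET;
    server.sin_port        = htons(5000);                  // hypothetical port
    server.sin_addr.s_addr = inet_addr("192.168.0.10");    // hypothetical server

    if (connect(s, (sockaddr*)&server, sizeof(server)) == SOCKET_ERROR) {
        closesocket(s); WSACleanup(); return 1;
    }

    const char cmd[] = "MOVE J1 10.0";                     // illustrative command
    send(s, cmd, sizeof(cmd), 0);

    closesocket(s);
    WSACleanup();
    return 0;
}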
In another experiment, we control the robotic Arm/Hand system through simulation graphics and VR equipment located on an SGI graphics workstation in another lab, via the local network. The SGI workstation runs UNIX, while the second-level server located in the robot autonomous sub-system runs Windows NT. The socket-based communications cleanly solve the multi-platform data transfer between the UNIX and Windows NT operating systems.

8.2 Design and Implementation of the Robotic Command Transfer Protocol

The communication criteria for robotic commands are reliability, accuracy, and speed. Because robotic command transfer via the Internet requires little bandwidth, we used a second-level server in the design of the command module. The second-level server was added between the Internet-based server and the robotic system. It acts as a buffer center to receive high-level commands or simulation data, perform multi-sensor data fusion, feed useful low-level information back, control robotic autonomy, and limit the dexterous hand's finger forces and angles under certain circumstances.
Considering the characteristics of robotic command transfer, we adopted the connection-oriented TCP to design the command transfer protocol among the client, the Internet-based server, and the simulation workstation. Meanwhile, we adopted the canonical serial communication standard RS-232 between the second-level server and the robotic system to guarantee reliable and speedy communication with the robotic physical devices.
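As a hedged sketch of this command path (the packet layout and field names are our illustration, not the system's actual protocol), a fixed-size command record can be sent over the TCP link; the second-level server then relays the decoded command to the robot over RS-232:

#include <winsock2.h>

// Hypothetical fixed-size command record; the real protocol's layout differs.
#pragma pack(push, 1)
struct RobotCommand
{
    BYTE  byOpcode;      // e.g. move arm joint, set finger angle, stop
    BYTE  byJointIndex;  // which joint or finger the command targets
    float fTarget;       // target angle (degrees) or force limit
    DWORD dwSequence;    // sequence number for ordering and acknowledgement
};
#pragma pack(pop)

// Send one command over an already-connected TCP socket (see the sketch in 8.1).
bool sendCommand(SOCKET s, const RobotCommand& cmd)
{
    const char* p = reinterpret_cast<const char*>(&cmd);
    int remaining = sizeof(cmd);
    while (remaining > 0) {                     // TCP may accept partial buffers
        int n = send(s, p, remaining, 0);
        if (n == SOCKET_ERROR) return false;
        p += n;
        remaining -= n;
    }
    return true;
}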

 

8.3 Design and Implementation of Visual Transfer
The real-time image transfer method applied in this system differs from traditional methods used to transfer data for which significant transfer time is acceptable. A consecutive and steady data stream from the server is necessary for the client to receive the image data and play the image sequence in real time. Moreover, the image refresh rate (frames/second) and image quality should be sufficient to allow the teleoperator monitoring the work scene to stay vigilant for errors and catastrophic events. The image should reflect the scene in detail and quickly reflect environmental changes.
The real-time image communication protocol includes RDTP (Real-time Data Transmission Protocol) and RCP (Real-time Control Protocol). In our system, RDTP occupies an odd-numbered transfer port, and RCP uses an even-numbered transfer port. The Internet-based image transfer framework of client and server under RDTP and RCP is shown in Fig. 17.
 
Fig.17 Internet-based Image Transfer Frame of Client and Server
As seen from Fig. 17, the consecutive image data streams are extracted, grouped, packed, and embedded with a TS (Time Stamp) in image-frame sequence at the Internet-based server; the client computer then receives the stream via the Internet and plays it. The design is oriented toward the PSTN (Public Service Telephone Network) model, so communication errors may occur when bandwidth is crowded. Considering the image quality needed for most tasks, we found that the client computer could simply discard dropped image frames, rather than requesting that the server resend them, as is typical for standard protocols in teleoperation.
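A minimal sketch of this client-side policy (assuming a header like the RDTP header shown in Section IX; the names are ours, not the system's actual code) plays only frames whose sequence number advances, instead of requesting retransmission:

#include <windows.h>  // BYTE, DWORD

// Simplified RDTP-style frame header fields used by the playback gate.
struct FrameHeader
{
    DWORD dwPackSequence;  // sequence number of this frame
    DWORD dwTime;          // time stamp of this frame
};

// Play a frame only if it is newer than the last one played; late or
// out-of-order frames are silently dropped rather than re-requested.
class FrameGate
{
    DWORD m_lastSeq = 0;
public:
    bool shouldPlay(const FrameHeader& h)
    {
        if (h.dwPackSequence <= m_lastSeq) return false;  // stale frame: drop it
        m_lastSeq = h.dwPackSequence;                     // advance the gate
        return true;
    }
};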

The RCP design is a very important part of the system implementation. It performs server/client control, including video play, pause, and brightness and color adjustment. Most importantly, RCP is in charge of reporting network status so that the server can adjust its data output/input, and of controlling data quantity and contention through network time-delay calculation, packet accounting, and image compression control. In addition, the client sends its connection request to the server through RCP and negotiates a quality and quantity of service for receiving and playing images according to the server's answer. The image transfer rate, the image size, and the bitmap quality can all be adjusted on the client computer. If the network condition changes during video play, the client can send a new quality request to the server to refresh the image stream.
8.4 The Network-Based Data Transfer Implementation
In the real-time transfer design of the image server, the collection and buffering of image data are placed in a thread named the service thread, which responds to client requests. Each connection request from a client is answered by the service thread by creating a new thread, named a client thread. The image processing and encoding for transmission are performed in the client thread. A critical section is used here to avoid conflicts when a client thread and the image collection thread access the shared buffer. Moreover, the important RCP thread carries out control over image output and play. Every client thread maintains an information table, which comprises the current thread status (play, pause, or stop), the output rate (bits/second), and the image quality (level). RCP modifies the relevant parameters in the information table to control the visual image output, and in turn the subjective visual play and control.
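The sketch below illustrates this threading pattern with Win32 primitives (an illustrative sketch, not the server's actual code; names and buffer sizes are assumptions). The image collection code and each client thread guard the shared frame buffer with a critical section, keeping the locked region short:

#include <winsock2.h>  // SOCKET; winsock2.h also pulls in windows.h
#include <cstring>

CRITICAL_SECTION g_bufferLock;   // call InitializeCriticalSection() once at startup
BYTE  g_frameBuffer[176 * 144];  // QCIF-sized shared buffer (illustrative)
DWORD g_frameSeq = 0;            // sequence number of the latest captured frame

// Client thread: copy the newest frame out under the lock, then encode and
// send it outside the critical section so the collection thread is not blocked.
DWORD WINAPI ClientThread(LPVOID param)
{
    SOCKET client = (SOCKET)(ULONG_PTR)param;
    BYTE local[sizeof(g_frameBuffer)];
    for (;;) {
        EnterCriticalSection(&g_bufferLock);
        memcpy(local, g_frameBuffer, sizeof(local));
        DWORD seq = g_frameSeq;
        LeaveCriticalSection(&g_bufferLock);
        (void)seq;  // consumed by the real encoder/send path below
        // ... encode `local` (e.g. with H.263), prepend an RDTP header
        //     carrying `seq`, and send() it to `client` here ...
        Sleep(66);               // ~15 frames/s pacing (illustrative)
    }
    return 0;
}

// Service-thread fragment: answer a connection request by spawning a client thread.
void onClientAccepted(SOCKET client)
{
    CreateThread(NULL, 0, ClientThread, (LPVOID)(ULONG_PTR)client, 0, NULL);
}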
8.5 The Implementation of Audio Communication

Because the bandwidth requirements of the audio stream are much lower than those of the video stream, and for our applications the audio quality requirements are only moderate, we provide only an overview of the implementation used for audio communication. Our Internet-based audio program is made of three parts:
1. Capturing audio data - There are three characteristics:
1.1 It can save the audio data directly in memory, or save the data to the hard disk.
1.2 It can control the quality of the audio data, including the sampling frequency, and it can also select stereo or mono mode.
1.3 It can directly cache the data stream in memory.
2. Compressing audio data - The compression of data is programmed in C++ Builder, and the program can also decompress the data in memory or in a file.
3. Transferring audio data - As explained above, we perform the transfer of audio data over sockets. In one time-slice cycle, all the data in memory are transferred while, at the same time, audio capture is being carried out. The full-duplex audio data stream carries both parties' voices via the Internet. The following steps show the schematic implementation of the audio program (see the sketch after this list):
1. Capture the audio data into a buffer in memory.
2. Acquire the data from the buffer and compress it.
3. Send the compressed data over the socket.
4. On the other side, receive the compressed data, decompress it, and put it into a buffer.
5. Acquire the data from the buffer and play it.
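A minimal sketch of the send side of this cycle (the capture and codec routines below are placeholder stand-ins for the sound-card capture and the C++ Builder compressor; the transfer reuses a connected Winsock socket as in Section 8.1):

#include <winsock2.h>
#include <cstring>

// Placeholder stand-ins for the sound-card capture and the audio compressor.
static int captureAudio(BYTE* buf, int maxBytes)            // fills buf with PCM samples
{ memset(buf, 0, maxBytes); return maxBytes; }              // placeholder body
static int compressAudio(const BYTE* in, int n, BYTE* out)  // returns compressed size
{ memcpy(out, in, n); return n; }                           // placeholder passthrough

// One time-slice of the send side: capture, compress, transmit.
// The receiver performs the mirror steps: receive, decompress, play.
void audioSendCycle(SOCKET s)
{
    BYTE raw[8000];      // one time-slice of samples (size illustrative)
    BYTE packed[8000];
    int n = captureAudio(raw, sizeof(raw));
    int m = compressAudio(raw, n, packed);
    send(s, (const char*)packed, m, 0);
}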
 
IX Experiments
Recently, a series of teleoperated experiments was performed and applied in the remote education system. Our project was exhibited at the Chinese Education Achievement Exhibit (Beijing) and the Tangshan Science and Technology Exposition (Hebei Province) because of its success. Fig. 18 and Fig. 19 show the client and server system organization at the teleoperated exhibition. In the remote teleoperated experiments of the robotic education system, the clients used PSTN dial-up access to the Internet. Under the guidance of visual/audio information, they wore the data glove and mechanical arm to control the robot arm/hand to perform a set of typical experimental tasks, such as grasping a cup, pouring water, twisting a bulb, and peg-in-hole assembly.
 
Fig.18 Client organization in teleoperated exhibition

Fig. 19 Server organization in teleoperated exhibition

In addition, a tele-operator at the client site can interact with the 3-D robotic simulation platform, effectively `dragging' the model using the mouse, in order to plan simple motion sequences. Once the task is verified, the operator can press the "S" key to send this set of commands to the remote robot. At the server site, the robotic autonomous system listens for and performs remote commands in real time, and feeds useful information back to the autonomous control level, where safety thresholds are maintained locally at the remote robotic site. Fig. 20 shows some teleoperated experimental photos.
 
Fig.20 Experimental tasks and Simulation Planning

In our experiments, with a bandwidth restriction of 28.8 kbps, the rate of QCIF (176×144) image transfer reaches 6-7 frames per second. The images compressed and decoded with H.263 are consecutive, stable, and fast. If the bandwidth is improved to ISDN (64 kbps or higher), the QCIF image transfer rate can reach 12-15 frames per second, with very good results. During practical application, the visual images and sound via the Internet are consecutive, and the action delay of the robotic system is less than 1 second. The teleoperated tasks, such as peg-in-hole insertion, twisting a bulb, pushing a button, grasping a cup, and pouring water, can be completed very well by the teleoperator under multimedia interaction, 3-D simulation, predictive display, and overlay.

In our experiments, the moving images must be compressed before they are sent to the visual encoder. This step is very important because a high image compression rate is necessary to meet the limited network bandwidth. So far, compression standards such as H.261, H.263, MPEG-2, and MPEG-4 are good choices for the Internet. Over PSTN or ISDN, the bandwidth available on the Internet is very limited (28.8 kbps PSTN, 33.6 kbps PSTN, etc.) and unstable in China. The visual codec directly restricts the quality of visual transfer, so we adopted the better codec and compression tool H.263 and developed a good application program; our experiments show that it performs our objective very well via the Internet. Before an image packet is sent out, the compressed image data from the encoder are packed together with a data header. The structure of the data header is compact enough to ensure expeditious packet transfers, while ensuring sufficient specificity for robotic control commands:
struct RDTPHeader
{
    BYTE  byPayloadType;    // payload type
    DWORD dwBitsPerSec;     // transfer bit rate
    DWORD dwPackSize;       // packet size
    DWORD dwPackSequence;   // packet sequence number
    DWORD dwTime;           // time stamp
};
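For illustration (a hedged sketch using the RDTPHeader defined above; the payload-type code and buffer limit are assumptions), the header is packed in front of each compressed frame before transmission:

#include <winsock2.h>
#include <cstring>

// Prepend an RDTPHeader to one compressed frame and send the whole packet.
// Assumes frameBytes <= 65536 and a connected TCP socket (see Section 8.1).
bool sendFrame(SOCKET s, const BYTE* frame, DWORD frameBytes,
               DWORD seq, DWORD timeStamp, DWORD bitsPerSec)
{
    RDTPHeader h;
    h.byPayloadType  = 1;               // assumed code: 1 = H.263 video
    h.dwBitsPerSec   = bitsPerSec;
    h.dwPackSize     = frameBytes;
    h.dwPackSequence = seq;
    h.dwTime         = timeStamp;

    char packet[sizeof(RDTPHeader) + 65536];
    memcpy(packet, &h, sizeof(h));
    memcpy(packet + sizeof(h), frame, frameBytes);
    int total = (int)(sizeof(h) + frameBytes);
    return send(s, packet, total, 0) == total;
}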
X Conclusion
Remote teleoperation via the public Internet is a challenging and promising field for application development. At present, high-speed networks are not readily available in China, so the experimental results of real-time visual image and multimedia information communication via the Internet are far from ideal. We have demonstrated how to overcome a number of issues related to tele-operation over such networks. Our contributions are: setting up a teleoperated robotic system platform with 3-D predictive simulation and planning; providing remote monitoring/teleoperation based on multi-sensor information fed back from the work site; resolving teleoperated control and visual/audio transfer under limited network conditions; and performing typical Internet-based robotic tasks with undefined time-delay. In typical scenarios, a web-based client can manipulate the robotic equipment at a remote site under the guidance of natural predictive simulations and multimedia real-time interaction using combined visual, haptic, and audio streams. This paper has introduced a low-cost Internet-based teleoperation system and demonstrated the design and implementation of a suite of applications based on our own facilities. After testing the teleoperated system in live remote robotic education at expositions in China, we find its potential extremely encouraging.

XI References
1. Lynn Conway, Richard A. Volz et al., "Teleautonomous Systems: Projecting and Coordinating Intelligent Action at a Distance", IEEE Transactions on Robotics & Automation, Vol. 6, No. 2, pp. 146-158, April 1990.
2. Paul G. Backes and Kam S. Tso, "Mars Pathfinder Mission Internet-Based Operations Using WITS", Proc. of the 1998 IEEE Int. Conf. on Robotics & Automation, Belgium, pp. 284-291, 1998.
3. Gerd Hirzinger, Bernhard Brunner et al., "Sensor-Based Space Robotics - ROTEX and Its Telerobotic Features", IEEE Transactions on Robotics and Automation, Vol. 9, No. 5, pp. 649-662, Oct. 1993.
4. Alberto Rovetta, Francesca Cosmi and Lorenzo Molinari Tosatti, "Teleoperator Response in a Touch Task with Different Display Conditions", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 25, No. 5, pp. 878-881, May 1995.
5. Robert J. Anderson, "SMART: A Modular Control Architecture for Telerobotics", IEEE Robotics and Automation Society Magazine, pp. 10-18, Sep. 1995.
6. Y. Wakita, S. Hirai, K. Machida et al., "Applications of Intelligent Monitoring for Super Long Distance Teleoperation", IROS Proceedings, Osaka, Japan, pp. 1031-1037, November 1996.
7. G. Hirzinger (DLR Institute for Robotics and System Dynamics), "Space Robotics Activities - A Survey", Scientific Report, 1987-1992.
8. Ning Xi, T.J. Tarn et al., "Intelligent Planning and Control for Multi-Robot Coordination: An Event-Based Approach", IEEE Transactions on Robotics and Automation, Vol. 12, No. 3, pp. 439-452, June 1996.
9. Kevin Brady and Tzyh-Jong Tarn, "Internet-Based Remote Teleoperation", Proc. of the 1998 IEEE Int. Conf. on Robotics & Automation, pp. 65-70, May 1998.
10. Mohamed Shaheen, Kenneth A. Loparo et al., "Remote Laboratory Experiment", Proc. of the American Control Conference, Pennsylvania, pp. 1326-1329, June 1998.
11. http://ranier.hq.nasa.gov/telerobotics_page/telerobotics.shtm
12. Song You et al., "Teleoperated Architecture of Spatial Robotics via Network", Journal of High Technology Letters, pp. 71-75, Jan. 2000.
13. Song You et al., "Shared Control in Intelligent Arm/Hand Teleoperated System", Proc. of the 1999 IEEE International Conference on Robotics and Automation, Detroit, pp. 2489-2494, May 1999.
14. R. Alami, R. Chatila et al., "An Architecture for Autonomy", The International Journal of Robotics Research, Vol. 17, No. 4, pp. 315-337, April 1998.
15. Gunter Niemeyer and Jean-Jacques E. Slotine, "Towards Force-Reflecting Teleoperation Over the Internet", Proc. of the 1998 IEEE Int. Conf. on Robotics & Automation, Belgium, pp. 1909-1915, 1998.
16. Song You, "An Internet-Based Telerobotics System", Doctoral Dissertation, Beijing University of Aeronautics and Astronautics, June 2000.
17. Song You et al., "Research on Internet Telerobotic System With Time-Delay", Proc. of the 2000 Chinese Congress on Robotics, Changsha, Vol. 31, pp. 552-556, Oct. 2000.
