
STEAM POWERED ROBOTS




One of the most significant challenges in the development of an autonomous human-scale robot is the issue of power supply. Perhaps the most likely power supply/actuator candidate system for a position- or force-actuated human-scale robot is an electrochemical battery and dc motor combination. This type of system, however, would have to carry an inordinate amount of battery weight in order to perform a significant amount of work for a significant period of time.

A state-of-the-art example of a human-scale robot that utilizes electrochemical batteries combined with dc motor/harmonic-drive actuators is the Honda Motor Corporation humanoid robot model P3. The P3 robot has a total mass of 130 kg (285 lb), 30 kg (66 lb) of which are nickel-zinc batteries. These 30 kg of batteries provide sufficient power for approximately 15–25 min of operation, depending on workload. Operation times of this magnitude are common in self-powered position- or force-controlled human-scale robots, and represent a major technological roadblock for designing actuated mobile robots that can operate power-autonomously for extended periods of time.

1.1. Figure of Merit

Assuming that a given power supply and actuation system can deliver the requisite average and peak output power at the bandwidth required by a power-autonomous robot, three parameters are of primary interest in providing optimal energetic performance. These are the mass-specific energy density of the power source (Es), the efficiency of converting energy from the power source to controlled mechanical work (η), and the maximum mass-specific power density of the energy conversion and/or actuation system (Ps). A simple performance index is proposed by forming the product of these parameters:

AP = Es × η × Ps                    (1)

where AP is called the actuation potential. Such a figure of merit is justified by the fact that a system with high power-source energy density, high conversion efficiency, and high actuator power density will be the lightest possible system capable of delivering a given amount of power and energy.
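The figure of merit is a straight product of three quantities; a minimal Python sketch (with purely illustrative, hypothetical values) makes the units explicit:

```python
def actuation_potential(energy_density_j_per_kg, efficiency, power_density_w_per_kg):
    """Figure of merit from Eq. (1): AP = Es * eta * Ps.

    Units: (J/kg) * (dimensionless) * (W/kg) -> J*W/kg^2.
    """
    return energy_density_j_per_kg * efficiency * power_density_w_per_kg

# Hypothetical illustrative values (not from the text): a power source of
# 1.0 MJ/kg, 50% conversion efficiency, and a 100 W/kg actuation system.
ap = actuation_potential(1.0e6, 0.5, 100.0)
print(ap)  # 50000000.0 J*W/kg^2, i.e., 50 kJ*kW/kg^2
```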

In the case of a battery-powered dc-motor-actuated robot, the energy density of the power source would be the electrical energy density of the battery, the conversion efficiency would be the combined efficiency of the (closed-loop controlled) dc motor and gear head, and the power density of the energy conversion and actuation system would be the rated output power of the motor/gear head divided by its mass. In the case of a gasoline-engine-powered hydraulically actuated system, the energy density of the power source would be the thermodynamic energy density of gasoline; the conversion efficiency would be the combined efficiency of the internal combustion engine (converting thermodynamic energy to shaft energy), the hydraulic pump (converting shaft energy to hydraulic energy), and the hydraulic actuation system (converting hydraulic energy to controlled mechanical work); and finally, the power density of the energy conversion and actuation system would be the maximum output power of the hydraulic actuation system, divided by the combined mass of the engine, pump, accumulator, valves, cylinders, reservoir, and hydraulic fluid of the hydraulic system.

With regard to this figure of merit, batteries and dc motors capable of providing the requisite power for a human-scale robot offer reasonable conversion efficiency, but provide relatively low power-source energy density and a similarly low actuator/gear head power density. A gasoline-engine-powered hydraulically actuated human-scale robot would provide a high power-source energy density, but a relatively low conversion efficiency and actuation system power density.

1.2. A Monopropellant Powered Approach

Liquid chemical fuels can provide energy densities significantly greater than power-comparable electrochemical batteries. The energy from these fuels, however, is released as heat, and the systems required to convert heat into controlled, actuated work are typically complex, heavy, and inefficient. One means of converting chemical energy into controlled, actuated work with a simple conversion process is to utilize a liquid monopropellant to generate a gas, which in turn can be utilized to power a pneumatic actuation system. Specifically, monopropellants are a class of fuels (technically propellants since oxidation does not occur) that rapidly decompose (or chemically react) in the presence of a catalytic material. Unlike combustion reactions, no ignition is required, and therefore the release of power can be controlled continuously and proportionally simply by controlling the flow rate of the liquid propellant. This results in a simple, low-weight energy converter system, which provides a good solution to the design tradeoffs between fuel energy density and system weight for the scale of interest.

Monopropellants, originally developed in Germany during World War II, have since been utilized in several applications involving power and propulsion, most notably to power gas turbine and rocket engines for underwater and aerospace vehicles. Modern-day applications include torpedo propulsion, reaction control thrusters on a multitude of space vehicles, and auxiliary power turbopumps for aerospace vehicles. This seminar describes the design of a monopropellant-powered actuation system appropriate for human-scale self-powered robots, and presents theoretical and experimental results that indicate the strong potential of this system for high-energy-density human-scale robot applications. Specifically, with regard to the figure of merit described before, the proposed approach is projected to provide a significantly greater power-source energy density and actuation power density relative to batteries and dc motors, and is projected to provide a higher conversion efficiency and significantly greater actuation system power density relative to a gasoline-powered hydraulic system.

2. DESCRIPTION OF MONOPROPELLANT ACTUATION SYSTEM

The monopropellant-powered actuation system is similar in several respects to a typical pneumatically actuated system, but rather than utilize a compressor to maintain a high-pressure reservoir, the proposed system utilizes the decomposition of hydrogen peroxide (H2O2) to pressurize a reservoir. Peroxide decomposes upon contact with a catalyst. This decomposition is a strongly exothermic reaction that produces water and oxygen in addition to heat. The heat, in turn, vaporizes the water and expands the resulting gaseous mixture of steam and oxygen. Since the liquid peroxide is stored at a high pressure, the resulting gaseous products are similarly at high pressure, and mechanical work can be extracted from the high-pressure gas in a standard pneumatic actuation fashion.
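The mass balance of the decomposition can be sketched from the stoichiometry 2 H2O2 → 2 H2O + O2. The molar masses below are standard values and the calculation is a simplified sketch that ignores impurities and stabilizers in the solution:

```python
# Mass balance for the decomposition 2 H2O2 -> 2 H2O + O2,
# applied to 1 kg of a 70% (by mass) hydrogen peroxide solution.
M_H2O2, M_H2O, M_O2 = 34.02, 18.02, 32.00   # molar masses, g/mol

conc = 0.70                  # peroxide mass fraction of the solution
m_h2o2 = conc                # kg of H2O2 per kg of solution
m_diluent = 1.0 - conc       # kg of water diluent per kg of solution

m_water_produced = m_h2o2 * M_H2O / M_H2O2     # 2 mol H2O per 2 mol H2O2
m_oxygen = m_h2o2 * (0.5 * M_O2) / M_H2O2      # 1 mol O2 per 2 mol H2O2

m_steam = m_water_produced + m_diluent         # all water leaves as steam
print(round(m_steam, 3), round(m_oxygen, 3))   # ~0.671 kg steam, ~0.329 kg O2
```

Roughly two-thirds of the exhaust mass is steam, which is why the energy invested in vaporizing water dominates the efficiency discussion later in the report.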

The conversion of stored chemical energy to controlled mechanical work takes place as follows. The liquid H2O2 is stored in a tank pressurized with inert gas (called a blow-down tank) and metered through a catalyst pack by a solenoid-actuated control valve. Upon contact with the catalyst, the peroxide decomposes into oxygen gas and steam. The flow of peroxide is controlled to maintain a constant pressure in the reservoir, from which the gaseous products are then metered through a voice-coil-actuated four-way proportional spool valve to the actuator. Once the gas has exerted work on its environment, the lower-energy hot gas mixture is exhausted to atmosphere.

3. MONOPROPELLANT ACTUATOR PROTOTYPE

3.1. Hardware

A prototype of the monopropellant-powered actuation system depicted in Fig. 1 was fabricated and integrated into a single degree-of-freedom manipulator, as shown in Fig. 2.
The primary objective of building the prototype was to demonstrate tracking control and to conduct experiments characterizing the actuation potential described by (1). The propellant is stored in a stainless-steel blow-down propellant tank, and is metered through a two-way solenoid-actuated fuel valve and a catalyst pack into a stainless-steel reservoir. The catalyst pack consists of a 5-cm-long (2 in), 1.25-cm-diameter (0.5 in) stainless-steel tube packed with catalyst material. A pressure sensor measures the reservoir pressure for purposes of pressure regulation. The high-pressure hot gas is metered into and out of a 2.7 cm (1-1/16 in) inner diameter, 10 cm (3.9 in) stroke double-acting single-rod cylinder by a four-way spool valve, modified for proportional operation by replacing the solenoid actuator with a thermally isolated voice coil. The valve spool displacement is measured with a differential variable reluctance transducer (DVRT) in order to enable closed-loop control of the valve spool position. The pneumatic cylinder is kinematically arranged to produce a bicep-curling motion upon extension of the piston, as illustrated in Fig. 3.

3.2. Control

Control of the system is achieved using three separate control loops. The first and simplest is the pressure regulation of the reservoir. Pressure feedback from the pressure sensor switches the solenoid fuel valve with a thermostat-type on-off controller that regulates the reservoir pressure to 1515 kPa (220 psig). The second control loop provides high-bandwidth (i.e., approximately 10 Hz) position control of the valve spool. Finally, the valve spool position is commanded by an outer control loop, which controls the angular motion of the single-degree-of-freedom manipulator. The outer control loop utilizes a rotary potentiometer to provide arm angle measurement for a position, velocity, acceleration (PVA) feedback controller, which commands the valve spool position.
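The first loop is a simple hysteresis (bang-bang) controller. A sketch of its switching logic, using the 1515 kPa setpoint from the text and a hypothetical deadband value, might look like:

```python
# Sketch of the thermostat-type on-off reservoir pressure regulator:
# the fuel valve opens when pressure falls below the setpoint minus a
# deadband, and closes once the setpoint is reached. The deadband value
# is a hypothetical choice; only the 1515 kPa setpoint is from the text.
SETPOINT_KPA = 1515.0
DEADBAND_KPA = 50.0

def fuel_valve_command(pressure_kpa, valve_open):
    """Return the new valve state given the measured reservoir pressure."""
    if pressure_kpa < SETPOINT_KPA - DEADBAND_KPA:
        return True              # open: admit propellant to the catalyst pack
    if pressure_kpa >= SETPOINT_KPA:
        return False             # close: reservoir is at pressure
    return valve_open            # inside the band: hold the previous state

print(fuel_valve_command(1400.0, False))  # True  -> valve opens
print(fuel_valve_command(1520.0, True))   # False -> valve closes
```

The hysteresis band prevents the solenoid valve from chattering around the setpoint, a standard design choice for on-off regulators.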

4. EXPERIMENTS

4.1. Load Profile

Since the actuator relies on gas as an energetic medium, and since the actuation system is not designed to utilize energy resulting from condensation of the steam (steam quality less than 100%), the energy required to vaporize the water will not be recovered; as a result, the conversion efficiency is lower than if the actuation system included partial condensation. The best possible efficiency would occur when partial condensation is allowed to occur within the actuator and when the load profile of the piston is designed to allow isentropic expansion from high pressure down to the lowest pressure possible (atmospheric pressure). In particular, the most efficient load profile is such that the expansion of the peroxide reaction products is isobaric until all propellant mass is in the actuator, at which point the expansion becomes isentropic and continues as such until the cylinder pressure reaches atmospheric. Partial condensation occurs as a result of this load profile, leaving 70% quality steam in the actuator. This load profile would yield a theoretical efficiency of 39% for the 70% peroxide solution at a supply pressure of 220 psig.

4.2. Uninsulated Experiments

Experiments were conducted to measure the conversion efficiency. A 70% peroxide solution was used as the propellant to maintain acceptable temperatures for commercially available components. For these experiments, the single-degree-of-freedom manipulator was commanded to move the 11 kg mass through a 30-degree-amplitude, 1-Hz sinusoidal motion. The work output was computed indirectly by measuring the angle and, in post-processing, computing the actuation torque using a model of the load. The instantaneous power and average power could then be calculated. The propellant mass consumption was measured indirectly by recording the pressure of the nitrogen gas in the blow-down tank, assuming an isothermal process inside the constant-volume tank, and calculating the volume occupied by the nitrogen from the ideal gas equation, which in turn yields the volume of propellant in the tank. Since the propellant is a liquid, the mass of propellant used is easily computed from the known volume and density. The conversion efficiency is then computed over an integer number of cycles with the heat of decomposition of the 70% hydrogen peroxide solution.
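The indirect mass measurement reduces to Boyle's law for the nitrogen: as propellant leaves, the gas volume grows at constant P·V. A sketch, with hypothetical tank pressures and initial gas volume and an approximate density for 70% peroxide solution:

```python
# Indirect propellant-mass measurement: the nitrogen in the constant-volume
# blow-down tank expands isothermally as propellant drains, so P*V_gas is
# constant and the drained propellant volume follows directly.
# The pressures and initial gas volume below are hypothetical examples;
# the density is an approximate handbook value for 70% H2O2 solution.
RHO_70_PEROXIDE = 1290.0  # kg/m^3, approximate density of 70% solution

def propellant_mass_used(p_start_kpa, p_end_kpa, v_gas_start_m3,
                         rho=RHO_70_PEROXIDE):
    """Mass of liquid propellant drained, from isothermal N2 expansion."""
    v_gas_end = p_start_kpa * v_gas_start_m3 / p_end_kpa   # Boyle's law
    return rho * (v_gas_end - v_gas_start_m3)

# e.g., nitrogen initially occupies 2 L at 2000 kPa and ends at 1800 kPa:
m = propellant_mass_used(2000.0, 1800.0, 0.002)
print(round(m, 4))   # ~0.2867 kg of propellant consumed
```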

Based on these measurements, the experimentally determined conversion efficiency was found to be 6.6%. Note that the electrical power required to operate the valves was neglected in this analysis. The measured average combined electrical power required by the fuel and gas valves was approximately 2 W. Since this is only about 3% of the average work delivered by the actuator, this electrical power can legitimately be omitted from the analysis. The significant discrepancy between the measured conversion efficiency of 6.6% and the calculated upper bound of 16% is due to two major factors: inefficiency in control, and heat loss. Specifically, the thermodynamic model assumed that no gas was exhausted during a given monotonic segment of the trajectory, and that no energy was lost as heat. Regarding the former, any overshoot of the desired trajectory violates the assumed monotonicity and therefore results in an intermittent exhaust of hot gas and a corresponding decrease in efficiency. The existence of such intermittent exhaust is evident in the oscillations exhibited in the power delivered to the load, which is shown in Fig. 4 plotted against the theoretically required power. Regarding inefficiency due to heat loss, the external surfaces of the catalyst pack, reservoir, and actuator were hot during the experiments, indicating the presence of heat flow. To assess the degree of heat loss more quantitatively, the prototype was instrumented with thermocouples so that the rate of heat loss could be estimated from surface temperature measurements, referenced to tables for heat loss from uninsulated steam piping. This measurement yielded an estimated heat loss rate of 140 W. Note that the average measured mechanical power output was approximately 60 W: the prototype lost more than twice as much energy in the form of heat as it delivered in the form of work.
Taking this heat loss into account, the conversion efficiency of the prototype was recalculated to be 10%.

4.3. Insulated Experiments

In order to improve the measured conversion efficiency, the catalyst pack, reservoir, and actuator were wrapped in insulating tape, as shown in Fig. 5, and the measurement of the conversion efficiency was repeated. For the insulated case, the experimentally determined conversion efficiency was found to be 9%. Thermocouple measurement of the surface temperatures, as previously described, yielded an estimated heat loss rate of 73 W, approximately half that of the uninsulated case. Using this heat loss rate, the theoretically calculated efficiency was 12%, the difference presumably being due to control inefficiency (i.e., intermittent exhausts).

4.4. Experimentally Determined Actuation Potential

Having measured the conversion efficiency, the mass-specific power density of the actuator and the mass-specific energy density of the power source need to be determined in order to calculate the actuation potential (1). The former is found by determining the mass and the maximum output power of the energy conversion and actuation system. Though finding the mass is a trivial task, characterizing the maximum deliverable power is not as straightforward, due to its dependence upon several factors, including the supply pressure, the valve flow coefficient of the proportional valve, and the nature of the load, among others. In order to base the actuator power density solely on measured data, the maximum deliverable power was estimated by using the peak power consistently measured during the previously described efficiency experiments. As evidenced by the data in Fig. 4, the actuator can consistently generate a peak power of 150 W, as indicated by the dashed line overlaid on the plot. The mass of the actuation system was obtained by weighing the components of the actuator shown in Fig. 2. The mass of each component is summarized in Table 1.

As indicated in the table, the total actuation system mass is 1.5 kg, resulting in an actuation system power density of 100 W/kg. This would increase for a multi-degree-of-freedom system, since such a system would only include a single fuel valve, catalyst pack, pressure reservoir, and pressure sensor. Having determined the actuator power density, only the power-source energy density need be found in order to calculate the actuation potential. As previously mentioned, the heat of decomposition of 70% hydrogen peroxide propellant is 2.0 MJ/kg. The propellant must be stored, however, in a pressurized blow-down propellant tank, and as such a legitimate characterization of the energy density should include the mass of the tank. Based on available data for a composite overwrapped propellant tank, the mass of a propellant tank for a volume on the order of 10 liters would conservatively decrease the mass-specific energy density of 70% peroxide from approximately 2.0 MJ/kg to approximately 1.7 MJ/kg. Based on this and the measured values of conversion efficiency and actuator power density previously described, the actuation potential for this single-degree-of-freedom system, as given by (1), would be 15.3 kJ·kW/kg². As previously mentioned, the power density will increase for a multi-degree-of-freedom system, and thus so will the actuation potential. For a six-degree-of-freedom system, for example, the total actuation system mass would be 5.2 kg, or 870 g per actuator. The reservoir used in the single-degree-of-freedom experiment was oversized, and is appropriately sized for a power-comparable six-degree-of-freedom system. The actuation system power density would therefore increase to 172 W/kg, and the corresponding actuation potential to 26.4 kJ·kW/kg² for the six-degree-of-freedom system.

For purposes of comparison, the best commercially available rechargeable batteries have energy densities of approximately 180 kJ/kg (e.g., Evercel M40-12 nickel-zinc, or SAFT 27 10 LAS silver-zinc). A rare-earth permanent-magnet dc motor with a harmonic-drive gear head, with output characteristics capable of achieving the trajectory specified by Table 1, has a power density of approximately 48 W/kg. Note that this remains invariant regardless of the number of degrees of freedom. Finally, one can assume that the overall conversion efficiency would be the combined efficiencies due to pulse-width-modulation (PWM) control, the motor, and the gear head. The PWM efficiency was estimated to be 95%, the motor efficiency calculated for the desired trajectory to be 90% (i.e., the resistive power loss in the motor windings was calculated given the desired torque), and the harmonic-drive gear head efficiency was estimated based on manufacturer data to be 65%. The resulting actuation potential for this type of system would therefore be 4.8 kJ·kW/kg². The poorly insulated single-degree-of-freedom experimental setup with 70% peroxide therefore exhibited an actuation potential more than three times that of a state-of-the-art battery/dc motor system. A similar six-degree-of-freedom system would exhibit an actuation potential over five times that of the battery/dc motor system.
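The arithmetic behind these comparisons can be checked directly from the reported numbers, using the tank-included energy density of 1.7 MJ/kg for the monopropellant case:

```python
# Checking the reported actuation potentials with AP = Es * eta * Ps.
# Energy density in kJ/kg and power density in kW/kg, so AP comes out
# in kJ*kW/kg^2. All values are taken from the text above.
def ap(es_kj_per_kg, eta, ps_kw_per_kg):
    return es_kj_per_kg * eta * ps_kw_per_kg

monoprop_1dof = ap(1700.0, 0.09, 0.100)        # tank-included 70% peroxide
monoprop_6dof = ap(1700.0, 0.09, 0.172)        # higher power density, 6 DOF
battery_dc    = ap(180.0, 0.95 * 0.90 * 0.65, 0.048)

print(round(monoprop_1dof, 1))               # ~15.3 kJ*kW/kg^2
print(round(monoprop_6dof, 1))               # ~26.3 kJ*kW/kg^2
print(round(battery_dc, 1))                  # ~4.8 kJ*kW/kg^2
print(round(monoprop_1dof / battery_dc, 1))  # ratio > 3
```

The single-degree-of-freedom ratio comes out to roughly 3.2, consistent with the "more than three times" claim, and the six-degree-of-freedom figure reproduces the quoted value to within rounding.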

4.5. Projected Performance for High-Test Propellant

Though improvements can clearly be made with improved insulation and control performance, the most obvious means of improving the actuation system performance is to substitute a fully concentrated version of the propellant (i.e., 100% hydrogen peroxide) in place of the 70% solution used in the previously described experiments. Though procedurally quite simple, such experiments cannot be performed with commercially available pneumatic components, due to the high decomposition temperatures. Specifically, the adiabatic decomposition temperature of 100% peroxide is approximately 1000 °C (1800 °F), compared to approximately 230 °C (450 °F) for a 70% solution. Rather than conducting experiments using 100% peroxide, one can obtain a reasonable estimate of performance with projections based upon the experiments conducted with the 70% solution. Upon replacing the 70% propellant with 100% (technically 99.6%), at least two of the three parameters forming the actuation potential figure of merit would be expected to increase. Specifically, since the propellant contains more peroxide per unit mass, the heat of decomposition increases by a factor of 1.45, from 2.0 MJ/kg to 2.9 MJ/kg.

Additionally, the relatively low conversion efficiencies observed earlier were primarily due to the heat required to vaporize the water in the reaction product. Since the 100% propellant contains less water, less energy is invested in vaporizing the reaction product. Recalculating the expected efficiencies accounting for the reduced water content, the conversion efficiency scales by a factor of 1.56. Assuming that the actuation system power density remains invariant (i.e., that it does not increase with the 100% propellant), the single-degree-of-freedom system shown in Fig. 2 with 100% propellant would be expected to have an actuation potential of 35 kJ·kW/kg², which is 7.3 times greater than the battery/dc motor system. A similar six-degree-of-freedom system would exhibit an actuation potential of 60.4 kJ·kW/kg², more than an order of magnitude greater than the battery/dc motor system. The promise of such performance, which would presumably be further improved with better insulation and lightweight components, justifies the fabrication of custom high-temperature pneumatic components.
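These projections follow from scaling the measured 70%-solution figures. The sketch below assumes the tank-included energy density (1.7 MJ/kg) also scales by the 1.45 concentration factor, an assumption on my part, since the text quotes the scaling for the neat heat of decomposition; under it, the computed values land close to those quoted:

```python
# Projecting the actuation potential for 100% peroxide by scaling the
# measured 70%-solution values: energy density x1.45, efficiency x1.56,
# power density unchanged. Energy density includes the tank mass (an
# assumption; the text scales the neat 2.0 -> 2.9 MJ/kg figure).
es_70 = 1700.0   # kJ/kg, tank-included 70% solution
eta_70 = 0.09    # measured insulated conversion efficiency

es_100 = es_70 * 1.45          # scaled energy density, kJ/kg
eta_100 = eta_70 * 1.56        # scaled conversion efficiency

ap_1dof = es_100 * eta_100 * 0.100   # kJ*kW/kg^2, 100 W/kg actuator
ap_6dof = es_100 * eta_100 * 0.172   # kJ*kW/kg^2, 172 W/kg for 6 DOF
print(round(ap_1dof, 1), round(ap_6dof, 1))  # ~34.6 and ~59.5
```

The results (about 34.6 and 59.5 kJ·kW/kg²) agree with the quoted 35 and 60.4 to within rounding of the intermediate factors.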

5. CHALLENGES OF DESIGN

The biggest challenge in using a monopropellant as a power source is providing adequate insulation to prevent heat loss from the system. The experimental results show that the heat loss can exceed the power output obtained from the actuator. Finding a suitable method to contain this heat loss is the first and biggest challenge in designing a monopropellant-based power supply.

Another problem is the unavailability of parts that can withstand the heat produced by the decomposition of 100% hydrogen peroxide. For this reason, a hydrogen peroxide solution of lower strength has to be used. The parts of the system should be made from materials that are more resistant to heat, so that the system can withstand higher temperatures. This would permit the use of higher concentrations of peroxide, thereby increasing the actuation potential of the system.

Monopropellants are highly reactive materials and are toxic to humans. They tend to ignite if spilled on clothing, so personnel handling the fuel must be extremely cautious to avert the danger of explosion or poisoning. Conventional power systems do not have such problems, so extra care must be taken when selecting the materials used in the system.

The selection or rejection of a proposed design depends heavily on its economics. Compared with a battery-operated power system, a hydrogen-peroxide-powered system is very costly to run. The presence of valves also raises maintenance costs, and the choice of valves influences the reliability of the system. A further challenge in designing monopropellant-powered systems is attaining proper coordination between the different control loops that govern the operation of the system. Finally, the emission of hot steam could inconvenience human workers if such robots are used alongside humans, so either their use should be limited to places inaccessible to humans, or the emissions should be controlled; controlling the emissions could also improve the efficiency of the system.

6. CONCLUSION

A power supply and actuation system appropriate for a position or force controlled human-scale robot was proposed. The proposed approach utilizes a monopropellant as a gas generant to power pneumatic-type hot gas actuators. Experiments were performed that characterize the energetic behavior of the proposed system and offer the promise of an order-of-magnitude improvement in actuation potential relative to a battery powered dc-motor-actuated approach. Experiments also demonstrated good tracking and adequate bandwidth of the proposed actuation concept.

Steam-powered robots are a possibility in the future, provided the limitations of the existing prototype are overcome. A better actuation potential can be obtained by providing better insulation to the prototype, thereby reducing the heat loss. Another challenge before researchers is to manufacture parts that can withstand the high temperatures generated by the decomposition of 100% H2O2. With the introduction of better controls, fuel, and insulation, these robots could function effectively and economically.

The proposed power supply was found to be a feasible solution to the problem of providing a long-lasting power supply to robots that can actually work. Moreover, the power output can be easily adjusted by controlling the rate of flow of the monopropellant. Although a full-size human-scale robot powered by a monopropellant is yet to be made, the experimental results obtained from a single-degree-of-freedom manipulator demonstrate the feasibility of such a system.

NUCLEAR REACTOR




Conventional thermal power stations use oil or coal as the source of energy. The reserves of these fuels are becoming depleted in many countries, and thus there is a tendency to seek alternative sources of energy. In a nuclear power station, instead of a furnace there is a nuclear reactor, in which heat is generated by splitting atoms of radioactive material under suitable conditions. For economical use in a power system, a nuclear power station generally has to be large, so it is best suited to systems where large units are justifiable.

Nuclear power plants provide about 17 percent of the world's electricity. Some countries depend more on nuclear power for electricity than others. In France, for instance, about 75 percent of the electricity is generated from nuclear power, according to the International Atomic Energy Agency. In the United States, nuclear power supplies about 15 percent of the electricity overall, but some states get more power from nuclear plants than others. There are more than 400 nuclear power plants around the world, with more than 100 in the United States.

Nuclear power station in India:
In India, it was Dr. H. J. Bhabha who put India on the road to nuclear research, more than two decades ago. At present India has four nuclear power plants:
·        Tarapur
·        Rana Pratap Sagar
·        Kalpakkam
·        Narora


Tarapur: This is the first nuclear power plant of India. It has two boiling water reactors, each of 200 MWe capacity, and each uses enriched uranium as fuel.

Rana Pratap Sagar: It is situated at Rajasthan.

Kalpakkam: It is situated at Tamil Nadu.

Narora: It is at U. P.

Main parts of a nuclear power station:

The main parts of a nuclear power station are
·        Nuclear reactor
·        Heat exchanger
·        Steam turbine
·        Condenser
·        Generator

Working:
In a reactor, heat is produced by the fissioning or splitting of uranium atoms. A cooling medium takes up this heat and delivers it to the heat exchanger, where steam for the turbine is raised. When the uranium atoms split, there is radiation as well, so the reactor and its cooling circuit must be heavily shielded against radiation hazards.

Large electrical generating plants which provide most of our electricity all work on the same principle - they are giant steam engines. Power plants use heat supplied by a fuel to boil water and make steam, which drives a generator to make electricity. A generating plant's fuel, whether it is coal, gas, oil or uranium, heats water and turns it into steam. The pressure of the steam spins the blades of a giant rotating metal fan called a turbine. That turbine turns the shaft of a huge generator. Inside the generator, coils of wire and magnetic fields interact - and electricity is produced.

Parts of Nuclear Reactor:
1.     nuclear fuel
2.     reactor core
3.     moderator
4.     control rods
5.     reflector
6.     reactor vessel
7.     biological shielding
8.     coolant

Nuclear fuel:
The fuel of a reactor should be a fissionable material, which can be defined as an element or isotope whose nuclei can be caused to undergo fission by neutron bombardment and to produce a fission chain reaction.
The fuels used are: U-238, U-235, U-234, and UO2.
Fertile materials, those which can be transformed into fissile materials, cannot themselves sustain chain reactions. When a fertile material is hit by neutrons and absorbs some of them, it is converted to fissile material. U-238 and Th-232 are examples of fertile materials used for reactor purposes.
Reactor core:
This contains a number of fuel rods made of fissile material.

Moderator:
This material in the reactor core is used to moderate or to reduce the neutron speeds to a value that increases the probability of fission occurring.

Control rods:
The energy inside the reactor is controlled by the control rods. These are in cylindrical or sheet form and are made of boron or cadmium.
These rods can be moved in and out of holes in the reactor core assembly.

Reflector:
This completely surrounds the reactor core within the thermal shielding arrangement and helps to bounce escaping neutrons back into the core. This conserves the nuclear fuel.

Reactor vessel:
It is a strong-walled container housing the core of the power reactor. It contains the moderator, reflector, thermal shielding, and control rods.

Biological shielding:
Shielding helps in giving protection from the deadly α- and β-particle radiations and γ-rays as well as neutrons given off by the process of fission within the reactor.

Coolant:
This removes heat from the core produced by nuclear reaction. The types of coolants used are carbon dioxide, air, hydrogen, helium, sodium or sodium potassium.

Principle of reactor control:
When a nucleus captures a neutron, the resulting compound nucleus is unstable. It splits into two fragments, releases energy, and ejects some neutrons. If conditions are favorable, neutrons ejected by the first fission may be captured by other nuclei and a chain reaction begins. If the energy output from a reactor is to be maintained constant, one neutron, and not more than one, from each fission must split another nucleus (multiplication factor k = 1); otherwise control of the chain reaction will not be possible.
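The role of the multiplication factor can be illustrated with a toy generation-by-generation neutron count (the population sizes and k values are illustrative only):

```python
# Neutron-population bookkeeping for the multiplication factor k:
# each generation the population is multiplied by k, so k = 1 holds
# the population (and hence the power) steady, k < 1 dies out, and
# k > 1 grows without bound.
def population(k, n0=1000.0, generations=10):
    n = n0
    for _ in range(generations):
        n *= k
    return n

print(population(1.00))          # 1000.0 -> steady (critical)
print(round(population(0.95)))   # decays (subcritical)
print(round(population(1.05)))   # grows (supercritical)
```

In a real reactor, the control rods absorb excess neutrons to hold k at exactly 1 during steady operation.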
The principal law of nuclear energy is   E = mc²
                             where   E - energy (joules)
                                m - mass (kilograms)
                                c - speed of light (3×10⁸ m/s)
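A quick numerical check of the mass-energy relation shows why so little fuel goes so far:

```python
# Mass-energy equivalence E = m*c^2: even one gram of mass converted
# entirely to energy yields an enormous amount of energy.
C = 3.0e8  # speed of light, m/s (rounded value used in the text)

def mass_to_energy_joules(mass_kg):
    return mass_kg * C ** 2

print(f"{mass_to_energy_joules(0.001):.2e}")  # 1 g -> about 9.00e+13 J
```

Only a tiny fraction of this is actually released in fission (the mass defect per reaction), but the scale explains why a nuclear plant consumes so little fuel compared to a coal plant.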

The main reactions inside a reactor are
238U92  +  1n0  →  239U92  +  γ
239U92 has a half-life of only 23.5 min and hence is unstable; it decays by emitting a β-particle:
239U92  →  239Np93  +  0e-1
239Np93 again has a short half-life and also emits a β-particle:
239Np93  →  239Pu94  +  0e-1

Types of reactors:
1.     boiling water reactor
2.     pressurized water reactor
3.     pressurized heavy water reactor
4.     gas cooled reactor
5.     advanced gas cooled reactor
6.     light water graphite reactor
7.     fast breeder reactor
8.     high temperature gas cooled reactor
9.     CANDU type reactor

What types of reactors are there?
All nuclear reactors operate on the same basic principle, but various designs are in use throughout the world.

Choice of cycle conversion:
1.     A well-established method of converting the heat of the nuclear reaction to electric power is the indirect use of the coolant: the reactor heat is transferred to the coolant, which heats water to produce steam for driving the turbine or other heat engine.
2.     Another method is the direct use of the liquid or gas that cools the reactor to drive the turbine or other heat engine, which in turn drives the electric generator.
3.     Direct generation of electric current from the heat produced during the nuclear reaction. An example of this type of conversion is the production of electric current by means of thermocouples.
4.     Direct generation of electric current from electrons produced during a nuclear reaction.

Advantages of Nuclear Power Plant:
1.     The space requirement of a nuclear power plant is less than that of other conventional power plants of equal capacity.
2.     A nuclear power plant consumes a very small quantity of fuel, so fuel transportation costs are low and a large fuel storage facility is not needed.
3.     There is increased reliability of operation.
4.     Nuclear power plants are not affected by adverse weather conditions.
5.     Nuclear power plants are well suited to meet large power demands. They give better performance at higher load factors (80–90%).
6.     Expenditure on metal structures, piping and storage is much lower for a nuclear power plant than for a coal-burning power plant.
7.     It does not require a large quantity of water.

Disadvantages:
1.     The initial cost of a nuclear power plant is higher than that of a hydro or steam power plant.
2.     Nuclear power plants are not well suited for varying load conditions.
3.     Radioactive wastes, if not disposed of carefully, may harm the health of workers and the surrounding population.
4.     The maintenance cost of the plant is high.
5.     Trained personnel are required to operate nuclear power plants.

Nuclear and Chemical Accidents
1952
Dec. 12, Chalk River, nr. Ottawa, Canada: a partial meltdown of the reactor's uranium fuel core resulted after the accidental removal of four control rods. Although millions of gallons of radioactive water accumulated inside the reactor, there were no injuries.

1953
Love Canal, nr. Niagara Falls, N.Y.: the neighborhood was destroyed by waste from chemical plants. By the 1990s, the town had been cleaned up enough for families to begin moving back to the area.


1957
Oct. 7, Windscale Pile No. 1, north of Liverpool, England: fire in a graphite-cooled reactor spewed radiation over the countryside, contaminating a 200-square-mile area.
South Ural Mountains: explosion of radioactive wastes at Soviet nuclear weapons factory 12 mi from city of Kyshtym forced the evacuation of over 10,000 people from a contaminated area. No casualties were reported by Soviet officials.

1976
nr. Greifswald, East Germany: radioactive core of reactor in the Lubmin nuclear power plant nearly melted down due to the failure of safety systems during a fire.

1979
March 28, Three Mile Island, nr. Harrisburg, Pa.: one of two reactors lost its coolant, which caused overheating and partial meltdown of its uranium core. Some radioactive water and gases were released. This was the worst accident in U.S. nuclear-reactor history.

1984
Dec. 3, Bhopal, India: toxic gas, methyl isocyanate, seeped from Union Carbide insecticide plant, killed more than 2,000, injured about 150,000.

1986
April 26, Chernobyl, nr. Kiev, Ukraine: explosion and fire in the graphite core of one of four reactors released radioactive material that spread over part of the Soviet Union, eastern Europe, Scandinavia, and later western Europe. Thirty-one deaths were confirmed at the time; total casualties are unknown. Worst such accident to date.

1987
Sept. 18, Goiânia, Brazil: 244 people contaminated with cesium-137 from a cancer-therapy machine that had been sold as scrap. Four people died in worst radiation disaster in Western Hemisphere.

1999
Sept. 30, Tokaimura, Japan: uncontrolled chain reaction in a uranium-processing nuclear fuel plant spewed high levels of radioactive gas into the air, killing two workers and seriously injuring one other.

2004
Aug. 9, Mihama, Japan: non-radioactive steam leaked from a nuclear power plant, killing four workers and severely burning seven others.

Conclusion:
Widely used, nuclear energy can be of great benefit to mankind. It can bridge the gap caused by inadequate coal and oil supplies, and it should be used as much as possible to solve the power problem. With further developments, it is likely that the cost of nuclear power stations will be lowered and that they will soon be competitive. With the depletion of fuel reserves and the difficulty of transporting fuel over long distances, nuclear power stations are taking an important place in the development of the power potential of the nations of the world today, in the context of "the changing pattern of power".

3G Technology

DOWNLOAD



                         3G refers to the next generation of wireless communications technology; it is a 'catch-all' name which encompasses everything from the technology itself to the branding of mobile communication devices. The aim of 3G (third generation) is to deliver much higher data rates to mobile communications devices over a large geographical area; data rates of up to 2 megabits per second will be possible in some areas. It is also the aim of 3G to unify wireless devices the world over, so that a user from the UK can travel through Europe and the US and use the same high-speed data links seamlessly around the globe. 3G is a packet-switched suite of protocols, a technology originally developed for the internet; it also uses techniques such as Code Division Multiple Access (originally developed by the military) to allow efficient, fast and secure communications over the wireless medium. To the end user, 3G means fast World Wide Web browsing, file transfers, emailing, even video phoning and video conferencing from a mobile phone, PDA or laptop, with coverage over all of Europe, the USA, China, Japan and the rest of the world, and seamless integration between all of these countries and more. Although 3G is still relatively young, the technology is growing fast, with more and more wireless technology companies developing devices with 3G capabilities, such as Nokia, Siemens and Sony Ericsson. On the horizon is 4G, a technology which will truly integrate the internet and mobile telecommunications.


Evolution towards 3G

                                  Since 3G stands for "third generation", there were, inevitably, a first and a second generation.

                                  1G refers to the original analogue mobile phones, which resembled a brick. They were large and very heavy, due to the weight of the battery, and they were also very expensive. However, they paved the way for something that was soon to become a revolution in the technological world: phones would soon start to be smaller, lighter, cheaper and better. Operating time increased while battery weight dropped, thanks to advancements in battery technology as well as circuit design that allowed much lower power consumption.
                                          2G saw the birth of the digital mobile phone, and a standard which is the greatest success story in the history of the mobile phone to date. The Global System for Mobile Communications (GSM) is a standard that unified Europe's mobile phone technologies; it allows one phone to be used throughout Western Europe. Using TDMA (time division multiple access), the GSM standard allowed millions of users throughout Europe to travel freely and still be able to use their phones. Although Europe enjoyed a unified standard, in America three standards still exist, from three different companies. Because of this, mobile communications have not become nearly as popular in the States as they have in Europe. 2G worked well for voice communications, providing data rates of up to 9.6 Kbps - good enough for voice, but nowhere near enough for bandwidth-demanding modern media such as video and file transfers. This was something the world was demanding, and to provide it, 3G was developed.
                                          Due to the nature of 3G, and its incredible complexity and expense, the move from 2G to 3G was not going to happen overnight, so the 2.5G standard was developed.
                                       The 2.5G standard had a major technical difference from its predecessor: it used packet-switching technology to transmit data. The General Packet Radio Service (GPRS) replaced GSM as the 2.5G standard. GPRS actually overlays a packet-switched technology onto the original GSM circuit-switched network. Data rates of 2.5G can reach 50 Kbps. Some may think this was a waste of time and that service providers should have gone straight to the goal and implemented 3G; however, the 2.5G standard was a much-needed step, as it gave service providers experience of running packet-switched networks, and of charging on a data basis rather than a time basis.
                           Other than GPRS, another upgrade option from GSM is a standard called EDGE, which is three times faster, with a maximum transfer rate of 150 Kbps as opposed to GPRS's 50 Kbps. EDGE can also be an upgrade from TDMA networks, so some American operators may go this route.

How does 3G work?

                      3G is a packet-switched technology, much like the internet. There are some basic principles of Radio Transmission Technologies (RTTs) we need to understand before we can understand how 3G works. These are:

Simplex & Duplex, TDD & FDD, Symmetric & Asymmetric transmission, TDMA & FDMA, Circuit switching & packet switching and 3G geographical cells.

Simplex and Duplex

                        In a simplex transmission, information can only flow one way at one time, this is because there is only one frequency being used to communicate on. The easiest way of explaining this is to use walkie-talkies as an example. With a set of walkie-talkies, only one person can talk to the other at any given time, for the other person to transmit, they must wait until the other person has stopped.

                        In a duplex transmission, two data transmissions can be sent at any one time. This is how mobile phones work: it allows both people to speak at the same time, without any delay. If more than two data transmissions can happen at any one time, this is called multiplex.

TDD and FDD

                        Up until recent developments in mobile phones, FDD (frequency division duplex) was used. Here several frequencies are used: one for the upstream (signals going from the phone to the base station) and one for the downstream (the opposite, from the base station to the phone). A "guard band" is also needed, which sits between the frequencies to separate them and provide isolation.
                        Although FDD works, it is very wasteful, as it uses several frequencies in total, and not to their full potential. This is why TDD was developed.

                        TDD means time division duplex and, as the name suggests, uses time rather than frequency to do the duplexing, hence saving valuable frequencies. It works by switching the signals very rapidly: first the upstream transmits, then the downstream transmits, and this cycle continues. It happens so quickly that the upstream and downstream seem permanently connected. This gives the same end product as FDD, but uses far fewer frequencies. As with FDD, some sort of guard is required, but since the duplexing is in the time domain, it uses a guard time rather than a guard frequency.

Symmetric and Asymmetric Transmission

                         A symmetric transmission is one where the upstream and downstream have the same speed, or data rate. Voice on mobile phones uses symmetric transmission, as the data rate needed to transmit your voice is the same as that needed to receive another person's.
                         For things like video broadcasts and internet surfing, much more downstream bandwidth is required, as you will mostly be receiving data. Typically the only things sent upstream in that case are requests (for instance, clicking on a link in your WAP/internet browser) or packet acknowledgements. A typical example of an asymmetric connection is ADSL broadband: the A, which fittingly stands for asymmetric, usually means 256 Kbps of upstream and 512+ Kbps of downstream bandwidth.

 TDMA vs. CDMA

                        We have considered how a mobile phone can send and receive calls at the same time (via an uplink and a downlink). Now we will examine how many users can be multiplexed into the same channel (i.e., share the channel) without getting interference from other users, a capability called multiple access. For 3G technology, there are basically two competing technologies to achieve multiple access: TDMA and CDMA.

TDMA is Time Division Multiple Access. It works by dividing a single radio frequency into many small time slots. Each caller is assigned a specific time slot for transmission. Again, because of the rapid switching, each caller has the impression of having exclusive use of the channel.

CDMA is Code Division Multiple Access. CDMA works by giving each user a unique code. The signals from all the users can then be spread over a wide frequency band. The transmitting frequency for any one user is not fixed but is allowed to vary within the limits of the band. The receiver has knowledge of the sender's unique code, and is therefore able to extract the correct signal no matter what the frequency.

                        This technique of spreading a signal over a wide frequency band is known as spread spectrum. The advantage of spread spectrum is that it is resistant to interference - if a source of interference blocks one frequency, the signal can still get through on another frequency. Spread spectrum signals are therefore difficult to jam, and it is not surprising that this technology was developed for military uses.
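The CDMA idea of unique codes and shared spectrum can be sketched with a toy example. The snippet below (a simplified illustration using 4-chip orthogonal codes; real systems use far longer codes and must also handle modulation, noise and synchronisation) shows two users sharing one channel:

```python
# Each user spreads its data bit over a unique code of +1/-1 "chips".
# Because the codes are orthogonal, the receiver can recover each user's
# bit from the summed channel signal by correlating with that user's code.
code_a = [1, 1, 1, 1]        # user A's spreading code (illustrative)
code_b = [1, -1, 1, -1]      # user B's code, orthogonal to A's

def spread(bit, code):
    return [bit * chip for chip in code]

def despread(signal, code):
    # Correlate with the code and normalise; this recovers the original bit
    return sum(s * chip for s, chip in zip(signal, code)) // len(code)

bit_a, bit_b = 1, -1
# Both transmissions share the channel: the receiver sees only their sum
channel = [a + b for a, b in zip(spread(bit_a, code_a), spread(bit_b, code_b))]

print(despread(channel, code_a))  # 1  (user A's bit)
print(despread(channel, code_b))  # -1 (user B's bit)
```

Correlating with the wrong code yields zero contribution from the other user, which is why many users can occupy the same wide band simultaneously.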

                        Finally, let's consider another robust technology originally developed by the military which is finding application with 3G: packet switching.

Circuit Switching vs. Packet Switching

                        Traditional connections for voice communications require a physical path connecting the users at the two ends of the line, and that path stays open until the conversation ends. This method of connecting a transmitter and receiver by giving them exclusive access to a direct connection is called circuit switching.
                        Most modern networking technology is radically different from this traditional model because it uses packet data. Packet data is information which is:

o chopped into pieces (packets),
o given a destination address,
o mixed with other data from other sources,
o transmitted over a line with all the other data,
o reconstituted at the other end.

Packet-switched networks chop the telephone conversation into discrete "packets" of data like pieces in a jigsaw puzzle, and those pieces are reassembled to recreate the original conversation. Packet data was originally developed as the technology behind the Internet.
A data packet.

The major part of a packet's contents is reserved for the data to be transmitted. This part is called the payload. In general, the data to be transmitted is arbitrarily chopped-up into payloads of the same size. At the start of the packet is a smaller area called a header. The header is vital because the header contains the address of the packet's intended recipient. This means that packets from many different phone users can be mixed into the same transmission channel, and correctly sorted at the other end. There is no longer a need for a constant, exclusive, direct channel between the sender and the receiver.
                   Packet data is added to the channel only when there is something to send, and the user is only charged for the amount of data sent. For example, when reading a small article, the user will only pay for what's been sent or received. However, both the sender and the receiver get the impression of a communications channel which is "always on".
On the downside, packets can only be added to the channel where there is an empty slot in the channel, leading to the fact that a guaranteed speed cannot be given. The resultant delays pose a problem for voice transmission over packet networks, and is the reason why internet pages can be slow to load.
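The chop/address/mix/reassemble cycle described above can be sketched as a toy model (field names such as `dest` and `seq` are illustrative, not those of any real protocol):

```python
def packetize(message, payload_size, dest):
    """Chop a message into fixed-size payloads, each carrying a small
    header with the recipient's address and a sequence number."""
    return [
        {"dest": dest, "seq": offset, "payload": message[offset:offset + payload_size]}
        for offset in range(0, len(message), payload_size)
    ]

def reassemble(packets, dest):
    """Pick out the packets addressed to us and restore the original order."""
    ours = [p for p in packets if p["dest"] == dest]
    return "".join(p["payload"] for p in sorted(ours, key=lambda p: p["seq"]))

# Packets from two senders are mixed onto the same channel...
channel = packetize("hello world", 4, "phone-A") + packetize("other data", 4, "phone-B")
# ...and sorted out correctly at the far end using the header address.
print(reassemble(channel, "phone-A"))  # hello world
print(reassemble(channel, "phone-B"))  # other data
```

Because the header carries the destination, no exclusive end-to-end channel is needed - exactly the property that lets packet networks charge per byte rather than per minute.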

3G geographical cells

              The 3G network has a hierarchal network of different sized cells. These are:

                       
Ø  A Macro cell - the biggest of the three areas; coverage is normally around the size of a city.
Ø  A Micro cell - coverage of about the size of a city centre.
Ø  A Pico cell - the smallest coverage, perhaps an office complex, hotel or airport. A Pico cell is often known as a "hot spot".
                        The reason for this division into regions is simple: shorter-range communications are faster and support a higher number of users. This is why a Pico cell, or hot spot, is located in a small but very busy geographical area, such as an airport.

                        TDD is not good at transmitting over long distances, because of delay. Recall that TDD uses time to duplex signals onto the same frequency. The further the mobile phone is from the base station, the longer a signal takes to travel; because of this extra delay, the switching between time slots cannot happen as quickly, so the usable bandwidth decreases.

2G Standards

                        The existing mobile phone market is referred to as the "second generation" of digital mobile communications, or "2G" (analogue mobile phones were "1G"). The European market is controlled by the Global System for Mobile communications (GSM) digital wireless standard, which uses TDMA as its radio transmission technology (RTT). GSM has proven to be the great success story of mobile standards, as it has become the unifying standard in Europe - it is possible to use one phone throughout Western Europe. The number of wireless users in Europe has greatly strengthened GSM's position as the basis for a potential global standard. The hegemony of GSM has resulted in Finland's Nokia and the UK's Vodafone becoming the powerhouses of the wireless economy.
                              In North America the situation is not nearly so unified. The situation is divided three-ways between GSM, a TDMA-based system from AT&T Wireless (IS-136), and a CDMA system called CDMAone (IS-95A) from Sprint and Verizon. This confusion of standards has resulted in the reduced popularity of cellphones in the US. CDMAone has perhaps the strongest grip on the American market, as well as being popular in Asia.

2G data transmission rates do not exceed 9.6Kbps (kilobits per second). This is not nearly fast enough to achieve complex 3G functionality.

2.5G Standards

                        The transition from 2G to 3G is technically extremely challenging (requiring the development of radically new transmission technologies), and highly expensive (requiring vast capital outlay on new infrastructure). For both of these reasons it makes sense to move to 3G via intermediate 2.5G standards.
                        2.5G radio transmission technology is radically different from 2G technology because it uses packet switching. GPRS (General Packet Radio Service) is the European 2.5G standard, the upgrade from GSM. GPRS overlays a packet-switched architecture onto the GSM circuit-switched architecture. It is a useful evolutionary step on the road to 3G because it gives telecoms operators experience of operating packet networks, and charging for packet data. Data transfer rates will reach 50Kbps.
                        EDGE (Enhanced Data for Global Evolution) is another 2.5G upgrade path from GSM. EDGE is attractive for American operators as it is possible to upgrade to EDGE from both TDMA (IS-136) networks and GSM. You might see the full EDGE standard referred to as UWC-136.
                        EDGE data rates are three times faster than GPRS. Realistically, the maximum rate that EDGE will be able to achieve will be 150Kbps. Even so, EDGE might be used for some pseudo-3G networks (the minimum cut-off data rate for 3G systems is 144Kbps) though this is not generally regarded as a bona fide 3G solution.
                        As EDGE would be cheaper than a full-blown 3G solution, this makes it attractive, especially for operators which cannot afford a licence for the full 3G radio spectrum. Most notably, AT&T has announced it is to use EDGE. AT&T has claimed a maximum data rate of 384Kbps for EDGE, although experts point out that "this is based on the ideal scenario of one person using the network standing next to a base station". AT&T's wireless division, after receiving a $9.8 billion stake from Japan's NTT DoCoMo i-mode service, plans to overlay the 3G standard, W-CDMA, onto their EDGE networks in the American market.
                        Deploying EDGE might prove surprisingly complex - it's more than just a software upgrade. It may require additions to the hardware subsystems of base stations, changes to base station antennas, and possibly require the construction of new base stations. For these reasons, some GSM operators might not adopt EDGE but might migrate from GSM or GPRS directly to the 3G standard (W-CDMA).
                        The 2.5G upgrade from CDMAone (IS-95A) is to CDMAone (IS-95B) which adds packet-switched capability. It offers data rates up to 115Kbps.

3G Standards

                        The 3G standard was created by the International Telecommunication Union (ITU) and is called IMT-2000. The aim of IMT-2000 is to harmonize worldwide 3G systems to provide global roaming. However, as was explained in the introduction to this section, harmonizing so many different standards proved extremely difficult. As a result, what we have been left with is five different standards grouped together under the IMT-2000 label:

W-CDMA
CDMA2000
TD-CDMA/TD-SCDMA
DECT
UWC-136

At this point, the definition of what is and what isn't "3G" becomes somewhat murky. Of these five standards, only three allow full network coverage over macro cells, micro cells and pico cells and can thus be considered as full 3G solutions: W-CDMA, CDMA2000, and TD-SCDMA. Of the remainder, DECT is used for those cordless phones you have in the house, and could be used for 3G short-range "hot-spots" (hence, it could be considered as being "part of a 3G network"), but it does not allow full network coverage so is not considered further here. And UWC-136 is another name for EDGE which is generally considered to be a 2.5G solution and was considered in the previous section.
So that leaves W-CDMA, CDMA2000, and TD-SCDMA - the bona fide 3G solutions.

W-CDMA

                        The 3G standard that has been agreed for Europe and Japan (very important markets) is known as UMTS. UMTS is an upgrade from GSM via GPRS or EDGE. UMTS is the European vision of 3G, and has been sold as the successor to the ultra-successful GSM.
                        The terrestrial part of UMTS (i.e., non-satellite) is known as UTRA (UMTS Terrestrial Radio Access). The FDD component of UTRA is based on the W-CDMA standard (UTRA FDD). This offers very high (theoretical!) data rates of up to 2 Mbit/s. The TDD component of UTRA is called TD-CDMA (or UTRA TDD) and will be considered later.
                         The standardisation work for UMTS is being carried out under the supervision of the Third Generation Partnership Project (3GPP). W-CDMA has recently been renamed 3GSM.

CDMA 2000

                       The chief competitor to Europe's UMTS standard is San Diego-based Qualcomm's CDMA2000. The standardisation work for CDMA2000 is being carried out under the supervision of the Third Generation Partnership Project 2 (3GPP2). The CDMA Development Group offers advice to 3GPP2.
                        Even though "W-CDMA" and "CDMA2000" both have "CDMA" in their names, they are completely different systems using different technologies. However, it is hoped that mobile devices using the two systems will be able to talk to each other.
                        CDMA2000 has two phases: phase one is 1XRTT (144 Kbps) (also known as 1X). The next evolutionary step is to the two CDMA2000 1X EV ("EV" = "Evolution") standards. CDMA2000 1X EV-DO ("Data Only") will use separate frequencies for data and voice. The following step is to CDMA2000 1X EV-DV ("Data and Voice") which will integrate voice and data on the same frequency band.
                        South Korea's SK Telecom launched the world's first 3G system in October 2000. Their system is based on CDMA2000 1X. They were followed by LG Telecom and KT Freetel (both Korean). Operational 3G systems based on CDMA2000 1X are now appearing around the world.
                        In the USA, Sprint has launched its nationwide CDMA2000 1X service called “Sprint Power Vision”. With Sprint PCS Vision Multimedia Services, customers get streaming audio and video content from familiar sources, including ABC News Now, NFL Network, Fox Sports, ESPN, NBC Discovery Channel, and many more. Sprint offer a range of multimedia phones including the Fusic.

TD-CDMA/TD-SCDMA

                        The UMTS standard also contains another radio transmission standard which is rarely mentioned: TD-CDMA (TDD UTRA because it is the TDD component of UTRA). TD-CDMA was developed by Siemens. While W-CDMA is an FDD technology (requiring paired spectrum), TD-CDMA is a TDD technology and thus can use unpaired spectrum. TDD is well-suited to the transmission of internet data.
                        China has more mobile phone users than any other country in the world, so anything China does in 3G cannot be ignored. The Chinese national 3G standard is a TDD standard similar to TD-CDMA: TD-SCDMA. TD-SCDMA was developed by the China Academy of Telecommunications Technology (CATT) in collaboration with Siemens. TD-SCDMA eliminates the uplink/downlink interference which affects other TDD methods by applying "terminal synchronisation" techniques (the "S" in TD-SCDMA stands for "synchronisation"). Because of this, TD-SCDMA allows full network coverage over macro cells, micro cells and pico cells. Hence, TD-SCDMA stands alongside W-CDMA and CDMA2000 as a fully-fledged 3G standard. The 3GPP have extended the TD-CDMA standard to include TD-SCDMA as an official IMT-2000 standard.
                        Unfortunately, TD-SCDMA has performed poorly in trials, and Chinese network operators may prefer W-CDMA over TD-SCDMA.

3G Applications:

Ø Wireless Internet
Ø Audio on demand
Ø Electronic postcards
Ø Video conferencing
Ø Secure mobile commerce transactions
Ø Traffic and traveling information - location specific
Ø Information services:
o Games
o E-mail
o Sports
o News
o Public transport
o Entertainment/gambling
o Job adverts
Ø Video telephony: Point-to-point video services
Ø On-line game:
o Download
o Rentals
o Review and tips/cheats
Ø Live & archive video:
o Short clips
o Information
o Entertainment

The 3G Performance Advantage :
      Time to download a 1 MB file:

Fixed line modem: 3 minutes
GSM cell phone: 15 minutes
Enhanced GSM phone: 1-5 minutes
3G phone (outdoor): 21 seconds
3G phone (indoor): 4 seconds
Bandwidth and speed:
                        3G promises increased bandwidth: up to 384 Kbps when a device is stationary or moving at pedestrian speed, 128 Kbps in a car, and 2 Mbps in fixed applications. It is expected that IMT-2000 will eventually provide higher transmission rates: a minimum of 2 Mbit/s and a maximum of 14.4 Mbit/s for stationary users, and 384 Kbit/s in a moving vehicle.
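The download times quoted above follow directly from these data rates. The sketch below (assuming 1 MB = 10⁶ bytes and ignoring protocol overhead, so the figures are approximate) reproduces the 1 MB calculations:

```python
def download_seconds(megabytes, kbps):
    """Time to transfer a file: bits to send divided by the link rate."""
    bits = megabytes * 8 * 1_000_000   # assuming 1 MB = 10^6 bytes
    return bits / (kbps * 1000)

print(round(download_seconds(1, 9.6) / 60))  # ~14 min on a 9.6 Kbps 2G link
print(round(download_seconds(1, 384)))       # 21 s outdoors at 384 Kbps
print(round(download_seconds(1, 2000)))      # 4 s indoors at 2 Mbps
```

The jump from minutes to seconds between 2G and 3G rates is the whole performance argument in one division.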

The Future of 3G:
                        There’s no doubt what is wanted for the future of 3G, and that’s convergence. Leading 3G figureheads around the world want a convergence of the phone networks, to unite the world as a whole with a wireless technology that is compatible across the globe.

                        There is a good chance this will happen, as it has already begun to, and it possibly won't be long before we see a sub-standard introduced that converges the different 3G standards into one global-roaming-capable standard.

                        On the horizon is 4G, which promises to bring true convergence of internet’s IP protocol technology to mobiles. By the time 4G is distributed, IPv6 will be well on its way, and the possibilities will be endless. Ever thought about texting your boiler to tell it to get the heating on just as you leave work?

3G Summary :
                        3G mobile is a major opportunity for business, commerce and consumers. It brings together the two fastest-growing market sectors - the mobile and internet markets. Services and standards are evolving from 2G to 3G, with significant opportunities for value-added content and service providers.

Ad hoc networks


Recent advances in portable computing and wireless technologies are opening up exciting possibilities for the future of wireless mobile networking. A mobile ad hoc network (MANET) is an autonomous system of mobile hosts connected by wireless links. Mobile networks can be classified into infrastructure networks and mobile ad hoc networks according to their dependence on fixed infrastructures. In an infrastructure mobile network, mobile nodes have wired access points (or base stations) within their transmission range.
The access points compose the backbone for an infrastructure network. In contrast, mobile ad hoc networks are autonomously self-organized networks without infrastructure support. In a mobile ad hoc network, nodes move arbitrarily, so the network may experience rapid and unpredictable topology changes. Additionally, because nodes in a mobile ad hoc network normally have limited transmission ranges, some nodes cannot communicate directly with each other. Hence, routing paths in mobile ad hoc networks potentially contain multiple hops, and every node in a mobile ad hoc network has the responsibility to act as a router.
Mobile ad hoc networks originated from the DARPA Packet Radio Network (PRNet) and SURAN projects. Being independent of pre-established infrastructure, mobile ad hoc networks have advantages such as rapid and easy deployment, improved flexibility and reduced costs.




DOWNLOAD


Multicasting is the transmission of datagrams to a group of hosts identified by a single destination address, and hence is intended for group-oriented computing. In MANETs, multicasting can support a variety of applications such as conferences, meetings, lectures, traffic control, search and rescue, disaster recovery, and the automated battlefield. In ad hoc networks, each host must act as a router since routes are mostly multihop.
Figure 1-1 shows an example of using a MANET to hold a conference meeting in a company. A group of mobile device users set up a meeting outside their normal office environment, where the business network infrastructure is missing. The mobile devices automatically construct a mobile ad hoc network through wireless links and communicate with one another. The figure shows the topology of the network and the available wireless links at a certain time. Suppose Susan wants to send data to Jerry. According to the network topology, Jerry's PDA is not in the immediate radio transmission range of Susan's laptop. The routing software on Susan's laptop finds a route Susan → Tommy → Jerry and sends the data packets to Tommy's laptop. Tommy's laptop then forwards the packets to the destination, Jerry's PDA. If the network topology changes and the wireless link between Susan and Tommy breaks, the routing software on Susan's laptop will try to find another route.
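The route-finding step in this example can be sketched as a breadth-first search over the currently available wireless links (a toy illustration of the multihop idea, not any specific MANET routing protocol; the topology dictionary is assumed from the scenario above):

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search for a multihop path over the current links."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in links.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route exists in the current topology

# Topology at a certain instant: Jerry is out of Susan's radio range
links = {"Susan": ["Tommy"], "Tommy": ["Susan", "Jerry"], "Jerry": ["Tommy"]}
print(find_route(links, "Susan", "Jerry"))   # ['Susan', 'Tommy', 'Jerry']

# If the Susan-Tommy link breaks, the route must be rediscovered
links["Susan"] = []
print(find_route(links, "Susan", "Jerry"))   # None
```

Real MANET protocols must do this discovery continuously and cheaply, since every node movement can invalidate the paths found so far.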

Well-established routing protocols do exist to offer efficient multicasting service in conventional wired networks. These protocols, having been designed for fixed networks, may fail to keep up with node movements and frequent topology changes in a MANET. As nodes become increasingly mobile, these protocols need to evolve to provide efficient service in the new environment. Therefore, adopting existing wired multicast protocols as such to a MANET, which completely lacks infrastructure, appears less promising. New protocols need to be proposed and investigated that take issues like topological change into account. An ad hoc network may consist of unidirectional links as well as bidirectional links. Moreover, wireless channel bandwidth is limited, and the scarce bandwidth decreases even further due to the effects of multiple access, signal interference, and channel fading. Securing ad hoc routing presents challenges because of the constraints in ad hoc networks, which usually arise from the low computational and bandwidth capacity of the nodes, the mobility of intermediate nodes in an established path, and the absence of routing infrastructure. All these limitations and constraints make multihop mobile ad hoc network research more challenging.

   1.2 Challenges in Routing and Multicasting
         Routes in ad hoc networks are multihop because of the limited propagation range of wireless nodes. Since nodes in the network move freely and randomly, routes often get disconnected. Routing protocols are thus responsible for maintaining and reconstructing routes in a timely manner, as well as establishing robust routes in the first place. Furthermore, routing protocols are required to perform all these tasks without generating excessive control message overhead. Control packets must be utilized efficiently to deliver data packets, and be generated only when necessary. Reducing the control overhead makes a routing protocol efficient in both bandwidth and energy consumption.
Multipoint communication has emerged as one of the most researched areas in the field of networking. As the technology and popularity of the Internet grow, applications that require multicast support, such as online gaming and video conferencing, are becoming more widespread. In a typical ad hoc environment, network hosts work in groups to carry out a given task, so multicast plays an important role in ad hoc networks. Providing efficient multicasting over a MANET faces many challenges, including dynamic group establishment and constant update of delivery paths due to node movement. Multicast protocols used in static networks, such as the Distance Vector Multicast Routing Protocol (DVMRP), Multicast Open Shortest Path First (MOSPF), and Core Based Tree (CBT), do not perform well in wireless ad hoc networks because multicast tree structures are fragile and must be readjusted as connectivity changes. Hence, the tree structures used in static networks must be modified, or a different topology between members (i.e., a mesh) needs to be deployed for efficient multicasting in wireless mobile ad hoc networks [1]. Undoubtedly, multicast communication is an efficient means of supporting group-oriented applications. This is especially true in MANETs, where nodes are energy- and bandwidth-limited. In these resource-constrained environments, reliable point-to-point protocols can get prohibitively expensive: the convergence of multiple requests on a single node typically causes intolerable congestion, violating the time constraints of a critical mission, and may drain the node's battery, cutting short the network's lifetime. Reliable multicasting is therefore vital to the success of mission-critical applications in MANETs.

1.2.1          Dynamic Topologies
Wireless nodes in an ad-hoc network are free to move about at will. As such, the topology of the network, which is typically multi-hop, is highly dynamic, changing randomly at unpredictable intervals in unpredictable ways. Because of wireless radio propagation effects, such as interference, links may be either bidirectional or unidirectional.

1.2.2 Bandwidth-constrained, variable capacity links
The bandwidth capacity of wireless networks will remain significantly below that of their wired counterparts. The realizable throughput of wireless links above the data link layer is often significantly less than the radio's maximum throughput at the physical layer, due to effects such as noise, fading, interference, and the inability to use collision detection for media access control. Given that users of ad-hoc networks will demand high-bandwidth services similar to those on wired networks, congestion will be much more common in wireless networks than in wired ones.

1.2.3 Energy-constrained operation
Wireless networks will typically operate on laptop computers, hand-held computers and other battery-powered devices. As such, ad-hoc routing protocols must be designed with the conservation of the device's energy in mind. There is a conflict between the requirement that nodes in an ad-hoc network be willing to forward packets for other nodes and the desirability, from an energy conservation perspective, of letting nodes sleep when they are not actively being used.

1.2.4   Limited physical security
There is an increased possibility of eavesdropping, spoofing, and denial-of-service attacks on wireless networks, due in part to their relative lack of physical security compared with their wired counterparts. Security-enhanced versions of ad-hoc routing protocols could be used to ensure that the operation of the routing protocol remains unaffected by attempts to forge or alter routing protocol control messages. Care must be taken when transferring sensitive data across an ad-hoc network. This could be achieved by conventional encryption. However, Public Key Infrastructure (PKI), or even more basic key exchange techniques, are difficult in an ad hoc network due to the lack of authorities of trust and appropriate network infrastructure.

1.2.5 Zero Configurations
Another desirable property of ad-hoc networks is that they should require little or no administrative overhead for their operation. It is desirable that when a group of wireless nodes come together, they can negotiate all the relevant networking parameters automatically, without manual intervention. In IP-enabled ad-hoc networks, the most important parameter is a node's Internet Protocol (IP) address. The issue of assigning unique IP addresses to nodes in an ad-hoc network is another area of substantial research. Traditional wired networks typically use a centralized solution to this problem in the form of the Dynamic Host Configuration Protocol (DHCP). Given the lack of a central administrative body in an ad-hoc network, a distributed approach is required. It is likely the solution will involve nodes selecting their IP address at random, and using some means, such as examining Address Resolution Protocol (ARP) traffic from other nodes, to prevent or resolve collisions that occur.
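As a rough illustration of the distributed approach described above, the sketch below picks a random address and retries on collision. The set of addresses already in use stands in for what a node would actually learn by observing ARP traffic, and the subnet and retry count are made up for the example:

```python
import random

def pick_address(addresses_in_use, subnet="10.0.0", attempts=10):
    """Select a tentative address at random and retry on collision.

    `addresses_in_use` is a stand-in for addresses learned by
    observing traffic (e.g. ARP) from other nodes; a real protocol
    would probe the network rather than consult a local set.
    """
    for _ in range(attempts):
        candidate = f"{subnet}.{random.randint(1, 254)}"
        if candidate not in addresses_in_use:
            return candidate  # tentatively claim this address
    raise RuntimeError("address space appears exhausted")
```

In a real network a node would continue to watch for a later duplicate claim and restart the selection if one appears; that conflict-resolution step is omitted here.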


1.3   Contributions

The accomplishments, which are elaborated throughout this dissertation, can be broadly listed as follows:

Performed simulations with up to 50 seeds, each containing 100 nodes, and evaluated ad hoc routing protocol scalability. Several schemes were introduced to improve protocol performance in large networks. This work is the first to conduct a simulation study of such size using Qualnet 4.0 and NS 2.27.
Proposed the On-Demand Multicast Routing Protocol (ODMRP). ODMRP builds a mesh structure on demand to provide multiple paths among multicast members. The mesh makes the protocol robust to mobility. ODMRP can function as both a multicast and a unicast protocol. The protocol was implemented in simulation using GloMoSim and Qualnet 4.0.
Applied various techniques to enhance the performance of ODMRP. These enhancements include mobility prediction, reliable packet delivery, and elimination of the route acquisition latency.
Studied the various QoS requirements with multicast routing in ad hoc networks and proposed a new cross-layer framework which provides QoS guarantee to multicast routing in MANETs. These QoS parameters include call-admission, bandwidth reservation and delay constraint.
Developed and implemented an Ad hoc QoS Multicasting (AQM) algorithm and proposed a cross-layer framework to support QoS multicasting. This work is the first to provide multicast quality of service in mobile ad hoc networks.
Proposed the ReAct transport-layer protocol on top of the multicast zone routing protocol to provide reliable services. Achieved good scalability and high throughput by employing a local recovery mechanism. The protocol was implemented in simulation using Qualnet 4.0.
Studied the problem of constructing energy-efficient key distribution schemes for securing multicast communication in wireless ad hoc networks. The network topology, power proximity, and path loss characteristics of the medium were incorporated in the key distribution tree design to conserve energy. The algorithm was developed for a homogeneous environment.
1.5   Related Work
 1.5.1   Classification of Ad-hoc Routing Protocols
Ad-hoc routing protocols can broadly be classified into proactive, reactive and hybrid protocols. The approaches involve a trade-off between the amount of overhead required to maintain routes between node pairs (possibly pairs that will never communicate), and the latency involved in discovering new routes as needed.

1.5.1.1   Proactive Protocols
Proactive protocols, also known as table-driven protocols, attempt to maintain routes between nodes in the network at all times, even when the routes are not currently being used. Updates to individual links within the network are propagated to all nodes, or a relevant subset of nodes, such that all nodes in the network eventually share a consistent view of the state of the network. The advantage of this approach is that there is little or no latency when a node wishes to begin communicating with an arbitrary node it has not yet been in communication with. The disadvantage is that the control message overhead of maintaining all routes can rapidly overwhelm the capacity of the network in very large networks or in situations of high mobility. Examples of proactive protocols include Destination Sequenced Distance Vector (DSDV) routing and Optimized Link State Routing (OLSR).
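The core of a table-driven protocol is the merge of a neighbour's advertised routes into the local table. The sketch below shows a minimal distance-vector merge; it deliberately omits the sequence numbers DSDV uses for loop freedom, so it illustrates the idea rather than DSDV itself:

```python
def merge_update(table, neighbour, advertised):
    """Fold a neighbour's advertised routes into our own table.

    `table` and `advertised` map destination -> (next_hop, hop_count).
    Returns True when the table changed, in which case a proactive
    node would re-advertise its own table to its neighbours.
    """
    changed = False
    for dest, (_, hops) in advertised.items():
        candidate = (neighbour, hops + 1)     # reach dest via this neighbour
        if dest not in table or candidate[1] < table[dest][1]:
            table[dest] = candidate
            changed = True
    return changed

# Node A already knows B is one hop away; B advertises a route to C.
table = {"B": ("B", 1)}
merge_update(table, "B", {"C": ("C", 1)})
print(table["C"])  # ('B', 2)
```

Because every link change triggers fresh advertisements, the control overhead grows with network size and mobility, which is exactly the disadvantage noted above.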

1.5.1.2   Reactive Protocols
Reactive protocols, also known as on-demand protocols, search for routes to other nodes only as they are needed. A route discovery process is invoked when a node wishes to communicate with another node for which it has no route table entry. When a route is discovered, it is maintained by a route maintenance process only for as long as it is needed, and inactive routes are purged at regular intervals. Reactive protocols have the advantage of being more scalable than table-driven protocols, since they require less control traffic to maintain routes that are not in use. The disadvantage is that additional latency is incurred in discovering a route to a node for which there is no route table entry. Dynamic Source Routing (DSR) and the Ad-hoc On-demand Distance Vector (AODV) routing protocol are examples of on-demand protocols.
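The on-demand behaviour, including the route-acquisition latency and the purging of inactive routes, can be illustrated with a toy route cache. The `discover` callable here stands in for a flooding-based discovery such as AODV's RREQ/RREP exchange; the class and its interface are invented for this sketch:

```python
import time

class OnDemandRouter:
    """Toy route cache: discover routes lazily, expire stale entries."""

    def __init__(self, discover, ttl=5.0):
        self.discover = discover   # callable: dest -> route or None
        self.ttl = ttl             # seconds an unused route stays cached
        self.cache = {}            # dest -> (route, timestamp)

    def route_to(self, dest, now=None):
        now = time.monotonic() if now is None else now
        entry = self.cache.get(dest)
        if entry and now - entry[1] < self.ttl:
            return entry[0]        # cache hit: no discovery latency
        route = self.discover(dest)  # cache miss: pay route-acquisition latency
        if route is not None:
            self.cache[dest] = (route, now)
        return route
```

The first call to `route_to` pays the discovery cost; subsequent calls within the TTL are answered from the cache, and after the TTL expires the route is rediscovered, mirroring the purge of inactive routes described above.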

1.5.1.3   Hybrid Protocols
There exists another class of ad-hoc routing protocols, such as the Zone Routing Protocol (ZRP), which employs a combination of proactive and reactive methods. The Zone Routing Protocol maintains groups of nodes in which routing between members within a zone uses proactive methods, while routing between different groups of nodes uses reactive methods.

1.5.1.4   Multicast Routing Protocols
One straightforward way to provide multicast in a MANET is through flooding. With this approach, data packets are sent throughout the MANET, and every node that receives a packet broadcasts it to all its immediate neighbor nodes exactly once. It has been suggested that in a highly mobile ad hoc network, flooding of the whole network may be a feasible alternative for reliable multicast. However, this approach has considerable overhead, since many duplicated packets are sent and packet collisions do occur in a multiple-access-based MANET. Furthermore, multicast routing protocols are classified into four categories based on how routes are created to the members of the group:
      Tree-based approaches
      Mesh-based approaches
      Stateless multicast
      Hybrid approaches
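Before turning to these categories, the flooding baseline described above can be sketched in a few lines. Each node rebroadcasts a packet exactly once (duplicate suppression), so the transmission count equals the number of reachable nodes, which is the per-packet overhead the four approaches try to reduce:

```python
def flood(links, source):
    """Blind flooding with duplicate suppression.

    Every node rebroadcasts each packet exactly once; a real protocol
    would key the duplicate cache on a (source, sequence-number) pair.
    Returns the set of nodes the packet reached and the number of
    broadcasts performed.
    """
    reached = set()
    seen = {source}       # nodes that have already broadcast this packet
    pending = [source]
    transmissions = 0
    while pending:
        node = pending.pop()
        transmissions += 1            # one broadcast per node per packet
        for neighbour in links.get(node, ()):
            reached.add(neighbour)
            if neighbour not in seen:
                seen.add(neighbour)
                pending.append(neighbour)
    return reached, transmissions
```

Even when only one receiver belongs to the multicast group, every node in the connected component transmits once, which is why tree, mesh, and stateless schemes restrict forwarding to a subset of nodes.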
Tree-based multicast is a very well established concept in wired networks. Protocols such as the Ad hoc Multicast Routing Protocol Utilizing Increasing ID Numbers (AMRIS), Multicast Ad hoc On-Demand Distance Vector (MAODV), Lightweight Adaptive Multicast (LAM), and the Location Guided Tree Construction Algorithm for Small Group Multicast belong to this category.
In contrast to a tree-based approach, mesh-based multicast protocols may have multiple paths between any source and receiver pair. Protocols such as the On-Demand Multicast Routing Protocol (ODMRP), Dynamic Source Routing (DSR), and the Temporarily Ordered Routing Algorithm (TORA) belong to this category.
Tree- and mesh-based approaches have the overhead of creating and maintaining the delivery tree/mesh over time. In a MANET environment, frequent movement of mobile nodes considerably increases the overhead of maintaining the delivery tree/mesh. To minimize this problem, stateless multicast has been proposed, wherein a source explicitly lists the destinations in the packet header. Stateless multicast focuses on small-group multicast. The Differential Destination Multicast (DDM) protocol belongs to this approach.
The tree-based approaches provide high data forwarding efficiency at the expense of low robustness, whereas mesh-based approaches provide better robustness at the expense of higher forwarding overhead and increased network load. The hybrid approach combines the advantages of both. Protocols such as the Ad hoc Multicast Routing Protocol (AMRoute) and Multicast Core Extraction Distributed Ad hoc Routing (MCEDAR) belong to this category.

Delay Analysis and Optimality of Scheduling Policies for Multi-Hop Wireless Networks


We analyze the delay performance of a multi-hop wireless network in which the routes between source-destination pairs are fixed. We develop a new queue grouping technique to handle the complex correlations of the service process resulting from the multi-hop nature of the flows and their mutual sharing of the wireless medium. A general set-based interference model is assumed that imposes constraints on which links can be served simultaneously at any given time. These interference constraints are used to obtain a fundamental lower bound on the delay performance of any scheduling policy for the system.
We present a systematic methodology to derive such lower bounds. For a special wireless system, namely the clique, we design a policy that is sample path delay optimal. For the tandem queue network, where the delay optimal policy is known, the expected delay of the optimal policy numerically coincides with the lower bound. The lower bound analysis provides useful insights into the design and analysis of optimal or nearly optimal scheduling policies.


DOWNLOAD

EXISTING SYSTEM:
       
                    A large number of studies on multi-hop wireless networks have been devoted to system stability while maximizing metrics like throughput or utility. These metrics measure the performance of a system over a long time-scale. The delay performance of wireless networks, however, has largely been an open problem. This problem is notoriously difficult even in the context of wireline networks, primarily because of the complex interactions in the network (e.g., superposition, routing, departure, etc.). The problem is further exacerbated by the mutual interference inherent in wireless networks, which complicates both the scheduling mechanisms and their analysis.

 Disadvantages:
1.     Simultaneity
2.     Throughput
3.     Resource allocation


PROPOSED SYSTEM:
             
               We analyze a multi-hop wireless network with multiple source-destination pairs, given routing and traffic information. Each source injects packets into the network, which traverse the network until they reach the destination. A packet is queued at each node in its path, where it waits for an opportunity to be transmitted. The delay performance of any scheduling policy is primarily limited by the interference, which causes many bottlenecks to be formed in the network. We develop new analytical techniques that focus on the queuing due to the (K, X)-bottlenecks. One of these techniques, which we call the "reduction technique", simplifies the analysis of the queuing upstream of a (K, X)-bottleneck to the study of a single-queue system with K servers.

Advantages:
1.     Single Queue System
2.     Back Pressure Policy
3.     Savings of time


MODULES:
   
1.     Characteristics of Bottlenecks
2.     Reduction Technique
3.     Reduced System
4.     Bound on expected delay
5.     Design of delay efficient policies

1. Characteristics of bottlenecks in the system
             
                    Link interference causes certain bottlenecks to be formed in the system. We define a (K, X)-bottleneck to be a set of links X ⊂ L such that no more than K of its links can be scheduled simultaneously.
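The constraint can be stated as a one-line check on any single-slot schedule. The link set and the value of K below are made up for illustration (here X behaves as a clique, so K = 1):

```python
def respects_bottleneck(schedule, bottleneck_links, K):
    """Check that a one-slot schedule serves at most K links of a
    (K, X)-bottleneck, per the interference constraint above.

    `schedule` is the set of links activated in the slot and
    `bottleneck_links` is the set X.
    """
    return len(schedule & bottleneck_links) <= K

# Hypothetical bottleneck: three links, at most one active per slot.
X = {(1, 2), (2, 3), (3, 4)}
print(respects_bottleneck({(1, 2)}, X, K=1))          # True
print(respects_bottleneck({(1, 2), (3, 4)}, X, K=1))  # False
```

Any scheduling policy, however sophisticated, must satisfy this check in every slot, which is what makes the bottleneck the binding constraint for the delay bounds that follow.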


2. Reduction Technique

                    We describe our methodology to derive lower bounds on the average size of the queues corresponding to the flows that pass through a (K, X)-bottleneck.


3. Reduced System
               
                   Consider a system with a single server and A_X(t) as the input. The server serves at most K packets from the queue. Let Q_X(t) be the queue length of this system at time t. Flows II, IV, and VI pass through an exclusive set using two, three, and two hops of the exclusive set, respectively. The corresponding G/D/1 system is fed by the exogenous arrival streams 2A_II(t), 3A_IV(t), and 2A_VI(t).
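A discrete-time sketch of the reduced system, assuming the server drains at most K packets per slot (the arrival stream below is made up for illustration):

```python
def simulate_reduced_system(arrivals, K):
    """Discrete-time queue fed by A_X(t); at most K packets leave per slot.

    Returns the queue-length sample path Q_X(t) of the reduced system
    described above.
    """
    q, path = 0, []
    for a_t in arrivals:
        q += a_t              # arrivals A_X(t) join the queue
        q -= min(q, K)        # server drains at most K packets this slot
        path.append(q)
    return path

# Hypothetical arrival stream: a burst of 3 packets every other slot, K = 2.
print(simulate_reduced_system([3, 0, 3, 0, 3, 0], K=2))  # [1, 0, 1, 0, 1, 0]
```

Averaging this sample path gives the quantity whose lower bound the analysis carries over to the original multi-hop system.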
4. Bound on Expected Delay
 
               
                 We now present a lower bound on the expected delay of the flows passing through the bottleneck as a simple function of the expected delay of the reduced system. In the analysis, we use the above theorem to bound the queueing upstream of the bottleneck and a simple bound on the queueing downstream of the bottleneck. Applying Little's law to the complete system, we derive a lower bound on the expected delay of the flows passing through the bottleneck.
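The final step is elementary: once a lower bound on the expected number of packets held up at the bottleneck is available, Little's law converts it into a delay bound. With made-up numbers:

```python
def delay_lower_bound(queue_lower_bound, arrival_rate):
    """Little's law: E[delay] >= E[Q] / lambda.

    When `queue_lower_bound` is itself a lower bound on the expected
    number of packets in the system, the quotient lower-bounds the
    expected delay. Units: packets and packets per slot.
    """
    return queue_lower_bound / arrival_rate

# E.g. at least 6 packets queued on average, aggregate rate 1.5 packets/slot.
print(delay_lower_bound(6.0, 1.5))  # 4.0 slots
```

Because Little's law holds for any work-conserving discipline, the resulting bound applies to every scheduling policy, which is what makes it useful as a benchmark for near-optimal designs.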

5. Design of delay efficient policies

               
                 A scheduler must satisfy the following properties.      
                 
• Ensure high throughput:  This is important because if the scheduling policy does not guarantee high throughput then the delay may become infinite under heavy loading.

• Allocate resources equitably: The network resources must be shared among the flows so as not to starve some of the flows. Also, non-interfering links in the network have to be scheduled such that certain links are not starved for service. Starvation leads to an increase in the average delay in the system.  
                           
CONCLUSION:
             
             Thus, new approaches are required to address the delay problem in multi-hop wireless systems. To this end, we develop a new approach that reduces the bottlenecks in a multi-hop wireless network to single-queue systems in order to carry out the lower bound analysis.
                    The analysis is very general and admits a large class of arrival processes. Also, the analysis can be readily extended to handle channel variations. The main difficulty, however, is in identifying the bottlenecks in the system. The lower bound not only helps us identify near-optimal policies, but may also help in the design of a delay-efficient policy.