LATEST TECHNOLOGY


Monday, January 10, 2011

Internet Access Through Cable TV Network


The Internet is a network of networks in which computers all over the world connect to one another. The connection to other computers is made possible by an ISP (Internet Service Provider). Most Internet users depend on dial-up connections, which have many disadvantages, such as very poor speed and frequent disconnections. To solve this problem, Internet data can be carried over cable TV networks wired to the user's computer. The connection types in use include PSTN connections, ISDN connections, and Internet via cable networks. The advantages of cable access include high availability, high bandwidth at low cost, high-speed data access, and always-on connectivity.

The number of households getting on the Internet has increased exponentially in the recent past. First-time Internet users are amazed at the Internet's richness of content and personalization, never before offered by any other medium. But this initial awe lasts only until they experience the slow speed of Internet content delivery; hence the popular nickname "World Wide Wait" (rather than World Wide Web). There is pent-up demand for high-speed (or broadband) Internet access for fast web browsing and more effective telecommuting.
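To put figures on the "World Wide Wait", a rough back-of-the-envelope calculation can compare a dial-up modem with a cable modem. The file size and the cable speed below are illustrative assumptions, not measurements:

```python
# Illustrative comparison of download times over dial-up vs. a cable modem.
# Link speeds are typical nominal figures; the file size is an assumption.
def download_seconds(size_mb: float, link_kbps: float) -> float:
    """Time to transfer size_mb megabytes over a link of link_kbps kilobits/s."""
    bits = size_mb * 8 * 1000 * 1000   # megabytes -> bits (decimal units)
    return bits / (link_kbps * 1000)

size = 5.0                               # a 5 MB file
dialup = download_seconds(size, 56)      # 56 kbps dial-up modem
cable = download_seconds(size, 2000)     # ~2 Mbps cable connection

print(f"dial-up: {dialup:.0f} s, cable: {cable:.0f} s")
```

Even at a modest 2 Mbps, the cable link finishes in seconds what the dial-up modem needs more than ten minutes for.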





Saturday, September 25, 2010

Space Mouse

Every day of your computing life, you reach out for the mouse whenever you want to move the cursor or activate something. The mouse senses your motion and your clicks and sends them to the computer so it can respond appropriately. An ordinary mouse detects motion in the X and Y plane and acts as a two dimensional controller. It is not well suited for people to use in a 3D graphics environment.

Space Mouse is a professional 3D controller specifically designed for manipulating objects in a 3D environment. It permits simultaneous control of all six degrees of freedom: translation, rotation, or any combination of the two. The device serves as an intuitive man-machine interface.

The predecessor of the Space Mouse was the DLR control ball. The Space Mouse has its origins in the late seventies, when the DLR (German Aerospace Research Establishment) started research in its robotics and system dynamics division on devices with six degrees of freedom (6 dof) for controlling robot grippers in Cartesian space. The basic principle behind its construction is mechatronic engineering and the multisensory concept. The Space Mouse has different modes of operation, in one of which it can also be used as a two-dimensional mouse.

How does a computer mouse work?
Mice first broke onto the public stage with the introduction of the Apple Macintosh in 1984, and since then they have helped to completely redefine the way we use computers. Every day of your computing life, you reach out for your mouse whenever you want to move your cursor or activate something. Your mouse senses your motion and your clicks and sends them to the computer so it can respond appropriately.

Inside a Mouse
The main goal of any mouse is to translate the motion of your hand into signals that the computer can use. Almost all mice today do the translation using five components:



SPACEMOUSE

The Space Mouse was developed by the DLR Institute of Robotics and Mechatronics.
DLR: Deutsches Zentrum für Luft- und Raumfahrt (German Aerospace Center)

4.1 Why 3D motion?

In every area of technology, one can find automata and systems controllable in up to six degrees of freedom: three translational and three rotational. Industrial robots make up the most prominent category needing six degrees of freedom, maneuvering six joints to reach any point in their working space with a desired orientation. Even more broadly, there has been a dramatic explosion in the growth of 3D computer graphics.

Already in the early eighties, the first wire-frame models of volume objects could be moved smoothly and interactively using so-called knob boxes on the fastest graphics machines available. A separate button controlled each of the six degrees of freedom. Next, graphics systems on the market allowed smooth manipulation of shaded volume models, i.e. rotating, zooming and shifting them, and thus viewing them from any angle and position. The scenes became more and more complex; e.g. with a "reality engine", the mirror effects on volume car bodies are updated several times per second, a task that needed hours on mainframe computers a couple of years ago.

Parallel to the rapid graphics development, there has been a clear trend in the field of mechanical design towards constructing and modeling new parts in a 3D environment and transferring the resulting programs to NC machines, which are able to work in 5 or 6 degrees of freedom (dof). Thus, it is no surprise that in the last few years there have been increasing demands for comfortable 3D control and manipulation devices for these kinds of systems. Despite breathtaking advances in digital technology, it has turned out that digital man-machine interfaces like keyboards are not well suited for people, as our sensorimotor reactions and behaviors are, and will remain, analog.

4.2 DLR control ball, Magellan's predecessor

At the end of the seventies, the DLR (German Aerospace Research Establishment) institute for robotics and system dynamics started research on devices for the 6-dof control of robot grippers in Cartesian space. After lengthy experiments, it turned out around 1981 that integrating a six-axis force/torque sensor (3 force, 3 torque components) into a plastic hollow ball was the optimal solution. Such a ball registered the linear and rotational displacements generated by the forces/torques of a human hand, which were then computationally transformed into translational/rotational motion speeds.
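The force-to-speed transformation can be sketched as a simple rate-control law. The deadband and gain values below are illustrative assumptions, not DLR's actual parameters:

```python
# Sketch of the control ball's rate-control idea: hand forces/torques on the
# ball are mapped to translational/rotational velocity commands. The deadband
# suppresses small unintentional forces; gain and deadband are made-up values.
def rate_control(wrench, deadband=0.05, gain=0.2):
    """wrench: 6 values (fx, fy, fz, tx, ty, tz); returns 6 velocity commands."""
    out = []
    for w in wrench:
        if abs(w) < deadband:
            out.append(0.0)                                  # ignore noise
        else:
            out.append(gain * (w - deadband if w > 0 else w + deadband))
    return out

print(rate_control([0.02, 0.5, -0.3, 0.0, 1.0, -0.04]))
```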

The first force/torque sensor used was based upon strain-gauge technology, integrated into a plastic hollow ball. DLR patented in Europe and the US the basic concept: a hollow ball handle whose centre approximately coincides with the measuring centre of an integrated 6-dof force/torque sensor.

From 1982 to 1985, the first prototype applications showed that DLR's control ball was excellently suited not only as a control device for robots, but also for the first 3D graphics systems that came onto the market at that time. Wide commercial distribution was prevented by the high sales price of about $8,000 per unit. It took until 1985 for DLR's developer group to succeed in designing a much cheaper optical measuring system.

4.2.1 Basic principle

The new system used six one-dimensional position detectors and received a worldwide patent. The basic principle is as follows. The measuring system consists of an inner and an outer part. The measuring arrangement in the inner ring is composed of an LED, a slit and, perpendicular to the slit on the opposite side of the ring, a linear position-sensitive detector (PSD). The slit/LED combination is mobile relative to the rest of the system. Six such systems (rotated by 60 degrees each) are mounted in a plane, whereby the slits are alternately vertical and parallel to the plane. The ring with the PSDs is fixed inside the outer part and connected via springs to the LED-slit base. The springs bring the inner part back to a neutral position when no forces/torques are exerted, which makes the evaluation particularly simple and unambiguous. This measuring system is drift-free and not subject to aging effects.
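To first order, each PSD reading is a linear combination of the handle's small translations and rotations, so the 6-dof displacement can be recovered by inverting a calibration matrix. A minimal sketch, where the matrix is a made-up example standing in for the real sensor geometry:

```python
# Sketch of recovering the 6-dof displacement from the six 1-D PSD readings.
# A 6x6 calibration matrix C maps displacement -> readings (linear, small-
# displacement model); solving the linear system recovers the displacement.
# C below is a hypothetical, invertible example, not the actual geometry.
import numpy as np

C = np.eye(6) + 0.1 * np.ones((6, 6))                   # assumed calibration
readings = np.array([0.1, 0.0, -0.2, 0.05, 0.0, 0.0])   # six PSD outputs

displacement = np.linalg.solve(C, readings)  # (dx, dy, dz, rx, ry, rz)
print(displacement.round(3))
```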

The whole electronics, including computational processing on a single-chip processor, could already be integrated into the ball by means of two small double-sided surface-mount-device (SMD) boards. Manufacturing costs were reduced to below $1,000, but the sales price still hovered around $3,000.

The developer group's original hope that the licensee companies might be able to redevelop the devices for much lower manufacturing costs did not materialize. On the other hand, with the passing of time, other technologically comparable ball systems appeared on the market, especially in the USA; they differed only in the type of measuring system. Around 1990, terms like cyberspace and virtual reality became popular. However, the effort required to steer oneself around a virtual world using a helmet and glove tires one out quickly. Movements were measured by electromagnetic or ultrasonic means, and the human head has problems controlling translational speeds. In addition, moving the hand around in free space leads to fairly fast fatigue. Thus a redesign of the ball idea seemed urgent.


Details Via:seminarsonly.com


Virtual Retinal Display

The Virtual Retinal Display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory in Seattle, Washington USA. The VRD scans light directly onto the viewer's retina. The viewer perceives a wide field of view image. Because the VRD scans light directly on the retina, the VRD is not a screen based technology.

The VRD was invented at the University of Washington in the Human Interface Technology Lab (HIT) in 1991. Development began in November 1993. The aim was to produce a full-color, wide field-of-view, high-resolution, high-brightness, low-cost virtual display. Microvision Inc. holds the exclusive license to commercialize the VRD technology. The technology has many potential applications, from head-mounted displays (HMDs) for military/aerospace use to medical applications.

The VRD projects a modulated beam of light (from an electronic source) directly onto the retina of the eye, producing a rasterized image. The viewer has the illusion of seeing the source image as if he or she were standing two feet away in front of a 14-inch monitor. In reality, the image is on the retina of the viewer's eye and not on a screen. The image quality is excellent, with stereo view, full color, a wide field of view, and no flicker.

Our window into the digital universe has long been a glowing screen perched on a desk. It's called a computer monitor, and as you stare at it, light is focused into a dime-sized image on the retina at the back of your eyeball. The retina converts the light into signals that percolate into your brain via the optic nerve.

Here's a better way to connect with that universe: eliminate that bulky, power-hungry monitor altogether by painting the images directly onto your retina. To do so, use tiny semiconductor lasers or special light-emitting diodes, one each for the three primary colors (red, green, and blue), and scan their light onto the retina, mixing the colors to produce the entire palette of human vision. Short of tapping into the optic nerve, there is no more efficient way to get an image into your brain. This is the Virtual Retinal Display or, more generally, a retinal scanning imaging system.

The Virtual Retinal Display presents video information by scanning modulated light in a raster pattern directly onto the viewer's retina. As the light scans the eye, it is intensity modulated. On a basic level, as shown in the following figure, the VRD consists of a light source, a modulator, vertical and horizontal scanners, and imaging optics (to focus the light beam and optically condition the scan).

The resultant image formed on the retina is perceived as a wide field-of-view image originating from some viewing distance in space. The following figure illustrates the light raster on the retina and the resultant image perceived in space.

In general, a scanner (with magnifying optics) scans a beam of collimated light through an angle. Each individual collimated beam is focused to a point on the retina. As the angle of the scan changes over time, the location of the corresponding focused spot moves across the retina. The collection of intensity-modulated spots forms the raster image as shown above.
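The scanning geometry above can be sketched numerically: each pixel index maps to the pair of scanner angles that aim the beam at the corresponding retinal spot, with the horizontal scanner sweeping fast within each line and the vertical scanner stepping between lines. The resolution and scan angles below are assumed values:

```python
# Sketch of the raster-scan geometry: pixel index -> (horizontal, vertical)
# beam angles. Resolution and total scan angles are illustrative assumptions.
import math

H_PIXELS, V_LINES = 640, 480
H_FOV, V_FOV = math.radians(30), math.radians(20)   # assumed total scan angles

def scan_angles(pixel_index):
    """Map a row-major pixel index to (horizontal, vertical) beam angles."""
    row, col = divmod(pixel_index, H_PIXELS)
    h = (col / (H_PIXELS - 1) - 0.5) * H_FOV   # fast horizontal sweep
    v = (row / (V_LINES - 1) - 0.5) * V_FOV    # slow vertical step
    return h, v

# the first pixel lies at one corner of the scan
h0, v0 = scan_angles(0)
print(math.degrees(h0), math.degrees(v0))
```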

Details Via:seminarsonly.com


Wireless USB

The Universal Serial Bus (USB), with one billion units in the installed base, is the most successful interface in PC history. Projections are for 3.5 billion interfaces shipped by 2006. Benefiting from exceptionally strong industry support from all market segments, USB continues to evolve as new technologies and products come to market. It is already the de facto interconnect for PCs, and has proliferated into consumer electronics (CE) and mobile devices as well.

Wireless USB is the first high-speed personal wireless interconnect. Wireless USB will build on the success of wired USB, bringing USB technology into the wireless future. Usage will be targeted at PCs and PC peripherals, consumer electronics, and mobile devices. To maintain the same usage model and architecture as wired USB, the Wireless USB specification is being defined as a high-speed host-to-device connection. This will enable an easy migration path for today's wired USB solutions.

This paper takes a brief look at the widely used interconnect standard, USB and in particular, at the emerging technology of Wireless USB and its requirements and promises.

USB Ports

Just about any computer that you buy today comes with one or more Universal Serial Bus connectors on the back. These USB connectors let you attach everything from mice to printers to your computer quickly and easily. The operating system supports USB as well, so the installation of the device drivers is quick and easy, too. Compared to other ways of connecting devices to your computer (including parallel ports, serial ports and special cards that you install inside the computer's case), USB devices are incredibly simple!

Anyone who has been around computers for more than two or three years knows the problem that the Universal Serial Bus is trying to solve: in the past, connecting devices to computers was a real headache!
- Printers connected to parallel printer ports, and most computers only came with one. Things like Zip drives, which need a high-speed connection into the computer, would use the parallel port as well, often with limited success and not much speed.
- Modems used the serial port, but so did some printers and a variety of odd things like Palm Pilots and digital cameras. Most computers have at most two serial ports, and they are very slow in most cases.
- Devices that needed faster connections came with their own cards, which had to fit in a card slot inside the computer's case. Unfortunately, the number of card slots is limited, and you needed a Ph.D. to install the software for some of the cards.
The goal of USB is to end all of these headaches. The Universal Serial Bus gives you a single, standardized, easy-to-use way to connect up to 127 devices to a computer.
Just about every peripheral made now comes in a USB version. In fact almost all the devices manufactured today are designed to be interfaced to the computer via the USB ports.
USB Connections
Connecting a USB device to a computer is simple -- you find a USB port on the back of your machine and plug the device's USB connector into it. If it is a new device, the operating system auto-detects it and asks for the driver disk. If the device has already been installed, the computer activates it and starts talking to it. USB devices can be connected and disconnected at any time.

USB Features
The Universal Serial Bus has the following features:
" The computer acts as the host.
" Up to 127 devices can connect to the host, either directly or by way of USB hubs.
" Individual USB cables can run as long as 5 meters; with hubs, devices can be up to 30 meters (six cables' worth) away from the host.
" With USB 2.,the bus has a maximum data rate of 480 megabits per second.
" A USB cable has two wires for power (+5 volts and ground) and a twisted pair of wires to carry the data.
" On the power wires, the computer can supply up to 500 milliamps of power at 5 volts.
" Low-power devices (such as mice) can draw their power directly from the bus. High-power devices (such as printers) have their own power supplies and draw minimal power from the bus. Hubs can have their own power supplies to provide power to devices connected to the hub.
" USB devices are hot-swappable, meaning you can plug them into the bus and unplug them any time.
" Many USB devices can be put to sleep by the host computer when the computer enters a power-saving mode

Details Via:seminarsonly.com


Wireless LAN Security

Wireless local area networks (WLANs) based on the Wi-Fi (wireless fidelity) standards are one of today's fastest growing technologies in businesses, schools, and homes, for good reasons. They provide mobile access to the Internet and to enterprise networks so users can remain connected away from their desks. These networks can be up and running quickly when there is no available wired Ethernet infrastructure. They can be made to work with a minimum of effort without relying on specialized corporate installers.

Some of the business advantages of WLANs include:
" Mobile workers can be continuously connected to their crucial applications and data;
" New applications based on continuous mobile connectivity can be deployed;
" Intermittently mobile workers can be more productive if they have continuous access to email, instant messaging, and other applications;
" Impromptu interconnections among arbitrary numbers of participants become possible.
" But having provided these attractive benefits, most existing WLANs have not effectively addressed security-related issues.

THREATS TO WLAN ENVIRONMENTS

All wireless computer systems face security threats that can compromise their systems and services. Unlike on a wired network, an intruder does not need physical access in order to pose the following security threats:

Eavesdropping

This involves attacks against the confidentiality of the data being transmitted across the network. In a wireless network, eavesdropping is the most significant threat because the attacker can intercept the transmission over the air from a distance, away from the company's premises.

Tampering

The attacker can modify the content of the intercepted packets from the wireless network and this results in a loss of data integrity.

Unauthorized access and spoofing

The attacker could gain access to privileged data and resources in the network by assuming the identity of a valid user. This kind of attack is known as spoofing. To overcome this attack, proper authentication and access control mechanisms need to be put up in the wireless network.

Denial of Service

In this attack, the intruder floods the network with either valid or invalid messages, affecting the availability of network resources. The attacker could also flood a receiving wireless station, thereby forcing it to use up its valuable battery power.

Other security threats

The other threats come from weaknesses in network administration and vulnerabilities in the wireless LAN standards, e.g. the vulnerabilities of Wired Equivalent Privacy (WEP), which is part of the IEEE 802.11 wireless LAN standard.
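One well-known WEP weakness is keystream reuse: when two packets are encrypted with the same RC4 key and initialization vector, XOR-ing the two ciphertexts cancels the keystream and leaks the XOR of the plaintexts. A toy demonstration, with random bytes standing in for RC4 output:

```python
# Toy illustration of keystream reuse, one of WEP's known flaws: reusing a
# keystream lets an eavesdropper compute p1 XOR p2 from ciphertexts alone.
# os.urandom stands in for the RC4 keystream; the messages are made up.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)              # the same keystream used twice
p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# an eavesdropper capturing both ciphertexts learns p1 XOR p2
print(xor(c1, c2) == xor(p1, p2))
```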

Details Via:seminarsonly.com


Tele-Immersion

Tele-immersion is a new medium that enables a user to share a virtual space with remote participants. The user is immersed in a 3D world that is transmitted from a remote site. This medium for human interaction, enabled by digital technology, approximates the illusion that a person is in the same physical space as others, even though they may be thousands of miles distant.

In a tele-immersive environment computers recognize the presence and movements of individuals and objects, track those individuals and images, and then permit them to be projected in realistic, multiple, geographically distributed immersive environments on stereo-immersive surfaces.

Tele-immersion techniques can be viewed as the building blocks of the office of tomorrow, where several users from across the country will be able to collaborate as if they're all in the same room. Scaling up, transmissions could incorporate larger scenes, like news conferences, ballet performances, or sports events. With mobile rather than stationary camera arrays, viewers could establish tele-presence in remote or hazardous situations...

Details Via:seminarsonly.com


Sensors on 3D Digitization

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. Once matching points are identified, the geometry can be computed.

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

AUTOSYNCHRONIZED SCANNER

The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used to measure the colour map of the scene.
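The triangulation principle behind such a scanner can be sketched with the usual simplified model: range is proportional to the baseline between laser and detector and to the detector focal length, and inversely proportional to the spot offset on the linear sensor. All numbers below are illustrative, not the actual scanner's optics:

```python
# Simplified optical-triangulation range model: z = f * b / p, where f is the
# detector focal length, b the laser-detector baseline, and p the imaged spot
# offset on the linear position sensor. Values are illustrative assumptions.
def triangulation_range(f_mm: float, baseline_mm: float, spot_mm: float) -> float:
    """Range (mm) from focal length, baseline, and spot offset on the sensor."""
    if spot_mm <= 0:
        raise ValueError("spot offset must be positive")
    return f_mm * baseline_mm / spot_mm

# e.g. f = 50 mm, baseline = 100 mm, spot imaged 5 mm off-axis -> 1 m range
print(triangulation_range(50.0, 100.0, 5.0))
```

Note how a small change in spot position corresponds to a large change in range at long distances, which is why triangulation scanners lose depth resolution as range grows.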

Details Via:seminarsonly.com


Rover Technology

Rover Technology adds a user's location to other dimensions of system awareness, such as time, user preferences, and client device capabilities. The software architecture of Rover systems is designed to scale to large user populations.
Consider a group touring the museums in Washington, D.C. The group arrives at a registration point, where each person receives a handheld device with audio, video, and wireless communication capabilities: an off-the-shelf PDA available on the market today. A wireless-based system tracks the location of these devices and presents relevant information about displayed objects as the user moves through the museum. Users can query their devices for maps and optimal routes to objects of interest. They can also use the devices to reserve and purchase tickets to museum events later in the day. The group leader can send messages to coordinate group activities.

The part of this system that automatically tailors information and services to a mobile user's location is the basis for location-aware computing. This computing paradigm augments the more traditional dimensions of system awareness, such as time-, user-, and device-awareness. All the technology components to realize location-aware computing are available in the marketplace today. What has hindered the widespread deployment of location-based systems is the lack of an integration architecture that scales with user populations.

ROVER ARCHITECTURE
Rover technology tracks the location of system users and dynamically configures application-level information for different link-layer technologies and client-device capabilities. A Rover system represents a single domain of administrative control, managed and moderated by a Rover controller. Figure 1 shows a large application domain partitioned into multiple administrative domains, each with its own Rover system, much like the Internet's Domain Name System.

End users interact with the system through Rover client devices- typically wireless handheld units with varying capabilities for processing, memory and storage, graphics and display, and network interfaces. Rover maintains a profile for each device, identifying its capabilities and configuring content accordingly. Rover also maintains end-user profiles, defining specific user interests and serving content tailored to them.

A wireless access infrastructure provides connectivity to the Rover clients. In the current implementation, we have defined a technique to determine location based on certain properties of the wireless access infrastructure. Although Rover can leverage such properties of specific air interfaces, its location management technique is not tied to a particular wireless technology. Moreover, different wireless interfaces can coexist in a single Rover system or in different domains of a multi-Rover system. Software radio technology offers a way to integrate the different interfaces into a single device. This would allow the device to easily roam between various Rover systems, each with different wireless access technologies.

A server system implements and manages Rover's end-user services. The server system consists of five components:
The Rover controller is the system's "brain." It manages the different services that Rover clients request, scheduling and filtering the content according to the current location and the user and device profiles.

The location server is a dedicated unit that manages the client device location services within the Rover system. Alternatively, applications can use an externally available location service, such as the Global Positioning System (GPS).
The streaming-media unit manages audio and video content streamed to clients. Many of today's off-the-shelf streaming-media units can be integrated with the Rover system.
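The controller's scheduling-and-filtering step might look like the following sketch, in which the content items, profile fields, and zone names are all hypothetical:

```python
# Sketch of the Rover controller's filtering step: content is selected by the
# client's current location and tailored by user and device profiles.
# All item names, zones, and profile fields are made up for illustration.
CONTENT = [
    {"title": "Dinosaur Hall audio tour", "zone": "hall-3", "media": "audio"},
    {"title": "Apollo exhibit video",     "zone": "hall-1", "media": "video"},
    {"title": "Apollo exhibit text",      "zone": "hall-1", "media": "text"},
]

def select_content(location, user_profile, device_profile):
    """Return items for the user's zone that the device can render."""
    return [
        item for item in CONTENT
        if item["zone"] == location
        and item["media"] in device_profile["supported_media"]
        and item["media"] in user_profile.get("preferred_media", {item["media"]})
    ]

device = {"supported_media": {"audio", "text"}}   # a PDA with no video support
user = {}                                          # no stated media preference
print(select_content("hall-1", user, device))
```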

Details Via:seminarsonly.com


Self Defending Networks

As the nature of threats to organizations continues to evolve, so must the defense posture of the organizations. In the past, threats from both internal and external sources were relatively slow-moving and easy to defend against. In today's environment, where Internet worms spread across the world in a matter of minutes, security systems - and the network itself - must react instantaneously.

The foundation for a self-defending network is integrated security - security that is native to all aspects of an organization. Every device in the network - from desktops through the LAN and across the WAN - plays a part in securing the networked environment through a globally distributed defense. Such systems help to ensure the privacy of transmitted information and to protect against internal and external threats, while providing corporate administrators with control over access to corporate resources. The SDN approach marks the evolution of security from point products to an integrated architecture.

These self-defending networks will identify threats, react appropriately to the severity level, isolate infected servers and desktops, and reconfigure network resources in response to an attack. The vision of the self-defending network brings together secure connectivity, threat defense, and trust and identity management systems, with the capability of infection containment and rogue-device isolation, in a single solution.

SELF DEFENDING NETWORKS

To defend their networks, IT professionals need to be aware of the new nature of security threats, which includes the following:

Shift from internal to external attacks. Before 1999, when key applications ran on minicomputers and mainframes, threats typically were perpetrated by internal users with privileges. Between 1999 and 2002, reports of external events rose 250 percent, according to CERT.

Shorter windows to react. When attacks homed in on individual computers or networks, companies had more time to understand the threat. Now that viruses can propagate worldwide in 10 minutes, that "luxury" is largely gone. Antivirus solutions are still essential but are not enough: by the time the signature has been identified, it is too late. With self-propagation, companies need network technology that can autonomously take action against threats.

More difficult threat detection. Attackers are getting smarter. They used to attack the network; now they attack the application or embed the attack in the data itself, which makes detection more difficult. An attack at the network layer, for example, can be detected by looking at the header information. But an attack embedded in a text file or attachment can only be detected by looking at the actual payload of the packet, something a typical firewall doesn't do. The burden of threat detection is shifting from the firewall to the access control server and intrusion detection system. Rather than single-point solutions, companies need holistic solutions.
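The contrast between header-only filtering and payload inspection can be illustrated with a toy example; the addresses and attack signatures below are made up:

```python
# Toy contrast between header-only and payload inspection: a header filter
# sees only addresses/ports, while payload inspection searches the packet
# data for an attack signature. Addresses and signatures are fabricated.
SIGNATURES = [b"DROP TABLE", b"cmd.exe"]

def header_filter(packet):
    """Header-level check: block only a known-bad source address."""
    return packet["src"] != "10.0.0.66"

def payload_filter(packet):
    """Payload-level check: reject packets containing a known signature."""
    return not any(sig in packet["payload"] for sig in SIGNATURES)

pkt = {"src": "192.168.1.5", "dst": "192.168.1.10",
       "payload": b"GET /?q=1;DROP TABLE users HTTP/1.0"}

# the header looks innocent, but the payload carries the attack
print(header_filter(pkt), payload_filter(pkt))
```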

A lowered bar for hackers. Finally, a proliferation of easy-to-use hackers' tools and scripts has made hacking available to the less technically-literate. The advent of 'point-and-click' hacking means the attacker doesn't have to know what's going on under the hood in order to do damage.

These trends in security are what have led to the advent of SDNs, or Self-Defending Networks, as the latest version of security control.

Details Via:seminarsonly.com


Rain Technology

Rainfinity's technology originated in a research project at the California Institute of Technology (Caltech), in collaboration with NASA's Jet Propulsion Laboratory and the Defense Advanced Research Projects Agency (DARPA). The name of the original research project was RAIN, which stands for Reliable Array of Independent Nodes. The goal of the RAIN project was to identify key software building blocks for creating reliable distributed applications using off-the-shelf hardware. The focus of the research was on high-performance, fault-tolerant and portable clustering technology for space-borne computing. Two important assumptions were made, and these two assumptions reflect the differentiations between RAIN and a number of existing solutions both in the industry and in academia:

1. The most general share-nothing model is assumed. There is no shared storage accessible from all computing nodes. The only way for the computing nodes to share state is to communicate via a network. This differentiates RAIN technology from existing back-end server clustering solutions such as SUNcluster, HP MC Serviceguard or Microsoft Cluster Server.
2. The distributed application is not an isolated system. The distributed protocols interact closely with existing networking protocols so that a RAIN cluster is able to interact with the environment. Specifically, technological modules were created to handle high-volume network-based transactions. This differentiates it from traditional distributed computing projects such as Beowulf.

In short, the RAIN project intended to marry distributed computing with networking protocols. It became obvious that RAIN technology was well-suited for Internet applications. During the RAIN project, key components were built to fulfill this vision. A patent was filed and granted for the RAIN technology. Rainfinity was spun off from Caltech in 1998, and the company has exclusive intellectual property rights to the RAIN technology. After the formation of the company, the RAIN technology has been further augmented, and additional patents have been filed.

The guiding concepts that shaped the architecture are as follows:

1. Network Applications

The architectural goals for clustering data network applications differ from those for clustering data storage applications. Similar goals apply in the telecom environment that provides the Internet backbone infrastructure, owing to the nature of the applications and services being clustered.

2. Shared-Nothing

The shared-storage cluster is the most widely used for database and application servers that store persistent data on disks. This type of cluster typically focuses on the availability of the database or application service rather than on performance, and recovery from failover is generally slow, because restoring application access to disk-based data takes minutes or longer, not seconds. Telecom servers deployed at the edge of the network are often diskless, keeping data in memory for performance reasons, and can tolerate only very short failover times. A new type of shared-nothing cluster with rapid failure detection and recovery is therefore required. In a shared-nothing cluster, the only way for nodes to share state is to communicate over the network.

3. Scalability

While the high-availability cluster focuses on recovery from unplanned and planned downtime, this new type of cluster must also be able to maximize I/O performance by load balancing across multiple computing nodes. Linear scalability with network throughput is important. To maximize total throughput, load-balancing decisions must be made dynamically, by measuring the current capacity of each computing node in real time. Static hashing does not guarantee an even distribution of traffic.
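The contrast between static hashing and dynamic, capacity-weighted balancing can be sketched as follows; the node names and capacity figures are invented, and the hash choice (MD5) is only illustrative:

```python
import hashlib
import random

def static_pick(key, nodes):
    # Static hashing: the node is fixed by the key alone, regardless of load.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def dynamic_pick(nodes, capacity):
    # Dynamic balancing: weight the choice by each node's measured spare capacity.
    total = sum(capacity[n] for n in nodes)
    r = random.uniform(0, total)
    for n in nodes:
        r -= capacity[n]
        if r <= 0:
            return n
    return nodes[-1]

nodes = ["node-a", "node-b", "node-c"]
capacity = {"node-a": 5.0, "node-b": 1.0, "node-c": 4.0}  # hypothetical real-time measurements

print(static_pick("session-42", nodes))  # same key -> same node, even if that node is overloaded
print(dynamic_pick(nodes, capacity))     # the nearly saturated node-b is rarely chosen
```

The point of the sketch is the failure mode: the static scheme keeps sending a hot key to the same node no matter how busy it is, while the dynamic scheme shifts new work toward whichever nodes currently report spare capacity.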

4. Peer-to-Peer

A dispatcher-based, master-slave cluster architecture limits scalability because the dispatcher is a potential bottleneck. A peer-to-peer cluster architecture is more suitable for latency-sensitive data network applications processing short-lived sessions. A hybrid architecture should be considered when more control over resource management is needed. For example, a cluster can assign multiple authoritative computing nodes that process traffic in round-robin order for each clustered network interface, reducing the overhead of traffic forwarding.
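The per-interface round-robin assignment described above can be sketched like this (node and interface names are hypothetical):

```python
from itertools import cycle

# Hypothetical sketch: each clustered network interface gets its own
# round-robin rotation of authoritative nodes, so no single dispatcher
# handles all traffic.
nodes = ["node-a", "node-b", "node-c"]
interfaces = ["eth0", "eth1"]

# One independent rotation per interface.
rotation = {iface: cycle(nodes) for iface in interfaces}

def authoritative_node(iface):
    # The next node in this interface's rotation handles the next flow,
    # avoiding the traffic-forwarding overhead a central master would add.
    return next(rotation[iface])

print([authoritative_node("eth0") for _ in range(4)])  # ['node-a', 'node-b', 'node-c', 'node-a']
```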


PLASMA DISPLAYS

For decades, the dominant display technology has been the cathode ray tube (CRT). In CRT televisions, a gun fires a beam of electrons into a large glass tube.

The electrons excite phosphor atoms, causing them to light up. CRTs produce good images, but they have one big problem: they take up a lot of space and are very heavy.

Scientists therefore looked for a better way to fit a big television into a small room. They came up with the plasma flat-panel display.

Plasma televisions still come in large sizes, but they are only about six inches thick. They illuminate tiny colored fluorescent lights to form an image; each pixel is made up of three fluorescent lights: one red, one green, and one blue.

The plasma display varies the intensities of these lights to produce a full range of colors, just as a CRT television does. In a plasma display, a small electric current stimulates an inert gas sandwiched between glass panels, one of which is coated with phosphors that emit light in various colors. While just 8 cm (3 in) thick, plasma screens can measure more than 150 cm (60 in) diagonally.
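The additive mixing this describes can be sketched numerically; the 0-1 intensity scale and 8-bit output below are illustrative assumptions, not part of any plasma driver interface:

```python
# Hypothetical sketch of additive color mixing: a plasma pixel's three
# fluorescent subcells (red, green, blue) at varying intensities combine
# into one perceived color, expressed here as an 8-bit RGB triple.
def pixel_color(red, green, blue):
    # Each intensity is a fraction 0.0-1.0 of the subcell's maximum output.
    for level in (red, green, blue):
        if not 0.0 <= level <= 1.0:
            raise ValueError("intensity must be between 0 and 1")
    return (round(red * 255), round(green * 255), round(blue * 255))

print(pixel_color(1.0, 1.0, 0.0))   # full red + full green, no blue -> yellow (255, 255, 0)
print(pixel_color(0.0, 0.0, 0.0))   # all subcells dark -> black (0, 0, 0)
```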


PHANTOM

PHANTOM (Personal HAptic iNTerface Mechanism) was developed at MIT as a relatively low-cost force-feedback device for interacting with virtual objects. The Phantom device is a robot arm attached to a computer and used as a pointer in three dimensions, much as a mouse is used as a pointer in two.

ABOUT PHANTOM

The PHANToM interface's novelty lies in its small size, relatively low cost and its simplification of tactile information. Rather than displaying information from many different points, this haptic device provides high-fidelity feedback to simulate touching at a single point. It is just like closing your eyes, holding a pen and touching everything in your office: you could actually tell a lot about those objects from that single point of contact. You would recognize your computer keyboard, the monitor, the telephone, the desktop and so on.

A Phantom device and the Phantom Force Feedback extension can also be used to trace paths and/or move models in the absence of volume data. Although there will not be force feedback in such cases, the increased degrees of freedom provided by the device as compared to a mouse can be very helpful. The Phantom Force Feedback extension of Chimera allows a Phantom device to be used to guide marker placement within volume data. It is generally used together with Volume Viewer and Volume Path Tracer. SensAble Technologies manufactures several models of the Phantom. The device is only supported on SGI and Windows platforms. SensAble Technologies has announced that in summer of 2002 it will add support for Linux and drop support for SGI. The least expensive model (about $10,000 in 2001), the Phantom Desktop, is described here.

To integrate the PHANToM with a projection screen virtual environment several obstacles need to be overcome. First, the PHANToM is essentially a desktop device. To use it in a larger environment the PHANToM must be made mobile and height adjustable to accommodate the user.

For this we use

Phantom Stand

The phantom stand was designed to permit positioning, height adjustment, and stable support of the PHANToM in the virtual environment. To avoid interference with the magnetic tracking system used in the environment, the phantom stand was constructed out of bonded PVC plastic and stainless steel hardware.

KEY BENEFITS

o High fidelity, 3D haptic feedback
o The ability to operate in an office/desktop environment
o Compatibility with standard PCs and UNIX workstations
o A universal design for a broad range of applications
o Low cost device
o Used to trace paths


Diamond chip

Electronics without silicon may sound unbelievable, but it will come true with the evolution of the diamond, or carbon, chip. Today silicon is used for manufacturing electronic chips, but it has many disadvantages when used in power electronic applications, such as bulky size and slow operating speed. Carbon, silicon and germanium belong to the same group in the periodic table: each has four valence electrons in its outer shell, and pure silicon and germanium are semiconductors at normal temperature. In the early days both were therefore widely used for manufacturing electronic components. Germanium, however, was found to have many disadvantages compared to silicon, such as large reverse current and poorer temperature stability, so the industry focused on developing electronic components using silicon wafers.

Researchers have now found that carbon is more advantageous than silicon. By using carbon as the manufacturing material, smaller, faster and stronger chips can be achieved, and smaller prototypes of the carbon chip have already been made. A major carbon component has been invented: the carbon nanotube (CNT), which is expected to be a central building block of the diamond chip.

WHAT IS IT?
In a single definition, a diamond chip, or carbon chip, is an electronic chip manufactured on a diamond-structured carbon wafer; equivalently, it is an electronic component manufactured using carbon as the wafer material. The major carbon component is the carbon nanotube (CNT), a nano-scale structure made of carbon that has many unique properties.

HOW IS IT POSSIBLE?
Pure diamond-structured carbon is non-conducting in nature. To make it conducting, a doping process must be performed, using boron as the p-type dopant and nitrogen as the n-type dopant. The doping process is similar to that used in silicon chip manufacturing, but it takes more time, because it is very difficult to diffuse dopants through the strongly bonded diamond structure. The carbon nanotube (CNT) is already a semiconductor.

ADVANTAGES OF DIAMOND CHIP

1 SMALLER COMPONENTS ARE POSSIBLE

As a carbon atom is smaller than a silicon atom, much finer lines can be etched through diamond-structured carbon. A transistor one-hundredth the size of a silicon transistor can be realized.

2 IT WORKS AT HIGHER TEMPERATURE

Diamond is a very strongly bonded material and can withstand higher temperatures than silicon. At very high temperatures the crystal structure of silicon collapses, but a diamond chip can function well at these elevated temperatures. Diamond is also a very good conductor of heat, so any heat dissipated inside the chip is quickly transferred to the heat sink or other cooling mechanism.

3 FASTER THAN SILICON CHIP

A carbon chip works faster than a silicon chip: the mobility of electrons in doped diamond-structured carbon is higher than in silicon. Because a silicon atom is larger than a carbon atom, electrons have a greater chance of colliding with the larger silicon atoms; with the smaller carbon atoms the chance of collision decreases. The mobility of the charge carriers is therefore higher in doped diamond-structured carbon than in silicon.

4 LARGER POWER HANDLING CAPACITY

Silicon is used for power electronics applications, but it has many disadvantages, such as bulky size, slow operating speed, low efficiency and a low band gap, and at very high voltages the silicon structure collapses. Diamond has a strongly bonded crystal structure, so a carbon chip can work in high-power environments; it is estimated that a carbon transistor could deliver one watt of power at a rate of 100 GHz. Today's power electronic circuits use relays or MOSFET interconnection circuits (inverter circuits) to connect a low-power control circuit to a high-power circuit. With a carbon chip this interface is not needed: the high-power circuit can be connected directly to the diamond chip.


4G Wireless Systems

A fourth generation (4G) wireless system is a packet-switched wireless system with wide-area coverage and high throughput, designed to be cost-effective and spectrally efficient. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio and millimeter-wave wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by long-term channel prediction in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system offers worldwide roaming: the ability to access a cell anywhere.

Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in-between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, which made their appearance in late 2002 and 2003, are designed for voice and paging services as well as interactive media such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: they provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, millimeter-wave wireless and smart antennas. A data rate of 20 Mbps is employed, mobile speeds of up to 200 km/h are supported, and the frequency band is 2-8 GHz. It offers worldwide roaming: the ability to access a cell anywhere.

Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN)
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development.


3D SEARCHING



From computer-aided design (CAD) drawings of complex engineering parts to digital representations of proteins and complex molecules, an increasing amount of 3D information is making its way onto the Web and into corporate databases.

 
Because of this, users need ways to store, index, and search this information. Typical Web-searching approaches, such as Google's, can't do this. Even for 2D images, they generally search only the textual parts of a file, noted Greg Notess, editor of the online Search Engine Showdown newsletter.

However, researchers at universities such as Purdue and Princeton have begun developing search engines that can mine catalogs of 3D objects, such as airplane parts, by looking for physical, not textual, attributes. Users formulate a query by using a drawing application to sketch what they are looking for, or by selecting a similar object from a catalog of images; the search engine then finds the items they want. Without such tools, a company that cannot locate an existing part design must make the part again, wasting valuable time and money.






Advances in computing power, combined with interactive modeling software that lets users create images as queries for searches, have made 3D search technology possible.

The methodology involves the following steps:

o Query formulation
o Search process
o Search result

QUERY FORMULATION


True 3D search systems offer two principal ways to formulate a query: users can select objects from a catalog of images based on product groupings, such as gears or sofas, or they can use a drawing program to create a picture of the object they are looking for. For example, Princeton's 3D search engine uses an application that lets users draw a 2D or 3D representation of the object they want to find.

The above picture shows the query interface of a 3D search system.



SEARCH PROCESS



The 3D-search system uses algorithms to convert the selected or drawn image-based query into a mathematical model that describes the features of the object being sought. This converts drawings and objects into a form that computers can work with. The search system then compares the mathematical description of the drawn or selected object to those of 3D objects stored in a database, looking for similarities in the described features.
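The comparison step can be sketched as nearest-neighbor search over feature vectors; the cosine-similarity measure and the three-number descriptors below are illustrative assumptions, not the actual descriptors these engines compute:

```python
import math

# Hypothetical sketch: once a query drawing is converted into a numeric
# feature descriptor, the search reduces to finding the database model
# whose precomputed descriptor is most similar to the query's.
def similarity(a, b):
    # Cosine similarity between two feature vectors (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

database = {
    "gear":  [0.9, 0.1, 0.4],   # made-up shape descriptors
    "sofa":  [0.1, 0.8, 0.2],
    "wheel": [0.8, 0.2, 0.5],
}

query = [0.88, 0.12, 0.42]  # descriptor derived from the user's sketch
best = max(database, key=lambda name: similarity(query, database[name]))
print(best)  # the stored model most similar to the sketch
```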

The key to the way computer programs look for 3D objects is the voxel (volume pixel). A voxel is a set of graphical data, such as position, color, and density, that defines the smallest cube-shaped building block of a 3D image. Computers can display 3D images only in two dimensions. To do this, 3D rendering software takes an object and slices it into 2D cross sections. The cross sections consist of pixels (picture elements), which are single points in a 2D image. To render the 3D image on a 2D screen, the computer determines how to display the 2D cross sections stacked on top of each other, using the applicable interpixel and interslice distances to position them properly. The computer interpolates data to fill in interslice gaps and create a solid image.
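The interslice interpolation described above can be sketched directly; the density grids below are made up:

```python
# Hypothetical sketch: rendering stacks 2D cross sections and interpolates
# between adjacent slices to fill the interslice gaps, producing a
# solid-looking volume.
def interpolate_slices(slice_a, slice_b, t):
    # Linear interpolation of per-pixel density between two adjacent slices;
    # t is the fractional position (0.0 = slice_a, 1.0 = slice_b).
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(slice_a, slice_b)
    ]

slice_a = [[0.0, 0.2], [0.4, 0.6]]   # densities in cross section z
slice_b = [[1.0, 0.8], [0.6, 0.4]]   # densities in cross section z+1

mid = interpolate_slices(slice_a, slice_b, 0.5)  # synthesized in-between slice
print(mid)  # every value is halfway between the two stored slices
```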


Friday, September 24, 2010

MPEG CONVERTION

MPEG is the famous four-letter word that stands for the "Moving Picture Experts Group."
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to inverse-represent the compact tokens back into something resembling the original stream of samples.

These semantics are merely a collection of rules (which people like to call algorithms, but that would imply there is a mathematical coherency to a scheme cooked up by trial and error….). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.

MPEG is an institution unto itself as seen from within its own universe. When (unadvisedly) placed in the same room, a blood-letting debate can spontaneously erupt among its inhabitants, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt can appear totally indifferent to critical issues set before it for many months.

It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (JVC, Japan). The excellent byproduct of the successful MPEG process became an International Standards document safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).

Pre MPEG
Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (which always boils down to visual tweaking and implementation tradeoffs), you never know what's really behind any such scheme (other than a lot of the marketing hype).
Seeing this threat… that is, this need for world interoperability, the fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs… and maybe it would be useful for other things too.

MPEG borrowed significantly from JPEG and, more directly, H.261. By the end of the third year (1990), a syntax emerged which, when applied to represent SIF-rate video and compact-disc-rate audio at a combined bitrate of 1.5 Mbit/s, approximated the pleasure-filled viewing experience offered by the standard VHS format.

After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television), not to mention the side benefits: frequent flier miles.


Cooperative Linux

Cooperative Linux utilizes the rather underused concept of a Cooperative Virtual Machine (CVM), in contrast to traditional VMs, which are unprivileged and under the complete control of the host machine. The term "cooperative" describes two entities working in parallel, e.g. coroutines [2]. In that sense the plainest description of Cooperative Linux is turning two operating system kernels into two big coroutines: each kernel has its own complete CPU context and address space, and each kernel decides when to give control back to its partner.
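The coroutine analogy can be sketched in a few lines; the Python generators below are a toy stand-in for the kernels' saved CPU contexts, not Cooperative Linux code:

```python
# A loose sketch of the "two kernels as coroutines" idea: each side runs
# until it decides to yield control back to its partner.
def kernel(name, steps):
    for i in range(steps):
        # Do some work, then voluntarily hand control back.
        yield f"{name}: step {i}"

def run_cooperatively(host, guest):
    log = []
    kernels = [host, guest]
    while kernels:
        current = kernels.pop(0)
        try:
            log.append(next(current))  # resume this kernel's saved context
            kernels.append(current)    # it yielded; its partner runs next
        except StopIteration:
            pass                       # this kernel finished
    return log

log = run_cooperatively(kernel("host", 2), kernel("guest", 2))
print(log)  # ['host: step 0', 'guest: step 0', 'host: step 1', 'guest: step 1']
```

Note the essential property: neither side is preempted. Control changes hands only when the running "kernel" yields, which is exactly the cooperative scheduling the text describes.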

However, only one of the two kernels has control of the physical hardware; the other is provided only with a virtual hardware abstraction. From this point on in the paper I'll refer to these two kernels as the host operating system and the guest Linux VM, respectively. The host can be any OS kernel that exports the basic primitives that allow the Cooperative Linux portable driver to run in CPL0 mode (ring 0) and allocate memory. This special CPL0 approach makes Cooperative Linux significantly different from traditional virtualization solutions such as VMware, plex86, Virtual PC, and other methods such as Xen, all of which run the guest OS in a less privileged mode than the host kernel. The CPL0 approach allowed for the extensive simplification of Cooperative Linux's design and its short early-beta development cycle, which lasted only one month, starting from scratch by modifying the vanilla Linux 2.4.23-pre9 release until reaching the point where KDE could run.

The only downsides to the CPL0 approach are stability and security. If the guest kernel is unstable, it has the potential to crash the system; however, measures can be taken, such as cleanly shutting it down on the first internal Oops or panic. The other disadvantage is security: acquiring root access on a Cooperative Linux machine can potentially lead to root on the host machine, if the attacker loads a specially crafted kernel module or, in case the Cooperative Linux kernel was compiled without module support, uses a very elaborate exploit.

Most of the changes in the Cooperative Linux patch are in the i386 tree, the only supported architecture for Cooperative Linux at the time of this writing. The other changes are mostly additions of virtual drivers: cobd (block device), conet (network), and cocon (console). Most of the changes in the i386 tree involve the initialization and setup code. It is a goal of the Cooperative Linux kernel design to remain as close as possible to the standalone i386 kernel, so all changes are localized and minimized as much as possible.

2. USES

Cooperative Linux in its current early state can already provide some of the uses that User-Mode Linux [1] provides, such as virtual hosting, kernel development environments, research, and testing of new distributions or buggy software. It also enables new uses:

Relatively effortless migration path from Windows.
In the process of switching to another OS, there is a choice between installing another computer, dual-booting, or using virtualization software. The first option costs money, the second is tiresome in terms of operation, but the third can be the quickest and easiest method, especially if it's free. This is where Cooperative Linux comes in. It is already used in workplaces to convert Windows users to Linux.

Adding Windows machines to Linux clusters.
The Cooperative Linux patch is minimal and can easily be combined with others, such as the MOSIX or openMosix patches, that add clustering capabilities to the kernel. This work in progress makes it possible to add Windows machines to supercomputer clusters. One illustration: a secretary's workstation runs Cooperative Linux as a screen saver, and when the secretary goes home at the end of the day and leaves the computer unattended, the office's cluster gets more CPU cycles for free.

Running an otherwise-dual-booted Linux system from the other OS.
The Windows port of Cooperative Linux allows it to mount real disk partitions as block devices. Numerous people are using this in order to access, rescue, or just run their Linux system from their ext3 or reiserfs file systems.

Using Linux as a Windows firewall on the same machine.
As a likely competitor to other out-of-the-box Windows firewalls, iptables along with a stripped-down Cooperative Linux system can potentially serve as a network firewall.


HALO

Passage of the 1996 Telecommunications Act and the slow growth of infrastructure for transacting multimedia messages (those integrating voice, text, sound, images, and video) have stimulated an intense race to deploy non-traditional infrastructure to serve businesses and consumers at affordable prices. The game is new and the playing field is more level than ever before. Opportunities exist for entrepreneurs to challenge the market dominance enjoyed for years by incumbents. New types of service providers will emerge.

An electronic "information fabric" of a quilted character-including space, atmospheric, and terrestrial data communications layers-will emerge that promises to someday link every digital information device on the planet. Packet-switched data networks will meld with connection-oriented telephony networks. Communications infrastructures will be shared more efficiently among users to offer dramatic reductions in cost and large increases of effective data rates. An era of inexpensive bandwidth has begun which will transform the nature of commerce.

The convergence of innovative technologies and manufacturing capabilities affecting aviation, millimeter wave wireless, and multi-media communications industries enables Angel Technologies Corporation and its partners to pursue new wireless broadband communications services. The HALO™ Network will offer ubiquitous access to any subscriber within a "super metropolitan area" from an aircraft operating at high altitude. The aircraft will serve as the hub of the HALO™ Network serving tens to hundreds of thousands of subscribers. Each subscriber will be able to communicate at multi-megabit per second data rates through a simple-to-install subscriber unit. The HALO™ Network will be steadily evolved at a pace with the emergence of data communications technology world-wide. The HALO™ Network will be a universal wireless communications network solution. It will be deployed globally on a city-by-city basis.


Wireless Broadband Communications Market

There are various facts that show the strong interest in wireless communications in the United States:
" 50 million subscribers to wireless telephone service
" 28 million dollars annual revenue for wireless services
" 38,000 cell sites with 37 billion dollars cumulative capital investment
" 40% annual growth in customers
" 25 million personal computers sold each year
" 50 million PC users with Internet access
"The demand for Internet services is exploding and this creates a strong demand for broadband, high data rate service. It is expected that there will soon be a worldwide demand for Internet service in the hundreds of millions". (Lou Gerstner, IBM, April 1997) The growth in use of the World Wide Web and electronic commerce will stimulate demand for broadband services.
Broadband Wireless Metropolitan Area Network

An airplane specially designed for high altitude flight with a payload capacity of approximately one ton is being developed for commercial wireless services. It will circle at high altitudes for extended periods of time and it will serve as a stable platform from which broadband communications services will be offered. The High Altitude Long Operation (HALO™) Aircraft will maintain station at an altitude of 52 to 60 thousand feet by flying in a circle with a diameter of about 5 to 8 nautical miles. Three successive shifts on station of 8 hours each can provide continuous coverage of an area for 24 hours per day, 7 days per week. Such a system can provide broadband multimedia communications to the general public.
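The altitude figures invite a rough sanity check on coverage. The back-of-the-envelope sketch below is my own, not from the article: it estimates the service-footprint radius from the aircraft's altitude, assuming a hypothetical 20-degree minimum elevation angle for subscriber antennas and treating the ground as flat:

```python
import math

# Rough estimate: the radius of the service footprint for a platform at
# altitude h, if subscribers need at least a given elevation angle to see
# the aircraft (flat-earth approximation; the elevation angle is assumed).
def footprint_radius_km(altitude_ft, min_elevation_deg):
    altitude_km = altitude_ft * 0.0003048          # feet -> kilometres
    return altitude_km / math.tan(math.radians(min_elevation_deg))

# At the stated 52,000-60,000 ft operating band, with a hypothetical
# 20-degree minimum elevation angle:
for alt in (52_000, 60_000):
    print(f"{alt} ft -> {footprint_radius_km(alt, 20):.0f} km radius")
```

Under these assumptions the footprint radius comes out at roughly 45-50 km, consistent with the "super metropolitan area" coverage the article describes; a smaller minimum elevation angle would enlarge the footprint at the cost of longer, more oblique radio paths.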


Digital Scent Technology



Technology has to date been able to use our senses of sight and sound quite successfully in bringing virtual reality nearer to reality. Consequently you have realistic-looking games, and graphics cards that are capable of rendering them; mice that let you experience the terrain you are traversing, whether in an application, on the Internet, or on a CD-ROM; and sound and music, thanks to MP3 and the like, which bring alive your experience in the virtual world. Virtual reality has, since its onset several decades ago, been dominated by visual stimuli, with tactile and auditory information researched and added in later years.

Olfactory information has been largely ignored as input to the virtual environment participant, in spite of the fact that olfactory receptors provide such a rich source of information to the human. To enhance the virtual experience, technology now targets the nose and tongue for the experience of smell and taste. That is, you will soon be able to smell and taste the virtual world's offerings, and not just see or hear them.

With digital scent technology we are able to sense, transmit and receive a smell through the Internet: smell a perfume online before buying it, check whether the food you are buying is fresh, smell burning rubber in your favorite racing game, or send scented e-cards from scent-enabled websites. As this technology gains mass appeal, there is no stopping it from entering all areas of the virtual world. Imagine being able to smell things using a device that connects to your computer. Digital scent technology is making this a reality.

There is a complete software and hardware solution for scenting digital media. It includes a personal scent synthesizer for reproducing smells and an electronic nose for detecting them. These two peripheral components, connected to the computer and to the communication network for the transmission of the digitized smell data, comprise digital scent technology and communication.

Digital scent technology digitizes a scent along two parameters, its chemical makeup and its place in the scent spectrum, producing a small file that can be transmitted over the Internet attached to enhanced web content. A digital scent synthesizer connected to the receiving computer can then reproduce the scent from a palette of primary odors, following the instructions in the digital file.
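As a toy illustration of this two-parameter encoding, the sketch below stores an invented scent as a small JSON payload; the field names, primary odors, and spectrum scale are all assumptions, since no actual file format is specified:

```python
import json

# Hypothetical sketch: a scent is stored as its mix of primary odors
# (chemical makeup) plus a position in a one-dimensional "scent spectrum",
# then serialized to a small file for transmission.
def encode_scent(name, primary_mix, spectrum_position):
    record = {
        "name": name,
        "mix": primary_mix,             # fractions of each primary odor
        "spectrum": spectrum_position,  # invented scale: 0.0 (light) .. 1.0 (heavy)
    }
    return json.dumps(record)

def decode_scent(payload):
    # The synthesizer side reads the file and recovers the mixing recipe.
    return json.loads(payload)

payload = encode_scent("rose", {"floral": 0.7, "citrus": 0.2, "woody": 0.1}, 0.25)
recipe = decode_scent(payload)
print(recipe["mix"])  # the palette proportions the synthesizer would dispense
```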

Digital scent technology finds a wide range of applications in "scentertainment" (movies, music and games) and in communication, including websites enhanced with scent. It is also relevant to e-commerce, where it could make online shopping compelling and fun, and to advertising, making advertisements more memorable and engaging. Several companies working in the field are developing technology for identifying dementing brain disorders, including Alzheimer's, Huntington's, and Parkinson's, and for differentiating them from other mental disorders. This method is based on detecting the olfactory deficits that are diagnostic of the dementing diseases.


Physiological Aspects Of Smell

Olfaction is defined as the act of smelling, whereas to smell is to receive the scent of something by means of the olfactory nerves. Odorants are substances whose characteristics can be determined by chemical analysis. A person's olfactory system operates in a fashion similar to other sensing processes in the body. Airborne molecules of volatile substances come in contact with the membranes of receptor cells located in the upper part of the nostrils. The olfactory epithelium, the smell organ, covers an area of 4-10 cm² and consists of 6-10 million olfactory hairs (cilia) that detect the different smells of compounds. Excited receptors send pulses to the olfactory bulb, a part of the cortex, with the pattern of receptor activity indicating a particular scent.

Because the airways are bent, and thus the airflow past the receptors is normally low, we sniff something to get a better sensation. In addition to the cilia, the fifth cranial nerve (trigeminal) has free nerve endings distributed throughout the nasal cavity. These nerve endings serve as chemoreceptors and react to irritating and burning sensations. The trigeminal nerve connects to different regions of the brain and provides the pathway for the initiation of protective reflexes such as sneezing and the interruption of inhalation. If the concentration is high enough, both the olfactory and trigeminal sensors will be triggered by most odorants.


Details via: seminarsonly.com


Computer Intelligence Applications

The word 'robot' evokes many different thoughts and images, perhaps conflicting ones. Some may think of a metal humanoid, others of an industrial arm, and yet more may think, unfortunately, of a lost job. In the field of medical robotics, the word robot is just as fuzzily defined, with many different applications. These range from simplistic laboratory robots, to highly complex surgical robots that can either aid a human surgeon or execute operations by themselves.

The idea of robotics in surgery got its start in the military, where the goal was to develop technology that would let a surgeon operate from a remote location on an injured soldier in the battlefield. This concept has evolved into robotics that enhance surgical performance. In this instance, a robotic arm called EndoWrist performs the procedure, with the surgeon guiding the robotic arm from a location in or adjacent to the operating room. The surgeon sits at a station peering at a monitor that shows a magnified view of the surgical field. A computer mimics and enhances his hand movements, making them more precise by dampening even a tiny tremor in the surgeon's hands, which would otherwise increase the difficulty of performing procedures under high-power microscopic magnification. Examples of procedures now being performed that were extremely difficult if not impossible before this technology are fallopian tube repair in women, microsurgery on the fetus, and minimally invasive coronary bypass surgery. The Zeus robot made by Computer Motion and a similar device, the EndoWrist made by Intuitive Surgical, are now in clinical trials for the above-mentioned procedures. Even with the robot to enhance the surgeon's ability, a great deal of practice is required to master the technique.
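The tremor dampening described above amounts to low-pass filtering the surgeon's motion commands. The actual Zeus/EndoWrist filtering is proprietary and not documented here; the following is a minimal sketch assuming a simple exponential moving average, with all values illustrative.

```python
# Hedged sketch: smooth a stream of hand positions (one axis, in mm) with an
# exponential moving average so high-frequency tremor is attenuated while the
# intended slow motion passes through. The smoothing factor is an assumption.

def dampen_tremor(positions, alpha=0.2):
    """Low-pass filter a position stream (smaller alpha = smoother output)."""
    smoothed = []
    current = positions[0]
    for p in positions:
        current = alpha * p + (1 - alpha) * current
        smoothed.append(round(current, 4))
    return smoothed

# A slow steady advance with a high-frequency tremor superimposed:
raw = [0.0, 1.3, 1.7, 3.4, 3.6, 5.5]
print(dampen_tremor(raw))
```

A real controller would filter all six degrees of freedom and trade smoothness against lag, since too much filtering makes the instrument feel sluggish to the surgeon.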

The reasons behind the interest in the adoption of medical robots are numerous. There is a strong analogy with the automation of the manufacturing industry. That is not to say that the issues of medical robotics are the same, but the advantages to be gained are similar. Robots provide industry with something that is, to them, more valuable than even the most dedicated and hard-working employee - namely speed, accuracy, repeatability, reliability, and cost-efficiency. A robotic aid, for example one that holds a viewing instrument for a surgeon, will not become fatigued, however long it is used. It will position the instrument accurately with no tremor, and it will perform just as well on the 100th occasion as it did on the first. The use of robotics and computers in minimally invasive spine surgery has resulted in more accurate surgical procedures, shortened operative time, and fewer complications. It is expected that computer-enhanced image guidance systems will improve the precision of these procedures through real-time 3-D imaging at the time of surgery. Diagnostic studies will be digitally transmitted to the operating room and projected on monitors to further aid the surgeon in performing the correct procedure with minimal trauma to the patient.
SURGICAL NAVIGATION SYSTEM

A surgical navigation system has been built that is currently used regularly for neurosurgical cases such as tumor resection at Brigham and Women's Hospital. The system consists of a portable cart containing a Sun UltraSPARC workstation and the hardware to drive the laser scanner and Flashpoint tracking system (Image Guided Technologies, Boulder, CO). On top of the cart is mounted an articulated extendible arm, to which a bar is attached housing the laser scanner and Flashpoint cameras. The three linear Flashpoint cameras are inside the bar. The laser is attached to one end of the bar and a video camera to the other. The joint between the arm and scanning bar has three degrees of freedom to allow easy placement of the bar in desired configurations.
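A core step in any such navigation system is registration: computing the transform that maps tracker coordinates (e.g. from the Flashpoint cameras) into the coordinate frame of the preoperative images. The sketch below is not the Brigham system's actual algorithm; it assumes paired fiducial points and reduces to 2D for brevity, where the least-squares rigid fit (rotation plus translation) has a simple closed form.

```python
import math

# Hedged sketch: fit the rigid transform (rotation theta, translation t) that
# best maps tracker-space fiducial points onto their image-space counterparts,
# by centering both point sets and solving for the optimal rotation.

def register_2d(tracker_pts, image_pts):
    """Least-squares rigid transform taking tracker_pts onto image_pts."""
    n = len(tracker_pts)
    cx_t = sum(p[0] for p in tracker_pts) / n   # tracker centroid
    cy_t = sum(p[1] for p in tracker_pts) / n
    cx_i = sum(p[0] for p in image_pts) / n     # image centroid
    cy_i = sum(p[1] for p in image_pts) / n
    # Optimal rotation angle from the centered correspondences.
    s_cross = s_dot = 0.0
    for (xt, yt), (xi, yi) in zip(tracker_pts, image_pts):
        ax, ay = xt - cx_t, yt - cy_t
        bx, by = xi - cx_i, yi - cy_i
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that aligns the rotated tracker centroid with the image one.
    tx = cx_i - (c * cx_t - s * cy_t)
    ty = cy_i - (s * cx_t + c * cy_t)
    return theta, (tx, ty)

def apply(theta, t, p):
    """Map a tracker-space point into image space."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

In 3D the same idea is usually solved with an SVD-based method (the Kabsch algorithm), and systems like the one above refine the fit against surface data from the laser scanner rather than relying on fiducials alone.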

Details via: seminarsonly.com

Related Posts with Thumbnails