Monday, 29 July 2019

e-Lockers and Detectors for Applications in Space and Time AMNIMARJESLOW GOVERNMENT 91220017 XI Xie Lock and Time space 0209010014 LJBUSAW __ Thanks to Lord Jesus, Give and Give for Our Thanksgiving ___ Gen. Mac Tech e- A/D/S Tour in Locker Time and Space, detecting S H I N to A/D/S, up going G-Lock Astronomy Celluloid



                       [Image: Electronic Re-Charging Station]


In electronics, mechanics, and informatics alike, a locking device is needed so that a system can be controlled properly, especially when the control uses detector techniques, as in the A/D/S Tour in space-time.



                                                                       Love in  S H I N to A/D/S Tour  



                     
                                                                   [Image: electronic detector circuit]



                                                         (Gen. Mac Tech Zone e-Locking and Detection)




   
   
                             [Image: g-lock in astronomy]

                                                         Dark Detector




                                      II0  Code Lock Circuit Using a Transistor

 
[Image: one-transistor code lock circuit]


This electronic circuit is one of the simplest code lock circuits one can easily build at home. It uses one transistor, a relay, and a few passive components. Its logic is also very simple. Even though the circuit is simple, it works reliably and is efficient enough for a simple cupboard or shelf locker.


WORKING OF CODE LOCK CIRCUIT:

The working of this circuit is very simple: it uses a transistor as a switch, with a relay at its collector as the load. Five switches (S0 to S4) are arranged in series, with a current-limiting resistor R2 connected across them. Another five switches (S5 to S9) are connected between the base of the transistor and ground. So this circuit uses the transistor as a switch, and the transistor turns ON only when all the switches S0 to S4 are in the ON state and S5 to S9 are in the OFF state. That is the primary logic of the circuit. Let's see how to design it as we wish.
Coming to the design of this circuit, we should shuffle the arrangement of the switches on the panel in such a way that the password is too tough to guess. For example, if your password is 58901, you should arrange the circuit so that keypad 5 is your switch S0, 8 is S1, 9 is S2, 0 is S3, and 1 is S4. Since they are in series, no voltage will pass through unless you press the keys in the right combination.


Thus, if the correct combination of keys is pressed, the transistor switches ON and activates the relay, which opens the lock. If even one key from S5 to S9 is pressed, no voltage reaches the transistor, so it remains in the OFF state. The device to be controlled by the lock circuit can be connected through the relay terminals. The transformer T1, bridge D1, and capacitor C2 form the power supply of the circuit, and diode D2 is a freewheeling diode used with the relay.
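As a sanity check of this switching logic, here is a minimal Python sketch (hypothetical, not part of the original circuit) modeling the relay condition: the series chain S0-S4 must all be closed while none of the decoy switches S5-S9 is closed.

    # Hypothetical model of the one-transistor code lock logic:
    # the relay energizes only when every series switch S0-S4 is closed
    # and none of the decoy switches S5-S9 is closed.
    def relay_energized(pressed):
        series_closed = all(f"S{i}" in pressed for i in range(0, 5))
        decoy_closed = any(f"S{i}" in pressed for i in range(5, 10))
        return series_closed and not decoy_closed

    print(relay_energized({"S0", "S1", "S2", "S3", "S4"}))        # True  -> lock opens
    print(relay_energized({"S0", "S1", "S2", "S3", "S4", "S7"}))  # False -> decoy pressed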

UPDATE:

The above circuit diagram has a small bug: resistor R1 should not be pulled down to ground. It should be connected between switch S9 and the base of transistor Q1 in order to bias the transistor for switching the relay ON.


                             



 

                Electronic Components: Transducers, Sensors, Detectors

  1. Transducers generate physical effects when driven by an electrical signal, or vice versa.
  2. Sensors (detectors) are transducers that react to environmental conditions by changing their electrical properties or generating an electrical signal.
  3. The transducers listed here are single electronic components (as opposed to complete assemblies), and are passive (see Semiconductors and Tubes for active ones). Only the most common ones are listed here.

                                                 Electronic Combination Lock


    

                     II0  II0  Importance and Classification of Electronic Security Systems


An electronic security system is any electronic equipment that can perform security operations such as surveillance, access control, alarming, or intrusion control for a facility or an area, using mains power and a power backup such as a battery. It also includes some electrical and mechanical gear. The choice of security system is based purely on the area to be protected and its threats.

 

Importance of Electronic Security System:

Electronic security systems are widely used in corporate offices, commercial places, shopping centers, and the like, as well as in railway stations and other public areas. These systems have been warmly received because they can be operated from a remote zone. They are also used as access control systems, fire detection and avoidance systems, and attendance record systems. Since crime rates are increasing day by day, most people do not feel at ease until they have made sure of their security, whether at the office or at home, so we should choose a good electronic security system.

Security systems can be classified in different ways, based on functioning, technology used, and conditions of necessity. Based on functioning, electronic security systems are categorized as follows:


  • CCTV Surveillance Security System
  • Fire Detection/Alarming System
  • Access Control/Attendance System 


CCTV Surveillance Systems:

It is the process of watching over a facility under suspicion, or an area to be secured. The main part of a surveillance electronic security system is the camera, or CCTV cameras, which act as the eyes of the system. The system consists of different kinds of equipment that help in visualizing and saving recorded surveillance data. Closed-circuit IP cameras and CCTVs transfer image information to a remote access place. The main feature of this system is that it can be used in any place where we need to watch human actions. Some components of CCTV surveillance systems are cameras, network equipment, IP cameras, and monitors. With this system, crime can be detected through the cameras: an alarm rings after receiving a signal from cameras connected to the CCTV system whenever an interruption or suspicious occurrence is detected in a protected area or facility, with the complete operation based on CCTV surveillance through the internet. The figure below represents a CCTV surveillance system.



                          CCTV Surveillance System



IP Surveillance System:

The IP-Surveillance system is designed for security purposes; it gives clients the capability to control and record video/audio over an IP network, for instance a LAN or the internet. In a simple setup, an IP-Surveillance system includes a network camera, a network switch, and a computer for viewing, supervising, and saving video/audio, as shown in the figure below.
In an IP-Surveillance system, digitized video/audio streams can be sent to any location, even worldwide if desired, by means of a wired or wireless IP network, enabling video monitoring and recording from anywhere with network access.



                                IP Surveillance Network

Attendance and Access Control Systems:

A system that provides secured access to a facility, or to another system, to enter or control it can be called an access control system. It can also act as an attendance system, playing a dual role. Access control systems are classified according to the user credentials and possessions involved: what a user presents for access distinguishes one system from another, such as PIN credentials, biometrics, or a smart card. A system can even require several of these from the user where multiple access controls are involved. Some attendance and access control systems are listed below:
  • Access control system
  • Fingerprint attendance and access control system
  • RF-card-based access control and attendance system

Electronic security systems find applications in various fields: home automation, residential (homes and apartments), commercial (offices, bank lockers), industrial, medical, and transportation. Commonly used examples include electronic security systems for railway compartments, electronic eyes with security, and electronic voting systems.

One example of an electronic security system:

From the block diagram, the system is designed mainly around an electronic eye (an LDR sensor); systems of this kind are used in bank lockers and jewelry shops. When the cash box is closed, neither the buzzer nor the binary counter/divider indicates anything. If anyone tries to open the locker door, light automatically falls on the LDR sensor, its resistance decreases, and the buzzer sounds to alert the owner. This process continues until the box is closed again.

                                 Electronic Eye Controlled Security System
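A rough software analogue of that electronic-eye behavior, assuming a simple voltage divider with the LDR on the low side; the supply voltage, fixed resistor, and trigger threshold below are illustrative values, not taken from the original design:

    # Sketch of the LDR "electronic eye": light lowers the LDR resistance,
    # which lowers the divider voltage across the LDR and trips the buzzer.
    V_SUPPLY = 5.0        # volts (assumed)
    R_FIXED = 10_000.0    # ohms, fixed upper resistor (assumed)
    THRESHOLD = 2.0       # volts across the LDR; below this, light detected (assumed)

    def buzzer_on(ldr_resistance_ohms):
        v_ldr = V_SUPPLY * ldr_resistance_ohms / (R_FIXED + ldr_resistance_ohms)
        return v_ldr < THRESHOLD   # low voltage = bright light = door opened

    print(buzzer_on(100_000.0))  # False: dark, box closed
    print(buzzer_on(1_000.0))    # True: light falls on the LDR, alarm sounds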


   
                         [Image: electronic lockers circuit]

                                          Locker Guard Project

   

                  Intelligent Electronic Lock Circuit 

The conventional way of securing anything is through mechanical locks, which operate with a specific key or a few keys; but locking a large area requires many locks. Moreover, conventional locks are heavy and do not offer the desired protection, since they can easily be broken with simple tools. Security-breaching problems are therefore inherent in mechanical locks, and electronic lock systems are designed to solve them.

                           Intelligent Electronic Lock  


Nowadays, many devices operate on digital technology: for example, digital door-lock systems for automatic door opening and closing, and token-based digital-identity devices. These locking systems are controlled by a keypad and are installed at the side edge of the door. An intelligent electronic security lock system relieves the physical and mental stress a person faces while away from home.

1. Intelligent Electronic Lock Circuit Diagram:

The circuit shown below represents an intelligent electronic lock built using transistors only. To open this electronic lock, one has to press switches S1 through S4 serially. For deception, you may label these switches with different numbers on the keypad. For example, if you want to use ten switches labeled 0 to 9 on the keypad, use any four arbitrary numbers for S1 through S4; the remaining six numbers may be assigned to the leftover switches, which are wired in parallel across the disable switch S6. Because the four password digits are mixed with the remaining six digits connected across the disable-switch terminals, energization of the RL1 relay by an unknown person is prevented.

                                          Circuit Diagram of Intelligent Electronic Lock

For authorized persons, a four-digit password is very easy to remember. To energize the relay RL1, one has to press the switches S1 to S4 in sequence within six seconds, holding each switch for 0.75 to 1.25 seconds. The relay will not operate if any press lasts less than 0.75 s or more than 1.25 s. A special characteristic of this electronic lock circuit is that pressing any switch wired across switch S6 disables the whole circuit for about one minute. The circuit comprises sequential switching, relay latch-up, and disabling sections. The disabling section consists of transistors T1 and T2 and Zener diode ZD5; when the disable switch S6 is pressed, it cuts off the positive supply to the sequential switching and relay latch-up sections for one minute.
During the idle state, capacitor C1 is discharged and the voltage across it is less than 4.7 V, so transistor T1 and the Zener diode are in a non-conducting state. The collector voltage of transistor T1 is therefore higher than that of T2, and +12 V is extended to the relay latch-up and sequential switching sections. The sequential switching section includes transistors T3, T4, and T5; Zener diodes ZD1, ZD2, and ZD3; tactile switches S1 to S4; and timing capacitors C2 to C4. When the tactile switches are activated, the timing capacitors charge through resistors; activated sequentially, transistors T3, T4, and T5 remain in conduction for a few seconds (T3 for 6 seconds, T4 for 3 seconds, and T5 for 1.5 seconds).

If the time taken to activate the tactile switches exceeds 6 seconds, transistor T3 stops conducting due to the time lapse; sequential switching is not achieved, and it is not possible to energize relay RL1. On correct operation of sequential switches S1, S2, S3, and S4, however, capacitor C5 charges through resistor R9 and the voltage across it rises above 4.7 volts. Transistors T6, T7, and T8, as well as the Zener diode, then start conducting, and the RL1 relay is energized. If you then press the reset switch S5 for a moment, capacitor C5 is instantly discharged through resistor R8 and the voltage across it falls below 4.7 volts, so transistors T6, T7, and T8 and Zener diode ZD4 stop conducting and the RL1 relay de-energizes.
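The sequence-and-timing rule described above can be sketched in a few lines of Python. The press durations and the six-second limit come from the text; the representation of key presses as (name, hold-time) pairs is a hypothetical simplification of the analog timing chain.

    # Sketch of the intelligent lock's timing rule: S1..S4 must be pressed
    # in order, each held 0.75-1.25 s, with the whole sequence inside 6 s.
    def unlocks(presses):
        """presses: list of (switch_name, hold_seconds) in the order pressed."""
        names = [name for name, _ in presses]
        if names != ["S1", "S2", "S3", "S4"]:
            return False                      # wrong keys or wrong order
        if any(not 0.75 <= hold <= 1.25 for _, hold in presses):
            return False                      # a press was too short or too long
        return sum(hold for _, hold in presses) <= 6.0

    print(unlocks([("S1", 1.0), ("S2", 1.0), ("S3", 0.9), ("S4", 1.1)]))  # True
    print(unlocks([("S1", 1.0), ("S2", 2.0), ("S3", 0.9), ("S4", 1.1)]))  # False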

2. Password Based Door Locking System:

In this password-based door locking system, a keypad is used to open and close the door. After a password is entered, if it matches the stored one, the door unlocks for a limited period of time. After that fixed period, the relay energizes and the door locks again. If an unauthorized person enters a wrong password in an attempt to open the door, the system immediately switches on a buzzer.

The working of this project is described by the block diagram below. It consists of a microcontroller, a keypad, a buzzer, an LCD, a stepper motor, and a motor driver.

Block Diagram of Password Based Door Locking System

The keypad is an input device used to enter the password to open the door; it passes the entered code signals to the microcontroller. The LCD and buzzer are the indicating devices for displaying information and alarming. The stepper motor moves the door open and closed, and the motor driver drives the motor after receiving the code signals from the microcontroller.
The microcontroller used in this project is from the 8051 family and is programmed with the Keil software. When a person enters a password through the keypad, the microcontroller reads the data and compares it with the stored data. If the entered password matches the stored data, the microcontroller sends information to the LCD, which displays that the code is valid. It also sends command signals to the motor driver to rotate the motor in a particular direction so that the door opens. After some time, a spring system with a particular time delay closes its relay, and the door returns to its normal position.
If a person attempting to open the door enters a wrong password, the microcontroller switches on the buzzer for further action. In this way, a simple electronic door-lock system can be implemented with a microcontroller.
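In the actual project the 8051 is programmed in C with Keil, but the control flow can be sketched language-independently. The stored password, the unlock delay, and the printed I/O below are all placeholders standing in for the LCD, motor driver, and buzzer.

    # Control-flow sketch of the password-based door lock (placeholder values;
    # the real project runs as C firmware on an 8051 programmed with Keil).
    import time

    STORED_PASSWORD = "1234"   # assumed value for illustration
    UNLOCK_SECONDS = 3         # assumed "limited period of time"

    def handle_entry(entered):
        if entered == STORED_PASSWORD:
            print("LCD: code is valid")
            print("motor driver: rotate to OPEN door")
            time.sleep(UNLOCK_SECONDS)         # door stays unlocked briefly
            print("motor driver: rotate to CLOSE door")
        else:
            print("buzzer: ON (wrong password)")

    handle_entry("1234")   # valid code: door opens, then closes again
    handle_entry("0000")   # wrong code: buzzer sounds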

3. ATmega Based Garage Door Opening:


ATmega Based Garage Door Opening by Edgefxkits.com

This is a more advanced project than the one above. It uses Android technology instead of a keypad for opening and closing the door, so users can use their Android mobiles to operate it.
The main intention of this project is to unlock a garage door with an Android-OS-based device, such as a mobile or tablet, by entering a single password through an Android application. The system uses a microcontroller, a Bluetooth modem, a buzzer, an Android mobile, a relay driver, lamps, and relays to attain remote-controlled operation of the door.

ATmega Based Garage Door Opening by Edgefxkits.com

The Android device is connected to this system through Bluetooth. The Bluetooth modem is attached to the microcontroller, which is programmed with a particular password for opening and closing the garage door.
Before this information is sent to the microcontroller, the Bluetooth on the phone is paired with the Bluetooth modem of the control device. After the password is entered on the Android device, it is sent to the microcontroller over Bluetooth, and the microcontroller compares it with the stored password. If the two passwords match, the microcontroller sends control signals to the relay driver.
The relay then performs the mechanical operation of opening or closing the garage door through the motor. Here, the motor is replaced with a lamp for visualization purposes. If the entered password is wrong, the system generates an alarm.
Thus, this is all about the intelligent electronic lock and the basic operation of electronic door-lock systems.


   II0 II0 II0  Space-Time Ripples: How Scientists Could Detect Gravitational Waves


For years, scientists tried and failed to detect theoretical ripples in space-time called gravitational waves; the first direct detection, by the LIGO observatories, was announced in 2016.
Four gravitational-wave detectors are currently in operation.

Gravitational waves, predicted by Einstein's theory of general relativity, are thought to be created by some of the most violent events in the universe, such as the collision of two neutron stars.
Neutron stars are extremely dense dead stars left over after supernova explosions. When two of them merge, they are predicted to release strong gravitational waves that should be detectable on Earth.

A new way of seeing the universe
Einstein's theory of general relativity describes how objects with mass bend and curve space-time. Imagine holding out a taut bed sheet and placing a football in the center. Just as the bed sheet curves around the football, space-time curves around objects with mass.
And like ripples moving across a lake, the distortions in space-time caused by accelerating objects gradually decrease in strength, so by the time they finally reach Earth they are very hard to detect. Hard, but not impossible; detecting gravitational waves opens up a new way of investigating the universe.
The waves could also help researchers probe other mysterious and powerful cosmic events.
"Gravitational waves have great penetrating power, so they will allow us to see directly into the center of the systems responsible for supernova explosions, gamma-ray bursts, and a wealth of other systems so far hidden from view." We need additional detectors to look for energy in gravitation.

                                           

Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass.

Albert Einstein called gravity a distortion in the shape of space-time. ... Newton's theory says this can occur because of gravity, a force attracting those objects to one another or to a single, third object. Einstein also says this occurs due to gravity, but in his theory gravity is not a force.

According to Newton, gravity is a force expressed mutually between two objects by virtue of their masses (the heavier the objects, the greater the gravity); he considered gravity a pull. According to Einstein, gravity is a curvature in the 4-dimensional space-time fabric proportional to the objects' masses.

"Imagination is more important than knowledge," Einstein would say. It is no coincidence that around the same time, Einstein began to use thought experiments that would change the way he thought about his future experiments. ... His work on gravity was influenced by imagining riding a free-falling elevator. As an illustration, think of a 240-key keypad, like an electronic lock switch, as the input for the e- S H I N to A/D/S tour, which then detects gravitational energy out in space and time.
 

In space, it is possible to create "artificial gravity" by spinning your spacecraft or space station. ... Technically, rotation produces the same effect as gravity because it produces a force (called centrifugal force) just as gravity produces a force.
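A worked example of that spin-gravity idea: the centripetal acceleration of a rotating ring is a = ω²r, so for a given radius you can solve for the spin rate that yields 1 g. The 100 m radius below is an arbitrary assumed value.

    # How fast must a spacecraft ring of a given radius spin to simulate 1 g?
    # Centripetal acceleration: a = omega^2 * r  ->  omega = sqrt(a / r)
    import math

    G0 = 9.80665          # m/s^2, standard gravity
    radius = 100.0        # m, assumed ring radius

    omega = math.sqrt(G0 / radius)            # angular speed in rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)      # revolutions per minute

    print(f"spin rate: {omega:.3f} rad/s = {rpm:.2f} rpm")   # about 3 rpm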

The gravitational pull of the moon pulls the seas towards it, causing the ocean tides. Gravity creates stars and planets by pulling together the material from which they are made. Gravity not only pulls on mass but also on light. Albert Einstein discovered this principle.

Does the influence of gravity extend out forever? ... As you get farther away from a gravitational body such as the sun or the earth (i.e., as your distance r increases), its gravitational effect on you weakens but never goes completely away, at least according to Newton's law of gravity.

To summarize, general relativity says that matter bends spacetime, and the effect of that bending is to create a generalized kind of force that acts on objects. However, it isn't a force as such acting on the object; rather, the object simply follows its geodesic path through spacetime.

The better news is that there is no science that says gravity control is impossible. First, we do know that gravity and electromagnetism are linked phenomena. ... Historically, gravity has been studied in the general sense, but not very much from the point of view of seeking propulsion breakthroughs.


Earth's gravity is what keeps you on the ground and what makes things fall. ... So, the closer objects are to each other, the stronger their gravitational pull is. Earth's gravity comes from all its mass. All its mass makes a combined gravitational pull on all the mass in your body . 


Gravity from Earth keeps the Moon and human-made satellites in orbit. It is true that gravity decreases with distance, so it is possible to be far away from a planet or star and feel less gravity. But that doesn't account for the weightless feeling that astronauts experience in space .


It is a common misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity. In fact, at an altitude of 400 kilometres (250 mi), equivalent to a typical orbit of the ISS, gravity is still nearly 90% as strong as at the Earth's surface.

 Einstein said it is impossible, but as Jennifer Ouellette explains some scientists are still trying to break the cosmic speed limit – even if it means bending the laws of physics. "It is impossible to travel faster than light, and certainly not desirable, as one's hat keeps blowing off."


Also, under Einstein's theory of general relativity, gravity can bend time. Picture a four-dimensional fabric called space-time. When anything that has mass sits on that piece of fabric, it causes a dimple or a bending of space-time.

Albert Einstein, in his theory of special relativity, determined that the laws of physics are the same for all non-accelerating observers, and he showed that the speed of light within a vacuum is the same no matter the speed at which an observer travels.

The special theory of relativity implies that only particles with zero rest mass may travel at the speed of light. Tachyons, particles whose speed exceeds that of light, have been hypothesized, but their existence would violate causality, and the consensus of physicists is that they cannot exist.

The faster the relative velocity, the greater the time dilation between the two observers, with the rate of time approaching zero as one approaches the speed of light (299,792,458 m/s). As a result, massless particles that travel at the speed of light are unaffected by the passage of time.
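This claim can be made quantitative with the Lorentz factor γ = 1/√(1 − v²/c²), which gives the factor by which a moving clock runs slow; a small sketch:

    # Time dilation: a moving clock ticks slower by the Lorentz factor
    # gamma = 1 / sqrt(1 - v^2 / c^2). As v approaches c, gamma diverges,
    # i.e. the moving clock's rate approaches zero as seen by the observer.
    import math

    C = 299_792_458.0  # speed of light, m/s

    def lorentz_gamma(v):
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    for fraction in (0.1, 0.5, 0.9, 0.99):
        print(f"v = {fraction:.2f} c -> gamma = {lorentz_gamma(fraction * C):.3f}")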

 The force of gravity is the weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles.

 In string theory, believed to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string. If it exists, the graviton is expected to be massless because the gravitational force is very long range and appears to propagate at the speed of light.

The gravitational constant, called G in physics equations, is an empirical physical constant used to express the force between two objects caused by gravity. It appears in Newton's universal law of gravitation, and its value is about 6.67408 × 10⁻¹¹ N⋅m²/kg².

  

Wormholes (virtual point matrices) are consistent with the general theory of relativity, but whether wormholes actually exist remains to be seen. A wormhole could connect extremely long distances such as a billion light-years or more, short distances such as a few meters, different universes, or different points in time.


According to the current understanding of physics, an object within space-time cannot exceed the speed of light, which means an attempt to travel to any other galaxy would be a journey of millions of earth years via conventional flight. 

Real-world interstellar travel may be possible in principle, but because of the vastness of those distances, it would require either a high percentage of the speed of light or huge travel times, lasting from decades to millennia or longer. The speeds required for interstellar travel in a human lifetime far exceed what current methods of spacecraft propulsion can provide.



A causal loop is a paradox of time travel that occurs when a future event is the cause of a past event, which in turn is the cause of the future event. Both events then exist in spacetime, but their origin cannot be determined. 


However, making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology. As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but those solutions require conditions that may not be physically possible.


With an estimated light-travel distance of about 13.4 billion light-years (and a proper distance of approximately 32 billion light-years (9.8 billion parsecs) from Earth due to the Universe's expansion since the light we now observe left it about 13.4 billion years ago), astronomers announced it as the most distant ...

 Tachyons. In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which always moves faster than light.


A spacecraft equipped with a warp drive might travel at speeds greater than that of light by many orders of magnitude. ... The problem with a material object exceeding light speed is that an infinite amount of kinetic energy would be required for it to travel at exactly the speed of light.

 In astronomy, the interstellar medium (ISM) is the matter and radiation that exists in the space between the star systems in a galaxy. This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic space


Explanations of why ships can travel faster than light in hyperspace vary; hyperspace may be smaller than real space and therefore a star ship's propulsion seems to be greatly multiplied, or else the speed of light in hyperspace is not a barrier as it is in real space. 

                               
                                    [Image: g-lock in astronomy]

                 g-lock in astronomy celluloid

              

g-force: a force acting on a body as a result of acceleration or gravity, informally described in units of acceleration equal to one g. For example, a 12-pound object undergoing a g-force of 2 g experiences 24 pounds of force. See also: acceleration of gravity.

 
G-force induced loss of consciousness (abbreviated as G-LOC, pronounced 'JEE-lock') is a term generally used in aerospace physiology to describe a loss of consciousness occurring from excessive and sustained g-forces draining blood away from the brain causing cerebral hypoxia.

The acceleration that causes blackouts in fighter pilots is called the maximum g-force. Fighter pilots experience this force when accelerating or decelerating quickly. At high g's, the pilot's blood pressure changes and the flow of oxygen to the brain rapidly decreases.


As objects accelerate through the air toward (or away) from the ground, gravitational forces exert resistance against human bodies, objects, and matter of all kinds. ... As we accelerate faster and faster and fly higher and higher, the gravitational impact on our bodies grows greater.

 RPM stands for "Revolutions per minute." This is how centrifuge manufacturers generally describe how fast the centrifuge is going. ... RCF (relative centrifugal force) is measured in force x gravity or g-force. This is the force exerted on the contents of the rotor, resulting from the revolutions of the rotor.  
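The standard conversion between the two is RCF ≈ 1.118 × 10⁻⁵ × r × RPM², with the rotor radius r in centimeters; a small sketch (the example radius and speed are arbitrary):

    # Relative centrifugal force (in multiples of g) from rotor speed:
    # RCF = 1.118e-5 * r_cm * RPM^2   (r_cm = rotor radius in centimeters)
    def rcf(rpm, radius_cm):
        return 1.118e-5 * radius_cm * rpm ** 2

    print(f"{rcf(3000, 10.0):.0f} x g")   # a 10 cm rotor at 3000 rpm -> ~1006 g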


Impact From a Falling Object
The first step is to set the equations for gravitational potential energy and work equal to each other and solve for force: W = PE gives F × d = m × g × h, so F = (m × g × h) ÷ d.
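A quick numerical check of F = (m × g × h) ÷ d, where d is the stopping distance during the impact; the mass, drop height, and stopping distance below are assumed example values:

    # Average impact force of a falling object, from W = PE:
    # F * d = m * g * h  ->  F = m * g * h / d
    def impact_force(mass_kg, height_m, stop_distance_m, g=9.80665):
        return mass_kg * g * height_m / stop_distance_m

    # Example (assumed numbers): a 2 kg object falling 1.5 m, stopping in 2 cm
    print(f"{impact_force(2.0, 1.5, 0.02):.0f} N")   # ~1471 N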

 1 g is the average gravitational acceleration on Earth, the average force, which affects a resting person at sea level. 0 g is the value at zero gravity. 1 g = 9.80665 m/s² = 32.17405 ft/s². To reach this value at a linear acceleration, you must accelerate from 0 to 60 mph in 2.74 seconds. 


To calculate this g-force, use the formula g = G × m / r². The little g is the g-force, the amount of acceleration caused by gravity. The big G is Newton's gravitational constant, approximately 6.67 × 10⁻¹¹ N·m²/kg². The little m is the mass of the object, and the little r is the radius of the object.

When something falls on Earth with an acceleration of 9.8 m/s², it is accelerating at 1 g. A person in free fall at 1 g would feel no external forces, but from the physics perspective we would say that they are experiencing the external force of gravity.


The magnitude of the force of gravity acting upon the passenger (or car) can easily be found using the equation Fgrav = m · g, where g = acceleration of gravity (9.8 m/s²). The magnitude of the normal force depends on three factors: the speed of the car, the radius of the loop, and the mass of the rider.

That force will cause the plane to accelerate unless it exactly balances gravity and drag. Since the pilot is strapped into the plane, he or she feels the force caused by the acceleration of the seat and/or straps. ... The g-forces you feel are caused by inertia.


g-force is essentially an acceleration force. For example, 1 g (Earth gravity) is an acceleration of 9.8 m/s² toward the Earth; you don't accelerate because the ground resists this force.


Well, Einstein gave us the answer to that: it would feel exactly like standing on the surface of the Earth, where the acceleration due to gravity is 1 g. ... The overall speed doesn't matter, and the astronauts would have no way of directly perceiving their velocity, but yes, they would constantly perceive the acceleration.


At the surface of the Earth, the acceleration due to gravity is roughly 9.8 m/s². The average distance to the centre of the Earth is 6371 km. Using these constants, we can work out the gravitational acceleration at a given altitude. Example: find the acceleration due to gravity 1000 km above Earth's surface.
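Working that example: since g falls off as the inverse square of the distance from Earth's center, g(h) = g₀ · (R / (R + h))². A short sketch:

    # Gravitational acceleration at altitude h above Earth's surface:
    # g(h) = g0 * (R / (R + h))^2
    G0 = 9.80665       # m/s^2 at the surface
    R_EARTH = 6371e3   # m, mean radius of the Earth

    def g_at_altitude(h_m):
        return G0 * (R_EARTH / (R_EARTH + h_m)) ** 2

    print(f"{g_at_altitude(1000e3):.2f} m/s^2")  # 1000 km up: ~7.33 m/s^2
    print(f"{g_at_altitude(400e3):.2f} m/s^2")   # ISS altitude: ~8.68 m/s^2 (~89% of g0)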


As defined above, G-LOC is most likely to affect pilots of high-performance fighter and aerobatic aircraft or astronauts, but it is possible on some extreme amusement park rides. G-LOC incidents have caused fatal accidents in high-performance aircraft capable of sustaining high g for extended periods. High-g training for pilots of such aircraft or spacecraft often includes ground training for G-LOC in special centrifuges, with some profiles exposing pilots to 9 g for a sustained period.

                              
                                           [Image: g-lock in astronomy celluloid]

Under increasing positive g-force, blood in the body will tend to move from the head toward the feet. For higher intensity or longer duration, this can manifest progressively as:
  • Greyout - a loss of color vision
  • Tunnel vision - loss of peripheral vision, retaining only the center vision
  • Blackout - a complete loss of vision but retaining consciousness.
  • G-LOC - where consciousness is lost.
Under negative g, blood pressure will increase in the head, running the risk of the dangerous condition known as redout, with too much blood pressure in the head and eyes.
Because of the high level of sensitivity that the eye's retina has to hypoxia, symptoms are usually first experienced visually. As the retinal blood pressure decreases below globe pressure (usually 10–21 mm Hg), blood flow to the retina begins to cease, first affecting perfusion farthest from the optic disc and retinal artery, with progression towards central vision. Skilled pilots can use this loss of vision as their indicator that they are at maximum turn performance without losing consciousness. Recovery is usually prompt following removal of g-force, but a period of several seconds of disorientation may occur. Absolute incapacitation is the period of time when the aircrew member is physically unconscious; it averages about 12 seconds. Relative incapacitation is the period in which consciousness has been regained but the person is confused and remains unable to perform simple tasks; this period averages about 15 seconds. Upon regaining cerebral blood flow, the G-LOC victim usually experiences myoclonic convulsions (often called the 'funky chicken'), and full amnesia of the event is often experienced.[1] Brief but vivid dreams have been reported to follow G-LOC. If G-LOC occurs at low altitude, this momentary lapse can prove fatal, and even highly experienced pilots can pull straight into a G-LOC condition without first perceiving the visual onset warnings that would normally be used as the sign to back off from pulling more g.
The human body is much more tolerant of g-force applied laterally (across the body) than longitudinally (along the length of the body). Unfortunately, most sustained g-forces incurred by pilots are applied longitudinally. This has led to experimentation with prone-pilot aircraft designs, which lay the pilot face down, or (more successfully) with reclined positions for astronauts.

The g thresholds at which these effects occur depend on the training, age, and fitness of the individual. An untrained individual not used to the g-straining maneuver can black out between 4 and 6 g, particularly if the g is pulled suddenly. A trained, fit individual wearing a g-suit and practicing the straining maneuver can, with some difficulty, sustain up to 12-14 g without loss of consciousness. The Blue Angels perform their maneuvers without the aid of g-suits and regularly sustain 3-5 second bursts of up to 10 g.

                        [Image: g-lock in astronomy celluloid]

________________________________________________________________________________

                      [Image: USA flag detector and locker]

                      
                                                [Image: g-lock in astronomy]

      Gen. Mac Tech Zone e-Lockers and Detectors for Applications in Space and Time


_________________________________________________________________________________

Thursday, 25 July 2019

e-Robo with Artificial Intelligence on the INTERNET system AMNIMARJESLOW GOVERNMENT 91220017 Xi Xie Intern pair e- IC 0209010014 LJBUSAR tweeter ___ thanks to Lord for joint point-to-point hubble ___ Gen. Mac Tech Zone e- Intern ROBO ART.









                         [Image: the internet of robotic things]

This futuristic research gives a comprehensive view of a new IoT concept proposed specifically for robotics: the Internet of Robotic Things (IoRT). IoRT is a mix of diverse technologies, including cloud computing, artificial intelligence (AI), machine learning, and the Internet of Things (IoT).

  
                                  [Image: the internet of robotic things]

The Future of Robotics. Robotic engineers are designing the next generation of robots to look, feel, and act more human, to make it easier for us to warm up to a cold machine. Realistic-looking hair and skin with embedded sensors will allow robots to react naturally in their environment.
When it comes to robots cooking and cleaning, it's unlikely to happen in the next fifty years, mainly because tasks such as ironing, washing dishes, and folding clothes would cost tons of money to automate. Overseeing health problems and attending to the elderly are the domestic roles robots will play in the future.

Robots could replace nearly a third of the U.S. workforce by 2030. Over the next 13 years, the rising tide of automation will force as many as 70 million workers in the United States to find another way to make money, a new study from the global consultancy McKinsey predicts.

Robots are already working in our everyday lives and have changed the way some industries operate. The future of robotics will change how we live forever. ... Soon, robots could look and function much like humans. Robots may become smarter than their creators as soon as 2029, experts estimate.

This kind of job is better done by robots than by humans. Most robots today are used to do repetitive actions or jobs considered too dangerous for humans. A robot is ideal for going into a building that has a possible bomb. Robots are also used in factories to build things like cars, candy bars, and electronics.

Robots are a good way to implement lean principles in an industry. They save time, as they can produce more products, and they reduce the amount of wasted material thanks to their high accuracy. Including robots in production lines saves money, as they offer a quick return on investment (ROI).

Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing. These technologies are used to develop machines that can substitute for humans and replicate human actions.

By 2019, more than 1.4 million new industrial robots will be installed in factories around the world - that's the latest forecast from the International Federation of Robotics (IFR).

Robot software is the set of coded commands or instructions that tell a mechanical device and electronic system, known together as a robot, what tasks to perform. Robot software is used to perform autonomous tasks. Many software systems and frameworks have been proposed to make programming robots easier.

Robotics and Artificial Intelligence. Artificial Intelligence (AI) is a general term that implies the use of a computer to model and/or replicate intelligent behavior. Research in AI focuses on the development and analysis of algorithms that learn and/or perform intelligent behavior with minimal human intervention.

                                 [Image: the internet of robotic things]

                                                         The Internet 

The Internet is a worldwide system, or network, of computers. It got started in the late 1960s, originally conceived as a network that could survive nuclear war. Back then it was called ARPAnet, named after the Advanced Research Project Agency (ARPA) of the United States federal government.

Protocol and packets

When people began to connect their computers into ARPAnet, the need became clear for a universal set of standards, called a protocol, to ensure that all the machines "speak the same language." The modern Internet is such that you can use any type of computer (IBM-compatible, Mac, or other) and take advantage of all the network's resources. All Internet activity consists of computers "talking" to one another. This occurs in machine language. However, the situation is vastly more complicated than when data goes from one place to another within a single computer. On the Internet (often called simply the Net), data must often go through several different computers to get from the transmitting or source computer to the receiving or destination computer. These intermediate computers are called nodes, servers, hosts, or Internet service providers (ISPs). Millions of people are simultaneously using the Net; the most efficient route between a given source and destination can change from moment to moment. The Net is set up in such a way that signals always try to follow the most efficient route. If you are connected to a distant computer, say a machine at the National Hurricane Center, the requests you make of it, and the data it sends you, are broken into small units called packets. Each packet coming to you has, in effect, your computer's name written on it. But not all packets necessarily travel the same route through the network. Ultimately, all the packets are reassembled into the data you want, say, the infrared satellite image of a hurricane, even though they might not arrive in the same order they were sent.
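The packet idea can be illustrated in a few lines: split a message into numbered packets, deliver them out of order, and reassemble them by sequence number. This is a toy model of the concept, not an implementation of any real protocol.

    # Toy model of packet-switched delivery: each packet carries a sequence
    # number so the destination can reassemble the data even when packets
    # arrive by different routes, and therefore out of order.
    import random

    def to_packets(data, size=4):
        return [(seq, data[i:i + size]) for seq, i in enumerate(range(0, len(data), size))]

    def reassemble(packets):
        return "".join(chunk for _, chunk in sorted(packets))  # sort by sequence number

    packets = to_packets("infrared satellite image of a hurricane")
    random.shuffle(packets)                 # packets may take different routes
    print(reassemble(packets))              # the original message is recovered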

                                          E-mail and newsgroups 

For many computer users, communication via Internet electronic mail (e-mail) and/or newsgroups has practically replaced the postal service. You can leave messages for, and receive them from, friends and relatives scattered throughout the world.






To use e-mail or newsgroups effectively, everyone must have an Internet address. These tend to be arcane. An example is:
                                               sciencewriter@nanosecond.com

The first part of the address, before the @ symbol, is the username. The word after the @ sign and before the period (or dot) represents the domain name. The three-letter abbreviation after the dot is the domain type. In this case, "com" stands for "commercial"; Nanosecond is a commercial provider. Other common domain types include "net" (network), "org" (organization), "edu" (educational institution), and "gov" (government). In recent years, country abbreviations have been increasingly used at the ends of Internet addresses, such as "us" for the United States, "de" for Germany, "uk" for the United Kingdom, and "jp" for Japan.
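A small sketch that splits an address of this form into its parts; this is purely illustrative string handling, not part of any mail standard or library:

    # Split an Internet e-mail address into username, domain name, and domain type.
    def parse_address(address):
        username, _, host = address.partition("@")
        domain_name, _, domain_type = host.rpartition(".")
        return {"username": username, "domain": domain_name, "type": domain_type}

    print(parse_address("sciencewriter@nanosecond.com"))
    # {'username': 'sciencewriter', 'domain': 'nanosecond', 'type': 'com'}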

Internet conversations

You can carry on a teletype-style conversation with other computer users via the Internet, but it takes a bit of getting used to. When done among users within a single service provider, this is called chat. When done among people connected to different service providers, it is called Internet relay chat (IRC). Typing messages to and reading them from other people in real time is more personal than letter writing, because your addressees get their messages immediately. But it's less personal than talking on the telephone, especially at first, because you cannot hear, or make, vocal inflections.

It is possible to digitize voice signals and transfer them via the Internet. This has given rise to hardware and software schemes that claim to provide virtually toll-free long-distance telephone communications. As of this writing, this is similar to amateur radio in terms of reliability and quality of connection. When Net traffic is light, such connections can be good. But when Net traffic is heavy, the quality is marginal to poor. Audio signals, like any other form of Internet data, are broken into packets. All, or nearly all, the packets must be received and reassembled before a good signal can be heard. This takes variable time, depending on the route each packet takes through the Net. If many of the packets arrive disproportionately late, the destination computer can only "do its best" to reassemble the signal. In the worst case, the signal might not get through at all.
Getting information

One of the most important features of the Internet is the fact that it can put you in touch with thousands of sources of information. Data is transferred among computers by means of a file transfer protocol (FTP) that allows the files on the hard drives of distant computers to become available exactly as if the data were stored on your own computer's hard drive, except that the access time is slower. You can also store files on distant computers' hard drives. When using FTP, you should be aware of the time at the remote location and avoid, if possible, accessing files during peak hours at the remote computer. Peak hours usually correspond to working hours, approximately 8:00 a.m. to 5:00 p.m. local time, Monday through Friday.




You must take time differences into account if you're not in the same time zone as the remote computer.

The World Wide Web (also called WWW or the Web) is one of the most powerful information servers you will find online. Its outstanding feature is hypertext, a scheme of cross-referencing. Certain words, phrases, and images make up so-called links. When you select a link in a Web page or Web site (a document containing text and graphics and sometimes also other types of files), your computer is transferred to another document dealing with the same or a related subject. This site will probably also contain numerous links. Before long, you might find yourself "surfing" the Web for hours, going from site to site. The word surfing derives from the similarity of this activity to television "channel surfing." The Web works fastest during the predawn hours in the United States, when the fewest people are connected to the Internet. When Net traffic is heavy, Web documents can take a long time to appear. In some instances you won't be able to get a page at all; you'll sit there staring at a blank display or at an hourglass. This problem is worst with comparatively slow telephone-line modems, but it can occur even with the most expensive, high-speed Internet connections. When you experience it, you'll know why some people refer to the Web as the "World Wide Wait."

Getting connected

If you're not an Internet user but would like to get access, try calling the computer science department of the nearest trade school, college, or university. Many, if not most, academic institutions have Internet access, and some will let you in for a reasonable charge. If you aren't near a school that can provide you with Internet service, you can get connected through a commercial provider. You'll have to pay a fee, generally by the month. You might also have to pay for any hours you use per month past a certain maximum.




                                 Robotics and artificial intelligence

                       [Image: the internet and robots with artificial intelligence]

A robot is a sophisticated machine, or a set of machines working together, that performs certain tasks. Some people imagine robots as having two legs, a head, two arms with functional end effectors (hands), an artificial voice, and an electronic brain. The technical name for a humanoid robot is android. Androids are within the realm of technological possibility, but most robots are structurally simpler than, and don't look or behave like, people.

An electronic brain, also called artificial intelligence (AI), is more than mere fiction, thanks to the explosive evolution of computer hardware and software. But even the smartest computer these days would make a dog seem brilliant by comparison. Some scientists think computers and robots might become as smart as, or smarter than, human beings. Others believe the human brain and mind are more complicated than anything that can be duplicated with electronic circuits.

Asimov's three laws

In one of his early science-fiction stories, the famous author Isaac Asimov first mentioned the word robotics, along with three "fundamental rules" that all robots ought to obey:

• First Law: A robot must not injure, or allow the injury of, any human being.
• Second Law: A robot must obey all orders from humans, except orders that would contradict the First Law.
• Third Law: A robot must protect itself, except when doing so would contradict the First Law or the Second Law.

These rules were first coined in the 1940s, but they are still considered good standards for robots today.

Robot generations

Some researchers have analyzed the evolution of robots, marking progress according to so-called robot generations. This has been done with computers and integrated circuits, so it only seems natural to do it with robots, too. One of the first engineers to make formal mention of robot generations was the Japanese engineer Eiji Nakano.

First generation

According to Nakano, a first-generation robot is a simple mechanical arm without AI. Such machines can make precise motions at high speed, many times, for a long time. They have found widespread industrial application and have been around for more than half a century. These are the fast-moving systems that install rivets and screws on assembly lines, that solder connections on printed circuits, and that, in general, have taken over tedious, mind-numbing chores that used to be done by humans. First-generation robots can work in groups if their actions are synchronized. The operation of these machines must be constantly watched, because if they get out of alignment and are allowed to keep operating anyway, the result can be a series of bad production units. At worst, a misaligned and unsupervised robot can create havoc of a sort you can hardly imagine even if you let your mind run wild.

Second generation

A second-generation robot has some level of AI. Such a machine is equipped with pressure sensors, proximity sensors, tactile sensors, binocular vision, binaural hearing, and/or other devices that keep it informed about goings-on in the world around it. A computer called a robot controller processes the data from the sensors and adjusts the operation of the robot accordingly. The earliest second-generation robots came into common use around 1980. Second-generation robots can stay synchronized with each other without having to be overseen constantly by a human operator. Of course, periodic checking is needed with any machine, because things can always go wrong, and the more complex the system, the more ways it can malfunction.

Third generation

Nakano mentioned third-generation robots, but in the years since the publication of his original paper, some things have changed. Two major avenues are developing for advanced robot technology: the autonomous robot and the insect robot. Both of these technologies hold promise for the future. An autonomous robot can work on its own. It contains a controller and can do things largely without supervision, either by an outside computer or by a human being. A good example of this type of third-generation robot is the personal robot about which technophiles dream.


There are some situations in which autonomous robots don’t work well. In these cases, many simple robots, all under the control of one central computer, can be used. They function like ants in an anthill or bees in a hive. The individual machines are stupid, but the group as a whole is intelligent.

Fourth generation and beyond

Nakano did not write about anything past the third generation of robots. But we might mention a fourth-generation robot: a machine of a sort yet to be deployed. An example is a fleet or population of robots that reproduce and evolve. Past that, we might say that a fifth-generation robot is something humans haven't yet dreamed of; or, if someone has thought up such a thing, he or she has not published the brainstorm.

Independent or dependent?

Years ago, roboticist and engineer Rodney Brooks became well known for his work with insect robots. His approach to robotics and AI was at first considered unorthodox. Before his time, engineers wanted robotic AI systems to mimic human thought processes. The machines were supposed to stand alone and be capable of operating independently of humans or other machines. But Brooks believed that insect-like intelligence might be superior to humanlike intelligence in many applications. Support for his argument came from the fact that insect colonies are highly efficient, and they often survive adversity better than supposedly higher life forms.


                                                        Insect robots 

Insect robots operate in large numbers under the control of a central AI system. A mobile insect robot has several legs, a set of wheels, or a track drive. The first machines of this type, developed by Brooks, looked like beetles. They ranged in size from more than a foot long to less than a millimeter across. Most significant is the fact that they worked collectively. Individual robots, each with its own controller, do not necessarily work well together in a team. This shouldn’t be too surprising; people are the same way. Professional sports teams have been assembled by buying the best players in the business, but the team won’t win unless the players get along. Insects, in contrast, are stupid at the individual level. Ants and bees are like idiot robots, or, perhaps, like ideal soldiers. But an anthill or beehive, just like a well-trained military unit, is an efficient system, controlled by a powerful central brain. Rodney Brooks saw this fundamental difference between autonomous and collective intelligence. He saw that most of his colleagues were trying to build autonomous robots, perhaps because of the natural tendency for humans to envision robots as humanoid. To Brooks, it was obvious that a major avenue of technology was being neglected. Thus he began designing robot colonies, each consisting of many units under the control of a central AI system. Brooks envisioned microscopic insect robots that might live in your house, coming out at night to clean your floors and countertops. “Antibody robots” of even tinier proportions could be injected into a person infected with some previously incurable disease.

Controlled by a central microprocessor, they could seek out the disease bacteria or viruses and swallow them up.



                                                  Autonomous robots 

A robot is autonomous if it is self-contained, housing its own computer system, and not depending on a central computer for its commands. It gets around under its own power, usually by rolling on wheels or by moving on two, four, or six legs. Robots are shown as squares and controllers as solid black dots. In the drawing at A, there is only one controller; it is central and is common to all the individual robots. The computer communicates with, and coordinates, the robots through wires or fiber optics or via radio. In the drawing at B, each robot has its own controller, and there is no central computer. Straight lines show possible paths of communication among robot controllers in both scenarios. Simple robots, like those in assembly lines, are not autonomous. The more complex the task, and the more different things a robot must do, the more autonomy it will have. The most autonomous robots have AI. The ultimate autonomous robot will act like a living animal or human. Such a machine has not yet been developed, and it will probably be at least the year 2050 before this level of sophistication is reached.


                                                       Androids 

An android is a robot, often very sophisticated, that takes a more or less human form. An android usually propels itself by rolling on small wheels in its base. The technology for fully functional arms is under development, but the software needed for their operation has not been made cost-effective for small robots. Legs are hard to design and engineer and aren't really necessary; wheels or track drives work well enough. (Elevators can be used to allow a rolling android to get from floor to floor in a building.) An android has a rotatable head equipped with position sensors. Binocular, or stereo, vision allows the android to perceive depth, thereby locating objects anyplace within a large room. Speech recognition and synthesis are common.

Because of their humanlike appearance, androids are ideal for use wherever there are children. Androids, in conjunction with computer terminals, might someday replace schoolteachers in some situations. It is possible that the teaching profession will suffer because of this, but it is more likely that the opposite will be true. There will be a demand for people to teach children how to use robots. Robots might free human teachers to spend more time in areas like humanities and philosophy, while robots instruct students in computer programming, reading, arithmetic, and other rote-memory subjects. Robotic teachers, if responsibly used, might help us raise children into sensitive and compassionate adults.

                                                      Robot arms 

Robot arms are technically called manipulators. Some robots, especially industrial robots, are nothing more than sophisticated manipulators. A robot arm can be categorized according to its geometry. Some manipulators resemble human arms; the joints in these machines can be given names like "shoulder," "elbow," and "wrist." Other manipulators are so different from human arms that these names don't make sense. An arm that employs revolute geometry is similar to a human arm, with a "shoulder," "elbow," and "wrist." An arm with cartesian geometry is far different from a human arm, and moves along axes (x, y, z) that are best described as "up-and-down," "right-and-left," and "front-to-back."
                                                 Degrees of freedom 

The term degrees of freedom refers to the number of different ways in which a robot manipulator can move. Most manipulators move in three dimensions, but often they have more than three degrees of freedom.







You can use your own arm to get an idea of the degrees of freedom that a robot arm might have. Extend your right arm straight out toward the horizon. Extend your index finger so it is pointing. Keep your arm straight, and move it from the shoulder. You can move it in three ways. Up-and-down movement is called pitch. Movement to the right and left is called yaw. You can also rotate your whole arm as if you were using it as a screwdriver; this motion is called roll. Your shoulder therefore has three degrees of freedom: pitch, yaw, and roll.

Now move your arm from the elbow only. This is hard to do without also moving your shoulder. Holding your shoulder in the same position constantly, you will see that your elbow joint has the equivalent of the pitch in your shoulder joint, but that is all. Your elbow, therefore, has one degree of freedom.

Extend your arm toward the horizon again. Now move only your wrist. Try to keep the arm above the wrist straight and motionless. Your wrist can bend up-and-down and side-to-side, and it can also twist a little. Your lower arm therefore has the same three degrees of freedom that your shoulder has, although its roll capability is limited. In total, your arm has seven degrees of freedom: three in the shoulder, one in the elbow, and three in the arm below the elbow.

You might think that a robot should never need more than three degrees of freedom. But the extra possible motions, provided by multiple joints, give a robot arm versatility that it could not have with only three degrees of freedom. (Just imagine how inconvenient life would be if your elbow and wrist were locked and only your shoulder could move.)
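The shoulder's three degrees of freedom can be modeled as rotation matrices and composed to find where the fingertip ends up; here is a minimal Python sketch (the axis assignments are an illustrative convention, not from the original text):

import numpy as np

def rot_x(a):
    # roll: rotation about the x axis (the axis running out along the arm)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def rot_y(a):
    # pitch: up-and-down movement
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

def rot_z(a):
    # yaw: right-and-left movement
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0],
                     [s, c, 0],
                     [0, 0, 1]])

arm = np.array([1.0, 0.0, 0.0])               # straight arm of unit length, pointing at the horizon
pitch, yaw, roll = np.radians([30, -15, 45])
fingertip = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll) @ arm
print(fingertip)
# Note: roll alone leaves a straight arm's fingertip in place (it rotates about
# the arm's own axis), yet it is still a distinct degree of freedom because it
# reorients the hand.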

                                                  Degrees of rotation 

The term degrees of rotation refers to the extent to which a robot joint, or a set of robot joints, can turn clockwise or counterclockwise about a prescribed axis. Some reference point is always used, and the angles are given in degrees with respect to that joint. Rotation in one direction (usually clockwise) is represented by positive angles; rotation in the opposite direction is specified by negative angles. Thus, if angle X = +58 degrees, it refers to a rotation of 58 degrees clockwise with respect to the reference axis; if angle Y = -74 degrees, it refers to a rotation of 74 degrees counterclockwise. Consider a robot arm with three joints, whose reference axes are J1, J2, and J3 for rotation angles X, Y, and Z. The individual angles add together. To move this robot arm to a certain position within its work envelope, or the region in space that the arm can reach, the operator enters data into a computer; this data includes the measures of angles X, Y, and Z. Suppose the operator specifies X = 39, Y = 75, and Z = 51 degrees. To keep the illustration simple, no other parameters are shown here, but there would probably be variables such as the length of the arm sections, the base rotation angle, and the position of the robot gripper (hand).

                                                     Articulated geometry 

The word articulated means "broken into sections by joints." A robot arm with articulated geometry bears some resemblance to the arm of a human. The versatility is defined in terms of the number of degrees of freedom. For example, an arm might have three degrees of freedom: base rotation (the equivalent of azimuth), elevation angle, and reach (the equivalent of radius). If you're a mathematician, you might recognize this as a spherical coordinate scheme. There are several different articulated geometries for any given number of degrees of freedom.


                                         Cartesian coordinate geometry 

Another mode of robot arm movement is known as cartesian coordinate geometry or rectangular coordinate geometry. This term comes from the cartesian system often used for graphing mathematical functions. The axes are always perpendicular to each other. Variables are assigned the letters x and y in a two-dimensional cartesian plane, or x, y, and z in cartesian three-space. The dimensions are called reach for the x variable, elevation for the y variable, and depth for the z variable.

                                        Cylindrical coordinate geometry 

A robot arm can be guided by means of a plane polar coordinate system with an elevation dimension added; this is known as cylindrical coordinate geometry. In the cylindrical system, a reference plane is used, and an origin point is chosen in this plane. A reference axis is defined, running away from the origin in the reference plane. The position of any point can then be specified in terms of reach x (the distance out from the axis of rotation), elevation y, and rotation z, the angle that the reach arm subtends relative to the reference axis. The scheme resembles the cartesian one, except that the sliding movement is also capable of rotation. The rotation angle z can range from 0 to 360 degrees counterclockwise from the reference axis. In some systems, the range is specified as 0 to +180 degrees (up to a half circle counterclockwise from the reference axis) and 0 to -180 degrees (up to a half circle clockwise from the reference axis).
                                                Revolute geometry 

A robot arm capable of moving in three dimensions can use revolute geometry. The whole arm can rotate through a full circle (360 degrees) at the base point, or shoulder. There is also an elevation joint at the base that can move the arm through 90 degrees, from horizontal to vertical. A joint in the middle of the robot arm, at the elbow, moves through 180 degrees, from a straight position to doubled back on itself. There might be, but is not always, a wrist joint that can flex like the elbow and/or twist around and around. A 90-degree-elevation revolute robot arm can reach any point within a half sphere, whose radius is the length of the arm when its elbow and wrist (if any) are straightened out. A 180-degree-elevation revolute arm can be designed to reach any point within a fully defined sphere, with the exception of the small obstructed region around the base.
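To make the geometries above concrete, here is a minimal Python sketch (the function names and sample values are illustrative assumptions, not from the original text) converting cylindrical and articulated/spherical joint settings to ordinary cartesian coordinates; a cartesian arm needs no conversion at all:

import math

def cylindrical_to_cartesian(reach, elevation, rotation_deg):
    # Cylindrical geometry: reach x, elevation y, rotation z (in degrees)
    z = math.radians(rotation_deg)
    return (reach * math.cos(z), reach * math.sin(z), elevation)

def spherical_to_cartesian(reach, elevation_deg, base_rotation_deg):
    # Articulated/spherical geometry: base rotation (azimuth),
    # elevation angle, and reach (radius)
    az = math.radians(base_rotation_deg)
    el = math.radians(elevation_deg)
    r_horiz = reach * math.cos(el)           # projection onto the reference plane
    return (r_horiz * math.cos(az), r_horiz * math.sin(az), reach * math.sin(el))

print(cylindrical_to_cartesian(1.0, 0.5, 90))   # -> (~0.0, 1.0, 0.5)
print(spherical_to_cartesian(1.0, 45, 90))      # -> (~0.0, 0.707, 0.707)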



                                           Robotic hearing and vision 

Machine hearing involves more than just the picking up of sound (done with a microphone) and the amplification of the resulting audio signals (done with an amplifier). A sophisticated robot can tell from which direction the sound is coming and perhaps deduce the nature of the source: human voice, gasoline engine, fire, or barking dog.


                                                Binaural hearing 

Even with your eyes closed, you can usually tell from which direction a sound is coming. This is because you have binaural hearing. Sound arrives at your left ear with a different intensity, and in a different phase, than it arrives at your right ear. Your brain processes this information, allowing you to locate the source of the sound, with certain limitations. If you are confused, you can turn your head until the sound direction becomes apparent to you. Robots can be equipped with binaural hearing. Two sound transducers are positioned, one on either side of the robot's head. A microprocessor compares the relative phase and intensity of signals from the two transducers. This lets the robot determine, within certain limitations, the direction from which sound is coming. If the robot is confused, it can turn until the confusion is eliminated and a meaningful bearing is obtained. If the robot can move around and take bearings from more than one position, a more accurate determination of the source location is possible if the source is not too far away.
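Here is a minimal sketch of the direction-finding arithmetic, assuming the controller has already measured the arrival-time difference between the two transducers (the transducer spacing is an invented example value):

import math

SPEED_OF_SOUND = 343.0   # metres per second in air at room temperature

def bearing_from_delay(delay_s, spacing_m=0.20):
    # Positive delay means the sound reached the right transducer first.
    # The geometry gives sin(bearing) = (speed x delay) / transducer spacing.
    x = SPEED_OF_SOUND * delay_s / spacing_m
    x = max(-1.0, min(1.0, x))          # clamp delays beyond the physical maximum
    return math.degrees(math.asin(x))   # bearing, -90 to +90 degrees off dead ahead

print(bearing_from_delay(0.0003))       # ~31 degrees to one side

Note the front/back ambiguity: a source 31 degrees ahead and one 31 degrees behind produce the same delay, which is exactly why the robot (like you) turns until the confusion is eliminated.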




                                                 Hearing and AI 

With the advent of microprocessors that can compare patterns, waveforms, and huge arrays of numbers in a matter of microseconds, it is possible for a robot to determine the nature of a sound source, as well as where it comes from. A human voice produces one sort of waveform, a clarinet produces another, a growling bear produces another, and shattering glass produces yet another. Thousands of different waveforms can be stored by a robot controller, and incoming sounds can be compared with this repertoire. In this way, a robot can immediately tell whether a particular noise is a lawn mower going by or a person shouting, an aircraft taking off or a car going down the street. Beyond this coarse mode of sound recognition, an advanced robot can identify a person by analyzing the waveform of his or her voice. The machine can even decipher commonly spoken words. This allows a robot to recognize a voice as yours, or as that of some unknown person, and react accordingly. For example, if you tell your personal robot to get you a hammer and a box of nails, it can do so by recognizing the voice as yours and the words as giving that particular command. But if a burglar comes up your walkway, approaches your robot, and tells it to go jump in the lake, the robot can trundle off, find you by homing in on the transponder you're wearing for that very purpose, and let you know that an unidentified person in your yard just told it to hydrologically dispose of itself.
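A crude stand-in for that waveform-repertoire matching is to compare coarse spectral "fingerprints" of an incoming sound against stored templates. Everything in the sketch below (the band count, the toy sounds) is invented for illustration:

import numpy as np

def band_energies(signal, bands=16):
    # Split the magnitude spectrum into coarse bands: a crude sound fingerprint
    mag = np.abs(np.fft.rfft(signal))
    energies = np.array([chunk.sum() for chunk in np.array_split(mag, bands)])
    return energies / (np.linalg.norm(energies) + 1e-12)

def classify(sound, templates):
    # Return the label of the stored waveform whose fingerprint best matches
    s = band_energies(sound)
    return max(templates, key=lambda name: float(np.dot(s, band_energies(templates[name]))))

t = np.linspace(0, 1, 8000)                               # one second at 8 kHz
templates = {"lawn mower": np.sin(2 * np.pi * 60 * t),    # low drone
             "shout": np.sin(2 * np.pi * 900 * t)}        # high-pitched tone
unknown = np.sin(2 * np.pi * 880 * t) + 0.1 * np.random.randn(t.size)
print(classify(unknown, templates))                       # -> shout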


                                                Visible-light vision 

A visible-light robotic vision system must have a device for receiving incoming images. This is usually a charge-coupled device (CCD) video camera, similar to the type used in home video cameras. The camera produces an analog video signal, which is processed into digital form by an analog-to-digital converter (ADC). The digital signal is clarified by digital signal processing (DSP), and the resulting data goes to the robot controller. The moving image, received from the camera and processed by the circuitry, contains an enormous amount of information. It's easy to present a robot controller with a detailed and meaningful moving image. But getting the machine's brain to know what's happening, and to determine whether or not these events are significant, is another problem altogether.
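A minimal sketch of that front end in Python with OpenCV follows (the camera index and the smoothing step are illustrative assumptions; the camera driver performs the actual A/D conversion):

import cv2

cap = cv2.VideoCapture(0)          # CCD/CMOS camera; the driver digitizes the signal
ok, frame = cap.read()             # one digitized frame as an array of pixel values
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # reduce data before analysis
    clean = cv2.GaussianBlur(gray, (5, 5), 0)        # simple DSP: suppress sensor noise
    # 'clean' is what would be handed to the robot controller for interpretation
    print(clean.shape, clean.dtype)
cap.release()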


                                                  Vision and AI 

There are subtle things about an image that a machine will not notice unless it has advanced AI. How, for example, is a robot to know whether an object presents a threat? Is that four-legged thing there a big dog, or is it a bear cub? How is a robot to forecast the behavior of an object? Is that stationary biped a human or a mannequin? Why is it holding a stick? Is the stick a weapon? What does the biped want to do with the stick, if anything? The biped could be a department-store dummy with a closed-up umbrella or a baseball bat. It could be an old man with a cane. Maybe it is a hunter with a rifle. You can think up various images that look similar but have completely different meanings. You know right away whether a person is carrying a tire iron to help you fix a flat tire, or clutching it with the intention of smashing something up. How is a robot to determine subtle things like this from the images it sees? It is important for a police robot or a security robot to know what constitutes a threat and what does not. In some robot applications, it isn't necessary for the robot to know much about what's happening; simple object recognition is good enough. Industrial robots are programmed to look for certain things, and usually they aren't hard to identify. A bottle that is too tall or too short, a surface that's out of alignment, or a flaw in a piece of fabric is easy to pick out.
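Here is a hedged sketch of the simple industrial check just mentioned (flagging a bottle that is too tall or too short), assuming OpenCV 4 and a back-lit silhouette; the pixel thresholds are invented:

import cv2
import numpy as np

def bottle_height_ok(gray_image, min_px=180, max_px=220, thresh=128):
    # Binarize the silhouette, take the largest blob, and measure its height
    _, binary = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 return signature
    if not contours:
        return False                        # nothing in view
    biggest = max(contours, key=cv2.contourArea)
    _, _, _, h = cv2.boundingRect(biggest)  # bounding-box height in pixels
    return min_px <= h <= max_px

# Synthetic test: a white 60 x 200 pixel "bottle" on a black background
img = np.zeros((300, 200), np.uint8)
img[50:250, 70:130] = 255
print(bottle_height_ok(img))   # height 200 px -> True (within spec)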



                                                Sensitivity and resolution

Sensitivity is the ability of a machine to see in dim light or to detect weak impulses at invisible wavelengths. In some environments, high sensitivity is necessary; in others, it is not needed and might not be wanted. A robot that works in bright sunlight doesn't need to be able to see well in a dark cave. A robot designed for working in mines, pipes, or caverns must be able to see in dim light, using a system that might be blinded by ordinary daylight. Resolution is the extent to which a machine can differentiate between objects. The better the resolution, the keener the vision. Human eyes have excellent resolution, but machines can be designed with greater resolution. In general, the better the resolution, the more confined the field of vision must be. To understand why this is true, think of a telescope. The higher the magnification, the better the resolution (up to a certain maximum useful magnification). Increasing the magnification reduces the angle, or field, of vision: zeroing in on one object or zone is done at the expense of other objects or zones. Sensitivity and resolution are interdependent. If all other factors remain constant, improved sensitivity causes a sacrifice in resolution. Also, the better the resolution, the less well the vision system will function in dim light.

                                         Invisible and passive vision 

Robots have a big advantage over people when it comes to vision. Machines can see at wavelengths to which humans are blind. Human eyes are sensitive to electromagnetic waves whose length ranges from 390 to 750 nanometers (nm). The nanometer is a billionth (10⁻⁹) of a meter, or a millionth of a millimeter. The longest visible wavelengths look red. As the wavelength gets shorter, the color changes through orange, yellow, green, blue, and indigo. The shortest waves look violet. Energy at wavelengths somewhat longer than 750 nm is infrared (IR); energy at wavelengths somewhat shorter than 390 nm is ultraviolet (UV). Machines, and even nonhuman living species, often do not see in this exact same range of wavelengths. In fact, insects can see UV that we can't and are blind to red and orange light that we can see. (Maybe you've used orange "bug lights" when camping to keep the flying pests from coming around at night, or those UV devices that attract bugs and then zap them.) A robot might be designed to see IR or UV or both, as well as (or instead of) visible light. Video cameras can be sensitive to a range of wavelengths much wider than the range we see. Robots can be made to "see" in an environment that is dark and cold and that radiates too little energy to be detected at any electromagnetic wavelength. In these cases the robot provides its own illumination. This can be a simple lamp, a laser, an IR device, or a UV device. Or the robot might emanate radio waves and detect the echoes; this is radar. Some robots can navigate via ultrasonic echoes, like bats; this is sonar.
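As a toy illustration of those spectral boundaries, a vision system might tag incoming wavelengths by band; the cutoffs below are the ones quoted above:

def band(wavelength_nm):
    # Divide the spectrum the way the text does: IR above 750 nm,
    # visible between 390 and 750 nm, UV below 390 nm
    if wavelength_nm > 750:
        return "infrared (IR)"
    if wavelength_nm >= 390:
        return "visible"
    return "ultraviolet (UV)"

for w in (1000, 700, 550, 400, 350):
    print(w, "nm ->", band(w))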

                                                   Binocular vision 
Binocular machine vision is the analog of binocular human vision; it is sometimes called stereo vision. In humans, binocular vision allows perception of depth. With only one eye, that is, with monocular vision, you can infer depth to a limited extent on the basis of perspective. Almost everyone, however, has had the experience of being fooled when looking at a scene with one eye covered or blocked: a nearby pole and a distant tower might seem to be adjacent, when in fact they are a city block apart. In a binocular robot vision system, the essential ingredients are two high-resolution video cameras, set some distance apart, and a sufficiently powerful robot controller.
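Once the same feature has been found in both camera images, depth follows from the classic pinhole-stereo relation; here is a minimal sketch (the camera numbers are invented for illustration):

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo: depth = focal length x camera separation / disparity,
    # where disparity is how far the feature shifts between the two images
    if disparity_px <= 0:
        return float("inf")       # zero disparity -> object effectively at infinity
    return focal_px * baseline_m / disparity_px

# Cameras 12 cm apart, focal length 800 pixels; a feature shifted 16 px between views:
print(depth_from_disparity(800, 0.12, 16))   # -> 6.0 metres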

                                                  Color sensing 

Robot vision systems often function only in gray scale, like old-fashioned 1950s television. But color sensing can be added, in a manner similar to the way it is added to television systems. Color sensing can help a robot with AI tell what an object is. Is that horizontal surface a floor inside a building, or is it a grassy yard? (If it is green, it's probably a grassy yard, or maybe a playing field with artificial turf.) Sometimes objects have regions of different colors that have identical brightness as seen by a gray-scale system; these objects, obviously, can be seen in more detail with a color-sensing system than with a vision system that sees only shades of gray. In a typical color-sensing vision system, three gray-scale cameras are used. Each camera has a color filter in its lens: one filter is red, another is green, and another is blue.



These are the three primary colors. All possible hues, levels of brightness, and levels of saturation are made up of these three colors in various ratios. The signals from the three cameras are processed by a microcomputer, and the result is fed to the robot controller. 
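A minimal sketch of that final combining step, assuming the three filtered frames arrive as NumPy arrays:

import numpy as np

def combine_rgb(red_gray, green_gray, blue_gray):
    # Stack the three filtered gray-scale frames into one color image;
    # each input is a 2-D array from one camera (red, green, or blue filter)
    return np.stack([red_gray, green_gray, blue_gray], axis=-1)

# Toy 2x2 frames: a pixel bright only through the green filter reads as pure green
r = np.zeros((2, 2), np.uint8)
g = np.full((2, 2), 255, np.uint8)
b = np.zeros((2, 2), np.uint8)
print(combine_rgb(r, g, b)[0, 0])   # -> [  0 255   0]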




                                                Robotic navigation 

Mobile robots must get around in their environment without wasting motion, without running into things, and without tipping over or falling down a flight of stairs. The nature of a robotic navigation system depends on the size of the work area, the type of robot used, and the sorts of tasks the robot is required to perform. In this section, we’ll look at a few of the more common methods of robotic navigation.


                                                      Clinometer 

A clinometer is a device for measuring the steepness of a sloping surface. Mobile robots use clinometers to avoid inclines that might cause them to tip over or that are too steep for them to ascend while carrying a load. The floor in a building is almost always horizontal. Thus, its incline is zero. But sometimes there are inclines such as ramps. A good example is the kind of ramp used for wheelchairs, in which a very small elevation change occurs. A rolling robot can’t climb stairs, but it might use a wheelchair ramp, provided the ramp isn’t so steep that it would upset the robot’s balance or cause it to lose its payload. In a clinometer, a transducer produces an electrical signal whenever the device is tipped from the horizontal.
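A minimal sketch of the clinometer check, assuming the transducer is a 3-axis accelerometer and using an invented safe-tilt threshold:

import math

def tilt_degrees(ax, ay, az):
    # Tilt from vertical, given a 3-axis accelerometer reading (any units);
    # a level robot reads (0, 0, 1 g) and returns 0 degrees
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

MAX_SAFE_TILT = 8.0               # assumed limit for this robot's balance/payload
reading = (0.05, 0.10, 0.99)
if tilt_degrees(*reading) > MAX_SAFE_TILT:
    print("Ramp too steep; find another route")
else:
    print("Incline OK:", round(tilt_degrees(*reading), 1), "degrees")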

                                                    Edge detection 

The term edge detection refers to the ability of a robotic vision system to locate boundaries. It also refers to the robot's knowledge of what to do with respect to those boundaries. A robot car, bus, or truck might use edge detection to see the edges of a road and use the data to keep itself on the road. But it would have to stay a certain distance from the right-hand edge of the pavement to avoid crossing into the lane of oncoming traffic. It would have to stay off the road shoulder. It would have to tell the difference between pavement and other surfaces, such as gravel, grass, sand, and snow. The robot car could use beacons for this purpose, but that would require the installation of the guidance system beforehand, limiting the robot car to roads equipped with such navigation aids. The interior of a home contains straight-line edge boundaries of all kinds, and each boundary represents a potential point of reference for a mobile robotic navigation system. The controller in a personal home robot would have to be programmed to know the difference between, say, the line where carpet ends and tile begins and the line where a flight of stairs begins. The vertical line produced by the intersection of two walls would present a different situation than the vertical line produced by the edge of a doorway.
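Here is a hedged sketch of the first stage of such a system, using OpenCV to pull straight boundary segments out of one camera frame (the thresholds are illustrative; a real road-following stack adds much more on top):

import cv2
import numpy as np

def straight_edges(gray_frame):
    # Boundary pixels first, then straight segments fitted through them
    edges = cv2.Canny(gray_frame, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return [] if lines is None else [tuple(seg[0]) for seg in lines]  # (x1, y1, x2, y2)

# Synthetic test: one bright diagonal "pavement edge" on a dark frame
frame = np.zeros((240, 320), np.uint8)
cv2.line(frame, (50, 239), (160, 0), 255, 3)
print(straight_edges(frame)[:1])   # -> one (x1, y1, x2, y2) segment

A steering layer would then hold a set offset from the right-hand edge segment while staying off the shoulder.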




                                                            Embedded path 

An embedded path is a means of guiding a robot along a specific route. This scheme is commonly used by a mobile robot called an automated guided vehicle (AGV). A common embedded path consists of a buried, current-carrying wire. The current in the wire produces a magnetic field that the robot can follow. This method of guidance has been suggested as a way to keep a car on a highway, even if the driver isn’t paying attention. The wire needs a constant supply of electricity for this guidance method to work. If this current is interrupted for any reason, the robot will lose its way unless some backup navigation method (or good old human control) is substituted. Alternatives to wires, such as colored or reflective paints or tapes, do not need a supply of power, and this gives them an advantage. Tape is easy to remove and put somewhere else; this is difficult to do with paint and practically impossible with wires embedded in concrete. However, tape will be obscured by even the lightest snowfall, and at night, glare from oncoming headlights might be confused for reflections from the tape.
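A minimal sketch of how an AGV might steer over a buried wire, assuming two pickup coils straddling the path (the coil arrangement and gain are illustrative assumptions):

def steering_correction(left_field, right_field, gain=0.5):
    # Differential guidance: the coil closer to the wire senses a stronger
    # magnetic field, so steer toward the stronger side to recenter.
    # Returns a signed steering command (negative = steer left).
    return gain * (right_field - left_field)

# Wire slightly to the robot's right -> right coil reads stronger -> steer right
print(steering_correction(left_field=0.8, right_field=1.1))   # -> 0.15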

                                              Range sensing and plotting 

Range sensing is the measurement of distances to objects in a robot's environment in a single dimension. Range plotting is the creation of a graph of the distance (range) to objects, as a function of the direction, in two or three dimensions. For one-dimensional (1-D) range sensing, a signal is sent out, and the robot measures the time it takes for the echo to come back. This signal can be sound, in which case the device is sonar. Or it can be a radio wave; this constitutes radar. Laser beams can also be used. Close-in, one-dimensional range sensing is known as proximity sensing. Two-dimensional (2-D) range plotting involves mapping the distance to various objects as a function of their direction. The echo return time for a sonar signal, for example, might be measured every few degrees around a complete circle, resulting in a set of range points. A better plot would be obtained if the range were plotted every degree, every tenth of a degree, or even every minute of arc (1/60 degree). But no matter how detailed the direction resolution, a 2-D range plot renders only one plane, such as the floor level in a room, or some horizontal plane above the floor. The greater the number of echo samples in a complete circle (that is, the smaller the angle between samples), the more detail can be resolved at a given distance from the robot, and the greater the distance at which a given amount of detail can be resolved. Three-dimensional (3-D) range plotting is done in spherical coordinates: azimuth (compass bearing), elevation (degrees above the horizontal), and range (distance). The distance must be measured for a large number of directions (preferably at least several thousand) at all orientations. In a furnished room, a 3-D sonar range plot would show ceiling fixtures, things on the floor, objects on top of a desk, and other details not visible with a 2-D plot. The greater the number of echo samples in a complete sphere surrounding the robot, the more detail can be resolved at a given distance, and the greater the range at which a given amount of detail can be resolved.
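A minimal sketch of 1-D sonar ranging and a 2-D range plot built from it (speed of sound in air assumed; the names are illustrative):

SPEED_OF_SOUND = 343.0   # metres per second in air

def echo_to_range(round_trip_s):
    # The pulse travels out and back, so halve the round-trip time
    return SPEED_OF_SOUND * round_trip_s / 2.0

def range_plot_2d(echo_times_by_degree):
    # One echo sample per bearing (degrees) -> a polar map of the room at one height
    return {bearing: echo_to_range(t) for bearing, t in echo_times_by_degree.items()}

print(echo_to_range(0.02))                  # 0.02 s round trip -> 3.43 metres
print(range_plot_2d({0: 0.02, 90: 0.01}))   # ranges at two bearings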

                                               Epipolar navigation 

Epipolar navigation is a means by which a machine can locate objects in three-dimensional space. It can also navigate, and figure out its own position and path. Epipolar navigation works by evaluating the way an image changes as the viewer moves. The human eyes and brain do this without having to think, although they are not very precise. Robot vision systems, along with AI, can do it with extreme precision. Imagine you're piloting an airplane over the ocean, equipped with a computer map of the region, a video camera, and AI software. You can figure out your coordinates and altitude, using only these devices, by letting the computer work with the image of an island below. As you fly along, you aim the camera at the island and keep it there. The computer sees an image that constantly changes shape. The computer has the map data, so it knows the true size, shape, and location of the island. The computer compares the shape and size of the image it sees, from the vantage point of the aircraft, with the actual shape and size of the island, which it knows from the map data. From this alone, it can determine your altitude, your speed relative to the surface, your exact latitude, and your exact longitude: there is a one-to-one correspondence between all points within sight of the island and the size and shape of the island's image. Epipolar navigation works on any scale, for any speed. It is a method by which robots can find their way without triangulation, direction finding, beacons, sonar, or radar. It is only necessary that the robot have a computer map of its environment and that viewing conditions be satisfactory.
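Epipolar navigation proper compares the island's whole changing silhouette against the map, but the core of the idea can be reduced to a single known dimension with the pinhole-camera relation; here is a toy sketch (the focal length and sizes are invented for illustration):

def distance_from_apparent_size(true_width_m, focal_px, width_in_image_px):
    # Pinhole-camera relation: a feature of known size looks smaller
    # in direct proportion to its distance from the camera
    return true_width_m * focal_px / width_in_image_px

# An island known (from map data) to be 2,000 m across spans 250 px
# in a camera with an 800 px focal length:
print(distance_from_apparent_size(2000.0, 800, 250))   # -> 6400 m slant range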


                                                          Telepresence 

Telepresence is a refined, advanced form of robot remote control. The robot operator gets a sense of being “on location,” even if the remotely controlled machine, or telechir, and the operator are miles apart. Control and feedback are done by means of telemetry sent over wires, optical fibers, or radio.
                                                  What it's like 

What would it be like to operate a telechir? Here is a possible scenario. The telechir is mobile and has a humanoid form. The control station consists of a suit that you wear, or a chair in which you sit with various manipulators and displays. Sensors can give you feelings of pressure, sight, and sound. You wear a helmet with a viewing screen that shows whatever the robot camera sees. When your head turns, the robot head, with its vision system, follows, so you see an image that changes as you turn your head, as if you were in a space suit or diving suit at the location of the robot. Binocular robot vision provides a sense of depth. Binaural robot hearing allows you to perceive sounds. Special vision modes let you see UV or IR; special hearing modes let you hear ultrasound or infrasound. Robot propulsion can be carried out by means of a track drive, a wheel drive, or robot legs. If the propulsion uses legs, you propel the robot by walking around a room; otherwise you sit in a chair and drive the robot like a car. The telechir has two arms, each with grippers resembling human hands. When you want to pick something up, you go through the motions. Back-pressure sensors and position sensors let you feel what's going on. If an object weighs 10 pounds, it will feel as if it weighs 10 pounds; but it will be as if you're wearing thick gloves, so you won't be able to feel texture. You might throw a switch, and something that weighs 10 pounds feels as if it weighs only 1 pound. This might be called "strength 10" mode. If you switch to "strength 100" mode, a 100-pound object seems to weigh 1 pound.



                                                   Applications 

You can certainly think of many different uses for a telepresence system. Some applications are:
• Working in extreme heat or cold
• Working under high pressure, such as on the sea floor
• Working in a vacuum, such as in space
• Working where there is dangerous radiation
• Disarming bombs
• Handling toxic substances
• Police robotics
• Robot soldier
• Neurosurgery

Of course, the robot must be able to survive conditions at its location. Also, it must have some way to recover if it falls or gets knocked over.


                                                    Limitations 

In theory, the technology for telepresence exists right now. But there are some problems that will be difficult, if not impossible, to overcome. The most serious limitation is the fact that telemetry cannot, and never will, travel faster than the speed of light in free space. This seems fast at first thought (186,282 miles, or 299,792 kilometers, per second), but it is slow on an interplanetary scale. The moon is more than a light-second away from the earth; the sun is 8 light-minutes away. The nearest stars are at distances of several light-years. The delay between the sending of a command and the arrival of the return signal must be less than 0.1 second if telepresence is to be realistic. This means that the telechir cannot be more than about 9,300 miles, or 15,000 kilometers, away from the control operator. Another problem is the resolution of the robot's vision. A human being with good eyesight can see things with several times the detail of the best fast-scan television sets. To send that much detail, at realistic speed, would take up a huge signal bandwidth, and there are engineering problems (and cost problems) that go along with this. Still another limitation is best put as a question: how will a robot be able to "feel" something and transmit these impulses to the human brain? An apple feels smooth, a peach feels fuzzy, and an orange feels shiny yet bumpy. How can this sense of texture be realistically transmitted to the human brain? Will people allow electrodes to be implanted in their brains so they can perceive the universe as if they are robots?
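The distance limit follows directly from the speed of light; a quick check, using the round-trip budget stated above:

SPEED_OF_LIGHT_KM_S = 299_792.0

def max_telechir_range_km(round_trip_budget_s=0.1):
    # Telemetry must travel out and back within the budget, so the telechir
    # can be at most (c x budget / 2) away
    return SPEED_OF_LIGHT_KM_S * round_trip_budget_s / 2.0

print(max_telechir_range_km())   # -> 14989.6 km, i.e. about 15,000 kilometers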

                                            The mind of the machine 

A simple electronic calculator doesn’t have AI. But a machine that can learn from its mistakes, or that can show reasoning power, does. Between these extremes, there is no precise dividing line. As computers become more powerful, people tend to set higher standards for what they will call AI. Things that were once thought of as AI are now quite ordinary. Things that seem fantastic now will someday be humdrum. There is a tongue-in-cheek axiom: AI is AI only as long as it remains esoteric.

                                                Relationship with robotics 

Robotics and artificial intelligence go together; they complement each other. Scientists have dreamed for more than a century about building smart androids: robots that look and act like people. Androids exist, but they aren’t very smart. Powerful computers exist, but they lack mobility. If a machine has the ability to move around under its own power, to lift things, and to move things, it seems reasonable that it should do so with some degree of intelligence if it is to accomplish anything worthwhile. Otherwise it is little more than a bumbling box, and might even be dangerous, like a driverless car with a brick on the gas pedal. If a computer is to manipulate anything, it will need to be able to move around, to grasp, to lift, and to carry objects. It might contemplate fantastic exploits and discover new secrets about the cosmos, but if it can’t act on its thoughts, the work (and the risk) must be undertaken by people, whose strength, maneuverability, and courage are limited.


                                                                Expert systems 

The term expert systems refers to a method of reasoning in AI. Sometimes this scheme is called the rule-based system. Expert systems are used in the control of smart robots. The heart of an expert system is a set of facts and rules. In the case of a robotic system, the facts consist of data about the robot’s environment, such as a factory, an office, or a kitchen. The rules are statements of the logical form “If X, then Y,” similar to many of the statements in high-level programming languages. An inference engine decides which logical rules should be applied in various situations and instructs the robot to carry out certain tasks. But the operation of the system can only be as sophisticated as the data supplied by human programmers. Expert systems can be used in computers to help people do research, make decisions, and make forecasts. A good example is a program that assists a physician in making a diagnosis. The computer asks questions and arrives at a conclusion based on the answers given by the patient and doctor. One of the biggest advantages of expert systems is the fact that reprogramming is easy. As the environment changes, the robot can be taught new rules, and supplied with new facts.
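A minimal sketch of the idea in Python follows (the facts and rules are invented for illustration; a production inference engine would be far more elaborate):

# Facts describe the robot's environment; rules are "if X, then Y" statements
facts = {"floor is wet"}
rules = [
    ({"floor is wet"}, "traction is poor"),
    ({"traction is poor", "carrying load"}, "reduce speed"),
]

def infer(facts, rules):
    # Forward chaining: whenever all of a rule's conditions hold,
    # add its conclusion, and repeat until nothing new can be concluded
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts.add("carrying load")
print(infer(facts, rules))   # now includes 'reduce speed'

Reprogramming, as the text notes, amounts to nothing more than editing the facts and the rule list.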

                                                 How smart a machine? 

Experts in the field of AI have been disappointed in the past few decades. Computers have been designed that can handle tasks no human could ever contend with, such as navigating a space probe. Machines have been built that can play chess and checkers well enough to compete with human masters. Modern machines can understand, as well as synthesize, much of any spoken language. But these abilities, by themselves, don’t count for much in the dreams of scientists who hope to create artificial life. The human mind is incredibly complicated. A circuit that would have occupied, and used all the electricity in, a whole city in 1940 can now be housed in a box the size of a vitamin pill and run by a battery. Imagine this degree of miniaturization happening again, then again, and then again. Would that begin to approach the level of sophistication in your body’s nervous system? Perhaps. A human brain might be nothing more than a digital switching network, but no electronic device yet conceived has come anywhere near its level of intelligence. Some experts think that a machine might someday be built that is smarter than a human. But most concede that if it’s ever done, it won’t be for a very long time. Other experts say that we humans cannot create a mind more powerful than our own, because that would violate the laws of nature.
                                                  Artificial life 

What, exactly, constitutes something living, and what makes it different from something nonliving? This has been one of the great questions of science throughout history. In some cultures, life is ascribed to things that Americans think of as inanimate.

One definition of artificial life involves the ability of a human-made thing to reproduce itself. Suppose you were to synthesize a new kind of molecule in a beaker and named it QNA (which stands for some weird name nobody can ever remember). Suppose this molecule, like DNA, could make replicas of itself, so that when you put one of them in a glass of water, you'd have a whole glassful in a few days. This molecule would be artificial life, in the sense that it could reproduce and that it was made by humans, rather than by nature in general. You might build a robot that could assemble other robots like itself. The machine would also be artificial life according to the above definition. It would, of course, be far different from the QNA molecule in terms of size and environment, but reproduction ability is the basis for the definition, and the QNA molecule and the self-replicating robot both meet it.

A truly self-replicating robot hasn't yet been developed. Society is a long way from having to worry that robots might build copies of themselves and take over the whole planet. But artificially living robots might be possible from a technological standpoint. Robots would probably reproduce by merely assembling other robots similar to themselves. Robots could also build machines much different from themselves. It's interesting to think of the ways in which artificially living machine populations might evolve.

Another definition of artificial life involves thought processes. At what point does machine intelligence become consciousness? Can a machine ever be aware of its own existence and ponder its place in the universe, the way a human being can? The reproduction question is answerable by an "either/or," but the consciousness question can be endlessly debated. One person might say that all computers are fully conscious; another could point out that a sizable proportion of the human population, at any given time, is semiconscious or unconscious. In the end, questions like this might not matter much. As long as smart robots obey Asimov's three laws, the issue of whether or not they are living beings will most likely take second place to more pragmatic concerns.


                     IIII0   How we can combine robots with the Internet of Things



                            A worker puts finishing touches to an iPal social robot, designed by AvatarMind, at an assembly plant in Suzhou, Jiangsu province, China, July 4, 2018. Designed to offer education, care, and companionship to children and the elderly, the 3.5-foot-tall humanoid robots come in two genders and can tell stories, take photos, and deliver educational or promotional content. (Image: REUTERS/Aly Song)






The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smart watches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to flood detection.
In parallel, robots are steadily moving outside the confines of factory lines. They're starting to appear as guides in shopping malls and cruise ships, for instance. As prices fall and artificial intelligence (AI) and mechanical technology continue to improve, we will get more and more used to robots making independent decisions in our homes, streets and workplaces.
Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network – opening up thrilling possibilities along the way.

                                                             Home improvements

Even simple robots become useful when connected to the internet – getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.
We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a few such test sites around the EU, it focuses entirely on people with special needs and on how robots can help them by interacting with connected devices in a smart home.
Suppose a smart doorbell with video features rings. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer, including allowing them to make virtual visits by acting as a telepresence platform.
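As a concrete illustration, the doorbell-to-robot link might ride on MQTT, a lightweight IoT messaging protocol. The sketch below uses the paho-mqtt library (1.x API assumed); the broker address, topic name, and robot actions are invented for illustration:

import paho.mqtt.client as mqtt   # paho-mqtt 1.x API assumed

def on_message(client, userdata, msg):
    if msg.topic == "home/doorbell":
        visitor = msg.payload.decode()
        # Hypothetical robot behavior: locate the resident, then announce the caller
        print("Navigating to resident; announcing:", visitor)

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10")     # assumed address of the home's MQTT broker
client.subscribe("home/doorbell")
client.loop_forever()              # wait for doorbell events indefinitely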


Equally, it could offer protection. It could inform them the oven has been left on, for example – phones or tablets are less reliable for such tasks because they can be misplaced or not heard. Similarly, the robot could raise the alarm if its user appears to be in difficulty.
Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds which are crucially important for making users more accepting of the support and making it more effective.

                                                  Robots offshore

There are comparable opportunities in the business world. Oil and gas companies, for example, are looking at the Internet of Things, experimenting with wireless sensors to collect information such as temperature, pressure and corrosion levels to detect and possibly predict faults in their offshore equipment.
In future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment which are hard to reach, or help to calibrate them or replace their batteries. The ORCA Hub, a £36m project led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, is developing such systems. The aim is to reduce the costs and the risks of humans working in remote hazardous locations.
   
                                     ORCA tests a drone robot.



Working underwater is particularly challenging, since radio waves don't travel well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are many times slower (about 1,500 metres per second, versus roughly 300 million metres per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future – repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.
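To put those propagation speeds in perspective, a quick back-of-the-envelope comparison (using the figures quoted above):

SOUND_IN_SEAWATER = 1500.0        # metres per second
RADIO_IN_AIR = 300_000_000.0      # metres per second, roughly the speed of light

distance_m = 1000.0               # a vehicle 1 km from its base station
print("acoustic:", distance_m / SOUND_IN_SEAWATER, "s")   # ~0.67 s one way
print("radio:   ", distance_m / RADIO_IN_AIR, "s")        # ~3.3 microseconds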
Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics and waste.
First, however, the research sectors around the Internet of Things and robotics need to properly share their knowledge and expertise.


To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues – hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.
We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics and the Internet of Things, it genuinely has the potential to change the world.


   

                                The new tech future


Artificial intelligence

Software algorithms are automating complex decision-making tasks to mimic human thought processes and senses. Artificial intelligence (AI) is not a monolithic technology. A subset of AI, machine learning, focuses on the development of computer programs that can teach themselves to learn, understand, reason, plan, and act when exposed to data. Machine learning carries enormous potential for the creation of meaningful products and services — for example, hospitals using a library of scanned images to quickly and accurately detect and diagnose cancer; insurance companies digitally and automatically recognizing and assessing car damage; or security companies trading clunky typed passwords for voice recognition.


Embodied AI

Technologies: 3-D printing, AI, Drones, IoT, Robotics 
AI is everywhere. Along with IoT sensors, it’s integrated in many products, from simple cameras to sophisticated drones. Embedded sensors collect data, which is fed to algorithms that give that object the illusion of intelligence. This enables drones to follow a moving object like a truck or a person autonomously. It enables a 3-D printer to automatically modify a design as it is being printed to have a stronger structure, become lighter, or be more cost effective to print. It enables AR glasses to overlay data on an anchored endpoint or allow you to communicate via voice with a robot or conversational agent.



 _______________________________________________________________________________

                                                      e- Intern ROBO ART

                                          
 _______________________________________________________________________________