Friday, 16 October 2020

e-Camera: with technology that continues to progress, from dynamic robot-camera technology in e-Eyes ROBOT to space technology that probes the patterns of space between planets and galaxies at time-controlled speed. AMNIMARJESLOW GOVERNMENT Electronic contactless ( Negative look out positive Scanning ) 2020. Thank you to my wife and child, and to my favourite show timer THN ( T-TAM ( Timing - To Annual Matching )); welcome, 2 in 1 THN, to Gen. Mac Tech for Integrated Loop. Always good memories in life, living like the world and a flash of sky ( Negation between Position ); e-Camera Robot IC hertz Dynamic Future windows. Agusman Siboro, Luv 2033. Thank you ( cảm ơn bạn ), with OMG Thanks.

The camera is an important organ for every creature and system, both on earth and in space: the camera is a method and technique for seeing and adapting, an organ of life in space and time. In the course of their life on earth, humans have always used a camera, the eye and the eye of the heart, to examine the objects around them and adapt to them, driving the motor and neurological movements of all their organs. Camera civilization on earth has now progressed quite satisfactorily, especially with the advance of the electronics industry and of machine learning in super-sophisticated electronic neural networks.

An example of electronic neural-network research: "invisible" 3D-printed fiber sensors that could be used to power electronic devices and sensors capable of sensing breath and sound. A device able to sense smells, sound, and touch the way humans do may be one step closer to becoming a reality thanks to the realization of 3D-printed electronic fibers in recent research. Researchers from the University of Cambridge say they have used 3D-printing techniques to create these "invisible" electronic fibers, each of which is 100 times thinner than a human hair. Functioning as sensors, these fibers have capabilities beyond those of conventional film-based sensor devices.

Research advances: inexpensive and extremely sensitive e-fibers
---------------------------------------------------------------
Small-diameter conducting fibers have unique properties that set them apart from other classes of film-based conducting micro- and nanostructures. However, existing fabrication techniques such as chemical growth, wet spinning, and 2D/3D direct printing do not readily allow the assembly of fiber architectures that lead to device functions exploiting combinations of the fibers' unique characteristics: directionality, conductivity, and high surface-area-to-volume ratio. To solve this challenge, the Cambridge researchers developed a new printing method that can be used to make non-contact, wearable, portable respiratory sensors that are highly sensitive and cheap to produce. These can also be attached to an electronic device such as a smartphone to concurrently collect breath-pattern information, sound, and images.

[Figure: The portable trilayer fiber sensor can attach to the front camera of a smartphone, positioning the single-layer fiber sensor above the nose. Image (modified) courtesy of Science Advances.]

According to the research paper, the fibers could be particularly useful for applications in health monitoring (respiration rate, for instance) and the Internet of Things. Researchers at Cambridge's Department of Engineering used the fiber sensor to measure the amount of breath moisture that leaked through a face covering during different simulated respiratory conditions, such as regular breathing, rapid breathing, and coughing. The fiber sensors outperformed comparable commercial sensors currently available, especially when monitoring rapid breathing.

Inflight Fiber Printing (IFP)
-----------------------------
The composite fibers, made from silver and/or semiconducting polymers, are 3D printed using a method the team developed known as inflight fiber printing (IFP). This technique creates a core-shell fiber structure: a high-purity conducting fiber core wrapped in a thin polymer sheath that acts as a protective coating. It is very similar to the structure of electrical wires, but at a much smaller scale of only a few micrometers in diameter.
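To make the respiration-monitoring idea concrete, here is a minimal Python sketch of the kind of signal processing such a breath sensor implies. The trace below is synthetic (a sine plus noise standing in for humidity readings); the sample rate and breathing rate are assumptions of mine, not figures from the Cambridge paper.

    import numpy as np

    fs = 20.0                              # samples per second (assumed)
    t = np.arange(0.0, 30.0, 1.0 / fs)     # 30 seconds of readings
    rate_true = 18.0                       # breaths per minute in the synthetic signal
    signal = np.sin(2.0 * np.pi * (rate_true / 60.0) * t) + 0.2 * np.random.randn(t.size)

    # Smooth with a one-second moving average to suppress sensor noise.
    window = int(fs)
    smooth = np.convolve(signal, np.ones(window) / window, mode="same")

    # Count upward zero crossings of the mean-removed trace: one per breath cycle.
    centered = smooth - smooth.mean()
    ups = np.where((centered[:-1] < 0.0) & (centered[1:] >= 0.0))[0]
    print(f"estimated rate: {len(ups) / t[-1] * 60.0:.1f} breaths per minute")

Run on the synthetic trace, this recovers roughly 18 breaths per minute; a real sensor feed would of course need calibration and more robust peak detection.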
An IFP fiber sensor attached to a face covering detects human breath with high sensitivity. IFP is carried out at sub-100°C, which creates in-situ bonds of thin conducting fiber arrays. These can either be suspended or placed on a surface and require no postprocessing. By optimizing the size of the fibers, the researchers claim they have demonstrated a versatile technique for rapid on-circuit creation of small-diameter conducting fibers.

Making way for "floating electronic architectures"
---------------------------------------------------
As a proof of concept, the researchers produced inorganic, metallic silver fibers from a solution-based reactive synthesis, as well as organic conducting PEDOT:PSS fibers. With IFP able to fabricate fibers directly onto a circuit, the researchers say they were able to exploit the functional advantages of the fiber array to explore the novel circuitry architecture afforded by the IFP process. Namely, this process opens doors to the concept of 3D "floating electronic architectures" that merge organic and inorganic fiber materials in the same transparent conducting network.

[Figure: Architecture of a 3D-layered floating circuit. Image (modified) courtesy of Science Advances.]

The team is currently looking to develop the IFP method for a number of multi-functional sensors.

Prospects for camera electronics
--------------------------------
Advances in electronics, in learning, and in the capability of optical electronic neuroscience will enable humans to explore space and travel between planets, conducting reliable and measurable exploration with the help of robotic network cameras that can send, store, and print - in the laboratory, in print, on social media, and in object research - for the supporting electronics of the future. In this way humans may travel in time and live a lifestyle that better ensures the integrity of modern humans in the future, with new languages, new writing structures, and humans learning to be new humans in a more modern style, with new planetary space and capable infrastructure. Thank you, and a sign of love, affection, and praise for the Father in heaven. Love and love, 2 in 1, Windows NT & THN ( TAM ). Agustinus Manguntam Siboro, ST., MM @Luv NT * THN ( Gen. Mac Tech to be Luv, integrating the e-Camera Robot IC hertz Dynamic Future windows inner Close Loop ).

The camera from its beginnings to the present and the future :
--------------------------------------------------------------
Cameras evolved from the camera obscura through many generations of photographic technology – daguerreotypes, calotypes, dry plates, film – to the modern day with digital cameras and camera phones.

Camera obscura :
The forerunner to the photographic camera was the camera obscura. Camera obscura (Latin for "dark room") is the natural optical phenomenon that occurs when an image of a scene on the other side of a screen (or, for instance, a wall) is projected through a small hole in that screen and forms an inverted image (left to right and upside down) on a surface opposite the opening. The use of a lens in the opening of a wall or closed window shutter of a darkened room to project images as a drawing aid has been traced back to circa 1550. Since the late 17th century, portable camera obscura devices in tents and boxes were used as drawing aids. Before the invention of photographic processes there was no way to preserve the images produced by these cameras apart from manually tracing them.
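The inverted image is pure geometry: a ray from a scene point passes straight through the hole and lands on the opposite wall, flipped left-right and top-bottom. A tiny Python sketch of this projection, with illustrative numbers of my own choosing:

    def project(x, y, z, d):
        # Similar triangles: a ray through the hole at the origin maps the
        # scene point (x, y, z) onto a screen at distance d behind the hole,
        # flipping both axes - hence the upside-down image.
        return (-x * d / z, -y * d / z)

    # A 2 m tall subject standing 4 m from the hole, screen 0.5 m behind it:
    head = project(0.0, 2.0, 4.0, 0.5)
    feet = project(0.0, 0.0, 4.0, 0.5)
    print("image of head:", head, "image of feet:", feet)

The subject's head projects 0.25 m *below* the image of the feet: the magnification is d/z, and the sign flip is the inversion every camera obscura user saw.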
The earliest cameras were room-sized, with space for one or more people inside; these gradually evolved into more and more compact models. By Niépce's time, portable box camerae obscurae suitable for photography were readily available.

Pinhole camera :
[Figure: Pinhole camera. Light enters a dark box through a small hole and creates an inverted image on the wall opposite the hole.]

Photographic camera :
Before the development of the photographic camera, it had been known for hundreds of years that some substances, such as silver salts, darkened when exposed to sunlight. In a series of experiments, published in 1727, the German scientist Johann Heinrich Schulze demonstrated that the darkening of the salts was due to light alone, and not influenced by heat or exposure to air. The Swedish chemist Carl Wilhelm Scheele showed in 1777 that silver chloride was especially susceptible to darkening from light exposure, and that once darkened, it becomes insoluble in an ammonia solution. The first person to use this chemistry to create images was Thomas Wedgwood. To create images, Wedgwood placed items, such as leaves and insect wings, on ceramic pots coated with silver nitrate, and exposed the set-up to light. These images weren't permanent, however, as Wedgwood didn't employ a fixing mechanism. He ultimately failed at his goal of using the process to create fixed images made by a camera obscura.

The first permanent photograph of a camera image was made in 1825 by Joseph Nicéphore Niépce, using a sliding wooden box camera made by Charles and Vincent Chevalier in Paris. Niépce had been experimenting with ways to fix the images of a camera obscura since 1816. The photograph Niépce succeeded in creating shows the view from his window. It was made using an 8-hour exposure on pewter coated with bitumen. Niépce called his process "heliography". Niépce corresponded with the inventor Louis-Jacques-Mandé Daguerre, and the pair entered into a partnership to improve the heliographic process. Niépce had experimented further with other chemicals to improve contrast in his heliographs. Daguerre contributed an improved camera obscura design, but the partnership ended when Niépce died in 1833.

Daguerre succeeded in developing a high-contrast and extremely sharp image by exposing a plate coated with silver iodide, then exposing this plate again to mercury vapor. By 1837, he was able to fix the images with a common salt solution. He called this process the Daguerreotype, and tried unsuccessfully for a couple of years to commercialize it. Eventually, with the help of the scientist and politician François Arago, the French government acquired Daguerre's process for public release. In exchange, pensions were provided to Daguerre as well as to Niépce's son, Isidore.

In the 1830s, the English scientist William Henry Fox Talbot independently invented a process to fix camera images using silver salts. Although dismayed that Daguerre had beaten him to the announcement of photography, on January 31, 1839 he submitted a pamphlet to the Royal Institution entitled Some Account of the Art of Photogenic Drawing, which was the first published description of photography. Within two years, Talbot developed a two-step process for creating photographs on paper, which he called calotypes. The calotype process was the first to utilize negative prints, which reverse all values in the photograph – black shows up as white and vice versa.
Negative prints allow, in principle, unlimited duplicates of the positive print to be made. Calotyping also introduced the ability for a printmaker to alter the resulting image through retouching. Calotypes were never as popular or widespread as daguerreotypes, owing mainly to the fact that the latter produced sharper details. However, because daguerreotypes only produce a direct positive print, no duplicates can be made. It is the two-step negative/positive process that formed the basis for modern photography.

The first photographic camera developed for commercial manufacture was a daguerreotype camera, built by Alphonse Giroux in 1839. Giroux signed a contract with Daguerre and Isidore Niépce to produce the cameras in France, with each device and its accessories costing 400 francs. The camera was a double-box design, with a landscape lens fitted to the outer box, and a holder for a ground glass focusing screen and image plate on the inner box. By sliding the inner box, objects at various distances could be brought to as sharp a focus as desired. After a satisfactory image had been focused on the screen, the screen was replaced with a sensitized plate. A knurled wheel controlled a copper flap in front of the lens, which functioned as a shutter. The early daguerreotype cameras required long exposure times, which in 1839 could be from 5 to 30 minutes.

After the introduction of the Giroux daguerreotype camera, other manufacturers quickly produced improved variations. Charles Chevalier, who had earlier provided Niépce with lenses, created in 1841 a double-box camera using a half-sized plate for imaging. Chevalier's camera had a hinged bed, allowing half of the bed to fold onto the back of the nested box. In addition to increased portability, the camera had a faster lens, bringing exposure times down to 3 minutes, and a prism at the front of the lens, which allowed the image to be laterally correct.

Another French design emerged in 1841, created by Marc Antoine Gaudin. The Nouvel Appareil Gaudin camera had a metal disc with three differently sized holes mounted on the front of the lens. Rotating to a different hole effectively provided variable f-stops, allowing different amounts of light into the camera. Instead of using nested boxes to focus, the Gaudin camera used nested brass tubes. In Germany, Peter Friedrich Voigtländer designed an all-metal camera with a conical shape that produced circular pictures about 3 inches in diameter. The distinguishing characteristic of the Voigtländer camera was its use of a lens designed by Joseph Petzval. The f/3.5 Petzval lens was nearly 30 times faster than any other lens of the period, and was the first to be made specifically for portraiture. Its design was the most widely used for portraits until Carl Zeiss introduced the anastigmat lens in 1889.
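What the Gaudin disc's drilled holes amount to is the f-stop arithmetic still used today: the f-number is focal length divided by aperture diameter, and the light admitted scales with the diameter squared. A small Python sketch, with hypothetical hole sizes and focal length (the actual 1841 dimensions are not given in the text):

    focal_length = 150.0           # mm, assumed lens
    holes = [30.0, 15.0, 7.5]      # mm, assumed hole diameters

    for d in holes:
        n = focal_length / d                 # f-number N = f / D
        rel = (d / holes[0]) ** 2            # light relative to the largest hole
        print(f"hole {d:4.1f} mm -> f/{n:4.1f}, relative light {rel:.3f}")

Each halving of the hole diameter costs two stops of light, which is also why the f/3.5 Petzval lens was such a leap over the slow landscape lenses of the period: exposure time scales inversely with this relative-light figure.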
Within a decade of being introduced in America, three general forms of camera were in popular use: the American- or chamfered-box camera, the Robert's-type camera or "Boston box", and the Lewis-type camera. The American-box camera had beveled edges at the front and rear, and an opening in the rear where the formed image could be viewed on ground glass. The top of the camera had hinged doors for placing photographic plates. Inside there was one available slot for distant objects, and another slot in the back for close-ups. The lens was focused either by sliding or with a rack-and-pinion mechanism. The Robert's-type cameras were similar to the American-box, except for having a knob-fronted worm gear on the front of the camera, which moved the back box for focusing. Many Robert's-type cameras allowed focusing directly on the lens mount. The third popular daguerreotype camera in America was the Lewis-type, introduced in 1851, which utilized a bellows for focusing. The main body of the Lewis-type camera was mounted on the front box, but the rear section was slotted into the bed for easy sliding. Once focused, a set screw was tightened to hold the rear section in place. Having the bellows in the middle of the body facilitated making a second, in-camera copy of the original image.

Daguerreotype cameras formed images on silvered copper plates, and images could only be developed with mercury vapor. The earliest daguerreotype cameras required several minutes to half an hour to expose images on the plates. By 1840, exposure times were reduced to just a few seconds, owing to improvements in the chemical preparation and development processes and to advances in lens design. American daguerreotypists introduced manufactured plates in mass production, and plate sizes became internationally standardized: whole plate (6.5 x 8.5 inches), three-quarter plate (5.5 x 7 1/8 inches), half plate (4.5 x 5.5 inches), quarter plate (3.25 x 4.25 inches), sixth plate (2.75 x 3.25 inches), and ninth plate (2 x 2.5 inches). Plates were often cut to fit cases and jewelry with circular and oval shapes. Larger plates were also produced, with sizes such as 9 x 13 inches ("double-whole" plate) or 13.5 x 16.5 inches.

The collodion wet plate process that gradually replaced the daguerreotype during the 1850s required photographers to coat and sensitize thin glass or iron plates shortly before use and expose them in the camera while still wet. Early wet plate cameras were very simple and little different from daguerreotype cameras, but more sophisticated designs eventually appeared. The Dubroni of 1864 allowed the sensitizing and developing of the plates to be carried out inside the camera itself rather than in a separate darkroom. Other cameras were fitted with multiple lenses for photographing several small portraits on a single larger plate, useful when making cartes de visite. It was during the wet plate era that the use of bellows for focusing became widespread, making the bulkier and less easily adjusted nested-box design obsolete.

For many years, exposure times were long enough that the photographer simply removed the lens cap, counted off the number of seconds (or minutes) estimated to be required by the lighting conditions, then replaced the cap. As more sensitive photographic materials became available, cameras began to incorporate mechanical shutter mechanisms that allowed very short and accurately timed exposures to be made.
Photographic film also made possible the capture of motion (cinematography), establishing the movie industry by the end of the 19th century.

The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce, using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a sliding wooden box camera made by Parisian opticians Charles and Vincent Chevalier to experiment with photography on surfaces thinly coated with Bitumen of Judea. The bitumen slowly hardened in the brightest areas of the image; the unhardened bitumen was then dissolved away. One of those photographs has survived.

Daguerreotype and calotype cameras :
After Niépce's death in 1833, his partner Louis Daguerre continued to experiment, and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839. Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper, and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many minutes as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard.

Dry plates :
Collodion dry plates had been available since 1857, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally made so-called "instantaneous" snapshot exposures practical. For the first time, a tripod or other support was no longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking the picture. The ranks of amateur photographers swelled and informal "candid" portraits became popular. There was a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box cameras, and even "detective cameras" disguised as pocket watches, hats, or other objects. The short exposure times that made candid photography possible also necessitated another innovation, the mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the end of the 19th century.

Kodak and the birth of film :
The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1888–1889. His first camera, which he called the "Kodak", was first offered for sale in 1888.
It was a very simple box camera with a fixed-focus lens and a single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models, including both box and folding cameras. In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models remained on sale until the 1960s. Film also allowed the movie camera to develop from an expensive toy into a practical commercial tool.

Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also available, as were backs that enabled rollfilm cameras to use plates. Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until the end of the 20th century, when electronic photography replaced them.

35 mm :
A number of manufacturers started to use 35 mm film for still photography between 1905 and 1913. The first 35 mm cameras available to the public, and reaching significant numbers in sales, were the Tourist Multiple, in 1913, and the Simplex, in 1914. Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I. It wasn't until after World War I that Leitz commercialized its first 35 mm cameras: the design was test-marketed between 1923 and 1924, receiving enough positive feedback that the camera was put into production as the Leica I (for Leitz camera) in 1925. The Leica's immediate popularity spawned a number of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras.

Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people, and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of the inexpensive Argus A, and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3. Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3 was discontinued in 1966. The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere.
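A brief aside on what "35 mm" means optically. The 135 frame measures 24 x 36 mm (a standard fact about the format, not stated above), giving a diagonal of about 43 mm; a lens of roughly that focal length renders "normal" perspective, as the lens-design section below explains. A minimal Python sketch of the angle-of-view arithmetic, with example focal lengths of my own choosing:

    import math

    width, height = 36.0, 24.0                 # mm, the 135 film frame
    diagonal = math.hypot(width, height)       # about 43.3 mm
    print(f"frame diagonal: {diagonal:.1f} mm")
    for f in (28.0, 50.0, 135.0):              # wide, normal, telephoto examples
        aov = 2.0 * math.degrees(math.atan(diagonal / (2.0 * f)))
        print(f"{f:5.1f} mm lens -> {aov:5.1f} degree diagonal angle of view")

The 50 mm "normal" lens comes out near 47 degrees, which is why it became the default lens sold with most 35 mm cameras.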
TLRs and SLRs :
The first practical reflex camera was the Franke & Heidecke Rolleiflex medium format TLR of 1928. Though both single- and twin-lens reflex cameras had been available for decades, they were too bulky to achieve much popularity. The Rolleiflex, however, was sufficiently compact to achieve widespread popularity, and the medium-format TLR design became popular for both high- and low-end cameras.

A similar revolution in SLR design began in 1933 with the introduction of the Ihagee Exakta, a compact SLR which used 127 rollfilm. This was followed three years later by the first Western SLR to use 135 film, the Kine Exakta (the world's first true 35mm SLR was the Soviet "Sport" camera, marketed several months before the Kine Exakta, though the "Sport" used its own film cartridge). The 35mm SLR design gained immediate popularity and there was an explosion of new models and innovative features after World War II. There were also a few 35 mm TLRs, the best-known of which was the Contaflex of 1935, but for the most part these met with little success.

The first major post-war SLR innovation was the eye-level viewfinder, which first appeared on the Hungarian Duflex in 1947 and was refined in 1948 with the Contax S, the first camera to use a pentaprism. Prior to this, all SLRs were equipped with waist-level focusing screens. The Duflex was also the first SLR with an instant-return mirror, which prevented the viewfinder from being blacked out after each exposure. This same time period also saw the introduction of the Hasselblad 1600F, which set the standard for medium format SLRs for decades.

In 1952 the Asahi Optical Company (which later became well known for its Pentax cameras) introduced the first Japanese SLR using 135 film, the Asahiflex. Several other Japanese camera makers also entered the SLR market in the 1950s, including Canon, Yashica, and Nikon. Nikon's entry, the Nikon F, had a full line of interchangeable components and accessories and is generally regarded as the first Japanese system camera. It was the F, along with the earlier S series of rangefinder cameras, that helped establish Nikon's reputation as a maker of professional-quality equipment.

Instant cameras :
While conventional cameras were becoming more refined and sophisticated, an entirely new type of camera appeared on the market in 1948. This was the Polaroid Model 95, the world's first viable instant-picture camera. Known as a Land Camera after its inventor, Edwin Land, the Model 95 used a patented chemical process to produce finished positive prints from the exposed negatives in under a minute. The Land Camera caught on despite its relatively high price, and the Polaroid lineup had expanded to dozens of models by the 1960s. The first Polaroid camera aimed at the popular market, the Model 20 Swinger of 1965, was a huge success and remains one of the top-selling cameras of all time.

Automation :
The first camera to feature automatic exposure was the selenium light meter-equipped, fully automatic Super Kodak Six-20 of 1938, but its extremely high price (for the time) of $225 (equivalent to $4,087 in 2019) kept it from achieving any degree of success. By the 1960s, however, low-cost electronic components were commonplace, and cameras equipped with light meters and automatic exposure systems became increasingly widespread. The next technological advance came in 1960, when the German Mec 16 SB subminiature became the first camera to place the light meter behind the lens for more accurate metering. However, through-the-lens metering ultimately became a feature more commonly found on SLRs than on other types of camera; the first SLR equipped with a TTL system was the Topcon RE Super of 1962.
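The arithmetic an automatic-exposure camera performs is the standard exposure-value relation EV = log2(N²/t), where N is the f-number and t the shutter time, referenced to ISO 100. A short Python sketch; the metered scene value below is an assumed example, not a measurement:

    import math

    def shutter_time(ev100, f_number, iso=100):
        # Film speed shifts the effective EV; solve EV = log2(N^2 / t) for t.
        ev = ev100 + math.log2(iso / 100.0)
        return f_number ** 2 / 2.0 ** ev

    # Bright sunlight meters at about EV 15 (ISO 100):
    t = shutter_time(15.0, 16.0)
    print(f"EV 15 at f/16 -> {t:.5f} s (about 1/{round(1.0 / t)} s)")

This reproduces the old "sunny 16" rule of thumb - f/16 at roughly 1/(film speed) seconds in bright sun - which is exactly the mapping a selenium meter and its coupled mechanism computed electromechanically.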
Digital cameras :
Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print, or share photos, and are commonly found on mobile phones.

Digital imaging technology :
The basis for digital camera image sensors is metal-oxide-semiconductor (MOS) technology, which originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor. The first semiconductor image sensor was the CCD, invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993.

Practical digital cameras were enabled by advances in data compression, owing to the impractically high memory and bandwidth requirements of uncompressed images and video. The most important compression algorithm is the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed while he was working at the University of Texas in 1972.[34] Practical digital cameras were enabled by DCT-based compression standards, including the H.26x and MPEG video coding standards introduced from 1988 onwards, and the JPEG image compression standard introduced in 1992.
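Since the DCT is the workhorse named above, here is a short, self-contained Python (NumPy) sketch of the orthonormal 8x8 DCT-II that JPEG builds on. The sample block is a synthetic brightness ramp, chosen to show the energy compaction that makes lossy compression practical:

    import numpy as np

    N = 8
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    # Orthonormal DCT-II basis matrix.
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)

    # A smooth horizontal brightness ramp, 0..255, as an 8x8 pixel block.
    block = np.tile(np.linspace(0.0, 255.0, N), (N, 1))
    coeffs = C @ block @ C.T          # separable 2-D DCT: columns, then rows

    significant = int(np.sum(np.abs(coeffs) > 1.0))
    print(f"{significant} of {N * N} coefficients carry almost all of the signal")

For smooth image content, only a handful of the 64 coefficients are non-negligible; quantizing and discarding the rest is where JPEG, H.26x, and MPEG get their compression.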
Early digital camera prototypes :
The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the extremely complex and expensive method of de-orbit and airborne retrieval of film canisters. Technology was pushed to skip these steps through the use of in-satellite developing and electronic scanning of the film for direct transmission to the ground. The amount of film was still a major limitation, and this was overcome and greatly simplified by the push to develop an electronic image-capturing array that could be used instead of film. The first electronic imaging satellite was the KH-11, launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of 800 x 800 pixels (0.64 megapixels).

At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt Szilagyi filed for a patent on "All Solid State Radiation Imagers" on 6 September 1968 and constructed a flat-screen target for receiving and storing an optical image on a matrix composed of an array of photodiodes connected to a capacitor to form an array of two-terminal devices connected in rows and columns. Their US patent was granted on 10 November 1970. Texas Instruments engineer Willis Adcock designed a filmless camera that was not digital and applied for a patent in 1972, but it is not known whether it was ever built. The Cromemco Cyclops, introduced as a hobbyist construction project in 1975, was the first digital camera to be interfaced to a microcomputer. Its image sensor was a modified metal-oxide-semiconductor (MOS) dynamic RAM (DRAM) memory chip.

The first recorded attempt at building a self-contained digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak. It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973. The camera weighed 8 pounds (3.6 kg), recorded black-and-white images to a compact cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production.

Analog electronic cameras :
Handheld electronic cameras, in the sense of a device meant to be carried and used like a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch "video floppy". In essence it was a video movie camera that recorded single frames, 50 per disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions.

[Photo: Canon RC-701, 1986.]

Analog electronic cameras do not appear to have reached the market until 1986, with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shinbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000, equivalent to $47,000 in 2019), poor image quality compared to film, and the lack of quality affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The "video floppy" disks later had several reader devices available for viewing on a screen, but were never standardized as a computer drive. The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the 1989 Tiananmen Square protests and the first Gulf War in 1991.
US government agencies also took a strong interest in the still video concept, notably the US Navy, for use as a real-time air-to-sea surveillance system. The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to that of film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks.

Silicon Film, a proposed digital sensor cartridge for film cameras that would allow 35 mm cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work like a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder in the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film's parent company filed for bankruptcy in 2001.

Early true digital cameras :
By the late 1980s, the technology required to produce truly commercial digital cameras existed. The first true portable digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 2 MB SRAM (static RAM) memory card that used a battery to keep the data in memory. This camera was never marketed to the public. The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987, though there is not extensive documentation of its sale. The first portable digital camera that was actually marketed commercially was sold in December 1989 in Japan: the DS-X by Fuji. The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990. It was originally a commercial failure because it was black-and-white, low in resolution, and cost nearly $1,000 (equivalent to $2,000 in 2019). It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download.

Digital SLRs (DSLRs) :
Nikon had been interested in digital photography since the mid-1980s. In 1986, presenting at Photokina, Nikon introduced an operational prototype of the first SLR-type digital camera (Still Video Camera), manufactured by Panasonic.[54] The Nikon SVC was built around a 2/3-inch charge-coupled device sensor of 300,000 pixels. Its storage medium, a magnetic floppy inside the camera, allowed recording 25 or 50 black-and-white images, depending on the definition. In 1988, Nikon released the first commercial DSLR camera, the QV-1000C. In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. It used a 1.3 megapixel sensor, had a bulky external digital storage system, and was priced at $13,000 (equivalent to $24,000 in 2019). At the arrival of the Kodak DCS-200, the original Kodak DCS was dubbed the Kodak DCS-100. The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage.
The first consumer camera with a liquid crystal display on the back was the Casio QV-10, developed by a team led by Hiroyuki Suetaka in 1995. The first camera to use CompactFlash was the Kodak DC-25 in 1996. The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995. In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a splitter and three independent CCDs; this combination delivered 1.75M pixels. The benefit of using an SLR base was the ability to use any existing Minolta AF-mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely from the ground up by a major manufacturer, and at a cost of under $6,000 (equivalent to $10,100 in 2019) at introduction was affordable by professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned.

Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into different categories: compact digital still cameras, bridge cameras, mirrorless compacts, and digital SLRs. Since 2003, digital cameras have outsold film cameras, and Kodak announced in January 2004 that it would no longer sell Kodak-branded film cameras in the developed world – and in 2012 filed for bankruptcy after struggling to adapt to the changing industry.

Camera phones :
The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It stored up to 20 JPEG digital images, which could be sent over e-mail, or the phone could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network. The Samsung SCH-V200, released in South Korea in June 2000, was also one of the first phones with a built-in camera. It had a TFT liquid-crystal display (LCD) and stored up to 20 digital photos at 350,000-pixel resolution. However, it could not send the resulting image over the telephone function, but required a computer connection to access photos. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication. One of the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough to enable the widespread adoption of camera phones. Smartphones now routinely include high-resolution digital cameras.

Photographic lens design :
The design of photographic lenses for use in still or cine cameras is intended to produce a lens that yields the most acceptable rendition of the subject being photographed within a range of constraints that include cost, weight, and materials. For many other optical devices, such as telescopes, microscopes, and theodolites, where the visual image is observed but often not recorded, the design can often be significantly simpler than in a camera, where every image is captured on film or image sensor and can be subject to detailed scrutiny at a later stage. Photographic lenses also include those used in enlargers and projectors.

Design requirements :
From the perspective of the photographer, the ability of a lens to capture sufficient light so that the camera can operate over a wide range of lighting conditions is important.
Designing a lens that reproduces colour accurately is also important, as is the production of an evenly lit and sharp image over the whole of the film or sensor plane. For the lens designer, achieving these objectives also involves ensuring that internal flare, optical aberrations, and weight are all reduced to the minimum, whilst zoom, focus, and aperture functions all operate smoothly and predictably. However, because photographic films and electronic sensors have a finite and measurable resolution, photographic lenses are not always designed for maximum possible resolution, since the recording medium would not be able to record the level of detail that the lens could resolve. For this, and many other reasons, camera lenses are unsuited for use as projector or enlarger lenses.

The design of a fixed focal length lens (also known as a prime lens) presents fewer challenges than the design of a zoom lens. A high-quality prime lens whose focal length is about equal to the diameter of the film frame or sensor may be constructed from as few as four separate lens elements, often as pairs on either side of the aperture diaphragm. Good examples include the Zeiss Tessar or the Leitz Elmar.

Design constraints :
To be useful in photography, any lens must be able to fit the camera for which it is intended, and this physically limits the size where the bayonet mounting or screw mounting is to be located. Photography is a highly competitive commercial business, and both weight and cost constrain the production of lenses. Refractive materials such as glass have physical limitations which limit the performance of lenses. In particular, the range of refractive indices available in commercial glasses spans a very narrow range. Since it is the refractive index that determines how much the rays of light are bent at each interface, and since it is the differences in refractive indices in paired plus and minus lenses that constrain the ability to minimise chromatic aberrations, having only a narrow spectrum of indices is a major design constraint.
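The classic workaround for this constraint is the thin achromatic doublet: the powers P1 and P2 of a crown and a flint element are chosen so that colour errors cancel, P1/V1 + P2/V2 = 0, subject to P1 + P2 = P, where V is each glass's Abbe number. A minimal Python sketch, using typical catalogue Abbe values assumed for illustration:

    f_total = 100.0            # desired focal length, mm
    V1, V2 = 60.0, 36.0        # Abbe numbers: crown (low dispersion), flint (high)

    P = 1.0 / f_total
    P1 = P * V1 / (V1 - V2)    # positive crown element
    P2 = -P * V2 / (V1 - V2)   # negative flint element
    print(f"crown f1 = {1.0 / P1:.1f} mm, flint f2 = {1.0 / P2:.1f} mm")

The result - a 40 mm converging crown paired with a -66.7 mm diverging flint - focuses two reference wavelengths at the same point, which is exactly what the "paired plus and minus lenses" above are doing, and why the narrow spread of available indices and dispersions matters so much.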
Aperture control :
------------------
The aperture control, usually a multi-leaf diaphragm, is critical to the performance of a lens. The role of the aperture is to control the amount of light passing through the lens to the film or sensor plane. An aperture placed outside of the lens, as in the case of some Victorian cameras, risks vignetting of the image, in which the corners of the image are darker than the centre. A diaphragm too close to the image plane risks the diaphragm itself being recorded as a circular shape, or at the very least causing diffraction patterns at small apertures. In most lens designs the aperture is positioned about midway between the front surface of the objective and the image plane - typically at about the level of the optical centre of the lens. In some zoom lenses it is placed some distance away from the ideal location in order to accommodate the movement of the floating lens elements needed to perform the zoom function.

Most modern lenses for the 35mm format rarely provide a stop smaller than f/22 because of the diffraction effects caused by light passing through a very small aperture. As diffraction depends on aperture width in absolute terms rather than on the f-stop ratio, lenses for the very small formats common in compact cameras rarely go above f/11 (1/1.8") or f/8 (1/2.5"), while lenses for medium and large format provide f/64 or f/128. Very-large-aperture lenses designed to be useful in very low light conditions, with apertures ranging from f/1.2 to f/0.9, are generally restricted to lenses of standard focal length, because of the size and weight problems that would be encountered in telephoto lenses and the difficulty of building a very wide aperture wide-angle lens with the refractive materials currently available. Very-large-aperture lenses are commonly made for other types of optical instruments, such as microscopes, but in such cases the diameter of the lens is very small and weight is not an issue.

Many very early cameras had diaphragms external to the lens, often consisting of a rotating circular plate with a number of holes of increasing size drilled through the plate.[3] Rotating the plate would bring an appropriately sized hole in front of the lens. All modern lenses use a multi-leaf diaphragm, so that at the central intersection of the leaves a more or less circular aperture is formed. Either a manual ring or an electronic motor controls the angle of the diaphragm leaves and thus the size of the opening. The placement of the diaphragm within the lens structure is constrained by the need to achieve even illumination over the whole film plane at all apertures and the requirement not to interfere with the movement of any movable lens element.
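The format-dependent f-stop limits above follow from one formula: the diffraction (Airy) spot diameter is about 2.44 x wavelength x N, independent of format, while the blur a format can tolerate shrinks with the format. A Python sketch with representative blur-tolerance (circle-of-confusion) values chosen for illustration:

    wavelength = 550e-6        # mm, green light

    def airy_diameter(n):
        return 2.44 * wavelength * n

    # (format, smallest commonly marked stop, rough blur tolerance in mm)
    cases = [("1/2.5-inch compact", 8, 0.005),
             ("35 mm format", 22, 0.030),
             ("8x10 large format", 64, 0.200)]
    for name, n, coc in cases:
        spot = airy_diameter(n)
        print(f"{name:18s} f/{n:<3d} Airy spot {spot * 1000:5.1f} um, "
              f"spot/tolerance ratio {spot / coc:.2f}")

Once that ratio approaches or passes 1, stopping down further softens the image rather than sharpening it - which is why a tiny-sensor compact stops at around f/8 while a large-format lens can usefully offer f/64.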
Shutter mechanism :
A shutter controls the length of time light is allowed to pass through the lens onto the film plane. For any given light intensity, the more sensitive the film or detector, or the wider the aperture, the shorter the exposure time needs to be to maintain the optimal exposure. In the earliest cameras, exposures were controlled by moving a plate from in front of the lens and then replacing it. Such a mechanism only works effectively for exposures of several seconds or more, and carries a considerable risk of inducing camera shake. By the end of the 19th century, spring-tensioned shutter mechanisms were in use, operated by a lever or by a cable release. Some simple shutters continued to be placed in front of the lens, but most were incorporated within the lens mount itself. Such lenses with integral shutter mechanisms developed into the modern Compur shutter, as used in many non-reflex cameras such as the Linhof. These shutters have a number of metal leaves that spring open and then close after a pre-determined interval. The material and design constraints limit the shortest speed to about 0.002 second. Although such shutters cannot yield as short an exposure time as a focal-plane shutter, they are able to offer flash synchronisation at all speeds. Incorporating a commercially made Compur-type shutter required lens designers to accommodate the width of the shutter mechanism in the lens mount and to provide a means of triggering the shutter on the lens barrel, or of transferring this to the camera body by a series of levers, as in the Minolta twin-lens cameras. The need to accommodate the shutter mechanism within the lens barrel limited the design of wide-angle lenses, and it was not until the widespread use of focal-plane shutters that extreme wide-angle lenses were developed.

Types of lenses :
-----------------
The type of lens being designed is significant in setting the key parameters.

Prime lens - a photographic lens whose focal length is fixed, as opposed to a zoom lens, or that is the primary lens in a combination lens system.

Zoom lenses - variable focal length lenses. Zoom lenses cover a range of focal lengths by utilising movable elements within the barrel of the lens assembly. In early varifocal lenses, the focus also shifted as the lens focal length was changed. Varifocal lenses are also used in many modern autofocus cameras, as the lenses are cheaper and simpler to construct and the autofocus can take care of the re-focusing requirements. Many modern zoom lenses are now confocal, meaning that focus is maintained throughout the zoom range. Because of the need to operate over a range of focal lengths and maintain confocality, zoom lenses typically have very many lens elements. More significantly, the front elements of the lens will always be a compromise in terms of size, light-gathering capability, and the angle of incidence of the incoming rays of light. For all these reasons, the optical performance of zoom lenses tends to be lower than that of fixed-focal-length lenses.

Normal lens - a lens with a focal length about equal to the diagonal size of the film or sensor format, or that reproduces perspective that generally looks "normal" to a human observer (compare the angle-of-view sketch in the 35 mm section above).

[Figure: Cross-section of a typical short-focus wide-angle lens.]

Wide-angle lens - a lens that reproduces perspective that generally looks "wider" than a normal lens.
The problem posed by the design of wide-angle lenses is to bring light from a wide area to an accurate focus without causing internal flare. Wide-angle lenses therefore tend to have more elements than a normal lens, to help refract the light sufficiently while still minimising aberrations, with light-trapping baffles added between the lens elements.

[Figure: Cross-section of a typical retrofocus wide-angle lens.]

Extreme or ultra-wide-angle lens - a wide-angle lens with an angle of view above 90 degrees.[4] Extreme wide-angle lenses share the same issues as ordinary wide-angle lenses, but the focal length of such lenses may be so short that there is insufficient physical space in front of the film or sensor plane to construct the lens. This problem is resolved by constructing the lens as an inverted telephoto, or retrofocus, with the front element having a very short focal length, often with a highly exaggerated convex front surface, and behind it a strongly negative lens grouping that extends the cone of focused rays so that they can be brought to focus at a reasonable distance.

[Figure: Cross-section of a typical telephoto lens. L1 - tele positive lens group; L2 - tele negative lens group; D - diaphragm.]

Fisheye lens - an extreme wide-angle lens with a strongly convex front element. Spherical aberration is usually pronounced and sometimes enhanced for special effect. Optically designed as a reverse telephoto to enable the lens to fit into a standard mount, as the focal length can be less than the distance from lens mount to focal plane.

Long-focus lens - a lens with a focal length greater than the diagonal of the film frame or sensor. Long-focus lenses are relatively simple to design, the challenges being comparable to the design of a prime lens. However, as the focal length increases, the length of the lens and the size of the objective increase, and weight quickly becomes a significant design issue in retaining utility and practicality for the lens in use. In addition, because the light path through the lens is long and glancing, baffles to control flare grow in importance.

Telephoto lens - an optically compressed version of the long-focus lens. The design of telephoto lenses reduces some of the problems encountered by designers of long-focus lenses. In particular, telephoto lenses are typically much shorter and may be lighter for equivalent focal length and aperture. However, telephoto designs increase the number of lens elements and can introduce flare and exacerbate some optical aberrations.

Catadioptric lens - a form of telephoto lens, but with a light path that doubles back on itself and with an objective that is a mirror combined with some form of aberration-correcting lens (a catadioptric system) rather than just a lens. A centrally placed secondary mirror and usually an additional small lens group bring the light to focus. Such lenses are very lightweight and can easily deliver very long focal lengths, but they can only deliver a fixed aperture and have none of the benefits of being able to stop down the aperture to increase depth of field.

Anamorphic lenses - used principally in cinematography to produce wide-screen films where the projected image has a substantially different ratio of height to width than the image recorded on the film plane.
This is achieved by the use of a specialised lens design which compresses the image laterally at the recording stage; the film is then projected through a similar lens in the cinema to recreate the wide-screen effect. Although in some cases the anamorphic effect is achieved by using an anamorphising attachment as a supplementary element on the front of a normal lens, most films shot in anamorphic formats use specially designed anamorphic lenses, such as the Hawk lenses made by Vantage Film or Panavision's anamorphic lenses. These lenses incorporate one or more aspheric elements in their design.

Enlarger lenses :
Lenses used in photographic enlargers are required to focus light passing through a relatively small film area onto a larger area of photographic paper or film. Requirements for such lenses include the ability:
- to record even illumination over the whole field
- to record fine detail present in the film being enlarged
- to withstand frequent cycles of heating and cooling as the illumination lamp is turned on and off
- to be operated in the dark, usually by means of click stops and some luminous controls
The design of the lens is required to work effectively with light passing from near focus to far focus - exactly the reverse of a camera lens. This demands that internal light baffling within the lens is designed differently, and that the individual lens elements are designed to maximize performance for this change of direction of incident light.

Projector lenses :
Projector lenses share many of the design constraints of enlarger lenses, but with some critical differences. Projector lenses are always used at full aperture and must produce an acceptably illuminated and acceptably sharp image at full aperture. However, because projected images are almost always viewed at some distance, lack of very fine focus and slight unevenness of illumination are often acceptable. Projector lenses have to be very tolerant of prolonged high temperatures from the projector lamp and frequently have a focal length much longer than the taking lens. This allows the lens to be positioned at a greater distance from the illuminated film while still giving an acceptably sized image with the projector some distance from the screen, as the sketch below illustrates. It also permits the lens to be mounted in a relatively coarsely threaded focusing mount so that the projectionist can quickly correct any focusing errors.
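The throw-distance arithmetic behind those long projector focal lengths follows from the thin-lens relation: for a linear magnification m, the lens-to-screen distance is roughly f x (m + 1). A short Python sketch with illustrative values of my own choosing:

    def throw_distance(focal_mm, frame_mm, screen_m):
        m = screen_m * 1000.0 / frame_mm       # linear magnification
        return focal_mm * (m + 1.0) / 1000.0   # metres

    # Projecting a 36 mm wide frame onto a 2 m wide screen:
    for f in (50.0, 100.0, 150.0):             # assumed projector lenses
        print(f"{f:.0f} mm lens -> throw about {throw_distance(f, 36.0, 2.0):.1f} m")

Doubling the focal length roughly doubles the throw for the same image size, which is why a projector lens is usually much longer than the taking lens that exposed the frame.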
The wide format, which measures 3.4 x 4.3" with an image size of 2.4 x 3.9", fits the current instax WIDE 300 model. All other instax cameras use the 2.4 x 1.8" instax mini format, as do the Polaroid 300 cameras. Size is another reason these prints are less appropriate for practical applications. The mini print, which is basically the size of a credit card with the white borders included, is too small to reveal much in the way of intricate detail, and while the wide is closer in size to "standard" 3 x 5" or 4 x 6" prints, it is still not a preferred medium for instant documentation. In addition to the standard white-bordered prints, instax offers prints with playfully designed borders, including the multi-colored rainbow pack. Both formats are housed in a disposable black plastic cartridge that contains 10 sheets. The cartridge inserts easily into the back of the camera.

Integral film, the kind of instant film used by instax, works because it contains layers of emulsion dye and layers of developing dye sandwiched within its "sheet." Developing and fixing chemicals are stored in the "sack" of white border on the bottom of the image, and when the film is pushed out of the camera the developing process begins. With instax there is no need to peel off a negative image, and no shaking or putting it under your arm (for proper temperature) is required. Within an ambient temperature range of 41-104°F, just wait about two minutes and your image will appear, although it would be fun to experiment with different development temperatures. Both sizes of film are daylight balanced, ISO 800 with a resolving power of 10 lines/mm, and can handle indoor and outdoor shots equally well. However, if you are expecting the saturation of a Velvia film stock or the dynamic range of an X-Trans sensor, you're in the wrong article. Given the minimal amount of exposure, aperture, and flash control offered by the cameras, be prepared for (and excited about) lo-fi image quality with a glossy surface. With a little experience, the right light on bold colors, and the proper distance to your subject, you can expect pleasing results.

The instax mini Lineup

Currently, B&H offers four distinct instax mini models, three of which are available with color choices. In order of their complexity, from very simple to quite simple, there is the instax mini 8, with a range of candy-color options, the instax mini 25, the instax mini 70, and the instax mini 90 Neo Classic. Based on ease of use and color options, it would seem that the mini 8 is earmarked for the kiddies, although it does offer a nice black model for the "serious" shooter. There is little one needs to know about the camera, as it offers only the most basic adjustments. It features a fixed 1/60-second shutter speed, a built-in flash that always fires and, like all minis, a 60mm f/12.7 lens. On the side of the lens there are four aperture settings, which correspond to Indoor light, Cloudy/Shade, Partly Sunny, or Bright Sun; an exposure sketch at the end of this section shows roughly what scene brightness each setting is matched to. There is also a "high key" mode setting. Figuring out which to use is pretty straightforward, although there will be a margin of error. The first time I shot on a bright but cloudy day, it became clear that the proper exposure should have been Cloudy/Shade. Fortunately, you can immediately see the result of your exposure choice and adjust accordingly. One aspect of the Fujifilm design, compared to Polaroid, is that the power source for the camera and film is in the camera and not the film pack.
The mini 8 uses two AA batteries, whereas the mini 25 and mini 70 use CR2 batteries, and the mini 90 uses a rechargeable lithium-ion battery. The instax mini 25 is slightly smaller than the mini 8, uses two CR2 lithium batteries, and offers more control over exposure, including auto-variable shutter speed, exposure compensation, flash control, and a motor-driven close-up lens setting. It also has a small mirror next to the lens for easier composition of selfies.

The instax mini 70 has a fully retractable lens and is the most compact of the available models, even slightly resembling familiar point-and-shoot digital camera form factors and colors. It is marketed as ideal for selfies and also provides a mirror next to the lens. It features a Selfie Mode, which automatically sets appropriate brightness and focus distance. A self-timer mode and tripod socket are also featured. Auto shutter speed varies from 1/2 to 1/400 second. Focusing options are more advanced on the mini 70, with three distinct modes, including a "macro" mode that focuses as close as 11.8". An LCD screen displays the exposure count and shooting mode.

The instax mini 90 Neo Classic is available in black or brown and has a handsome retro design and two shutter buttons for convenient shooting in both horizontal and vertical positions. It, too, has a retractable lens design, but it is the only mini to use a rechargeable lithium-ion battery. All instax minis have a 0.37x optical viewfinder, but the mini 90 adds parallax adjustment for macro shooting. Six shooting modes are provided, including a bulb mode for up to 10-second exposures and a double-exposure mode. Modes are changed by rotating the dial around the lens or with the button and LCD on the camera's back. Its advanced flash enables better lighting for its various modes, and its LCD and button controls will be familiar to anyone used to a digital camera. Its three focusing modes, including macro, are the same as on the mini 70.

The instax WIDE

The instax WIDE 300 Instant Film Camera is shaped like a DSLR, with a large handgrip and the shutter button and power lever ergonomically located on top of that grip. It uses a ring around the lens to control its zone-focus system. The two motor-driven focus modes cover 3.0-9.8' and 9.8' to infinity. Shutter speeds run from 1/64 to 1/200 second, and exposure can be set to automatic or adjusted with +/-2 exposure compensation. The LCD screen displays the exposure counter (number of shots remaining), the Lighten-Darken control, and the Fill-in Flash mode. The WIDE 300 also comes with a close-up lens adapter.

Loading and Shooting the instax Cameras

Loading film into the instax cameras is about as easy as it gets. No sprockets or spools, no pick-ups or release buttons. Just open the camera back and place the pre-loaded black film cartridge into the camera. Well, there is one trick—make sure the yellow mark on the corner of the film cartridge is aligned with the yellow mark in the camera's chamber. When the film cartridge is in place, not askew, close the film chamber door and shoot one exposure to eject the plastic film cover, which exits through the film slot the same way as the film. After the film cover is ejected, the counter will read 10 and count down after each exposure until you are out of film and need to reload. Remember, it's best not to open the back of a camera with film in it, but with instax, even that is forgiven.
I opened the back, even touched the film cartridge where it indicates, "No fingers go here," and nothing adverse happened. [Note: professional photographer on a closed course. Do not attempt.] Shooting with instax is, by design, very simple. Yes, certain models give you some control over exposure, flash, and focal range, but the basic idea is point and shoot. As mentioned above, it will take you about one pack of film to figure out the proper exposure settings, which are controlled in broad strokes no matter the camera model. If you are only accustomed to digital photography and film is a new expense, well, unfortunately, the best way to get to know the capability of an instax is trial and error. Close focus is a welcome mode on some camera models that I encourage you to try. Keep in mind that the viewfinder does not show exactly what the lens frames, but the difference will not ruin the experience. Focus range differs among the various models, but a good rule of thumb is that for best focus, exposure, and flash illumination, your main subject should be 2-8' from you, and bold colors work well. One caveat is that, while some of the models are a bit hardier than others, they are all made from plastic and will break if dropped.
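As promised above, here is a rough sketch of what each aperture setting is matched to, using the exposure-value relation EV = log2(N²/t) adjusted for film speed. The fixed 1/60-second shutter, ISO 800 film, and f/12.7 maximum aperture are from the specifications above; the remaining three f-numbers are assumed values for illustration only, not official Fujifilm figures:

    import math

    def matched_scene_ev100(N, t, iso):
        # Scene EV (referenced to ISO 100) that a fixed exposure (N, t)
        # nails at the given film speed: log2(N^2/t) - log2(iso/100).
        return math.log2(N * N / t) - math.log2(iso / 100)

    t = 1.0 / 60    # fixed shutter speed of the mini 8
    iso = 800       # instax film speed
    # f/12.7 is documented above; the other stops are assumed, for illustration.
    for label, N in [("Indoor", 12.7), ("Cloudy/Shade", 16),
                     ("Partly Sunny", 22), ("Bright Sun", 32)]:
        print(f"{label:>13}: ~EV {matched_scene_ev100(N, t, iso):.1f} (at ISO 100)")

Higher EV numbers correspond to brighter scenes (full sun is roughly EV 15 at ISO 100), which is why the smallest aperture is reserved for Bright Sun.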
______________________________________
Camera Electronics
--------------------

Digital Video Hardware Optics

In the traditional world of film cameras, shutter speed, f-stop aperture, and film speed were the essential components of shooting static or motion pictures, with a check-and-balance system for choosing the right configuration for the right type of photograph. Originally, in the days of mechanical cameras, this was a slow and deliberate process. Once electronics were added to the mechanical machines, some of the mystery was erased, but sacrifices were made because the camera didn't "understand" the subject. Obviously, electronic cameras have come a long way, but cameras are still just electronic machines, and they still do not understand the subject matter they are required to capture. Because of my extensive experience in photography, a friend of mine asked me why his high-end Canon digital single-lens reflex (SLR) was taking blurry pictures. The pictures weren't out of focus; they showed the blur of people and objects in motion. Any electronic camera, set to fully automatic, will choose what it perceives as the best configuration for that photograph based on the lighting, the limitation of the assigned ISO setting, and the available shutter speeds, coupled with the widest aperture. To capture motion or freeze-frame the image, the fastest shutter speeds are required. Although the photographic camera has been around since the 19th century, it wasn't until 1960 that the first light meter, a selenium photo cell, was placed behind the lens for better photographic accuracy. This is important because it's the light that travels through the lens that creates the imagery. That was true in 1960 and it's still true today. The selenium photo cell may have been replaced with CCD and CMOS sensors, but the camera itself and the theory behind it haven't changed much at all.

Cameras
---------

The camera lens focuses the visual scene onto the camera sensor area point by point, and the camera electronics transform the visible image into an electrical signal. The camera video signal (containing all picture information) is made up of frequencies from 30 cycles per second, or 30 Hz, to 4.2 million cycles per second, or 4.2 MHz. The video signal is transmitted via a cable (or wirelessly) to the monitor display. Almost all security cameras in use today are color or monochrome CCD, with CMOS types rapidly emerging. These cameras are available as low-cost single printed circuit board cameras with small lenses already built in, with or without a housing, used for covert and overt surveillance applications. More expensive cameras in a housing are larger and more rugged and have a C or CS mechanical mount for accepting any type of lens. These cameras have higher resolution and light sensitivity and other electrical input/output features suitable for multiple-camera CCTV systems. CCD and CMOS cameras with LED IR illumination arrays can extend the use of these cameras to nighttime. For low-light-level (LLL) applications, intensified CCD (ICCD) and IR cameras provide the highest sensitivity and detection capability. Significant advancements in camera technology have been made in the last few years, particularly in the use of digital signal processing (DSP) in the camera and the development of the IP camera. All security cameras manufactured between the 1950s and 1980s were vacuum tube types: vidicon, silicon, or LLL types using silicon intensified target (SIT) and intensified SIT sensors.
In the 1980s the CCD and CMOS solid-state video image sensors were developed, and they remain the mainstay of the security industry. Increased consumer demand for video recorders using CCD sensors in camcorders and the CMOS sensor in digital still-frame cameras caused a technology explosion and made these small, high-resolution, high-sensitivity, monochrome and color solid-state cameras available for security systems. The security industry now has both analog and digital surveillance cameras at its disposal. Up until the mid-1990s analog cameras dominated, with only rare use of DSP electronics, and the digital Internet camera was only just being introduced to the security market. Advances in solid-state circuitry, demand from the consumer market, and the availability of the Internet were responsible for the rapid adoption of digital cameras for security applications.

The Scanning Process

Two methods are used in the camera and monitor video scanning process: raster scanning and progressive scanning. In the past, analog video systems all used the raster scanning technique; newer digital systems now use progressive scanning. All cameras use some form of scanning to generate the video picture. The following is a brief description of the analog raster scanning process and video signal (originally accompanied by a block diagram of the CCTV camera). The camera sensor converts the optical image from the lens into an electrical signal. The camera electronics process the video signal and generate a composite video signal containing the picture information (luminance and color) and horizontal and vertical synchronizing pulses. Signals are transmitted in what is called a frame of picture video, made up of two fields of information. Each field is transmitted in 1/60 of a second and the entire frame in 1/30 of a second, for a repetition rate of 30 fps. In the United States, this format is the Electronic Industries Association (EIA) standard called the NTSC system. The European standard uses 625 horizontal lines, with a field taking 1/50 of a second and a frame 1/25 of a second, for a repetition rate of 25 fps.

Raster Scanning

In the NTSC system the first picture field is created by scanning 262½ horizontal lines. The second field of the frame contains the second 262½ lines, which are synchronized so that they fall in the gaps between the first field's lines, thus producing one completely interlaced picture frame containing 525 lines. The scan lines of the second field fall exactly halfway between the lines of the first field, resulting in a 2-to-1 interlace system. As shown in Fig. 20.18, the first field starts at the upper left corner (of the camera sensor or the CRT monitor) and progresses down the sensor (or screen), line by line, until it ends at the bottom center of the scan. Likewise, the second field starts at the top center of the screen and ends at the lower right corner. Each time one line in the field traverses from the left side of the scan to the right, it corresponds to one horizontal line, as shown in the video waveform at the bottom of Fig. 20.18. The video waveform consists of negative synchronization pulses and positive picture information. The horizontal and vertical synchronization pulses are used by the video monitor (and VCR, DVR, or video printer) to synchronize the video picture and paint an exact replica, in time and intensity, of the camera scanning function onto the monitor face. Black picture information sits at the bottom of the waveform (approximately 0 V) and white picture information at the top (1 V).
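The scanning numbers above tie together with simple arithmetic. The following Python sketch reproduces them from the two defining constants (line count and nominal frame rate), with the exact broadcast figure noted as an aside:

    lines_per_frame = 525
    frames_per_second = 30        # nominal NTSC rate used in the text
    fields_per_frame = 2

    field_rate = frames_per_second * fields_per_frame      # 60 fields/s
    lines_per_field = lines_per_frame / fields_per_frame   # 262.5 lines
    line_rate = lines_per_frame * frames_per_second        # 15,750 lines/s

    print(f"field rate: {field_rate} Hz")
    print(f"lines per field: {lines_per_field}")
    print(f"horizontal line rate: {line_rate:,} Hz")
    # Aside: broadcast NTSC color actually runs at 29.97 frames/s, giving a
    # line rate of about 15,734 Hz; the 30 fps figure above follows the text.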
The amplitude of a standard NTSC signal is 1 V peak to peak (140 IRE units). In the 525-line system the picture information consists of approximately 512 lines. The lines carrying no picture information are necessary for vertical blanking, the time during which the camera electronics, or the beam in the monitor CRT, moves from the bottom back to the top to start a new field. Random-interlace cameras do not provide complete synchronization between the first and second fields. The horizontal and vertical scan frequencies are not locked together, so the fields do not interlace exactly. This condition nevertheless results in an acceptable picture, and the asynchronous condition is difficult to detect. The 2-to-1 interlace system has an advantage when multiple cameras are used with multiple monitors and/or recorders, in that it prevents jump or jitter when switching from one camera to the next. The scanning process for solid-state cameras is different. The solid-state sensor consists of an array of very small picture elements (pixels) that are read out serially (sequentially) by the camera electronics to produce the same NTSC format—525 TV lines in 1/30 of a second (30 fps).

The use of digital cameras and digital monitors has changed the way camera and monitor signals are processed, transmitted, and displayed. The final presentation on the monitor looks similar to the analog method, but instead of 525 horizontal lines (NTSC system), individual pixels are seen in a row and column format. In the digital system the camera scene is divided into rows and columns of individual pixels (small points in the scene), each representing the light intensity and color for one point in the scene. The digitized scene signal is transmitted to the digital display, be it LCD, plasma, or other, and reproduced on the monitor screen pixel by pixel, providing a faithful representation of the original scene.

Digital and Progressive Scan

Digital scanning is accomplished either in the 2-to-1 interlace mode, as in the analog system, or in a progressive mode. In the progressive mode each line is scanned in linear sequence: line 1, then line 2, line 3, and so forth. Solid-state camera sensors and monitor displays can be manufactured with a variety of horizontal and vertical pixel formats. The standard aspect ratio is 4:3, as in the analog system, and 16:9 for wide screen. Likewise, many different combinations of pixel counts are available in sensors and displays. Some standard formats for color CCD cameras are 512 h × 492 v for 330 TV lines of resolution and 768 h × 494 v for 480 TV lines, and for color LCD monitors 1280 h × 1024 v.

Solid-State Cameras

Video security cameras have gone through rapid technological change from the second half of the 1980s to the present. For decades the vidicon tube camera was the only security camera available. In the 1980s the more sensitive and rugged silicon-diode tube camera was the best available. In the late 1980s the invention and development of the digital CCD, and later the CMOS camera, replaced the tube camera. This technology coincided with rapid advancement in DSP in cameras, the IP camera, and the use of digital transmission of the video signal over LANs, WANs, and the Internet. The two generic solid-state cameras that account for most security applications are the CCD and the CMOS. The first generation of solid-state cameras available from most manufacturers had 2/3-in. (sensor diagonal) and 1/2-in. sensor formats.
As the technology improved, smaller formats evolved. Most solid-state cameras in use today are available in three image sensor formats: 1/2, 1/3, and 1/4 in. The 1/2-in. format produces higher resolution and sensitivity at a higher cost. The 1/2-in. and smaller formats permit the use of smaller, less expensive lenses compared with the larger formats. Many manufacturers now produce 1/3- and 1/4-in. format cameras with excellent resolution and light sensitivity. Solid-state sensor cameras are superior to their predecessors because of their (1) precise, repeatable pixel geometry, (2) low power requirements, (3) small size, (4) excellent color rendition and stability, and (5) ruggedness and long life expectancy. At present, solid-state cameras have settled into three main categories: (1) analog, (2) digital, and (3) Internet.

Analog

Analog cameras have been with the industry since CCTV was first used in security. Their electronics are straightforward and the technology is still used in many applications.

Digital

Since the end of the 1990s there has been increased use of DSP in cameras. It significantly improves the performance of the camera by (1) automatically adjusting to large light-level changes (eliminating the need for an automatic iris), (2) integrating video motion detection (VMD) into the camera, and (3) automatically switching the camera from color operation to higher-sensitivity monochrome operation, among other features and enhancements.

Internet

The most recent camera technology advancement is manifest in the IP camera. This camera is configured with electronics that connect it to the Internet and the World Wide Web through an Internet service provider. Each camera is given a registered Internet address and can transmit the video image anywhere on the network. This is really remote video monitoring at its best. The camera site is viewed from anywhere by entering the camera's Internet address (ID number) and the proper password. Password security is used so that only authorized users can enter the website and view the camera image. Two-way communication allows the user to control camera parameters and direct the camera's operation (pan, tilt, zoom, etc.) from the monitoring site.
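As a concrete illustration of remote viewing, many IP cameras expose their stream over a protocol such as RTSP, which a few lines of Python with OpenCV can display. This is a minimal sketch under stated assumptions: the address, credentials, and stream path below are placeholders, not a real device or any particular vendor's API:

    # Minimal sketch of viewing an IP camera stream; assumes OpenCV is
    # installed and the camera exposes RTSP. The URL is a placeholder
    # (192.0.2.10 is a documentation address, not a real device).
    import cv2

    cap = cv2.VideoCapture("rtsp://user:password@192.0.2.10:554/stream1")
    while True:
        ok, frame = cap.read()
        if not ok:
            break                  # stream ended or the network dropped
        cv2.imshow("IP camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()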
LLL-Intensified Camera

When a security application requires viewing under nighttime conditions where the available light is moonlight, starlight, or other residual reflected light, and the surveillance must be covert (no active illumination such as IR LEDs), LLL-intensified CCD cameras are used. ICCD cameras have sensitivities between 100 and 1000 times higher than the best solid-state cameras. The increased sensitivity is obtained through the use of a light amplifier mounted between the lens and the CCD sensor. LLL cameras cost between 10 and 20 times more than CCD cameras.

Thermal Imaging Camera

An alternative to the ICCD camera is the thermal IR camera. Visual cameras see only visible light energy, from the blue end of the visible spectrum to the red end (approximately 400–700 nm). Some monochrome cameras see beyond the visible region into the near-IR region of the spectrum, up to 1000 nm; this IR energy, however, is not thermal IR energy. Thermal IR cameras using thermal sensors respond to thermal energy in the 3–5 and 8–14 μm bands. The IR sensors respond to changes in the heat (thermal) energy emitted by targets in the scene. Thermal imaging cameras can operate in complete darkness, as they require no visible or IR illumination whatsoever. They are truly passive nighttime monochrome imaging sensors. They can detect humans and any other warm objects (animals, vehicle engines, ships, aircraft, warm/hot spots in buildings) against a scene background.

Panoramic 360 degree Camera

Powerful mathematical techniques combined with a unique 360 degree panoramic lens have made the 360 degree panoramic camera possible. In operation the lens collects and focuses a scene covering 360 degrees horizontally by up to 90 degrees vertically (one-half of a sphere; a hemisphere) onto the camera sensor. The camera/lens is located at the origin (0), and the scene is represented by the surface of the hemisphere. A small part (slice) of the scene area (A, B, C, D) is "mapped" onto the sensor as a, b, c, d; in this way the full scene is mapped onto the sensor. Direct presentation of the resulting donut-ring video image on the monitor does not produce a useful picture. That is where a powerful mathematical algorithm comes in: digital processing in the computer transforms the donut-shaped image into the normal horizontal-and-vertical format seen on a monitor. The full 0–360 degree horizontal by 90 degree vertical image cannot be presented on a monitor in a useful way—there is just too much picture "squeezed" into the small screen area—so the software displays only a section of the entire scene at any particular time. The main attributes of the panoramic system are: (1) capturing a full 360 degree FOV, (2) the ability to digitally pan/tilt to anywhere in the scene and digitally zoom into any scene area, (3) no moving parts (no motors, etc., that can wear out), and (4) allowing multiple operators to view any part of the scene in real time or at a later time. The panoramic camera requires a high-resolution sensor, since so much scene information is contained in the image. Camera technology has progressed so that such digital cameras are available and can present a good image of a zoomed-in portion of the panoramic scene.
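The donut-to-panorama transformation described above is, at its core, a polar-to-rectangular remapping. The following Python sketch is a simplified nearest-neighbour version, assuming NumPy, a grayscale image, and a donut centred in the frame; a real system would calibrate the centre and radii and interpolate:

    # Minimal sketch of the "donut to panorama" unwarping described above.
    import numpy as np

    def unwrap_donut(img, r_inner, r_outer, out_w=1024, out_h=256):
        cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0
        out = np.zeros((out_h, out_w), dtype=img.dtype)
        for y in range(out_h):
            # Map output rows to radii between the inner and outer donut edges.
            r = r_inner + (r_outer - r_inner) * (y / (out_h - 1))
            for x in range(out_w):
                theta = 2.0 * np.pi * x / out_w     # 0..360 degrees horizontal
                sx = int(cx + r * np.cos(theta))
                sy = int(cy + r * np.sin(theta))
                if 0 <= sy < img.shape[0] and 0 <= sx < img.shape[1]:
                    out[out_h - 1 - y, x] = img[sy, sx]
        return out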
The ViDi Labs SD/HD test chart
------------------------------

In order to help you determine your camera's resolution, as well as check other video details, ViDi Labs Pty. Ltd. has designed this special test chart in A3+ format, which combines three charts in one: for testing standard definition (SD) with a 4:3 aspect ratio, high definition (HD) with a 16:9 aspect ratio, and megapixel (MP) cameras and systems with a 3:2 aspect ratio. We have tried to make it as accurate and informative as possible, and although it can be used in broadcast applications it should not be taken as a substitute for the various broadcast test charts. Its primary intention is to be used in the CCTV industry, as an objective guide in comparing different cameras, encoders, transmission, recording, and decoding systems. Using our experience and knowledge from the previously designed CCTV Labs test chart, as well as feedback from numerous users around the world, we have designed this SD/HD/MP test chart from the ground up, adding many new and useful features while preserving the useful ones from the previous design. We kept the details used to verify face identification as per the VBG (Verwaltungs-Berufsgenossenschaft) recommendation Installationshinweise für Optische Raumüberwachungsanlagen (ORÜA) SP 9.7/5, and compliant with the Australian Standard AS 4806.2.

With this chart you can check many other details of an analogue or digital video signal: primarily the resolution, but also bandwidth, monitor linearity, gamma, color reproduction, impedance matching, reflection, encoder and decoder quality, compression levels, and the quality of details when identifying faces, playing cards, numbers, and characters.

Before you start testing

Lenses

For the best picture quality you must first select a very good lens (one with equal or better resolution than the CCD/CMOS chip itself). In order to minimise the opto-mechanical errors typically found in vari-focal lenses, we suggest using a good quality fixed focal length manual-iris lens, or perhaps a very good manual zoom lens. The lens should be suited to the chip size used on the camera, i.e. its projection circle should cover the imaging chip completely, in addition to offering superior resolution for the appropriate camera. Avoid vari-focal lenses, especially if resolution is to be tested. Shorter focal lengths, giving angles of view wider than 30°, should usually be avoided because of the spherical image distortion they may introduce. A good choice for 1/2" CCD cameras would be an 8 mm, 12 mm, 16 mm, or 25 mm lens; for 1/3" CCD cameras, a 6 mm, 8 mm, 12 mm, or 16 mm lens. Since the introduction of megapixel and HD cameras there are "mega-pixel" lenses that can be used. Although the name "mega-pixel" on a lens is not necessarily a guarantee of superior optical quality, one would assume it to be better quality than just an average "vari-focal" CCTV lens.
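To check how close a given chip/lens combination sits to the suggested 30° limit, the horizontal angle of view can be computed from the sensor width and focal length. A minimal Python sketch, using commonly quoted nominal sensor widths (approximate industry figures, not manufacturer data):

    import math

    def angle_of_view_deg(sensor_width_mm, focal_length_mm):
        # Horizontal angle of view of a rectilinear lens.
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    # Nominal sensor widths in mm (approximate): 1/2" ~ 6.4, 1/3" ~ 4.8.
    for fmt, w, focals in [('1/2"', 6.4, (8, 12, 16, 25)),
                           ('1/3"', 4.8, (6, 8, 12, 16))]:
        for f in focals:
            print(f'{fmt} chip, {f} mm lens: ~{angle_of_view_deg(w, f):.0f} deg horizontal')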
Monitors
----------

Displaying the best and most accurate picture is of paramount importance during testing. If you are using the chart for testing only analogue SD cameras and systems, it is recommended that you use a high-resolution interlaced-scanning CRT monitor with a CVBS (Composite Video Signal) input and an underscanning feature. High resolution in the analogue world means a monitor that can offer at least 500 TVL. Colour monitors are acceptable only if they are of broadcast, or near-broadcast, quality. Such monitors are harder to find these days, as many monitor manufacturers have ceased production, but good quality CVBS monitors can still be found; a supplier of broadcast equipment near you is a good place to start. Understandably, cameras having over 500 TV lines of horizontal resolution cannot have their resolution tested with such monitors; even higher quality monitors are needed, which are found only in the broadcast industry. In the past, when testing camera resolution, the best choice was a high quality monochrome (B/W) monitor, since its resolution, not being limited by an RGB mask, reaches 1000 TV lines. Unfortunately, these are almost impossible to find these days. The next and more common option for a good quality display nowadays is an LCD or plasma screen. Some CCTV suppliers offer LCD monitors with BNC or RCA inputs for composite video signals too. If an LCD monitor is used for testing your analogue cameras, it is important to understand that the only way to display composite video on an LCD monitor is for the monitor itself to convert the analogue signal to digital (A/D). The quality of this conversion, combined with the quality of the LCD display, determines how fine the details you can see will be. The LCD monitor may have a high resolution (pixel count) that is good for your computer image but may fail when showing composite video. The reason lies in the A/D conversion itself and the quality of its circuitry (A/D precision and upsampling quality). So, caution has to be exercised when using LCD monitors with CVBS inputs for measuring analogue camera resolution. When using LCD monitors for testing your digital SD, HD, and megapixel cameras, the first rule is to set the video driver card (of the computer that decodes the video) to run in the native resolution mode of the monitor. The native resolution is usually shown in the monitor's pixel count specification. Furthermore, if an SD video signal is decoded and displayed on an LCD monitor with a higher pixel count than the SD signal itself (e.g. a PAL digitised signal of 768x576 pixels displayed on an XGA monitor of 1024x768), the best image quality is obtained when the signal is shown in the native resolution of that image, i.e. 768x576.

Tripod

If you are using a longer focal length lens on your camera, you will have to position the camera further away from the test chart. For this purpose it is recommended that you use a photographic tripod. Some users prefer a "vertical" setup rather than a "horizontal" one, whereby the test chart is positioned on the floor and the camera looks down on it from above; this can make the perpendicularity setup easier. In this case, a larger tripod with an adjustable mounting head is recommended, so that the camera can be positioned vertically, looking at the test chart. There are photographic tripods on the market that are very suitable for such mounting.

Light

When the test chart is positioned for optimal viewing and testing, controlled and uniform light is needed to illuminate the test chart surface. One of the most difficult things to control is the color temperature of the light source used in the testing. This is even more critical if color white balance is tested. Caution has to be exercised when testing for correct color and white balance, as many parameters influence these values. Traditionally, tungsten lights are used to illuminate the chart from either side, at an angle steep enough not to cause reflection from the test chart surface while still illuminating the chart uniformly. Tungsten light has different color temperatures depending on the wattage and the type of light. Typically, a 100 W tungsten light globe produces an equivalent color temperature of 2870°K, whilst professional photographic lights are designed for around 3200°K. Per broadcast standards, a resolution measurement requires illumination of 2000 lux at a 3100°K light source. In CCTV we allow for variation on these numbers, because we rarely have controlled illumination as in broadcast, but it is good to know what the broadcast standards are. With the progress of lighting technology it is now possible to get solid-state LED light globes with very uniform distribution of light (which is the important bit when checking camera response). In practice, many of you will probably use natural light, in which case the main consideration is to have uniform distribution of light across the chart's surface. The chart has a matte finish in order to minimise reflections, but care should still be taken not to expose the chart to direct sunlight for prolonged periods of time, as the UV rays may change the colour pigments. The overall reflectivity of the ViDi Labs test chart v.4.1 is 60%. This number can be used when making illumination calculations, especially at low light levels.
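For such illumination calculations, the luminance of a matte (approximately Lambertian, an assumption I am adding) surface follows L = E × ρ / π, with E the illuminance in lux and ρ the reflectivity. A quick Python sketch using the figures above:

    import math

    def luminance_cd_m2(illuminance_lux, reflectivity):
        # Luminance of a matte (Lambertian) surface: L = E * rho / pi.
        return illuminance_lux * reflectivity / math.pi

    # Broadcast-style test illumination from the text: ~2000 lux on a 60% chart.
    print(f"{luminance_cd_m2(2000, 0.60):.0f} cd/m^2")   # ~382 cd/m^2
    # At a low 10 lux the chart luminance drops to roughly 1.9 cd/m^2.
    print(f"{luminance_cd_m2(10, 0.60):.2f} cd/m^2")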
Testing SD / HD or MP

This test chart actually carries three aspect ratios on one chart. The aspect ratios, as well as the resolution of each part, have been accurately calculated and fine-tuned so that the chart can be used as a Standard Definition test chart with a 4:3 aspect ratio, as a High Definition test chart with a 16:9 aspect ratio, and as a MegaPixel chart with a 3:2 aspect ratio. Since most measurements will be made with SD and HD cameras, we have made the indicators for SD white in color (the edge triangles and focus stars) and the indicators for HD yellow (the edge triangles and focus stars). Similar logic applies to the indicators of SD analogue resolution in TVL (usually black text on white or gray) and the resolution in pixels for HD, shown with yellow numbers (under the sweep pattern) or black on a yellow area (for the resolution wedges). Only when an analogue (SD) camera is adjusted to view exactly to the white/black edges will the measurements for resolution, bandwidth, face identification, and the other detail parameters be accurate. Likewise, the measurements for resolution, pixel count, face identification, and the other detail parameters will only be accurate when an HD camera is adjusted to view exactly to the yellow/black edges. Finally, a MegaPixel camera with a 3:2 aspect ratio can also be tested; in this case the complete chart has to be in view, up to the white/black arrows top and bottom and up to the yellow/black arrows left and right. In such a case, an approximate pixel count can be measured using the yellow/black numbers.

Setup procedure

Position the chart horizontally and perpendicular to the optical axis of the lens. When testing SD systems, the camera has to see a full image of the SD chart exactly to the white triangles around the black frame. To see this you must switch the CVBS monitor to the underscan position so you can see 100% of the image; without an under-scanning monitor it is not possible to test resolution accurately. When testing HD systems, the camera has to see a full image of the HD chart exactly to the yellow triangles/arrows around the black frame. The accurate positioning of the camera for SD and HD systems, respectively, applies to testing resolution, pixel count, bandwidth, face identification, playing cards, and number-plate detection. Other visual parameters, such as colour, linearity, A/D conversion, and compression artefacts, can be determined/measured without having to worry about exact positioning. Illuminate the chart with two diffused lights, one on each side, while trying to avoid light reflection off the chart. For a more accurate resolution test, the illumination of the test chart, according to broadcast standards, should be around 2000 lux, but anything above 1000 lux may still provide satisfactory results as long as the illumination is constant. It is an advantage to have the illuminating lights controlled by a light dimmer, because then you can also test the camera's minimum illumination. Naturally, if this needs to be tested, the whole operation would need to be conducted in a room without any additional light. Also, if you want to check the low-light-level performance of your camera, you will need to obtain a precise lux-meter. When using color cameras, please note that most cameras render colors better when switched on after the lights have been turned on, so that the color white balance circuit detects its white point.
Position the camera on a tripod, or a fixed bracket, at a distance which allows you to see a sharp image of the full test chart. The best focus sharpness can be achieved by watching the centre of the "focus target" section. Set the lens iris to a middle position (F/5.6 or F/8), as this is where most lenses deliver their best optical resolution, and then adjust the light dimmer to get a full dynamic range video signal. To see this, an oscilloscope will be necessary for analog cameras; for digital, or IP, cameras, a good quality computer with viewing/decoding software will be needed. Care should be taken with the network connection quality, such as the network cable, termination, and the network switch. For analog cameras, make sure that all the impedances are matched, i.e., the camera "sees" 75 Ohms at the end of the coaxial line. When measuring the minimum illumination of a camera, it is expected that all video processing circuitry inside the camera electronics is turned off, e.g. AGC, Dynamic Range, IR Night Mode, CCD-iris, BLC, and similar.

What you can test
-----------------

To check the camera resolution you have to determine the point at which the five wedge lines converge into four or three. That is the point where the resolution limit can be read off the chart, but only when the camera view is positioned exactly to the previously discussed white/black or yellow/black arrows. The example on the right shows a horizontal resolution of approximately 1,300 pixels when an HD camera is tested. If, hypothetically, this image were from a 4:3 aspect ratio camera, it would have an equivalent analogue resolution of approximately 900 TVL. If you want to check the video bandwidth of the signal, read the megahertz number next to the finest group of lines where black and white lines are still distinguishable. In the example on the right, one can see that the analogue bandwidth indication ends at 9.4 MHz (or 750 TVL), which is sufficient to cover what an analogue SD camera can produce. The real example to the right shows blurring of the 1,400-pixel pattern; this is consistent with the same camera's result in the wedge example above. That a camera is designed to produce a true HD signal (1920x1080) doesn't necessarily mean it will show 1,920 pixels on the test chart: details can be lost to compression, misalignment, low light, or a bad lens/poor focus. The concentric star circle in the middle, as well as those around the chart corners, can be used for easy focusing and/or back-focus adjustments. Prior to doing this, you should check the exact distance between the camera and the test chart. In most cases, the distance should be measured to the plane where the CCD chip resides; some lenses, though, may have the distance indicator referring to the front part of the lens. The main circle may indicate the non-linearity of (usually) a CRT monitor, but it can also be used to check the A/D circuitry of cameras, or monitor stretching, as in cases where there is no pixel-for-pixel mapping. The imaging CCD/CMOS chips, by design, have no geometric distortion, but it is possible that A/D or compression circuitry introduces some non-linearity, and this can be checked with the main circle in the middle. The big circle in the centre can also be used to see whether a signal is interlaced or progressive (progressive will show smoother lines).
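The pixel, TVL, and megahertz readings above are linked by two common rules of thumb: TV lines are counted per picture height, so horizontal pixels scale by the aspect ratio, and analogue bandwidth converts at roughly 80 TVL per MHz. A small Python sketch (rules of thumb, not exact standards, but they land close to the chart's markings):

    def tvl_from_pixels(h_pixels, aspect_w=4, aspect_h=3):
        # TV lines are counted per picture *height*, so scale by aspect ratio.
        return h_pixels * aspect_h / aspect_w

    def tvl_from_bandwidth(mhz, tvl_per_mhz=80):
        # Classic analogue rule of thumb: ~80 TVL per MHz of video bandwidth.
        return mhz * tvl_per_mhz

    print(tvl_from_pixels(1300))     # ~975, i.e. roughly the "900 TVL" above
    print(tvl_from_bandwidth(9.4))   # ~752, matching the 750 TVL marking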
The smaller starred circles around the corners of the chart can be used not only for focus and back-focus adjustments but also for checking lens distortions, which typically appear near the corners. On some cameras it is possible for the lens optical axis to be misaligned, i.e. the lens not being exactly perpendicular to the imaging chip, in which case the four small starred circles around the test chart will not appear equally sharp. The wide black and white bars on the right-hand side have a twofold function. Firstly, they will show you whether your impedances are matched properly or whether you have signal reflection, i.e. spillage of white into the black area (and the other way around), which is a sign of reflections from the end of the line of an analogue camera. The same clean black/white bars can show you the quality of a long cable run (when analogue cameras are tested) or, in the case of a DVR/encoder/decoder, its decoding/playback quality.

The children's head shots, as well as the white and yellow patterns on the right-hand side, can be used to indicate face identification as per Australian Standard AS 4806.2, where a person's head needs to occupy around 15% of the SD test chart height. This is equivalent to having 100% of a person's height in the field of view, as per AS 4806.2. The equivalent head dimensions have been calculated and represented with another two smaller shots of the same kind, one referring to HD720 and the other to HD1080 when using the 16:9 portion of the chart. The same can be measured by using the white and yellow patterns, as per the VBG (Verwaltungs-Berufsgenossenschaft) recommendations. The white pattern refers to SD cameras with a 4:3 aspect ratio, and the yellow ones refer to HD720 and HD1080 respectively. If you can distinguish the pattern near the green letter C, then you can identify a face with such a system. If your system can further distinguish the B, or, even better, the A pattern, then its performance exceeds that of a system where only C can be distinguished. However, distinguishing the C pattern alone is sufficient to comply with the standards. NOTE: It is the total system performance that defines the measured details. This includes the lens optical quality and sharpness, the angle of coverage (lens focal length), the camera in general (imaging chip size, number of pixels, dynamic range, noise), the illumination of the chart, the compression quality, the decoder quality, and, finally, the monitor itself. This is why this whole testing procedure refers to system measurement rather than camera measurement only. We assume the observer has 20/20 vision. Furthermore, the skin color of the three kids' faces will give you a good indication of Caucasian flesh colors. If you are testing cameras for their color balance you must consider the light source color temperature and the automatic white balance of the camera, if any. In such a case you should take into account the color temperature of your light source, which, in the case of tungsten globes, is around 2800°K. Daylight is the simplest and easiest source to use for such testing. Avoid testing the color performance of a camera under fluorescent lights or mixed sources of light (e.g. tungsten and fluorescent). The color scale at the top of the chart is made to represent typical broadcast electronic color bars, consisting of white, yellow, cyan, green, magenta, red, blue, and black.
These colors are usually reproduced electronically with pure saturated combinations of red, green, and blue, typically expressed as an intensity of each of these primary colors from 0 to 255 (8-bit colors). These coordinates are shown under each of the colors. If you have a vectorscope you can check the color output on one of the lines scanning the color bar. As with any color reproduction system, the color temperature of the source is very important and, in most cases, it should be a daylight source. NOTE: The test chart is a hard copy of computer-created artwork. Since the test chart is on a printed medium, it uses a different color space than the computer color space (subtractive versus additive color mixing). Because of these differences it is almost impossible to replicate these colors on paper with 100% accuracy. We have certainly used all available technology to make the colors as close as possible, by using the Spyder3 color matching system, but with time, and with the chart's exposure to UV and humidity, the color pigments age and change. For these reasons we do not recommend using the color bars as an absolute color reference.

The continuous RGB colour strip below the colour bars is made to change colours gradually, from red through green to blue at the end. This can be used to check how good the digital circuitry is, or how much compression an encoder applies. If the end result shows obvious discontinuity in this gradual change of colors, it indicates that either the encoder or the level of compression is not at its best. Similarly, using the gray scales at the bottom of the chart, a few things can be checked and/or adjusted. Using the 11 gray-scale steps, the gamma response curve of a camera/monitor can be checked: all 11 steps should be clearly distinguishable. Monitors can also be adjusted using these steps by tweaking the contrast/brightness so as to see all the steps clearly. When doing so, analogue cameras have to be set so that the video signal is a 1 Vpp video signal while viewing the full image of the test chart. Observe and note the light conditions in the room while setting this up, as they dictate the contrast/brightness setting combination. The continuously changing strips, from black to white and from white to black, are made so that their peak of white and peak of black, respectively, fall in the middle of the strips. These can also be used to verify and check the A/D conversion of a streamer, encoder, or compression circuitry; the smoother these strips appear when displayed on a screen, the better the system. Always use a minimum amount of light in the monitor room so that you can set the monitor brightness pot at its lowest position. When this is the case, the sharpness of the electron beam of the monitor's CRT is at its maximum, since it uses fewer electrons. The monitor picture is then not only sharper, but the life expectancy of the phosphor is prolonged. Modern displays (LCD, plasma, and similar) do not use an electron beam that could be affected by the brightness/contrast settings, but they too will display a better picture, and better dynamic range, if the brightness/contrast are set correctly, using the 11 steps mentioned previously.
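The 11-step check is easy to reason about numerically. A short Python sketch (the 2.2 display gamma is a typical assumed value, not something specified by the chart):

    # Sketch of an 11-step gray scale like the one on the chart, and how a
    # display gamma reshapes it; values are illustrative.
    steps = [i / 10 for i in range(11)]          # 0.0, 0.1, ..., 1.0 (11 steps)
    gamma = 2.2                                   # typical display gamma (assumed)
    displayed = [round(s ** gamma, 3) for s in steps]
    print(displayed)
    # If neighbouring steps come out indistinguishable on screen, the
    # contrast/brightness (or the camera's gamma setting) needs adjusting.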
Lately, there has been an increasing number of LCD monitors with composite video inputs, designed to combine a CCTV analogue display with HD. As noted near the beginning of this manual, under the Monitors heading, please be aware of the re-sampling such monitors perform in order to fit a composite analogue video signal onto a (typically) XGA screen (1024x768 pixels). Because of this, LCD monitors are not recommended for resolution testing. If image testing needs to be done using a frame grabber board on a PC, use the highest possible resolution you can find, but not less than the full ITU-601 recommendation (720x576 for PAL and 720x480 for NTSC). Again, in such a case, native camera resolution testing cannot be performed accurately, as the signal is digitised by the frame grabber. If, however, various digital video recorders are to be compared, then the "artificial (digitised) resolution" can be checked and compared.

The "ABC" fonts are designed to go from a 60-point font size for "A" down to 4 points for "Z." This can also be used for some testing and comparisons amongst systems. For the casino players, we have inserted playing cards as visual references. The cards may be used to see how well your system can resolve card details. If you can recognize all four of the cards (the Ace, the King, the Queen, and the Jack), then your system is pretty good. Similar logic was used with the playing card setup for the SD, HD720, and HD1080 standards, hence there are three different card sizes. It goes without saying that when the SD cards are viewed, the CCTV camera should be set to view the 4:3 portion of the chart, exactly to the white/black arrows. Similarly, when the HD cards are viewed, the camera has to be set to view the 16:9 portion of the test chart, exactly to the yellow/black arrows. Typically, the playing cards' height should not be smaller than approximately 50 pixels on the screen, irrespective of whether this is an SD, HD720, or HD1080 system. The different playing-card sizes on this test chart are calculated so that they are displayed on your screen at approximately 50 pixels in height. NOTE: In casino systems the illumination levels are very low, typically around 10 lux or lower. Such low illumination may influence card recognition too, so, if realistic testing is to be done, the chart illumination should be around the same low levels.

Finally, the four corners of the test chart have 90% black and 10% white areas. Although these corners fall outside the 4:3 and 16:9 chart areas, they may still be used to check on system reproduction quality. If you can distinguish these areas from the 100% black border or the 100% white (0% black) frame with the "3:2" numbers, it suggests that your overall system is preserving such details and can be classified as very good. If this is not the case, adjustments need to be made either in the camera's A/D section (typically where the gamma or brightness/contrast settings are) or in the encoder/compression section.

IMPORTANT NOTE: The life expectancy of the color pigments on the test chart is approximately two years, and it can be shorter if the chart is exposed to sunlight or humidity for longer periods. It is therefore advised that a new copy be ordered after two years. A special discount is available for existing customers.

Some real-life examples

Here are some more snapshots from various real camera testing. The quality of this reproduction is somewhat reduced by the PDF/JPG compression of the images, as well as by the quality of this print in the book, but the main idea can still be seen.
For example, the four snippets below are from the same HD camera, using H.264 video compression set to 4 Mb/s, with four different lenses, all of which are claimed to be "mega-pixel" lenses. The snippets are in color and at HD resolution (1920 pixels wide), but this book is printed in B/W, so readers may not see the real difference; anybody interested can e-mail me and I will make the actual files available for inspection. It is quite obvious that the first lens has the worst optical quality and the last has the best. If you were to use this camera/lens combination for face identification, the first one would hardly qualify: the smallest pattern in the yellow group of patterns is not distinguishable, which means face identification will hardly be possible. Yet, as the bottom snippet shows, the same camera with the same compression settings and another (better) lens will pass. More real-example snapshots are shown below, where two different lenses are tested and compared. Of the two lenses, the top one looks sharp throughout the test chart, while the bottom one appears reasonably sharp in the middle but becomes blurry towards the periphery of the test chart. Resolution in low light cannot be measured, due to the high noise content. As indicated below, lenses viewing the test chart from a very close distance usually produce geometric "barrel" distortion. This is not recommended for accurate resolution measurement but, if there is no choice, it may still allow a reasonably good quality measurement. The ViDi Labs test chart can be used for various other tests too—precise focusing at a certain distance, for example—or simply to compare details and video reproduction in various circumstances. The example on the right shows one more way of using the ViDi Labs test chart: checking the dynamic range of various cameras, as conducted by IPVM. And last, but not least, on the next page there is a graphical summary of the key measuring points in the test chart and their meaning.

_____________________________________
Animatronic eyes
------------------

Animatronic Eye with an Electromagnetic Drive and Fluid Suspension and with Video Capability: an animatronic eye with fluid suspension, electromagnetic drive, and video capability. The eye assembly includes a spherical, hollow outer shell that contains a suspension liquid. An inner sphere is positioned in the outer shell, in the suspension liquid, so as to be centrally floated at a distance away from the shell wall. The inner sphere includes painted portions providing a sclera and iris, and includes an unpainted rear portion and front portion or pupil. The shell, liquid, and inner sphere have matching indices of refraction, such that the interfaces between the components are not readily observed. A camera is provided adjacent a rear portion of the outer shell to receive light passing through the shell, liquid, and inner sphere. A drive assembly is provided, including permanent magnets on the inner sphere that are driven by electromagnetic coils on the outer shell to provide frictionless yaw and pitch movements simulating eye movements.

BACKGROUND
1. Field of the Description

The present description relates, in general, to apparatus for simulating a human or human-like eye, such as a robot or animatronic eye or a prosthetic eye, and, more particularly, to an animatronic or prosthetic eye assembly that utilizes fluid suspension, is electromagnetically driven, and provides optical functions such as video capability.

2. Relevant Background

Animatronics is used widely in the entertainment industry to bring mechanized puppets, human and human-like figures, and characters to life. Animatronics is generally thought of as the use of electronics and robotics to make inanimate objects appear to be alive. Animatronics is used in moviemaking to provide realistic and lifelike action in front of the camera, as well as in other entertainment settings, such as theme parks, e.g., to provide lifelike characters in a theme ride or a show. Animatronics is often used in situations where it may be too costly or dangerous for a live actor to provide a performance. Animatronics may be computer controlled or manually controlled, with actuation of specific movements obtained with electric motors, pneumatic cylinders, hydraulic cylinders, cable-driven mechanisms, and the like, chosen to suit the particular application, including the show or ride setting or stage and the specific character parameters and movement requirements.

In the field of animatronics, there is a continuing demand for animatronic characters that better imitate humans and animals. Specifically, much of human and human-like character expression and communication is based on the eye, including eye contact, eye movement, and gaze direction, and designers of robotic eyes attempt to mimic the subtle movements and appearance of the human eye to make animatronic figures more lifelike, believable, and engaging. To date, animatronic designers have not been able to accurately replicate human eye appearance and movement, with challenges arising from the need to rotate the eye in a socket in a relatively rapid and smooth manner and from the relatively small form factor of the eye in an animatronic figure.

Many types of robotic or animatronic eyes have been created, with a number of actuating mechanisms. To actuate or rotate the eye, a drive or actuating mechanism is provided adjacent the eye, such as in the animatronic figure's head, that includes external motors, hydraulic cylinders, gears, belts, pulleys, and other mechanical drive components to drive or move a spherical or eye-shaped orb. As a result, such eye assemblies require a large amount of external space for their moving parts, and space requirements have become a major issue, as the eye itself is often dwarfed by the mechanical equipment used to move the eye up and down (tilt or pitch) and side to side (yaw). The mechanical drive equipment has moving components external to and attached to the eye that need mounting fixtures and space to move freely. In some cases, existing animatronic eye designs are somewhat unreliable and require significant amounts of maintenance or periodic replacement due, in part, to wear caused by friction of the moving parts, including the eye within a socket device. To retrofit an eye assembly, the electromechanical, pneumatic, hydraulic, or other drive or eye-movement systems typically have to be completely removed and replaced. In some cases, animatronic eyes cannot perform at the speeds needed to simulate human eye movement.
Movements may also differ from smooth human-like action when the drive has discontinuous or step-like movements, which decreases the realism of the eye. Additionally, many animatronic eye assemblies use closed-loop servo control, which requires a position or other feedback signal from optical, magnetic, potentiometer, or other position-sensing mechanisms. Further, the outer surfaces of the eye or eyeball may rub against the seat or socket walls, since it is difficult to provide a relatively frictionless support for a rotating sphere or orb; this may further slow its movement, cause wear on painted portions of the eyeball, or even prevent smooth pitch and yaw movements. More recently, there has been a demand for video capability, such as to assist in tele-operation of the animatronics or to provide vision-based interactivity (e.g., tracking the location of a person or other moving object relative to the animatronic figure and then operating the figure in response, such as by moving the eyes or the whole head). Some animatronic eye assemblies have been provided with video functionality, with some implementations positioning a tiny video camera within the eyeball itself, moving with the eyeball and with its lens at, or serving as, the lens and/or pupil of the eye. However, this creates other problems, because the camera power and signal lines may experience wear or be pinched by the movement of the eyeball, or may interfere with rotation of the eyeball, as its movement has to drag the cords that extend out of the back wall of the eyeball. Hence, there remains a need for improved designs for animatronic or robotic eye assemblies that better simulate the appearance and movements of the human eye or an animal's eye. Such designs may have a smaller form factor (or use less space for drive or movement mechanisms) than existing systems, may be designed to better control maintenance demands, and may, at least in some cases, provide video capability.

SUMMARY OF THE INVENTION

The following description provides eye assemblies that are well suited for animatronic figures or characters and also for human prosthetics. For example, an animatronic eye assembly may utilize fluid suspension for a rotatable/positionable eye (or spherical eyeball, orb, or the like) that is electromagnetically gimbaled or driven. The eye assembly may be compact in size, such that it may readily be used to retrofit or replace prior animatronic eyes that relied on external moving parts to drive the eyeball's rotation. The animatronic eye assembly may include a solid, clear plastic inner sphere that is floated or suspended within a clear liquid, and the inner sphere or eyeball, along with the suspension liquid, may be contained or housed in a close-fitting, clear plastic outer shell or housing, which also may be generally spherical in shape. The flotation or suspension fluid may have its index of refraction matched to the plastics of the eyeball and of the outer shell/housing, such that the entire eye appears to be a clear, solid plastic sphere even though it contains a rotatable eye or eyeball at its center. The outer shell, liquid, and inner sphere or eyeball may act in conjunction as a lens or lens assembly for a tiny, stationary camera (e.g., a video camera) that can be mounted to the rear portion of the outer shell.
The front (or exposed) portion of the inner sphere or eyeball may be painted to have a human eye appearance, with a center sphere surface or portion left clear to allow light to be passed to the camera (e.g., to provide a pupil of the eyeball). The eye assembly may utilize an electromagnetic drive in some embodiments, and, to this end, the inner clear sphere may have four permanent magnets or magnetic elements mounted to its top, bottom, and two sides at antipodal points about the equator or a great circle of the sphere. On the outer shell, four electromagnetic drives may be mounted so as to be associated with each of these inner sphere magnets (e.g., each electromagnetic drive is used to apply a magnetic or driving force upon one of the inner sphere magnets). The external drives may each include a pair of electromagnetic coils that are positioned adjacent and proximate to each other but, in some cases, on opposite sides of the equator (or of a great circle of the outer shell that generally bisects the outer shell into a front and back shell portion or hemisphere), with each drive equidistantly spaced about the equator such that the four drives are located at 90-degree spacings and such that opposite electromagnetic drives include antipodal coils (e.g., coils on opposite sides of the outer shell with their center points being antipodal points on the outer shell/spherical housing). The external drive coils may be lay-flat magnetic coils that may be selectively operated to generate magnetic fields (e.g., energize or drive antipodal coils concurrently) to yaw and tilt/pitch the inner sphere or eyeball.

The design of the eye assembly has no external moving parts, which eases its installation in new and retrofit animatronic applications. The optical effects achieved by the eye assembly make it appear that the entire eye assembly is rotating (or at least that the outer shell is moving) within its mounting socket or location, such as within an eye socket of an animatronic figure. Due to the magnification of the liquid in the outer shell, the inner eyeball or sphere appears to be as large as the entire outer shell, which means the eye assembly simulates a rotating eye even when the outer shell is locked into an eye socket and/or is under the facial skin of an animatronic figure. The eye assembly, with the shell, liquid, and inner sphere/eyeball (and other components in some cases) acting as a lens or lens assembly, provides a foveal view that is automatically highlighted in the camera's image, and the spherical lens structure supports a relatively wide field of view even through the relatively small entrance portion of the inner sphere or eyeball (e.g., the pupil may be less than one third of the front hemisphere of the inner sphere or orb, such as 0.1 to 0.25 of the front hemisphere surface area). In some animatronic figures, two eye assemblies are provided that act to support stereo viewing while sharing the same electrical drive signal for objects at infinity, while other arrangements provide eye assemblies that are arranged to be "toed-in," such as by using offset drive signals for their separate electromagnetic drive assemblies that may be derived from knowledge of an object's distance from the eye assemblies. To provide a prosthetic implementation, the eye assembly may be separated into two parts: a hermetically sealed plastic eyeball and a remote electromagnetic drive including coils and control components.
The sealed eyeball, contained with a suspension fluid in a substantially clear outer shell or housing, may be positioned within a human eye socket, such as in a recipient who has one functioning eye. The drive assembly may be provided remotely, such as within the frame of a set of glasses or otherwise attached/supported on the recipient's head near the skull and eye sockets. The eye assembly may be gimbaled or driven magnetically through the skull, such as by providing electromagnetic coils or a positionable permanent magnet in the eyeglasses, and the motion of the prosthetic eye in the eye assembly may be controlled so as to match movement of the recipient/user's functioning eye, such as based on an eyetracker (e.g., a camera with tracking software) that may also be built into or provided on the eyeglasses. The eye assembly is attractive for use as a prosthetic due to its form factor and the lack of rotation of the outer shell within the recipient's eye socket (rotation that may be undesirable due to discomfort and other issues associated with implanting prosthetics).

More particularly, an apparatus is provided for simulating an eye, such as to provide an animatronic eye(s). The apparatus includes an outer shell with a thin wall that defines, with its inner surface, a substantially spherical inner void space and that has a substantially spherical outer surface. The outer shell has a front portion (e.g., a front hemisphere or the like) and a rear portion (e.g., a rear hemisphere or the like) that both transmit light with a first index of refraction (e.g., are substantially clear or at least highly transmissive of light, as with most glasses and many plastics). The apparatus further includes a volume of flotation or suspension liquid contained within the inner void space of the outer shell, and the liquid transmits light with a second index of refraction substantially matched to the first index of refraction (e.g., within 10 percent or less of the same index value). The apparatus also includes an inner sphere with a solid body of material with a third index of refraction that substantially matches the first and second indices of refraction, and the inner sphere is positioned within the inner void space of the outer shell to float in the suspension liquid. The apparatus further includes a camera, such as a video camera, with an image capturing device positioned adjacent or near the rear portion of the outer shell such that it receives light passing through the outer shell, the suspension liquid, and the inner sphere (e.g., these three components act as a single camera lens, with the index of refraction matching causing only the front and back surfaces of the shell to have any optical value). The rear portion of the shell may include an opaque hemisphere with an opening formed therein, and a spherical camera lens or lens element may be positioned over the opening to provide a liquid seal with an outer surface of the opaque hemisphere. This camera lens may be shaped to correct focusing of the camera, such as to provide an overall spherical shape with the shell components and/or to cause the image capture device to focus on the front portion of the outer shell (and, typically, not on the inner sphere or the suspension liquid). In some embodiments, the solid body and the wall of the outer shell may be formed of a substantially transparent plastic such as an acrylic.
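Because only the front and back surfaces of the shell retain optical power, the index-matched assembly behaves roughly like a single ball lens. A paraxial sketch using the standard ball-lens formulas; the index and diameter are assumed nominal values, not specifications from this description:

```python
# Rough paraxial sketch: treat the index-matched shell + liquid + inner sphere
# as one solid ball lens. Standard ball-lens formulas; values are assumed.

def ball_lens(n: float, diameter: float):
    efl = n * diameter / (4 * (n - 1))  # effective focal length (from center)
    bfl = efl - diameter / 2            # back focal length (behind rear surface)
    return efl, bfl

n_acrylic = 1.49   # assumed nominal index for the matched materials
d_eye = 1.5        # assumed overall eye diameter, in inches

efl, bfl = ball_lens(n_acrylic, d_eye)
print(f"effective focal length ~ {efl:.2f} in, back focal length ~ {bfl:.2f} in")
# The short back focal length suggests why a small camera can sit directly
# against the rear of the shell, with a corrective lens element trimming focus.
```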
In practice, the suspension liquid may be chosen to have a specific gravity such that the inner sphere has neutral buoyancy, whereby it floats in the center of the void space and liquid fills the space between the solid body and the inner surfaces of the outer shell wall (e.g., the body is maintained at a spaced-apart distance from the shell). For example, when the body is a plastic (and may contain further weight-adding components such as permanent magnets), the liquid may be a mixture of glycerin and water, such as ¾ glycerin and ¼ water or the like, to achieve a desired buoyancy of the body and also, in some cases, to provide a desired viscosity so as to dampen movement of the body during quick rotation (e.g., to limit movements that would otherwise continue solely on momentum or the like). In some cases, it is desirable that the inner sphere body has a diameter that is smaller than the thin wall's inner diameter but without excessive spacing, such as by providing the body with an outer diameter that is at least 80 to 90 percent or more of the inner diameter of the shell (or such that the spacing is less than about 0.5 inches, more typically less than about 0.25 inches, such as less than about 0.1 inches or the like).

The apparatus may further include an electromagnetic drive assembly that provides a set of magnetic elements (such as small permanent magnets) on the solid body of the inner sphere (such as two to four or more magnets spaced equidistantly about a great circle of this sphere). The drive assembly may also include a like number of electromagnetic drive mechanisms positioned on or proximate to the outer surface of the outer shell, and these drive mechanisms are selectively operable to apply drive magnetic fields to the magnetic elements to provide yaw and pitch movements (concurrent or independent) to the solid body. Each of the drive mechanisms may also include a restoring permanent magnet that is positioned on or proximate to a great circle (or equator) of the outer shell to apply a restoring magnetic force on each of the magnetic elements on the inner sphere body to return/maintain the body in a predefined central/neutral position in the inner void space of the outer shell (e.g., spaced apart from the wall with its center coinciding with a center of the sphere defined by the outer shell). In one embodiment, the magnetic elements include four permanent magnets spaced 90 degrees apart about a great circle of the inner sphere body, and four electromagnetic drive mechanisms are provided on the outer shell near a great circle/equator of the shell's sphere. Each of these drive mechanisms includes a pair of electromagnetic coils that are positioned adjacent to each other but on opposite sides of the great circle (e.g., mounted on opposite hemispheres of the outer shell). The coils are positioned such that pairs of the coils in opposite drives make up antipodal coil pairs (with an axis extending through their center points also extending through antipodal points of the outer shell sphere). The drive assembly is operable to concurrently operate antipodal pairs of the coils so as to apply an equal and opposite (or symmetric) magnetic drive force on the permanent magnets of the inner sphere body, whereby the inner sphere body may be caused to move through a range of yaw and pitch movements while remaining spaced apart from the wall of the outer shell.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a perspective view of an animatronic figure including an eye assembly that provides a pair of magnetically driven eyes with fluid suspension as described herein;

FIG. 2 is a front view of an animatronic eye assembly with fluid suspension, an electromagnetic drive, and video capability as may be used in the animatronic figure or character of FIG. 1;

FIG. 3 is a side view of the animatronic eye assembly of FIG. 2 showing that each of the spaced-apart magnetic drive mechanisms or components includes a pair of electromagnets adjacent to each other on opposite sides of the "equator" on an outer surface of a spherical eye housing (or container/outer shell);

FIGS. 4 and 5 are exploded front and back views of an animatronic eye assembly as described herein and as may be used in animatronic figures as shown in FIG. 1;

FIG. 6 is a functional block diagram of an animatronic eye assembly such as may be used to implement the assemblies shown in FIGS. 1-5; and

FIG. 7 illustrates a prosthetic eye assembly of one embodiment useful for providing eye rotation with a remote magnetic field generator.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Briefly, embodiments described herein are directed to a compact, fluid-suspension, electromagnetically gimbaled (or driven) eye that may be used in eye assemblies for animatronic figures as well as for human prosthetics. Each eye or eye assembly features extremely low operating power and a range of motion and saccade speeds that may exceed those of the human eye, while providing an absence of frictional wear points (e.g., between the eyeball or inner sphere and an eye socket). The design of the eye assembly has no external moving parts, which eases its installation in new and retrofit animatronic applications. The eye assembly may include a clear or substantially transparent outer shell that contains a suspension fluid and an inner orb or sphere (e.g., an eyeball). The inner sphere may be a solid plastic or similar material ball with a set of magnetic elements attached or embedded thereon. An electromagnetic drive assembly may be provided that includes a set of magnetic drive mechanisms or components attached to the outer shell, and each magnetic drive mechanism may include two magnetic coils and a permanent magnet used for restoring the inner sphere to a neutral position within the outer shell. For prosthetic applications, the eye portion (e.g., the shell, liquid, and inner sphere) may be separated from the electromagnetic drive as a hermetically sealed portion that may be placed in the eye socket, and the drive may be provided as an extra-cranially mounted magnetic drive or the like. By using a controller or driver to selectively energize opposite or antipodal pairs of the magnetic coils, the inner sphere may be caused to rotate away from the neutral position (e.g., by overcoming the restoring or retaining forces of a set of permanent magnets) as desired to simulate an eye's movements, such as to follow an object passing by an animatronic figure's head, to move as would be expected with speech or other actions of an animatronic figure, and so on.
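Such open-loop energization of antipodal coil pairs can be summarized as a simple mapping from commanded angles to coil currents. The following is a minimal sketch, not the actual controller; it assumes the roughly linear current-to-angle response and the approximately ±200 mA per ±20 degrees reported later in this description:

```python
# Minimal open-loop sketch: map commanded yaw/pitch angles to coil currents
# for the two antipodal coil pairs. Assumes the eyeball angle tracks the
# drive current roughly linearly, as the open-loop discussion suggests.

MAX_ANGLE_DEG = 20.0     # assumed yaw/pitch limit
MAX_CURRENT_MA = 200.0   # assumed full-deflection coil current per axis

def drive_currents(yaw_deg: float, pitch_deg: float):
    """Return (yaw_pair_mA, pitch_pair_mA); each antipodal pair is driven
    concurrently with the same signal so the applied forces stay symmetric."""
    def clamp(a: float) -> float:
        return max(-MAX_ANGLE_DEG, min(MAX_ANGLE_DEG, a))
    scale = MAX_CURRENT_MA / MAX_ANGLE_DEG
    return clamp(yaw_deg) * scale, clamp(pitch_deg) * scale

# Example: glance 10 degrees left and 5 degrees up.
print(drive_currents(-10.0, 5.0))   # -> (-100.0, 50.0)
```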
A video or other camera may be mounted on the rear, outer portion of the outer shell, and it may focus through the outer shell (or a lens thereon), through the suspension fluid, and through the inner sphere (which may be painted to take on the appearance of a human or other eye while leaving a rear port/window or viewing portion of the inner sphere, as well as a front/entrance window or pupil, unpainted or clear/transmissive to provide a path for light through the eye assembly). In other words, all or much of the eye assembly acts as a lens assembly for the camera, and, to support this function, the indices of refraction of the shell, suspension liquid, and inner sphere may be selected (e.g., matched or nearly correlated) to provide a unitary lens effect with a clear view through the entire structure (except for painted portions) from front to back, making a rear stationary camera possible; the camera view is supported without a large entrance pupil and with a stationary camera even during rotation of the inner sphere or eyeball. Two eye assemblies may be used to support stereo viewing or imaging, and they may share the same electrical drive signals from a controller/driver for viewing objects at infinity. Alternatively, the eyeballs or inner spheres may be toed-in by using offset drive signals derived from a knowledge or calculation of object distances (see the toe-in sketch below).

FIG. 1 illustrates an animatronic figure 100 with a head 104 supported upon a body 108 (which is only shown partially for convenience but not as a limitation). Significantly, the animatronic figure 100 includes an eye assembly 110 that may be implemented according to the following description (such as shown in FIGS. 2-6) to provide realistically moving eyes as well as an animatronic figure 100 with video capabilities. As shown in FIG. 1, the eye assembly 110 may include first and second fluid-suspension, electromagnetically driven eyes 112, 113 that each include an inner sphere or eyeball 114, 115 that may be driven through the use of magnetic forces to rotate in any direction as indicated with arrows 116, 117. The number of eyes or eye devices 112, 113 is not limiting to the invention, with some assemblies 110 including one, two, three, or more eyes/eye devices that may be driven concurrently with the same or differing drive signals (e.g., to rotate similarly or independently) or driven independently with the same or differing drivers. Two eyes or eye devices 112, 113 are shown to indicate that an animatronic figure 100 may be provided with stereo video capabilities similar to a human by providing a video camera attached in each eye or eye device 112, 113, while other embodiments may include only one eye or eye device 112, 113 or one camera (e.g., only mount a video camera on an outer shell, for stationary mounting, of one of the two eyes 112, 113).

The specific configuration of the eyes 112, 113 may be varied to implement the figure 100, but FIGS. 2 and 3 illustrate one useful eye or eye assembly 200 for providing the figure 100. As shown, the eye assembly 200 includes an outer shell 210 that may be formed of an optically clear or substantially clear material such as a plastic (e.g., a high-grade acrylic or the like), a glass, a ceramic, or the like, and it is hollow with relatively thin outer walls that have an inner surface defining an inner volume or space of the assembly 200. The shell 210 may be formed of two or more parts, such as a front and rear half or hemisphere, and may be spherical or nearly spherical in its inner and outer shapes.
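For the toed-in stereo arrangement noted above, the per-eye offset follows from simple triangle geometry. A minimal sketch, with the eye-to-eye baseline being an assumed value rather than a figure from this description:

```python
# Sketch: toe-in (vergence) angle per eye for an object straight ahead at a
# finite distance, from simple triangle geometry. The baseline is assumed.
import math

EYE_BASELINE_IN = 2.5   # assumed center-to-center eye spacing, in inches

def toe_in_deg(object_distance_in: float) -> float:
    """Per-eye inward rotation so both gaze lines meet at the object."""
    return math.degrees(math.atan2(EYE_BASELINE_IN / 2.0, object_distance_in))

for d in (12.0, 36.0, 1e9):   # near, mid, and effectively infinite distances
    print(f"distance {d:>12.0f} in -> toe-in {toe_in_deg(d):.2f} deg per eye")
```

The offset drive signals for the two electromagnetic drive assemblies would then differ by roughly these angles, collapsing to a shared signal for objects at infinity.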
The assembly 200 includes an inner sphere or eyeball assembly 260 that is positioned within the outer shell 210 and is suspended within a volume of liquid 250 (or suspension liquid or fluid). The eyeball assembly 260 includes a spherical body 262 that may take the form of a solid ball or sphere formed of an optically clear or substantially transparent material such as a plastic (e.g., a high-grade acrylic or the like), a glass, a ceramic, or the like, with outer dimensions that are less than the inner dimensions of the outer shell. In some embodiments of the assembly 200, the spherical body 262 has an outer diameter that is less than the inner diameter of the shell 210 by about 20 percent or less such that the body 262 and shell 210 are relatively close fitting, with the small spacing filled with the liquid 250 (e.g., the inner diameter of the shell 210 may be 1.5 inches while the outer diameter of the body 262 may be 1.25 inches or more, such that a clearance or spacing of about 0.125 inches or less is provided between the body's surfaces and the inner surfaces of the shell, with this void or suspension space filled with liquid 250).

The liquid 250 is chosen to have a specific gravity that allows it to support the weight of the ball 262 as well as to provide desired optical characteristics. Hence, it may be chosen to provide neutral buoyancy of the ball 262, e.g., to float the ball 262 with its center of gravity coinciding with the center of the inner space/void defined by the inner surfaces of the outer shell 210. In this manner, the spacing between the inner surfaces of the shell 210 and the outer surfaces of the body 262 may be substantially equal about the body 262. The liquid 250 also acts as a "lubricant" in the sense that there is no friction or physical contact between the body 262 and the shell 210 when the body 262 is rotated within the shell 210 (e.g., when the assembly 200 is operated as a spherical motor device). The optical characteristics may be chosen such that the liquid 250 has an index of refraction that substantially matches that of the shell 210 and/or the body 262 such that there is little or no refraction or diffraction at each material/component interface, and the shell 210, liquid 250, and body 262 may generally act as a single lens or lens assembly and may create an effect where the body 262 and liquid 250 are nearly invisible to an observer.

The eye assembly 200 further includes an electromagnetic drive assembly, with FIGS. 2 and 3 showing the portions of this assembly that are used to control movement and positioning of the spherical body or eyeball 262 (with control/driver portions not shown in FIGS. 2 and 3 but discussed further with reference to FIG. 6). For example, to provide a control or drive functionality, the assembly 200 includes two drives or drive mechanisms 212, 236 that are used to drive the eyeball or inner sphere assembly 260, via control signals on wires/lines 224 that would be connected to a controller/driver (not shown), by defining yaw or side-to-side movements. Further, the assembly 200 includes two drives or drive mechanisms 220, 230 that are used to drive the eyeball or inner sphere assembly 260 by defining pitch or tilt movements of the eyeball 262.
Each of the drives or drive mechanisms 212, 220, 230, 236 includes a pair of magnetic coils, with coils 214, 222, 322, 238, 338, 232, 332 being shown as lay-flat coils wrapped with wire 215, 223, 323, 239, 339, 233, 333 (which is driven by input lines 224 that are linked to a controller/driver (not shown)). Within each drive 212, 220, 230, 236, the magnetic coils are spaced on the outer surface of the shell 210 so as to be adjacent to each other but on opposite sides of a great circle (or the equator) of the spherical shell 210. In this way, opposite pairs of the magnetic coils may be thought of as antipodal coils in the drive assembly that may be concurrently operated to drive the inner sphere body 262 to rotate through a range of yaw and pitch angles (independently or concurrently, to define movement/rotation of the body 262 in the shell 210). For example, axes extending through antipodal pairs of the coils may define a range of motion of about 15 to 30 degrees, with 20 degrees being used in some implementations to define the yaw and pitch movement ranges. Specifically, coils 222 and 332 may be one pair of antipodal coils that drive the ball 262 in the pitch direction, while coils 232 and 322 may define the other pitch antipodal coil pair. Although not shown in FIGS. 2 and 3, the eyeball assembly 260 would include a set of magnetic elements (e.g., permanent magnets on its outer surface or the like) that correspond in number and position to the drives 212, 220, 230, 236 (e.g., 4 permanent magnets may be embedded in the outer surface of the ball/body 262 at 90-degree spacings about a great circle or the equator of the body 262). By concurrently applying equal drive signals to either (or both) of these antipodal pairs, the eyeball 262 may be caused to pitch or tilt forward or backward, and adding driving forces in the yaw drives 212, 236 may be used to cause the eyeball 262 to yaw or move side to side to provide a full (or desired) amount of movement.

To return the eyeball or inner sphere 262 to a neutral position (as shown in the figures), each of the drives 212, 220, 230, 236 may further include a restoring magnet 216, 226, 234, 240. In one embodiment, the magnets 216, 226, 234, 240 are permanent magnets extending across the equator/great circle between (or overlapping) the adjacent coils (such as magnet 226 extending over/between coils 222 and 322) such that, when no (or little) power is applied to the drives 212, 220, 230, 236, the magnetic force (e.g., an attractive force) provided by restoring magnets 216, 226, 234, 240 acts to cause the eyeball 262 to rotate to the neutral position with the inner sphere-mounted magnetic elements generally positioned between the drive coil pairs of each drive 212, 220, 230, 236 or adjacent to the restoring magnet 216, 226, 234, 240. This provides a power-saving measure or function for assembly 200 in which the eyeball 262 is returned to and maintained at a desired position (which may or may not be a centered or straight-ahead gaze line as shown) without application of additional or ongoing power.

The eyeball or inner sphere body 262 may be colored, or have its outer surface colored, to achieve a desired optical effect and to simulate a human or other eye. For example, as shown, the front portion or hemisphere may include a portion that is white in color, with a colored iris portion 266 in its center. Further, to provide a direct light path, a clear pupil or entrance window/section 268 may be provided in the center of the iris portion 266.
A rear portion or hemisphere 370 of the spherical body 262 may be made opaque to light, such as with a blue or black coloring (as this portion is not visible when the assembly 200 is placed and used in an animatronic figure), and the light path is provided by leaving a viewing section or rear window/port 374 in the rear portion or hemisphere 370 free from coloring/paint. To provide video capability, a video assembly 380 may be provided in assembly 200 and attached to the outer shell 210. As shown, a video camera or image capture device 382 is attached to the container or shell 210 with its lens (or a lens portion of the shell 210, as discussed with reference to FIGS. 4 and 5) aligned with the clear or transparent window/port 374. The video camera 382 is rigidly attached to the shell 210 and not (in this case) to the spherical body 262, such that the camera 382 is stationary or immobile during rotation or driving of the eyeball or body 262 with the electromagnetic drive assembly. Image signals/data may be transferred from the camera 382 to other components of the assembly 200 (such as monitoring or display equipment, object recognition and/or tracking software modules that may be used to control the movement of the eyeball 262, and the like) via line(s) 384 extending outward from the camera 382.

From the front and side views of the assembly 200 and the above description, it will be understood that the eye assembly 200 uses a combination of liquid suspension and a compact electromagnetic drive to provide a selectively positionable eyeball 262 that accurately simulates human and other eyes. The eye assembly uses a sphere-in-a-sphere magnification illusion to good effect: even though the inner sphere or eyeball 262 is smaller than the inner dimensions of the outer sphere/shell 210, the overall effect when the assembly is viewed by a user or observer is that the eyeball or sphere 262 appears to be exactly the diameter of the shell (or that there appears to be a one-piece construction, such that the liquid 250 is invisible, as is the floating eyeball 262). Because of this illusion, the entire eye may be caused to appear to rotate when the inner sphere or eyeball 262 is rotated within the liquid by magnetic driving forces, while the outer surface of the shell 210 remains fixed in its mountings (such as within an animatronic figure's head/skull frame).

One feature of the eye assemblies described herein is that the outside of the eye does not move, e.g., the outer shell does not move relative to its mounting or support structure. Specifically, embodiments may have a stationary transparent housing or shell that is spherical (or generally so, at least in its inner void space defined by the inner surfaces of its walls), and a transparent eyeball or inner sphere is floated in a suspension fluid or liquid within this shell. The shell's and sphere's indices of refraction are matched to each other and to the flotation or suspension liquid, and, then, the structure formed by these three components, with the internal and moving/movable eyeball/sphere, makes up a simple optical system or lens. Essentially, these components provide a transparent sphere or spherical lens with a single index of refraction.
To provide a video or image capturing capability (or to make the eye assembly "see"), a video camera or other image capture device may be placed behind this spherical lens/optical system and directed to receive light that passes through the three components, e.g., using these components as the (or a) lens of the camera or image capture device. The camera may be stationary in this case, such as by mounting it on or near the outer shell. In order to create a believable organic eye, a realistic pupil, iris, and sclera may be provided so as to be visible from the front outside of the eye assembly. However, achieving this effect without interfering with the camera's optical path is not a trivial design challenge, and the inventors pursued a number of approaches to achieve useful results (such as with the assemblies shown in FIGS. 2-5).

FIGS. 4 and 5 illustrate exploded front and back views, respectively, of an eye assembly (or portion thereof) 410 according to one embodiment that was designed to facilitate manufacture and assembly as well as to provide the drive and optical functions described throughout this description. As shown, the assembly 410 includes an inner sphere or eyeball assembly 420 that includes a body 422 that may be formed as a solid ball or sphere from a clear or substantially transparent material such as plastic or glass. To simulate an eye while not interfering undesirably with an optical path for a camera 490, the body 422 has a first surface or spherical portion 424 that is painted with an exterior white layer to provide a sclera of the assembly 420. Next, a second surface or spherical portion 426 is painted a color (such as blue, brown, or the like) to provide an iris of the assembly 420. A "pupil" is provided in the assembly, as shown at 428, by providing a clear or unpainted third surface or spherical portion in the center of the iris or second surface 426 through which light may be received or may enter the ball 422 (e.g., to provide a port or window for light to pass). The optical path through the eyeball or spherical body 422 is further defined by a fourth surface or spherical portion 430 of the body 422 that is also left clear or unpainted (and which may be larger than the pupil portion 428 but is at least as large as the input to the camera 490, e.g., at least as large as the rear port/window 448 in a rear half/hemisphere 444 of the outer shell).

To allow remote/non-contact driving of the sphere 422 using magnetic fields, the eyeball assembly 420 includes a set or number of magnetic elements 432, 434, 436, 438 attached to or embedded within the sphere 422. For example, the elements 432, 434, 436, 438 may each be a permanent button or disk magnet embedded into the outer surface of the spherical body 422 (e.g., flush with or recessed into the surface of sphere 422, or extending out some small distance less than the expected separation distance between the sphere 422 and the inner surfaces of the shell formed of halves/hemispherical elements 440, 444). The elements 432, 434, 436, 438 may be positioned about a great circle of the spherical body 422 and typically with equal spacing (or equidistantly spaced), such as at 90-degree spacings about the great circle of the spherical body 422 when four magnetic elements are utilized as shown (or at 120-degree spacings if 3 elements are used, and so on for other numbers). Either magnetic pole may be exposed, with the opposite pole being provided by the driving coil when attractive forces are used to drive the eyeball 422 in the assembly 410.
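The equidistant great-circle placement generalizes to any magnet count; a small sketch computing the element positions as unit vectors (the choice of the x-y plane as the great circle is arbitrary):

```python
# Sketch: positions of N magnetic elements equidistantly spaced on a great
# circle of a unit sphere (here the x-y plane serves as the great circle).
import math

def magnet_positions(n: int):
    step = 360.0 / n
    return [(math.cos(math.radians(i * step)),
             math.sin(math.radians(i * step)), 0.0) for i in range(n)]

# Four magnets -> 90-degree spacings; opposite entries are antipodal pairs.
for p in magnet_positions(4):
    print(tuple(round(c, 3) for c in p))
```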
As shown, an axis 437 extends between the top and bottom magnetic elements 432, 436, indicating, in this case, that these are positioned at antipodal points on the sphere's surface; in use, rotation about this axis 437 may be considered yaw rotation or movement, Rotation_yaw. Similarly, an axis 439 extends between the two side magnetic elements 434, 438, showing that these are also positioned on antipodal points of sphere 422; when the eyeball 422 is caused to rotate about this axis 439, it may be thought of as having tilt or pitch movement or rotation, Rotation_pitch.

The eyeball or inner sphere 422 is rotated without friction in assembly 410. This is achieved in part by providing fluid suspension of the sphere 422 and in part by driving the rotation/movement using an electromagnetic drive assembly. The fluid suspension is provided by a volume of liquid 460 that extends about and supports the sphere 422 such that it typically does not contact the inner surfaces of shell parts 440, 444 even as it is rotated or moved (e.g., which may be achieved by applying equal forces using antipodal pairs of drive magnet coils). The assembly 410 further includes an outer housing or shell that is provided by a first or front portion or hemisphere 440 and a second or rear portion or hemisphere 444. In this embodiment, the front portion 440 of the outer shell is formed of a clear or substantially transparent material and is left unadorned to provide an open optical path to the eyeball or sphere 422. The rear portion 444, in contrast, may be formed of more opaque materials and has an exterior surface 446 that may be painted (blue or another color, or left the color provided via manufacturing) to provide a desired outer appearance of the eye, while the inner surface 447 is painted a dark color (or left the color provided via manufacturing) such as black to limit the ability of an observer of the eye assembly 410 to see through the eye and to limit undesired reflections or transmittance of light. The rear portion 444 includes a port or opening 448 that is left unpainted in some cases or, as shown, is formed by removing the material (such as a plastic) used to form shell portion 444. Then, a camera lens 450 is attached over the opening or port 448 (e.g., an arcuate partial sphere formed of transparent plastic or the like may be sealably attached about the periphery of the hole/opening 448). When assembled, the optical path of assembly 410 extends through the camera lens 450, the port or opening 448, a layer of liquid 460, the eyeball or sphere 422 (through portion 430, through the body thickness, and then through the pupil 428), another layer or thickness of liquid 460, and then the front portion or hemisphere 440 of the outer shell. The assembly 410 further includes a video camera 490, such as a charge-coupled device (CCD) or the like, that is affixed to the lens 450 or shell portion 444 or the like so as to be stationary even when the eyeball or sphere 422 is rotated.

The electromagnetic drive assembly is provided in eye assembly 410 with the inclusion of a top drive (shown as including coils 480, 481 and restoring magnet 482), a bottom drive (shown as including coils 470, 471 and restoring magnet 472), a first side drive (shown as including coils 474, 475 and restoring magnet 476), and a second side drive (shown as including coils 484, 485 and restoring magnet 486).
As shown, each drive mechanism includes a pair of coils that are spaced adjacent to each other but on opposite sides of a great circle of the outer shell, e.g., one coil of each pair is attached to the front portion or hemisphere 440 and one coil of each pair is attached to the rear portion or hemisphere 444. Further, these coils are arranged on antipodal points such that a coil in another drive mechanism is on an opposite side of the outer shell (e.g., coil 480 is an antipodal coil to coil 471, while coil 481 is an antipodal coil to coil 470). In practice, for example, application of a drive signal on coils 480 and 471 that is adequately strong to overcome the restoring forces of magnets 472, 476, 482, 486 will cause the eyeball or inner sphere 422 to pitch or tilt forward, Rotation_pitch, such as forward 10 to 20 degrees or more relative to vertical. Selective or concurrent driving of other ones of the antipodal pairs of the coils may be used to provide full motion of the eyeball 422 in the shell formed by halves 440, 444.

The arrangement of assembly 410 shown in FIGS. 4 and 5 was selected in part to suit a particular manufacturing method, but, of course, a wide variety of other manufacturing techniques may be used, which may drive a somewhat different design, such as elimination of the hole 448 and lens 450 when the shell portion 444 is formed of a clear material similar to shell portion 440. In the illustrated assembly 410, the inventors determined that, rather than using all transparent parts, it may be more precise and reproducible to use a mix of clear and stereo-lithographically produced (or otherwise provided) opaque hemispherical parts. Specifically, the front half 440 of the outer shell may be a hemisphere of transparent acrylic or other material and is left clear, while the rear/back half 444 of the outer shell may be substantially a hemisphere but opaque. Since the front shell portion 440 provides the front of the camera's lens, the portion 440 typically will be manufactured to be optically smooth and as near as practical to an accurately shaped, hemispherical shell. For example, mold marks and aberrations may be controlled by vacuum pulling thermoplastic acrylic or the like at high temperatures into a hemispherical mold and terminating the pull just before the plastic or acrylic makes contact with the mold. This process may result in a shape that is slightly less than a full hemisphere, and this small deformation may be compensated for by fitting this piece or shell portion 440 into a back shell that is formed to be more than hemispherical (e.g., using precision stereo-lithography to form a greater-than-hemispherical shape in back shell portion 444), whereby the overall result is a hollow shell that is substantially spherical in shape.

In a manner similar to the human eye, the assembly 410 leaves or provides a pupil-sized clear area (e.g., one fourth or less of the sphere/ball 422 area) at the front of the solid transparent inner sphere 422, as shown at 428. The remainder of the front of the inner sphere 422 may be made opaque with a layer of black paint that may then be painted as desired to provide a white sclera 424 and a colorful iris 426. A relatively large part or portion 430 (such as one fourth or more of the sphere/ball 422 area) is left transparent to provide a clear view or optical path for the camera 490 through the sphere 422 even when the eyeball/sphere 422 is rotated.
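These clear-area figures can be restated as spherical-cap half-angles using the standard cap-area relation, area = 2πR²(1 − cos θ); a short sketch of the arithmetic (the input fractions simply echo the examples given above):

```python
# Sketch: half-angle of a spherical cap covering a given fraction of a
# sphere's total surface area. Cap area = 2*pi*R^2*(1 - cos(theta)).
import math

def cap_half_angle_deg(fraction_of_sphere: float) -> float:
    # fraction = (1 - cos(theta)) / 2  ->  theta = acos(1 - 2 * fraction)
    return math.degrees(math.acos(1.0 - 2.0 * fraction_of_sphere))

# A clear area of one fourth of the ball's total surface -> a 60 degree cap:
print(f"{cap_half_angle_deg(0.25):.1f} deg half-angle")
# A pupil covering a tenth of the front hemisphere (1/20 of the full sphere):
print(f"{cap_half_angle_deg(0.05):.1f} deg half-angle")   # ~25.8
```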
The inside 447 of the back shell half or portion 444 may be painted black, and a centrally located, small hole 448 may be cut or formed into the shell half 444. The hole or opening 448 may be filled in or covered with a spherical-section-cut camera lens 450, as may be formed by cutting it from a transparent hemisphere of the same diameter as the front hemisphere or shell portion 440 (or slightly larger or smaller in spherical diameter to achieve a desired focus or optical effect with the overall lens assembly or spherical lens provided by the assembly 410). The video camera 490 provided at the back of the eye assembly 410, adjacent to the lens focus correction piece or camera lens 450, may be selected to be physically small (such as with a diameter of less than about 0.5 inches, and more typically less than about 0.3 inches) and, thus, to work well with only a small hole 448 to pass light to its CCD or other image detection device/portion.

The space between the inner sphere 422 and the outer shell formed of halves 440, 444 is filled with a suspension liquid 460. The liquid may serve at least three purposes. First, it at least roughly matches the index of refraction of the sphere 422 and shells 440, 444 (or lens 450) so as to make all internal interfaces optically disappear or not be readily visible to an observer and/or to the camera 490. Second, the liquid 460 may act to match the average specific gravity of the inner sphere 422 (including its small embedded magnets 432, 434, 436, 438) to render the internal sphere 422 neutrally buoyant so as to prevent (or limit) frictional rubbing by the sphere 422 on the top, bottom, and sides of the outer shell halves 440, 444. Third, the suspension fluid 460 may be chosen so as to have a viscosity that provides a damping force on the rotation of the inner sphere 422, whereby over-spin during rapid eyeball 422 movements is better controlled (e.g., to provide a selectable, tunable amount of resistance to eye movement and to control the momentum of the rotating eyeball 422). In one embodiment, the ball 422, shell portion 440, and camera lens 450 are formed of a substantially transparent acrylic, and, in this case, the suspension fluid 460 is made up of a mixture of approximately ¾ glycerin (e.g., 90 to 99 or more percent pure glycerol or glycerin) and ¼ water to achieve these purposes of the suspension fluid.

The structures or assemblies described above use a solid internal sphere and a transparent outer shell to provide a clear view through the eye to a camera or other image detection device even during pupil (and eyeball/inner sphere) rotation. This is due in part to the fact that the pupil is located behind the front surface of the entire eyeball lens (e.g., behind the liquid and the front half of the outer shell). Hence, the pupil is out of the camera focus and acts as, or similar to, an aperture stop. At the extremes of eye movement (such as up to +/-20 degrees or more of yaw and/or pitch movement), the overall light available to the camera decreases because of the oblique position of the iris, but the automatic gain control (AGC) of the camera may compensate for this. Also, the depth of field increases somewhat at the extreme positions of the eyeball/inner sphere, and some spherical aberration may become apparent, which may make it desirable to limit the range of yaw and pitch movements (e.g., to about plus/minus 20 degrees or less or the like).
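The glycerin-water recipe can be sanity-checked against acrylic using simple volume-weighted mixing. This is only an approximation (real glycerin-water mixtures deviate slightly from linear mixing), and the material constants below are assumed textbook values rather than figures from this description:

```python
# Approximate sketch: density and refractive index of a 3/4 glycerin, 1/4
# water mixture via volume-weighted averages. Constants are nominal values.

GLYCERIN = {"density": 1.26, "n": 1.473}   # g/cm^3, refractive index
WATER    = {"density": 1.00, "n": 1.333}
ACRYLIC  = {"density": 1.18, "n": 1.49}    # target the liquid should approach

def mix(frac_glycerin: float, prop: str) -> float:
    return (frac_glycerin * GLYCERIN[prop]
            + (1.0 - frac_glycerin) * WATER[prop])

rho = mix(0.75, "density")  # ~1.20 g/cm^3, near acrylic -> near-neutral buoyancy
n   = mix(0.75, "n")        # ~1.44, within a few percent of acrylic's index
print(f"mixture: density ~ {rho:.2f} g/cm^3, n ~ {n:.3f}")
print(f"acrylic: density ~ {ACRYLIC['density']:.2f} g/cm^3, n ~ {ACRYLIC['n']:.2f}")
```

The slight excess density also leaves headroom for the weight of the embedded magnets, consistent with matching the sphere's average specific gravity.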
FIG. 6 illustrates a functional block diagram of an eye assembly 600 of one embodiment that is useful for showing control (and other) features that may be used to implement the assemblies shown in FIGS. 1-5 and 7. The assembly 600 includes a lens assembly 610 that is made up of a spherical outer shell 612 that is typically formed as a transparent or substantially clear thin wall (such as a two-part shell of transparent glass, plastic, or the like). A suspension liquid 614 is provided within this shell 612 that has an index of refraction matching, or selected based on, the index of refraction of the shell 612 (as well as a specific gravity to provide neutral buoyancy of the orb 616 and a desired viscosity to control travel of the orb 616). The lens assembly further includes an eyeball or orb 616, with or without painted surfaces, such as a solid sphere of transparent or substantially clear material with an index of refraction matching, or selected based on, the index of refraction of the liquid 614 (such that interfaces between the orb 616 and liquid 614 are not readily visible to an observer or to the camera 624). The orb 616 may also include an optional load 618 within its interior when the orb 616 is hollow (such as an internally mounted camera in place of or in addition to camera 624) or on an external surface. The lens assembly 610 may also include a camera lens 620 attached to the exterior surface of the shell 612 (such as to cover a hole cut into the shell 612 when a rear portion is opaque, or so as to correct/adjust focusing through the lens assembly 610). The camera lens 620 may also be provided with, or connected to, the video camera 624.

The image data from the video or other camera 624 are transferred to a video processor 626 for processing, such as for display on a monitor or display device 627 as shown at 628. The processor 626 may also run an object tracking module 629 to process the image data to determine or recognize an object in the image data and/or track a location of the object relative to the lens assembly 610, and this information may be provided to an eye assembly controller 632 for use in positioning the lens assembly 610 (e.g., to cause the eye assembly 610 to rotate to follow, or move a gaze direction based on, an object's location or the like; a geometric sketch of this conversion is provided below). The assembly 600 uses the eye assembly controller 632 to generate drive signals 634 that are used by the electromagnetic drive assembly 630 to rotate/position the lens assembly 610 and, more accurately, to rotate/position the eyeball/orb 616 within the shell 612 (which is typically stationary and mounted to a frame, such as within an animatronic figure). A power source 636 may be used by the controller 632 to generate a signal 634 (e.g., a voltage signal or the like), and the operation of the controller 632 may follow a saved program (e.g., operate the drive assembly 630 based on code/instructions stored in a memory of assembly 600, not shown) and/or be based on data from the tracking module 629 and/or on a user input mechanism 638 (e.g., a user may input control data via an analog joystick, a keyboard, a mouse, a touchscreen, or another input device).

As shown, the drive assembly 630 includes permanent magnets 642, 644 that are mounted or provided on or in the eyeball or inner sphere 616. For example, two, three, four, or more rare earth or other permanent magnets may be embedded in or attached to the eyeball or sphere 616, such as a number of magnets equally spaced apart about the surface, such as on a great circle of the sphere 616.
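The conversion around the object tracking module 629 can be sketched with ordinary pinhole-camera geometry; the focal length, image center, and function interface below are hypothetical placeholders, not the actual software of the assembly 600:

```python
# Minimal sketch: convert a tracked object's pixel position into yaw/pitch
# commands using a pinhole-camera model. Calibration values are hypothetical.
import math

FOCAL_PX = 600.0        # assumed focal length, in pixels
CX, CY = 320.0, 240.0   # assumed image center for a 640x480 camera

def gaze_command(obj_x: float, obj_y: float):
    """Return (yaw_deg, pitch_deg) that would center the tracked object."""
    yaw = math.degrees(math.atan2(obj_x - CX, FOCAL_PX))
    pitch = math.degrees(math.atan2(CY - obj_y, FOCAL_PX))  # image y grows down
    return yaw, pitch

# Object detected slightly right of and above center:
print(gaze_command(420.0, 180.0))   # -> roughly (9.5, 5.7) degrees
```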
The drive assembly 630 also includes a number of restoring magnets 646, 648 that are provided to apply a continuous magnetic field upon the permanent magnets 642, 644 of the eyeball 616 to return and maintain the orb 616 at a center or neutral position (e.g., proximate to the magnets 646, 648); as such, the restoring magnets 646, 648 may be equal in number to the eyeball/orb magnets 642, 644 and have a similar spacing/position on the outer shell 612 (each may be on a great circle and spaced apart about 90 degrees when four magnets are used). Hence, when no (or minimal-energy) drive signals 634 are provided, the restoring magnets 646, 648 apply magnetic forces upon the eyeball/orb 616 that cause it to return to and/or remain in a predefined center or neutral position within the shell 612 (e.g., spaced apart from the shell 612 and with the pupil gazing or directed generally straight outward, or in another position useful for the application).

The drive assembly 630 provides an electromagnetic-based drive and, to this end, includes a plurality of electromagnets 650, 652, 654, 656 positioned on or near (close enough to apply adequate magnetic forces on the magnets 642, 644) the spherical outer shell 612. Typically, side-by-side or paired electromagnets 650, 652 are positioned adjacent to each other but with the centers of their coils spaced apart and on opposite hemispheres of the shell 612. In this manner, the restoring magnets 646, 648 may be used to try to retain the magnets 642, 644 in a plane passing through or near a great circle (e.g., the equator) of the outer shell 612, while selective energization of antipodal pairs of the electromagnets 650 and 656, or 652 and 654, causes the magnets 642, 644 to be displaced from the neutral position. As shown, axes 660, 664 extend through antipodal points on the spherical shell 612 coinciding generally with the centers of coils 650, 656 and 652, 654. The angle, θ, defined by these intersecting axes 660, 664 defines a range of movement of the eyeball 616 relative to a neutral position (e.g., when a plane passes through the restoring magnets 646, 648 as well as the permanent magnets 642, 644), and this range may be plus/minus 15 to 30 degrees, such as plus/minus 20 degrees in one embodiment. As shown, the antipodal coils 650, 656 are being energized by the controller 632 with drive signals 634, which causes a pitch or yaw movement defined by the angle, θ, which may be -20 degrees of pitch or yaw.

In some embodiments, therefore, swiveling the eyeball is achieved with little or no net translational force being applied to the eyeball or inner sphere. A symmetrical magnet/coil configuration is used that acts to exert balanced forces around the center of the eyeball or inner sphere so that only (or at least mainly) rotational torques are applied during eyeball/inner sphere pitch and/or yaw (or combinations thereof) movement. In this manner, friction from the eyeball or inner sphere rubbing against the inner surfaces of the outer shell is eliminated, because the opposite, equal-magnitude driving forces combined with the neutral buoyancy provided by the suspension liquid nearly prevent the inner sphere or eyeball from contacting the outer shell during normal operations of the eye assembly.
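The balanced-force claim is easy to verify vectorially: equal and opposite forces at antipodal points cancel as a net force but add as a torque. A small sketch on a unit-radius sphere:

```python
# Sketch: equal/opposite forces at antipodal points of the inner sphere give
# zero net translational force but a nonzero net torque (pure rotation).
import numpy as np

r = np.array([1.0, 0.0, 0.0])   # magnet at +x on a unit-radius sphere
f = np.array([0.0, 0.0, 0.5])   # drive force on that magnet (toward +z)

net_force = f + (-f)                             # antipodal magnet feels -f
net_torque = np.cross(r, f) + np.cross(-r, -f)   # torques add, not cancel

print("net force: ", net_force)    # [0. 0. 0.]  -> no translation, no rubbing
print("net torque:", net_torque)   # [0. -1. 0.] -> pitch about the y axis
```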
In some embodiments, there are four small permanent magnets mounted at the north and south poles and at the extreme left and right sides of the inner sphere on a great circle of this sphere, and the magnets are installed so that their poles align across the sphere (i.e., they are arranged on antipodal points of the inner sphere). These internal or inner sphere-mounted magnets are bracketed fore and aft by electromagnetic coils on the outer shell, as shown in the figures. Restoring magnets (e.g., contoured and relatively weak rubberized magnet strips or the like) are applied over each pair of these coils on the outer shell. These restoring magnets may be empirically shaped to generate a quasi-uniform magnetic field across the eyeball assembly, providing a restoring magnetic force for the permanent magnets on/in the inner sphere so that its rest position is centered between the drive electromagnetic coils.

The controller (such as controller 632) may be of a relatively simple design, such as an op-amp voltage follower circuit used with a power source (such as power source 636) to apply a drive voltage (such as signals 634), and therefore current, to alternating pairs (or antipodal pairs) of the drive coils at the top/bottom and/or left/right sides of the eye assembly. In some cases, a user input device such as an analog joystick may be used to allow users/operators to quickly move the eyeball or inner sphere by providing and/or modifying the input drive signals via the controller. One advantage of embodiments of the described eye assemblies is that open-loop control may be acceptable for use in all but the most stringent applications, because the position of the eyeball or inner sphere may be caused to directly track the strength of the driving magnetic field. Control is simplified, and there is no need for feedback on the eye position. The eyeball or inner sphere may be driven to plus/minus 20 degrees of yaw and tilt/pitch with approximately plus/minus 200 milliamps of coil current per axis, and the drive current at the neutral position is 0 milliamps due to the no-power restoring magnets. The dynamic drive current can easily be reduced by increasing the number of windings on the drive coils (e.g., if 100 turns of 0.13 mm wire for an approximately 4.5 ohm coil are used, increasing the number of turns likely will reduce the drive current used). Drive coils that wrap completely around the outer shell may be useful in some applications, such as to free up more of the front-of-eye view.

In one prototyped embodiment, the eye assembly's maximum saccade rate was measured using an NAC Image Technology Model 512SC high-speed video camera set at a 100 frame/second rate. Frames were counted during plus/minus 20 degree excursions of the eyeball/inner sphere while driving it with an approximate square wave of current on each axis. With a 400 mA peak coil current drive, the peak saccade speeds measured were approximately 500 degrees/second, which exceeds the human eye speed of approximately 200 degrees/second for small excursions. The speed/power tradeoff may be optimized for the eye assembly and can be tailored, for example, by varying the viscosity of the suspension liquid and/or by modifying the coil or drive mechanism structures. For example, adding small amounts of water to the suspension liquid lowers its viscosity and supports higher sustained speeds at a given current.
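The saccade measurement reduces to frame arithmetic: excursion angle divided by the time spanned by the counted frames. The frame count below is an assumed example chosen to reproduce the reported ~500 degrees/second figure, not a measured value from the prototype:

```python
# Sketch of the saccade-rate arithmetic: excursion angle divided by the time
# spanned by the counted high-speed video frames. Frame count is assumed.

FPS = 100.0             # high-speed camera frame rate used in the test
excursion_deg = 40.0    # a full +/-20 degree sweep
frames_counted = 8      # hypothetical count from the captured frames

speed = excursion_deg / (frames_counted / FPS)
print(f"peak saccade speed ~ {speed:.0f} deg/s")   # -> 500 deg/s
```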
While the eye assemblies described herein may be particularly well suited for animatronic uses, they may be used in many other settings, such as novelties and toys, and also for medical or prosthetic applications. This may involve using the above-described configurations, such as with the drive coils mounted on the outer shell that is used as part of the lens assembly and to contain/hold a volume of liquid and the inner sphere or eyeball. In some prosthetic (or other product/service) applications, though, it may be more useful or desirable to utilize remote coils or other external drive mechanisms spaced apart from the outer shell and further away from the rotated/driven inner sphere.

For example, FIG. 7 illustrates a prosthetic eye assembly 700 in which the rotary part of the assemblies described above is provided to a user/recipient 702 as a human-eye prosthesis 710 that is positioned within the eye socket of the recipient's head/skull 704. The eye prosthesis may include the outer shell, suspension fluid, and eyeball/inner sphere as described above. The eye prosthesis 710 is well suited to this application 700, as its outer shell does not rotate but, instead, only appears to due to the magnification of the inner sphere, which is rotatable using remote magnetic fields. The eye prosthesis 710 may be manufactured in the form of a smooth but rugged, hermetically sealed, inert ball that may be placed in the eye socket of the recipient's head 704 with no need to worry about rotational friction or rubbing against sensitive human tissue. Since the eye 710 is magnetically driven or steered, the drive force may be exerted from outside the skull to cause the rotation of the eyeball in prosthesis 710, as shown at 714, such as to move similarly to, or track movement of, the active/functioning eye 708.

The assembly 700 includes a pair of eyeglasses 720, or another support may be provided for the components of assembly 700. A magnetic field generator 724 is mounted in or on the bow of the eyeglasses frame 720, such as adjacent to the eye prosthesis 710, to selectively generate a magnetic field to move 714 the eyeball or inner sphere that is suspended in liquid in an outer shell in eye 710. To track movement of eye 708, the assembly may include a video camera 740 that provides video data or video input 746 to a controller 730 that operates to provide drive signals 728 to the magnetic field generator 724, such as based on the eye position of eye 708. As shown, the drive 724 for this human-embedded eye 710 may come from a modified pair of eyeglasses 720. The glasses 720 may contain either a compact set of electromagnetic coils to rotate the eyeball of prosthesis 710 or one or more permanent magnets that may be driven by a miniature servo system or the like. It may be useful in some cases to provide a magnetic rotating system, such as the fluid-suspended electromagnetic spherical drive described herein, that is used to indirectly operate 714 the eyeball or inner sphere in the prosthesis 710. The drive or magnetic field generator 724 may be devoted to rotating a permanent magnet outside the skull 704 that in turn drives the eyeball or inner sphere in the prosthesis 710. In each arrangement of drive 724, the human-embedded eye or prosthetic eye 710 may follow the motion of the external magnetic field provided by generator 724. In some cases, the assembly 700 is adapted to provide eye tracking, matching the eyeball or inner sphere of the prosthetic eye 710 to the other, still-functioning human eye 708.
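In loop form, the prosthetic arrangement is: read the functioning eye's gaze, then command the field generator to match it. A schematic sketch in which every function is a hypothetical placeholder standing in for the tracker and generator interfaces, not the actual device software:

```python
# Schematic sketch of the prosthetic control loop: an eye tracker observes the
# functioning eye and the magnetic field generator steers the prosthesis to
# match. All function names are hypothetical placeholders.
import time

def read_functioning_eye_gaze():
    """Placeholder for eyetracker output: (yaw_deg, pitch_deg)."""
    return 5.0, -2.0

def command_field_generator(yaw_deg: float, pitch_deg: float):
    """Placeholder: set coil currents / magnet position for the target gaze."""
    print(f"steering prosthesis to yaw={yaw_deg:.1f}, pitch={pitch_deg:.1f}")

def control_loop(cycles: int = 3, period_s: float = 0.02):
    for _ in range(cycles):   # ~50 Hz loop, comfortably fast for gaze matching
        yaw, pitch = read_functioning_eye_gaze()
        command_field_generator(yaw, pitch)
        time.sleep(period_s)

control_loop()
```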
This may be achieved using video signals, optical signals, and/or electrooculography signals (e.g., tracking the small voltages generated around the human eye socket when the eyeball 708 rotates), with a video camera 740 combined with a controller 730 that uses eye-tracking software to generate a drive signal 728 being shown in assembly 700. In any of these tracking implementations, the electromagnetic prosthetic eye 710 may be operated to move the eyeball or inner sphere to match the gaze direction of the human eye 708 so as to provide a realistic prosthetic with regard to appearance and movement/rotation 714.

Although the invention has been described and illustrated with a certain degree of particularity, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention, as hereinafter claimed. As will be appreciated from the above description, the eye assembly provides eyeballs or inner spheres that have a full range of movement (or a range similar to that of the eye being simulated), and the whole mechanism is hardly bigger than the eyeball itself, such that it can be installed in existing animatronic heads without even having to remove the old/existing actuators. The eye assembly has no moving parts outside the container or shell, making it easy to use in retrofitting other eyes or in other settings such as a prosthetic or in compact robots, toys, or the like. The drive has low power requirements and consumption, making it a useful eye assembly for untethered implementations and use of battery power.

Further, the number of magnetic drives or drive mechanisms and corresponding inner sphere-mounted magnetic elements that are utilized may be varied to practice the eye assemblies described herein. For example, it may be useful to use a number other than 4, such as 3 drive mechanisms and 3 corresponding inner sphere-mounted magnetic elements that may be provided at 120-degree intervals along a great circle of the inner sphere and the outer shell (e.g., perpendicular to the line of sight of the eye or about/near the equator of the outer shell, with the inner sphere being aligned due to operation of the coils and/or a set of restoring magnets). In other cases, more than 4 drive mechanisms and inner sphere-mounted magnetic elements may be utilized to suit a particular application. Additionally, the camera was typically shown mounted external to the inner sphere or eyeball, which was typically formed to be solid. In other cases, a non-solid eyeball or inner sphere may be utilized. In such cases, the camera may still be positioned external to the inner sphere, or it or its lens may be positioned within the hollow inner sphere (e.g., as a payload of the body of the eye assembly, which basically acts as a spherical motor that may carry any number of payloads, such as cameras (e.g., a wired device or, more preferably, a wireless device, which may include power and video transfer in a wireless manner), lights, and the like). In some embodiments, a means for tracking movement of a functioning eye is provided, as discussed above, and this may involve the use of electrooculography (EOG) for eye tracking (and EOG may even be used for the non-functioning eye in some cases). The tracking means may be wearable, implantable, and/or the like. For example, embodiments may be provided by implanting the control and/or the EM drivers.
Claims (21)

1. An eye apparatus, comprising: an outer shell comprising a wall defining a substantially spherical inner void space and having a substantially spherical outer surface, wherein the outer shell includes a front portion and a rear portion both transmitting light with a first index of refraction; a volume of suspension liquid contained within the inner void space of the outer shell, the suspension liquid transmitting light with a second index of refraction substantially matching the first index of refraction; an inner sphere comprising a solid body with a third index of refraction substantially matching the first and second indices of refraction, wherein the inner sphere is positioned in the inner void space of the outer shell in the suspension liquid; and a camera positioned adjacent the rear portion of the outer shell, whereby an image capturing device of the camera receives light passing through the outer shell, the suspension liquid and the inner sphere.

2. The apparatus of claim 1, wherein the solid body and the wall of the outer shell are formed of a substantially transparent plastic.

3. The apparatus of claim 2, wherein the substantially transparent plastic comprises an acrylic.

4. The apparatus of claim 1, wherein the suspension liquid has a specific gravity selected to provide approximately neutral buoyancy to the inner sphere.

5. The apparatus of claim 4, wherein the suspension liquid comprises a mixture of glycerin and water.

6. The apparatus of claim 4, wherein the outer shell wall has an inner diameter greater than an outer diameter of the solid body of the inner sphere, the outer diameter having a magnitude greater than about 80 percent of the inner diameter of the outer shell wall.

7. The apparatus of claim 1, wherein the rear portion of the outer shell comprises an opaque hemisphere including an opening and a spherical camera lens extending over the opening, sealably attached to the opaque hemisphere, and positioned between the camera and the inner void space.

8. The apparatus of claim 1, further comprising an electromagnetic drive assembly including a set of magnetic elements spaced apart on a great circle of the solid body of the inner sphere and a set of electromagnetic drive mechanisms positioned proximate to the outer surface of the outer shell, wherein the electromagnetic drive mechanisms are selectively operable to apply drive magnetic fields to the magnetic elements to provide yaw and pitch movements to the solid body.

9. The apparatus of claim 8, wherein each of the electromagnetic drive mechanisms comprises a restoring permanent magnet positioned proximate to a great circle of the outer shell and wherein the restoring permanent magnets apply a restoring magnetic force on the magnetic elements on the solid body to return and maintain the solid body in a predefined center position in the inner void space spaced apart from the outer shell wall.
10. The apparatus of claim 8, wherein the set of magnetic elements comprises four permanent magnets spaced 90 degrees apart on the great circle of the solid body and wherein each of the electromagnetic drive mechanisms comprises a pair of electromagnetic coils positioned adjacent to each other and on opposite sides of a great circle of the outer shell, whereby pairs of the electromagnetic coils in the electromagnetic drive mechanisms located on opposite sides of the outer shell are positioned to have center points proximate to antipodal points of the outer shell such that the opposite pairs are antipodal coil pairs and wherein the antipodal pairs are operated concurrently to apply symmetric driving forces on the magnetic elements of the inner sphere to maintain the solid body spaced apart from the outer shell wall.

11. An animatronic eye for use in an animatronic figure, comprising: a lens assembly comprising a spherical, hollow outer shell for mounting onto the animatronic figure, a volume of liquid within the outer shell, and an inner sphere suspended within the liquid; and a drive assembly comprising a set of magnetic elements provided on a great circle of the inner sphere and a set of electromagnetic drives positioned external to the outer shell about a great circle of the spherical outer shell, wherein the electromagnetic drives are selectively operable to apply magnetic drive forces upon the magnetic elements to rotate the inner sphere about a center point in yaw and pitch directions.

12. The animatronic eye of claim 11, wherein the inner sphere has an outer diameter with a magnitude of at least about 80 percent of a magnitude of an inner diameter of the spherical outer shell and wherein the volume of liquid substantially fills a space between the inner sphere and the spherical outer shell and has a specific gravity to provide neutral buoyancy for the inner sphere such that the center point of the inner sphere substantially coincides with a center of the spherical outer shell.

13. The animatronic eye of claim 11, wherein the set of magnetic elements comprises four permanent magnets proximate to an outer surface of the inner sphere and spaced apart about 90 degrees on the great circle of the inner sphere.

14. The animatronic eye of claim 13, wherein the set of electromagnetic drives comprises four pairs of electromagnetic coils spaced apart about 90 degrees along the great circle of the spherical outer shell and wherein in each of the pairs a first coil is positioned on a first side and a second coil is positioned on a second side of the great circle of the spherical outer shell, whereby opposite ones of the first and second coils have center points coinciding with antipodal points of the spherical outer shell and whereby concurrently driving the opposite ones of the first and second coils applies symmetric driving forces upon the permanent magnets to cause the rotation in the yaw and pitch directions.

15. The animatronic eye of claim 11, further comprising a video camera attached to the outer shell upon a rear portion of the outer shell adapted to focus on a front portion opposite the rear portion, wherein the outer shell, the liquid, and the inner sphere have matching indices of refraction.
16. The animatronic eye of claim 15, wherein the inner sphere comprises a solid body with a painted portion extending over a hemisphere positioned proximate to the front portion of the outer shell and with an unpainted pupil portion provided within the painted portion providing an inlet window for light to pass through the solid body.

17. A prosthetic eye assembly, comprising: a spherical, hollow outer shell for positioning within a human eye socket of a prosthesis recipient; a suspension liquid contained within the outer shell; a spherical body suspended within the liquid; and a magnetic drive assembly comprising a set of magnetic elements provided on a great circle of the spherical body and a magnetic field generator positioned a distance apart from the outer shell, wherein the magnetic field generator generates magnetic forces rotating the spherical body.

18. The assembly of claim 17, wherein the set of magnetic elements comprises a number of permanent magnets provided on a great circle of the spherical body and wherein the suspension liquid fills a space between the spherical body and the outer shell and has a specific gravity to provide a substantially neutral buoyancy to the spherical body.

19. The assembly of claim 17, further comprising a controller providing drive signals to the magnetic field generator to drive the spherical body to have yaw and pitch movements tracking movement of a functioning eye of the prosthesis recipient.

20. The assembly of claim 19, further comprising means for tracking the movement of the functioning eye.

21. The assembly of claim 20, further comprising a frame wearable by the prosthesis recipient, wherein the magnetic field generator and tracking means are mounted in the frame with the magnetic field generator adjacent the spherical body.

_____________________________________

Holographic Displays, Robot Eyes Hint at Your Interactive Future

The eyes may be the window to the soul. But what do you see when you look into robotic eyes so real that it's almost impossible to tell they are just empty, mechanical vessels? At Siggraph, the annual conference for graphics geeks that ended last week, Disney researchers created an animatronic eye that moves in a lifelike way, makes eye contact and tracks those who pass by. "We wanted two things from the eye," said a senior research scientist at Disney Research. "It should be able to see or have vision, and it should move as smoothly and fluidly as the human eye." The animatronic eye was one of the 23 exhibits in the emerging-tech section of the conference. "Each year there's always been some consistent themes," said the emerging-tech chair at Siggraph 2010. "But this year there hasn't been one thing that has leapt out in front of others." Instead a variety of technologies jostled for attention: new 3-D display technologies, augmented reality and robotics.

A seeing eye

Disney Research's animatronic eye is relatively simple in its design. The eye has a transparent-plastic inner sphere with a set of magnets around it, painted to look just like a human eye. It is suspended in fluid and has a transparent outer shell. Using electromagnets from the outside, the eye is moved sideways or up and down, giving it a smooth and easy motion. "It is as fast as the human eye and as good as the human eye." The pupil and the back of the eye are clear, and a camera placed at the rear of the eye helps the eye see. The team hopes the mechanism can be used to create prosthetic eyes.
"The prosthetic eye based on this won't restore sight, but it can restore cosmetic appearance to those who have lost an eye."

Audio-Animatronics

Audio-Animatronics (also known as simply Animatronics, and sometimes shortened to AAs) is the registered trademark for a form of robotics animation created by Walt Disney Imagineering for shows and attractions at Disney theme parks, and subsequently expanded on and used by other companies. The robots move and make noise (generally a recorded speech or song), but are usually fixed to whatever supports them. They can sit and stand but usually cannot walk. An Audio-Animatronic is different from an android-type robot in that it uses prerecorded movements and sounds, rather than responding to external stimuli. In 2009, Disney created an interactive version of the technology called Autonomatronics.

Technology

The former bride auction scene in Pirates of the Caribbean at Disneyland.

Pneumatic actuators are powerful enough to move heavier objects like simulated limbs, while hydraulics are used more for large figures. On/off-type movement would cause an arm to be lifted (for example) either up over an animatron's head or down next to its body, but with no halting or change of speed in between. To create more realistic movement in large figures, an analog system was used. This gave the figures' body parts a full range of fluid motion, rather than only two positions. To permit a high degree of freedom, the control cylinders resemble typical miniature pneumatic or hydraulic cylinders, but mount the back of the cylinder on a ball joint and threaded rod. This ball joint permits the cylinders to float freely inside the frame, such as when the wrist joint rotates and flexes. Disney's technology is not infallible, however; the oil-filled cylinders do occasionally drip or leak. It is sometimes necessary to do makeup touch-up work, or to strip the clothing off a figure due to leaking fluids inside. The Tiki Room remains a pneumatic theatrical set, primarily due to the leakage concerns; Disney does not want hydraulic fluids dripping down onto the audience during a show. Because each individual cylinder requires its own control channel, the original Audio-Animatronic figures were relatively simple in design, to reduce the number of channels required. For example, the first human designs (referred to internally by Disney as series A-1) included all four fingers of the hand as one actuator. A figure could wave its hand but could not grasp or point at something. With modern digital computers controlling the device, the number of channels is virtually unlimited, allowing more complex, realistic motion. The current versions (series A-100) now have individual actuators for each finger. Disney also introduced a brand new figure that is used in Star Wars: Galaxy's Edge and is referred to as the A1000.

Compliance

Compliance is a new technology that allows faster, more realistic movements without sacrificing control. In the older figures, a fast limb movement would cause the entire figure to shake in an unnatural way. The Imagineers thus had to program slower movements, sacrificing speed in order to gain control. This was frustrating for the animators, who, in many cases, wanted faster movements. Compliance improves this situation by allowing limbs to continue past the points where they are programmed to stop; they then return quickly to the "intended" position, much as real organic body parts do.
The various elements also slow to a stop at their various positions, instead of using the immediate stops that caused the unwanted shaking. This absorbs shock, much like the shock absorbers on a car or the natural shock absorption in a living body.

Cosmetics

The skin of an Audio-Animatronic is made from silicone rubber. Because the neck is so much narrower than the rest of the skull, the skull skin cover has a zipper up the back to permit easy removal. The facial appearance is painted onto the rubber, and standard cosmetic makeup is also used. Over time, the flexing causes the paint to loosen and fall off, so occasional makeup work and repainting are required. Generally, as the rubber skin flexes, the stress causes it to dry and begin to crack. Figures that do not have a high degree of motion flexibility, such as the older A-1 series for President Lincoln, may only need to have their skin replaced every ten years. The most recent A-100 series human AAs, like the figure for President Barack Obama, also include flexion actuators that move the cheeks and eyebrows to permit more realistic expressions; however, the skin wears out more quickly and needs replacement at least every five years. The wig on each human AA is made from natural human hair for the highest degree of realism, although using real hair creates its own problems, since the changing humidity and constant rapid motions of the moving AA carriage hardware throughout the day cause the hair to slowly lose its styling, requiring touch-ups before each day's showing.

Autonomatronics

Autonomatronics is a registered trademark for a more advanced Audio-Animatronic technology, also created by Walt Disney Imagineers. The original Audio-Animatronics used hydraulics to operate robotic figures to present a pre-programmed show. This more sophisticated technology can include cameras and other sensors feeding signals to a high-speed computer which processes the information and makes choices about what to say and do. In September 2009, Disney debuted "Otto", the first interactive figure that can hear, see and sense actions in the room. Otto can hold conversations and react to the audience. In December 2009, Great Moments with Mr. Lincoln returned to Disneyland using the new Autonomatronics technology.

Stuntronics

In June 2018, it was revealed that Disney Imagineering had created autonomous, self-correcting aerial stunt robots called stuntronics. This new extension of animatronics utilizes onboard sensors for precision control of advanced robotics to create animatronic human stunt doubles that can perform advanced aerial movements, such as flips and twists.

________________________________________________________________________________________________________________________________________________________

Camera Robot PULSE / Camera moving with Robotic eyes: the concept toward the starship ROBOT IC Hertz
--------------------------------------

PULSE is a top-notch product on the professional video equipment market. A camera robot realizes your most daring ideas and concepts and is the ultimate upgrade of an operator's craft. Why do you need a robot cameraman? What is important in the operator's work? To imagine the desired frame clearly, to choose the right angle and to keep the right speed. Sometimes this is challenging due to external factors: difficult shooting conditions, a complicated trajectory, and so on. The camera robot solves these problems. With 6 degrees of freedom, the PULSE robot moves along any given trajectory and angle.
You can choose smooth acceleration or deceleration at any point. PULSE repeats a trajectory, once taught, with an accuracy of 0.1 mm. Change the light or the scenery and do as many takes as you need; the accuracy will not be affected at all!

The core advantages of the robot operator

The robot helps you achieve exactly the shot you've imagined. With PULSE you get:

1. Freedom of movement. Thanks to the 6-axis design, the robot moves at almost any angle. Tilting, trucking, following shots or closing in: any technique is available with the robotic camera arm. Take separate shots or use continuous camera movement with a quick and precise change of angle for long shots. The camera robot handles even complex trajectories with no change of speed or shaking.

2. High speed. A powerful motor is placed in each joint of the robot, which allows it to accelerate to 2 m/s within 0.1 seconds. Ready-made acceleration and deceleration programs will help you achieve striking operator effects in just a couple of clicks. The motors use brushless technology, which reduces the noise level and avoids shaking when the robot moves.

3. Motion control. Thanks to its construction, PULSE is very easy to control. It can be trained in manual mode: just move the robot along the trajectory you need, and it will repeat it with an accuracy of 0.1 mm. You can save the program you like and use it anytime. With PULSE, you can quickly and superbly shoot ads and promos, 360° product demos and even animations. The camera robot is very light (12.6 kg) and compact (it fits in a suitcase), so you can easily move it around the studio or transport it from place to place.

Key features of the camera robot PULSE:
------------------------------------

Mobile as a snake. The robot camera arm moves like a human arm, with 6 degrees of freedom. The maximum working radius is 750 mm. The working range of four axes is 360°; of the two middle axes, 170°.

Fast as a leopard. Linear speed: 2 m/s. Acceleration time from 0 to 1 m/s: 0.1 s. Speed values do not change when the trajectory is changed. You can use ready-made presets for acceleration and deceleration of the camera robot.

Tender as a kitten. PULSE is a collaborative robot approved for direct work with people. PULSE is certified to the safety standards ISO 10218-1:2011(E) and ANSI/RIA R15.06.

Obedient as a dog. The purchased package includes a control unit and software. The robot can be programmed by manual guidance along the trajectory or through the REST API. In the software, you can correct the movements of the robot operator to the nearest tenth of a millimeter.

Flexible as a chameleon. Thanks to the universal base of the camera robot, it can be installed on any flat surface: floor, walls, cage or ceiling. The capture device for the camera is selected based on the requirements of the client; PULSE uses a universal flange on which you can mount any capture device.

Reliable as... no idea... it's just reliable. The operating temperature range of the robot is 0–35 °C. PULSE is protected from contamination by particles with a diameter of over 1 mm. For shooting in heavily polluted conditions and at high temperatures or humidity, safety covers can be developed for the robot.
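Because the robot exposes a REST API, a shot could in principle be scripted along these lines. This is a hypothetical sketch only: the base URL, the /positions endpoint, and the payload fields are assumed for illustration and are not the vendor's documented interface.

import requests

BASE = "http://192.168.0.50:8081"  # robot control unit address (assumed)

waypoints = [  # a simple dolly-in move: x, y, z in metres, angles in degrees
    {"x": 0.40, "y": 0.00, "z": 0.30, "roll": 0, "pitch": 90, "yaw": 0},
    {"x": 0.55, "y": 0.00, "z": 0.30, "roll": 0, "pitch": 90, "yaw": 0},
]

for wp in waypoints:
    # Each POST asks the controller to plan and execute a move to the pose
    # at a low speed suitable for a smooth camera push-in.
    r = requests.post(f"{BASE}/positions", json={"pose": wp, "speed": 0.1})
    r.raise_for_status()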
_____________________________________________________________________________________________________________________________________________________

Application of Camera Image Processing to Control of Humanoid Robot Motion
-------------------------------------

The wide potential applications of humanoid robots require that the robots can move in a general environment, overcome various obstacles, detect predefined objects and control their motion according to all these parameters. The goal of this paper is to address the problem of implementing computer vision in the motion control of a humanoid robot. We focus on the use of computer vision and image processing techniques, based on which the robot can detect and recognize a predefined color object in a captured image. An algorithm for the detection and localization of objects is described. The results obtained from image processing are used in an algorithm for controlling the robot's movement, which we call Image Processing, Robot Motion Control at SpaceShips.

How to make a robot move: communication
--------------------------------------

Smart Robotics: the importance of motion planning

At Smart Robotics, we cater to several markets such as E-commerce, Food, Pharma and FMCG. For many of our applications, we use a robot by Universal Robots. It is a cobot, meaning it can work safely together with people. As it has a relatively small payload and reach, we need to ensure we get the most out of the robot. We have gone through multiple iterations in our products to get closer and closer to the requirements of our customers. As an example, one of the most important requirements for our customers is fast cycle times (or takt times). To get closer to these requirements, we decided to create our own motion planning for communication with the robot. This has two main benefits: total control over the executed motion, allowing for different motion planning solutions; and the ability to respond reactively to the environment, i.e. change the trajectory of the robot while it is in motion. To understand these benefits, let us look back at a previous implementation and the limitations of that approach.

Previous robot communication implementations

There are several robot manufacturers, each with their own hardware and software implementation of a robot. Most provide a programming environment to program the robot with. Examples of such manufacturers are Universal Robots, ABB, KUKA and Staubli. One possibility is to write your robot program in these scripts directly, creating the benefit of being able to use all the features from the robot manufacturer. However, it makes your program tightly coupled with the robot manufacturer. A second solution is one we have deployed at Smart Robotics in the past, using two features that most robot manufacturers provide: motion primitives and remote script sending. Motion primitives are low-level commands in the script language for moving the robot. The two most common commands are MoveL, which moves the end-effector in Cartesian space, and MoveJ, which moves the end-effector in joint space. Remote script sending means sending scripts to the robot, which the robot then executes. Given these features, you can set up a pipeline that communicates a combination of motion primitives to control the robot, using the controller and reference generator provided by the robot. An application can send motion goals, and these can be sent to the robot using the remote script sending feature, as sketched below.
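As a concrete illustration of that pipeline, the sketch below pushes a tiny URScript program containing MoveJ/MoveL primitives to a UR controller over a plain TCP socket. The IP address and pose values are illustrative assumptions, and you should verify the interface port and script syntax against your robot's own documentation before trying anything like this.

import socket

ROBOT_IP = "192.168.0.10"          # robot address (assumed)
PORT = 30002                       # UR secondary client interface (check docs)

program = "\n".join([
    "def pick_approach():",
    # MoveJ: joint-space move to a safe pre-pick configuration (radians)
    "  movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.2, v=0.5)",
    # MoveL: straight-line Cartesian move down to the pick pose (metres/radians)
    "  movel(p[0.4, -0.2, 0.15, 0.0, 3.14, 0.0], a=0.5, v=0.1)",
    "end",
    "pick_approach()",
]) + "\n"

with socket.create_connection((ROBOT_IP, PORT), timeout=5) as s:
    s.sendall(program.encode("utf-8"))   # robot parses and executes the script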
However, this approach has a few drawbacks. The first is reactivity: for example, a UR robot can only cancel current motion goals by stopping the robot and then executing a new motion primitive, which takes time. The second is control over the path: because the controller and the reference generator are essentially a black box, we experienced some strange behaviors with the motion primitives for the UR robot, for which a cause was never discovered.

A new way of robot communication

For the development of a new approach, a few goals were set, addressing the above-mentioned drawbacks. First, the robot should react quickly while in motion; updating the trajectory in place was important to reach the fast cycle times required by customers. For example, you may want to move the robot to a known position close to a product while a camera calculates the product position simultaneously. This product position could then be used to generate a new trajectory for the robot to move towards. The path should be updated in place so that the robot can move to the new position without stopping. Second, we want to take control over the path execution and experiment with different motion planning techniques so that the same communication pipeline can be used for various types of robots. To reach these goals, another feature has been used that most robotics suppliers provide: the ability to execute joint commands. On the UR, the servoj function is used for that purpose. Instead of using the motion generator of the robot, we have developed our own motion planner as well as a driver that communicates directly with the UR through a joint-streaming socket. The driver can communicate with the UR and other robots at real-time frequencies of 125 Hz to 250 Hz.

Robot communication in practice: an example

Consider the following situation: a Python file can send commands to a robot system, in this case the UR, using our own motion executive through our driver. The driver is built as a library that is ROS-agnostic; the library can be included in a ROS-based project so that it can hook into the ROS ecosystem. The driver is based on the open-source ASIO project, which provides high-speed asynchronous IO communication. (A diagram in the original post shows the setup at a high level: functionality of the robot is exposed through several ROS topics, Action Servers and Services, with a clear split between the ROS-specific wrapper and the more general driver library.) The driver uses the script-sending functionality to open a streaming socket to which joint references can be communicated. In addition, we also open a general command port to send non-real-time messages to the UR, such as switching the payload or Tool Center Point (TCP). This port is extendable and allows more commands that target specific features of the robot to be exposed. Because the UR was not specifically designed with this use case in mind (the real-time streaming of joints), several problems had to be solved. For example, because the joints are streamed using a standard socket interface, connection latency can cause the joint references to arrive late. To compensate for this, we buffer the references in an intelligent way so that we improve stability but keep the responsiveness that we require, as in the sketch below.
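The sketch below illustrates the buffering idea in isolation: joint references arriving with network jitter are queued, and a fixed-rate control loop consumes one per cycle, repeating the last reference when the buffer underruns. It is a simplified stand-in for the driver described above, not its actual implementation; the 125 Hz rate comes from the text, everything else is assumed.

import collections
import time

DT = 1.0 / 125.0                 # control period (125 Hz)
buffer = collections.deque()     # joint references queued by the receiver

def on_reference_received(q):
    """Called by the network thread whenever a 6-joint target arrives."""
    buffer.append(q)

last_q = [0.0] * 6               # value to hold if references arrive late

def control_tick(send_to_robot):
    """Consume one reference per cycle; repeat the last one on underrun."""
    global last_q
    if buffer:
        last_q = buffer.popleft()
    send_to_robot(last_q)        # e.g. wrapped in a servoj() call on a UR

# Example: feed two references, then run three ticks against a stub sender.
on_reference_received([0.0, -1.5, 1.5, -1.5, -1.5, 0.0])
on_reference_received([0.01, -1.5, 1.5, -1.5, -1.5, 0.0])
for _ in range(3):
    control_tick(lambda q: print("servoj ->", [round(v, 3) for v in q]))
    time.sleep(DT)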
In addition, we also needed to solve for specific behaviors inherent to the UR. For example, in case of a robot emergency stop, the script is cancelled and the sockets are closed. In the case of a protective stop, the script is simply paused. There are several other edge cases that needed to be accounted for. Therefore, our driver is designed in such a way that it monitors these states and has intelligent reconnection strategies, so that it can recover from these edge cases. How we solved these problems, however, will be a topic for a future blog post.

New driver for different types of robots

The driver designed by Smart Robotics enables the streaming of joints at high frequency and the updating of a trajectory on the fly. This, in conjunction with the motion executive for path planning, solves the main drawbacks of the previously implemented approach and is also very extendable. As we only need a robot that can follow joint references, we can port the driver to other types of robots, which we have already started to explore. Creating software that is robot-independent is one of our main focuses, and the above-described motion planner and driver are important steps towards that goal.

______________________________________

Tuesday, 06 October 2020

Robot dynamic motion techniques as a drive in and out in an electronic control system in a flying car AMNIMARJESLOW GOVERNMENT 2020 , SKY WINDOWS STREET For Throught Windows street to be come on Gen. Mac Tech break through the time window timeline

       Time Windows electronic machine 

Why are flying cars helpful? They can travel a shorter distance to make the same journey. Staying on the theme of lower emissions and greater efficiency, flying cars can take a much more direct route from point A to point B. This means less fuel is required, and journey times are much quicker as a result when compared to a journey on land.

Why are flying cars important? In terms of efficiency, researchers found that flying electric cars use significant energy to take off and land, but they're highly efficient when cruising. That means they're most efficient overall on longer trips. ... Such a service could replace car trips in areas with heavy congestion or circuitous routes.

How will flying cars work? Ducted fans enclosed within the wings and piercing the body of the vehicle itself will propel the aircraft vertically. After takeoff, rear-facing fans will thrust it forward so the wings can generate lift and the Aska can fly longer distances more efficiently than a more drone-like design can.

How will flying cars change the world? Increasing numbers of flying cars will naturally give rise to a change in the layout and sizes of our cities. ... Because there will be fewer cars on the road, congestion will ease and roads in general should become safer. This will make owning and running a car cheaper. It may even make insurance premiums go down.

Flying car definition : “A flying car is a hybrid vehicle that combines fixed wing and rotary wing aircraft capabilities.”

Another way to understand this, is to think of a flying car as being part helicopter, and part airplane.

This is essentially a “mechanical” definition, but there is more to the flying car story than just mechanics. Software development in the field of autonomous systems is a crucial component in making these vehicles not just an interesting research project, but also an appealing business solution for transportation needs. In short, combined wing capabilities make flying cars possible, and autonomy makes them viable.

To fully understand the revolution these vehicles represent, it’s important to recognize the advantages that come with effectively combining fixed wing and rotary wing aircraft capabilities. Let’s look at the pros and cons of a helicopter (rotary flight) and a plane (wing flight), so that we can understand why combining both capabilities is an optimal solution.


- Rey's speeder bike, an example of a hovercraft in the eighth episode of Star Wars.

THE FLYING CARS : THREE CONCEPTS AND A CONCLUSION.

After a long and interesting description of some concepts about the flying car, this is the final article on that hybrid vehicle.

Talking about flying cars involves many motoring concepts. NASA has analyzed the definition of the flying car, and of course some movies offer representations of these hybrid vehicles. In the previous article, I presented some models from America and Europe that built the history of the flying car. Through some examples, I tried to understand why people wanted to invent that kind of transport. So how can we define the flying car? Through three concepts:

1- A PERSONAL AIR VEHICLE (PAV).

A Flying Flivver replica, considered the "Ford T of the air".


In 2003, a project called the Aeronautics Vehicle Systems program was developed by NASA. What was the idea behind that large project? Defining the main elements of the flying car in our modern society; I'm sure SpaceX, Elon Musk's enterprise, was interested in the old concept. Long before that, Henry Ford predicted the flying car, saying in 1940:


"MARK MY WORD : A COMBINATION AIRPLANE AND MOTORCAR IS COMING. YOU MAY SMILE BUT IT WILL COME."

HENRY FORD

Ford's plane, the Flying Flivver, fell far short of the standards later defined by NASA's engineers. It was a simple aircraft, but the inventor of the amazing Tin Lizzie had some original ideas. What are the main elements of a viable flying car? It would be a quiet and comfortable vehicle driven/flown at speeds of 150-200 mph (240-320 km/h). You wouldn't need a pilot license, just a driving license, and the flying cars would be affordably priced. We can imagine a flying sport utility vehicle for your family, or a supercar: picture a flying Renault Espace or a flying Porsche 911.


2- AN INTERMODAL PASSENGER TRANSPORT.

The eighth episode of Star Wars was released on 13 December 2017. George Lucas created a universe with a rich and original history 40 years ago. The future of transportation is well represented there by hovercraft and spaceships. Later, Back to the Future showed hovercraft in detail when Marty travels to the future in the second film.

I mentioned hovercraft, not roadable aircraft. What is a hovercraft? We can define it as a personal vehicle which flies at a constant altitude above the ground; in other words, it can be defined as a flying car. What is the difference from a roadable aircraft? A roadable aircraft is a flying car that can be driven on the roads, a plane with wheels that also flies through the atmosphere. I once asked science what his feelings were when he was inside one of the flying cars built by Taylor. He answered me simply, as you can see below.

So what does science think about hovercraft?


Finally, the flying hovercraft doesn't need wheels to fly freely in the air. We can divide hovercraft into two categories: maritime hovercraft, which fly over water and can be driven on roads, and flying hovercraft, which are imaginative hybrid vehicles of retrofuturism, a social artwork about the future of transportation.

3-AN ENVIRONMENTAL AND ENGINEERING HYBRID VEHICLE.

More than a Convair 116, a flying car involves complex environmental and aeronautical regulations.


Defining a flying car means knowing the environmental regulations and aeronautic rules around the concepts of cars and planes. We are entering a new electric motoring era after a long and interesting development of diesel engines. Environmental regulations are becoming more important, with initiatives in the big cities: the T-charge (around £10) will be added to the Congestion Charge in London, and cars more than 20 years old will be banned in Paris too.

THE FLYING CARS WOULD BE ABLE TO RESPECT THESE ENVIRONMENTAL RULES ON THE ROADS.

After pointing out terrestrial pollution, we have to think of atmospheric pollution, as naval engineers have done in shipping. Busy air corridors have become a real problem in transport; you have to fly on invisible roads in the air too. How can flying cars be introduced into airspace where pollution is already increased by traffic jams?

Creating new rules for personal air vehicles would be a solution, with both advantages and drawbacks. Increasing the number of safety rules is a concrete goal but a long process. The Federation Internationale de l'Automobile (FIA) tries to improve motoring safety on the roads, and the results of new regulations for safe driving take a long time to appear. Moreover, aeronautical rules are just as complex. Finding a harmonious compromise remains the best solution.

Some drivetribers said it's important that the engines of flying cars be powerful while keeping a certain stability for the passengers. Indeed, landing on roads is not easy for planes: they need a certain distance, not to mention the noise of the engines in an urban environment. Finally, the flying car would be a vertical takeoff and landing vehicle, briefly called a VTOL.

System' okey !

What is a flying car? Is it a pointless car? Is it a real concept, as we saw with the Taylor Aerocar or the new AeroMobil 3.0? Is it an old concept imagined by a former Ford engineer in 1917? Defining such a large and theoretical concept is hard because you can develop many different ideas about flying cars. Would you want a roadable aircraft, or a hovercraft with or without wheels? Anyway, a flying car is a personal air vehicle, an intermodal passenger transport, and an environmental and engineering hybrid vehicle.


---------------------------------------------------------------------------------------------------------------------------------------------------------

Controlling free flight of a robotic fly using an onboard vision sensor inspired by insect ocelli
_________________________________________________________________________________________________

Scaling a flying robot down to the size of a fly or bee requires advances in manufacturing, sensing and control, and will provide insights into mechanisms used by their biological counterparts. Controlled flight at this scale has previously required external cameras to provide the feedback to regulate the continuous corrective manoeuvres necessary to keep the unstable robot from tumbling. One stabilization mechanism used by flying insects may be to sense the horizon or Sun using the ocelli, a set of three light sensors distinct from the compound eyes. Here, we present an ocelli-inspired visual sensor and use it to stabilize a fly-sized robot. We propose a feedback controller that applies torque in proportion to the angular velocity of the source of light estimated by the ocelli. We demonstrate theoretically and empirically that this is sufficient to stabilize the robot's upright orientation. This constitutes the first known use of onboard sensors at this scale. Dipteran flies use halteres to provide gyroscopic velocity feedback, but it is unknown how other insects such as honeybees stabilize flight without these sensory organs. Our results, using a vehicle of similar size and dynamics to the honeybee, suggest how the ocelli could serve this role.

Introduction: example theory for flying-car ROBOTS, as with drone tech
___________________________________________________________________

Flying robots on the scale of, and inspired by, flies may provide insights into the mechanisms used by their biological counterparts. These animals' flight apparatuses have evolved for millions of years into robust and high-performance solutions that exceed the capabilities of current robotic vehicles. Dipteran flies, for example, are superlatively agile, performing millisecond turns during pursuit or landing inverted on a ceiling. Moreover, these feats are performed using the resources of a relatively small nervous system, consisting of only 10^5-10^7 neurons processing information received from senses carried onboard. It is not well understood how they do this, from the unsteady aerodynamics of their wings interacting with the surrounding fluid to the sensorimotor transductions in their brain. An effort to reverse-engineer their flight apparatus using a robot with similar characteristics could provide insights that would be difficult to obtain using other methods such as fluid mechanics models or experimentally probing animal behaviour. The result will be robot systems that eventually rival the extraordinary capabilities of insects. Consider creating a small flying autonomous vehicle the size of a fly or honeybee: as vehicle size diminishes, many conventional approaches to lift, propulsion, sensing and control become impractical because of the physics of scaling. For example, propulsion based on rotating motors is inefficient, because heat dissipation per unit mass in the magnetic coils of a rotary electric motor increases as the characteristic length of the vehicle, such as its wingspan, decreases.
Combined with increased friction losses owing to an increased surface-to-volume ratio, exacerbated by the need for significant gearing, this results in very low power densities in small electromagnetic motors. In addition, the lift-to-drag ratio for fixed aerofoils decreases at small scales because of the greater effect of viscous forces relative to lift-generating inertial forces at low Reynolds numbers.

A robot the size of a fly uses a light sensor to stabilize flight, the first demonstration of onboard sensing in a flying robot at this scale. (inset) The visual sensor, a pyramidal structure mounted at the top of the vehicle, measures light using four phototransistors and is inspired by the ocelli of insects (scale bar, 5 mm). (main) Frames taken at 60 ms intervals from a video of a stabilized flight in which the only feedback came from the onboard vision sensor. The sensor estimates pitch and roll rates by measuring changes in light intensity arriving from a light source mounted 1 m above (not shown). This is used in a feedback loop actuating the pair of flapping wings to perform continuous corrective manoeuvres to stabilize the upright orientation of the vehicle, which would otherwise quickly tumble. A wire tether transmits control commands and receives sensor feedback, acting as a small disturbance that does not augment stability.

From the perspective of autonomous flight control, an ocelli-inspired light sensor is nearly the simplest possible visual sensor, minimizing component mass and computational requirements. A number of previous studies have considered ocelli-inspired sensors on flying robots, insect-sized or larger. In one study, it was shown that an adaptive classifier could be used to estimate the orientation of the horizon from omnidirectional camera images. In another, the absolute direction of the light source or horizon relative to the vehicle was estimated. But whereas aligning to the absolute direction of a light source or horizon may be a valid approach for larger aircraft that fly above obstacles, so there is a relatively clear view to the horizon, smaller vehicles may fly near buildings, under foliage or indoors. In these conditions, the horizon may be obstructed, causing the direction of light sources to vary significantly. A control law that aligned the vehicle with a light source would most likely yield a tilted vehicle under these conditions, leading to significant lateral acceleration and dynamic instability. In this work, we take an alternative approach in which a feedback controller applies torque in proportion to the angular velocity of the motion of the source of light. This has two benefits. First, it avoids the need for the light source to be directly above for the sensor to produce a useful result. Second, we show that an angular velocity estimate is all that is needed to stabilize the upright orientation of our flapping-wing robotic fly and many flying insects. The first was confirmed by previous work, inspired by observations of derivative-like responses in insect ocelli, which showed that ocelli simulated in a virtual environment can estimate angular velocity about the pitch and roll axes, regardless of initial orientation. The results also suggested that a linear ocelli response cannot estimate other vehicle motion parameters such as absolute attitude. A motor controller was described that computed a time integral of the ocelli angular velocity estimate.
Although this did not require the light to arrive from a known direction, an estimate computed in this manner would slowly drift because of accumulated sensor noise. Here, we build on that work to suggest an alternative approach in which the angular velocity estimate is instead used directly in a feedback controller. By applying torque in proportion to angular velocity only, it is possible to harness this vehicle's flapping-wing dynamics to achieve a stable upright attitude that does not drift and that does not require an absolute estimate of attitude.

Sensor and robot fly
____________________

The design of the ocelli sensor: each sensor is inclined roughly 30° above the horizon and captures defocused light from an angular field spanning approximately 180° that is nearly circularly symmetric. The ocelli design consists of four phototransistors soldered to a custom-built circuit board that is folded into a pyramid shape. Each of the four light detectors consists of a phototransistor (KDT00030 from Fairchild Semiconductor) in a common-emitter configuration in series with a 27 kΩ surface-mount resistor. The phototransistor has an infrared cut-off filter, reducing its sensitivity to the bright infrared lights emitted by the motion capture system used to measure flight trajectories. The voltage reading is taken from the collector of each transistor and rises with increasing luminance.

Robotic fly mechanical characteristics
______________________________________

The flying vehicle used in this work is actuated by a pair of independently moving wings. By altering signals to the piezoactuators driving the wings, they can produce sufficient lift to take off, as well as produce 'pitch' and 'roll' torques independently. We define a right-handed coordinate system for the body in which, with the wings extending laterally along the y-axis and the body axis hanging downwards in the negative z-direction, the x-axis (roll) points forward, the y-axis (pitch) points to the left and the z-axis (yaw) points upwards. Roll torque is induced by varying the relative stroke amplitudes of the left versus right wing. Pitch torque is induced by moving the 'mean stroke angle', the time-averaged angle of the forward-backward motion of the wings, in front of (+x) or behind (-x) the CM. Yaw torque can also, in principle, be modulated, but we do not use this capability in this study.

Attitude stabilization using velocity feedback
__________________________________________________

We show that knowledge of absolute vehicle attitude is not required to attain stability. Instead, only angular velocity feedback or 'rate damping' is needed. More formally, the torque controller

τ = −Kd ω,

where ω is the angular velocity and Kd is a positive damping gain, is sufficient to stabilize the fly in the upright orientation. This result holds under the following assumptions: (1) vehicle motions depend only on stroke-averaged forces, that is, forces and torques averaged over the time period of each wing stroke; (2) aerodynamic drag on the wings is proportional to airspeed in both the forward (x) and lateral (y) directions, with an equal proportionality constant for both directions; and (3) the vehicle is symmetric about its x-z plane.
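As a rough illustration of this rate-damping law, the sketch below estimates pitch and roll rates from the time derivative of opposing ocelli signals and returns torques proportional to the negative of those rates. The scale factor and gain values are invented for illustration; the paper's actual estimator and gains are not reproduced here.

K_OCELLI = 1.0   # maps differential light change to rad/s (assumed)
K_D = 2e-9       # damping gain in N*m per rad/s (assumed, fly-scale)

prev_diff = {"pitch": 0.0, "roll": 0.0}

def ocelli_rate_torque(v_front, v_back, v_left, v_right, dt):
    """Return (pitch_torque, roll_torque) = -K_D * estimated angular rate."""
    torques = {}
    for axis, diff in (("pitch", v_front - v_back), ("roll", v_left - v_right)):
        # Angular rate estimated from the change in the differential signal
        omega_est = K_OCELLI * (diff - prev_diff[axis]) / dt
        prev_diff[axis] = diff
        torques[axis] = -K_D * omega_est   # torque opposes the rotation
    return torques["pitch"], torques["roll"]

# Example: a small change in the front/back signal implies a pitch rate.
print(ocelli_rate_torque(1.02, 0.98, 1.00, 1.00, dt=0.002))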
---------------------------------------------------------------------------------------------------------------------------------------------------------

Dynamics Modeling and Control of a Quadrotor with Swing Load
______________________________________________________________

An example of a simple quadcopter electronic circuit (figure in the original post).
Aerial robots have many applications in civilian and military fields as intelligent transport. Examples of these applications include aerial monitoring and picking up loads and moving them with different grippers. In this R&D work, a quadrotor with a cable-suspended load and eight degrees of freedom is considered. The purpose is to control the position and attitude of the quadrotor on a desired trajectory in order to move the considered load on a cable of constant length, and, further, to propose and design an antiswing control algorithm for the suspended load. To this end, control and stabilization of the quadrotor are necessary for designing the antiswing controller. The paper is divided into two parts. In the first part, a dynamics model is developed using the Newton-Euler formulation, and the obtained equations are verified by comparison with the Lagrange approach. Consequently, a nonlinear control strategy based on the dynamic model is used to control the position and attitude of the quadrotor. The performance of this proposed controller is evaluated by nonlinear simulations, and the results demonstrate the effectiveness of the control strategy for the quadrotor with suspended load in various maneuvers.

A quadrotor is a rotorcraft whose flight is based on the rotation of two pairs of rotors that rotate opposite to each other. The different movements of a quadrotor are created by differences in rotor velocities. If the velocity of rotor 1 (or 2) decreases and the velocity of rotor 3 (or 4) increases, then a roll (or pitch) motion is created and the quadrotor moves along the corresponding horizontal axis. Moreover, a quadrotor is an aerial robot which has the potential to hover, take off, fly, and land in small areas. This robot has applications in different fields, among which are safety, natural risk management, environmental protection, infrastructure management, agriculture, and film production. A quadrotor is also an underactuated system, since it has six degrees of freedom and only four inputs, and it is inherently unstable and can be difficult to fly. Thus, the control of this nonlinear system is a problem of both practical and theoretical interest. Many control algorithms have been tested and implemented on this aerial robot to stabilize it and move it in different tasks, among them classic control, linear and nonlinear state feedback control, sliding mode control, backstepping control, and fuzzy and neural network control. In 2010, Vazquez and Valenzuela designed a nonlinear control system for position and attitude control based on classic PID control; indeed, the quadrotor altitude is controlled by a PI-action controller. Another study implemented a Linear Quadratic Regulator (LQR) controller for position control of the quadrotor. In 2004, Hoffmann proposed a sliding mode method for altitude control and an optimal control method for attitude control, but many difficulties occurred because of motor vibrations at high thrust and the chattering phenomenon. Also, to realize robust control of the quadrotor, a backstepping control algorithm is proposed in [4]. This algorithm could estimate disturbances online and thereby improve the robustness of the system. Erginer and Altug in 2012 performed dynamics modeling and control of a quadrotor. They obtained the dynamic model of the quadrotor by the Newton-Euler method and controlled the quadrotor using a hybrid fuzzy-PD control algorithm.
In 2008, Raffo et al. implemented a nonlinear algorithm to control and stabilize the angular motion of the quadrotor. The simulation results show that this nonlinear algorithm can eliminate disturbances and stabilize the rotational motion of the quadrotor. Different control methods have been proposed for these robots because the suspended load significantly alters the flight characteristics of the quadrotor. These methods are divided into feedback and feed-forward approaches. Feedback control methods use measurements and estimations of system states to reduce the vibration, while feed-forward approaches change actuator commands to reduce the oscillation of the system. The feed-forward controller can often improve the performance of the feedback controller; thus, proposing feed-forward algorithms can lead to more practical and accurate control of these systems. One effective feed-forward method is input shaping theory, which has proven to be a practical and effective approach to reducing vibrations. Several other methods have been proposed to minimize residual vibration. Smith proposed Posicast control of damped oscillatory systems, a technique to generate a non-oscillatory response from a damped system to a step input; this method breaks a step of a certain magnitude into two smaller steps, one of which is delayed in time. Swigert proposed shaped-torque techniques which consider the sensitivity of terminal states to variation in the model parameters. More recently, in the control of overhead cranes, a minimum-time control problem was solved for swing-free velocity profiles, which resulted in an open-loop control.

--------------------------------------------------------------------------------------------------------------------------------------------------------

Dynamics Modeling
_________________

The quadrotor slung load system is considered to be a system consisting of two rigid bodies connected by massless straight-line links which support only forces along the link. The system is characterized by the mass and inertia parameters of the rigid bodies and the suspension's attachment point locations. In this section, the dynamics equations of the quadrotor slung load system are presented by the Newton-Euler method. The following assumptions are made for modeling the quadrotor with a swinging load. (i) Elastic deformation and shock of the quadrotor are ignored. (ii) The inertia matrix is time-invariant. (iii) The mass distribution of the quadrotor is symmetrical in the horizontal plane. (iv) The drag factor and thrust factor of the quadrotor are constant. (v) The air density around the quadrotor is constant. (vi) The thrust force and drag moment of each propeller are proportional to the square of the propeller speed. (vii) Both bodies are assumed to be rigid; this assumption excludes an elastic quadrotor, rotor modes such as flapping, and nonrigid loads. (viii) The cable mass and aerodynamic effects on the load are neglected. (ix) The cable is considered to be inelastic. These assumptions are considered sufficient for a realistic representation of the quadrotor with a swinging load used for nonaggressive trajectory tracking.
The analysis, which carries over conceptually to flying cars, covers: 1. Aerodynamics of the rotor and propeller. 2. Dynamics equations of motion: (a) kinematics equations of the quadrotor; (b) Newton-Euler equations of the quadrotor; (c) Lagrange equations of the quadrotor; (d) model verification. 3. Controller design: (a) position and attitude control of the quadrotor; (b) simulation results of the designed position and attitude controller; (c) antiswing control of the quadrotor; (d) simulation results of the designed antiswing controller. 4. Final concept.

In the final concept, the problem of quadrotor flight with a suspended load, widely used for different kinds of cargo transportation, is addressed. The suspended load is also known as the slung load or the sling load. Flying with a suspended load can be a very challenging and sometimes hazardous task because the suspended load significantly alters the flight characteristics of the quadrotor, so many different control algorithms have been proposed to control these systems. To this end, the dynamic model of this system was obtained and verified by comparing the Newton-Euler and Lagrange methods. Next, a control algorithm was presented for the position and attitude of the quadrotor. The swinging object's oscillation may cause danger in the workspace and can destabilize the quadrotor's flight. Using a comprehensive simulation routine, it was shown that the designed controller could control the robot's motion on the desired path but could not reduce the load oscillation on noncontinuous and nondifferentiable paths. To deal with this issue, a feed-forward control algorithm was introduced for reducing or canceling the swinging load's oscillation. This controller was designed by implementing input shaping theory, which convolves the reference command with a sequence of impulses. Finally, it was shown that the feed-forward controller could actively improve the performance of the feedback controller, as illustrated by the sketch below.
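For a sense of how such an input shaper works, the sketch below computes the classic two-impulse zero-vibration (ZV) shaper and convolves it with a sampled step command. The swing frequency and damping ratio are assumed example values, not parameters from the paper.

import math

OMEGA_N = 3.13   # natural frequency of the swing, rad/s (assumed, ~1 m cable)
ZETA = 0.02      # damping ratio (assumed small for a slung load)

def zv_impulses(omega_n, zeta):
    """Return [(time, amplitude), ...] for the two-impulse ZV shaper."""
    omega_d = omega_n * math.sqrt(1 - zeta**2)      # damped frequency
    k = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    return [(0.0, 1 / (1 + k)), (math.pi / omega_d, k / (1 + k))]

def shape(command, dt, impulses):
    """Convolve a sampled reference command with the shaper impulses."""
    n = len(command)
    out = [0.0] * n
    for t_i, a_i in impulses:
        shift = int(round(t_i / dt))                # impulse delay in samples
        for j in range(n - shift):
            out[j + shift] += a_i * command[j]
    return out

# Example: shaping a unit step splits it into two delayed, scaled steps
# whose induced oscillations cancel each other.
dt = 0.01
step = [1.0] * 300
shaped = shape(step, dt, zv_impulses(OMEGA_N, ZETA))
print(shaped[0], shaped[len(shaped) // 2])  # first sample < 1, later samples = 1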
These energy-saving innovations make cruising a breeze, but they don't help much with take-off, hovering, or landing, which are still inherently inefficient. As journey distance increases, so too do the efficiency gains over stop-start road cars, which have to deal with rolling resistance and less efficient airflow. So while VTOL flying vehicles are viable for short intra-city travel and pizza deliveries, they will not solve the energy crisis on their own.

Problems in practice

In focusing entirely on the physics of flying cars, the paper steers clear of a number of practicalities that must be considered before we embrace VTOL flying cars as a sustainable form of transport for the future. For example, it is important to consider the carbon costs of production, maintenance, and downtime, known as life-cycle analysis (LCA). Electric vehicles have been criticised for both the energy and environmental costs of mining primary materials for batteries, such as lithium and cobalt. The added infrastructure required for flight may worsen the problem for flying cars. And of course, a grid powered by low-carbon sources is essential to make battery-powered vehicles part of the solution to our climate crisis.

Aircraft also have highly stringent criteria for maintenance and downtime, which can often offset gains in performance and emissions. As an entirely new breed of planes, it's impossible to predict how much it might cost to keep them air-worthy.

-------------------------------------------------------------------------------------------------------------------------------------------------------

Applications of robotics and artificial intelligence to reduce risk and improve effectiveness
_____________________________________________________________________________________________

1. Companion robots
2. Robots in medicine
3. Robot drone tech
4. Retail robots
5. SoftBank Robotics
6. Military robots
7. Delivery robots
8. Food robots

Rotary Wing Aircraft

The primary advantage of a rotary wing is its capacity for vertical take-off and landing (VTOL). Why is this an advantage? Because it removes the need for a runway. As WIRED recently noted, “VTOL technology means aircraft can theoretically take off and land almost anywhere, making them far more flexible.”

In addition to VTOL capabilities, another benefit is ease of control. A helicopter can hover in place pretty accurately, which makes it safe to navigate an urban environment where you might need to stop at several waypoints, and where speed needs to be constantly modulated to respond to external conditions.
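As a rough illustration of why accurate hover is achievable, here is a minimal sketch of a PD altitude-hold loop on a simplified one-dimensional vehicle model. The mass, gains, and setpoint are assumed values, not tuned for any real aircraft.

    # Minimal PD altitude-hold sketch on a 1-D vehicle model. All parameters
    # are illustrative assumptions; real autopilots run similar loops on
    # every axis at hundreds of hertz.
    g, m, dt = 9.81, 2.0, 0.01      # gravity, vehicle mass (kg), timestep (s)
    kp, kd = 8.0, 4.0               # PD gains, assumed

    z, vz = 0.0, 0.0                # altitude (m) and vertical speed (m/s)
    z_ref = 5.0                     # hover setpoint (m)

    for _ in range(2000):           # simulate 20 seconds
        thrust = m * g + kp * (z_ref - z) - kd * vz   # hover feedforward + PD
        thrust = max(0.0, thrust)   # rotors cannot push the vehicle downward
        az = thrust / m - g         # net vertical acceleration
        vz += az * dt
        z += vz * dt

    print(round(z, 2))              # settles near the 5.0 m setpoint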

[Figure: rotary-wing vehicle features]

So what are the disadvantages of rotary-wing aircraft?

  • Inefficiency: Hovering in place requires a lot of power to keep the rotors turning and generate the required lift. As a consequence, the allowed payload is drastically reduced, and so is the flight time (see the sketch just after this list).
  • Lack of Speed: Rotary wing vehicles are considerably slower than airplanes.
  • Noise: Rotating blades are very noisy.
  • Control challenges: Helicopters have a major flaw. If they lose power, you need a highly trained pilot who can execute autorotation to land the vehicle.
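To put a number on the inefficiency point above (the first bullet), here is a sketch using ideal actuator-disk (momentum) theory, P = T^1.5 / sqrt(2 * rho * A). Real rotors need noticeably more power than this ideal, and the vehicle masses and rotor sizes below are assumptions for illustration.

    import math

    # Ideal hover power from actuator-disk (momentum) theory. Real rotors
    # have a figure of merit below 1, so actual power is higher. Vehicle
    # numbers are illustrative assumptions.
    rho = 1.225   # sea-level air density, kg/m^3
    g = 9.81

    def hover_power_kw(mass_kg, rotor_diameter_m, n_rotors):
        area = n_rotors * math.pi * (rotor_diameter_m / 2) ** 2
        thrust = mass_kg * g
        return thrust ** 1.5 / math.sqrt(2 * rho * area) / 1000.0

    print(hover_power_kw(3, 0.25, 4))    # small camera drone: ~0.23 kW
    print(hover_power_kw(600, 1.5, 8))   # hypothetical 1-person VTOL: ~77 kW

Scaling from a 3 kg drone to a person-carrying machine multiplies hover power by a factor of a few hundred, which is exactly why payload and flight time suffer.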

Fixed-Wing Aircraft

Let’s now consider fixed-wing aircraft (aka a “typical” airplane). The advantages of fixed-wing aircraft are numerous:

[Figure: fixed-wing airplane features]

  • Speed: An airplane goes much faster than a helicopter.
  • Efficiency: Instead of using a motor to spin the blades, a fixed-wing aircraft uses its motion through the air to keep air flowing over the wings and generate lift (see the sketch just after this list).
  • Payload and distance: Thanks to its speed and efficiency, this vehicle can carry much heavier payloads and travel longer distances than a rotary wing.
  • Control: If the motors fail, the vehicle will not fall from the sky; a pilot still has control and can glide it safely to the ground.
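A quick sketch of the efficiency bullet above: the wing lift equation L = 0.5 * rho * v^2 * S * CL shows that lift comes from forward speed, so the motor only has to overcome drag. The airframe numbers are assumed, roughly light-aircraft-like.

    import math

    # Solve 0.5 * rho * v^2 * S * CL = m * g for the speed at which the
    # wing carries the weight. Airframe numbers are illustrative assumptions.
    rho = 1.225   # kg/m^3
    g = 9.81

    def speed_for_level_flight(mass_kg, wing_area_m2, cl):
        return math.sqrt(2 * mass_kg * g / (rho * wing_area_m2 * cl))

    print(speed_for_level_flight(1100, 16.2, 1.4))   # ~28 m/s near max lift
    print(speed_for_level_flight(1100, 16.2, 0.4))   # ~52 m/s in fast cruise

As long as the aircraft keeps moving fast enough, the wing carries the weight essentially for free and no hover power is needed.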

As to the main disadvantages of fixed-wing aircraft, there are two:

  1. They require long runways for takeoff and landing.
  2. Hover is not possible, making air traffic operations trickier in the presence of many vehicles.

It should be noted that the features a flying car borrows from fixed-wing and rotary-wing aircraft are not necessarily a perfect reconciliation; the noise and weight disadvantages of a rotary-wing vehicle remain. But it's pretty close to ideal, and it's why flying cars represent such a compelling mechanical breakthrough.

If you want to know more about the technical aspects, path planning, controls, flight estimation, and autonomous flight are the places to learn the technical skills, alongside the tools you need to create real-world applications.

The VTOL Multirotor

There are two visions of the flying car. The most common is VTOL — vertical takeoff and landing — something that may have no wheels at all because it’s more a helicopter than a car or airplane. The recent revolution in automation and stability for multirotor helicopters — better known as drones — is making people wonder when we’ll get one able to carry a person. Multirotors almost exclusively use electric motors because you must adjust speed very quickly to get stability and control. You also want the redundancy of multiple motors and power systems, so you can lose a rotor or a battery and still fly.
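A minimal sketch of why fast per-motor speed control is the heart of multirotor stability: every attitude correction is produced by re-weighting the individual motor commands, typically hundreds of times per second. The mixer below uses one common X-configuration sign convention; conventions vary between autopilots, so treat the signs as assumptions.

    import numpy as np

    # Quadrotor motor-mixer sketch (X configuration). Stability comes from
    # recomputing these four commands at a high rate. Sign conventions are
    # assumed; real autopilots differ in axis and rotor ordering.
    #                 thrust  roll  pitch   yaw
    MIX = np.array([[ 1.0,  -1.0,   1.0,   1.0],   # front-right, CCW
                    [ 1.0,   1.0,   1.0,  -1.0],   # front-left,  CW
                    [ 1.0,   1.0,  -1.0,   1.0],   # rear-left,   CCW
                    [ 1.0,  -1.0,  -1.0,  -1.0]])  # rear-right,  CW

    def mix(thrust, roll, pitch, yaw):
        """Map collective thrust plus attitude torques to 4 motor commands."""
        cmd = MIX @ np.array([thrust, roll, pitch, yaw])
        return np.clip(cmd, 0.0, 1.0)   # motors only spin one way

    print(mix(0.5, 0.0, 0.0, 0.0))   # pure hover: all four motors equal
    print(mix(0.5, 0.1, 0.0, 0.0))   # roll request: one side speeds up

With more than four rotors, the same mixing idea provides the redundancy mentioned above: the commands can be redistributed if a motor or battery fails.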

This creates a problem because electric batteries are heavy. It takes a lot of power to fly this way. Carrying more batteries means more weight, and thus more power needed to carry the batteries. There are diminishing returns, and you can't get much speed, power, or range before the batteries are dead. That is OK in a 3 kg drone, but not in a 150 kg one.
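The diminishing returns can be made concrete with the same ideal hover-power scaling used earlier: power grows with total mass to the 1.5 power, so each added kilogram of battery buys less flight time than the last. All numbers below are assumptions for illustration.

    import math

    # Hover endurance vs. battery mass under ideal momentum theory:
    # power ~ (total mass)^1.5, endurance = battery energy / power.
    # Airframe, rotor, and battery figures are illustrative assumptions.
    rho, g = 1.225, 9.81
    disk_area = 8 * math.pi * 0.75 ** 2   # 8 rotors of 1.5 m diameter, assumed
    empty_kg = 450.0                      # airframe plus occupant, assumed
    wh_per_kg = 250.0                     # battery specific energy, assumed

    def hover_minutes(battery_kg):
        mass = empty_kg + battery_kg
        power_w = (mass * g) ** 1.5 / math.sqrt(2 * rho * disk_area)
        return battery_kg * wh_per_kg / power_w * 60.0

    for b in (50, 100, 200, 400):
        print(b, round(hover_minutes(b), 1))
    # Each doubling of battery mass buys well under double the flight time,
    # and the marginal gain shrinks every time.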

Lots of people are experimenting with combining multirotor designs for takeoff and landing with traditional fixed-wing (standard airplane) designs for travelling any distance. This is a great deal more efficient, but even so, long-distance flight is still a challenge on batteries. Other ideas include using liquid fuels in some way, either running a regular liquid-fuel motor to drive a generator (not very efficient) or combining direct drive of a master propeller with fine-control electric drive of smaller propellers for the dynamic control needed.

Another interesting option is the autogyro, which looks like a helicopter but needs a small runway for takeoff.

The traditional aircraft

Some “flying car” efforts have made airplanes whose wings fold up so they can drive on the road. These have never “taken off”; they usually end up a compromise that is neither a very good car nor a very good plane. They still need airports, but you can keep driving from the airport. They are not autonomous.

Robocars offer an interesting alternative. You can build a system where a robocar takes you from home to the best local short airstrip, taking you right out to an autonomous aircraft that is sitting waiting. You transfer, and it immediately takes off and flies you to another short airstrip, where another robocar awaits you. This allows you to travel in a car that’s a car and a plane that’s a plane, with no compromise.

The big challenges

Automating the intense level of safety and equipment reliability

In general, planes today are not fast modes of travel for their pilots. A typical small aircraft owner going out to fly has to drive to an airport that’s not very convenient, park and get their plane. (If they planned ahead, the hangar crew has taken their plane out and done the basics on it.) Even with the prep, there is a fairly long pre-flight check to do, assuring everything is just so, checking fuel levels with your eyes as well as instruments and more. Then you go through a dance with the control tower, taxi around (possibly in line behind others) and eventually get to take off and start climbing. Only then are you on your way. At the other end, you do it all in reverse, tie down and hangar your plane, and find your way to a rental car or ground transportation. For trips of under 100 miles, it’s not usually worth it.

Autonomous flying cars require more than just well-built and superbly safe flying systems. (Flying itself is actually a pretty easy robotics problem.) It's all the other stuff that will be the challenge. Because equipment failures up in the air can be so dangerous, the vehicles must be maintained and checked to a level that is orders of magnitude beyond what we do with cars. If your car engine conks out, you pull off to the side of the road. If your brakes go out, it's bad, but you apply the emergency brake and call a tow truck.

We’ll demand fail-safe operation for all parts of the flying car. It will have to be able to lose any major component and get you down safely.
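One hedged way to quantify what fail-safe operation buys: if the vehicle can tolerate losing any one motor, a crash-level propulsion failure requires two overlapping failures on the same flight. The per-motor failure probability below is an assumed illustrative number, not measured reliability data.

    from math import comb

    # Probability of at least k motor failures on one flight, assuming
    # independent failures with per-motor probability p_fail.
    def p_at_least(n_motors, k, p_fail):
        return sum(comb(n_motors, i) * p_fail ** i
                   * (1 - p_fail) ** (n_motors - i)
                   for i in range(k, n_motors + 1))

    p = 1e-4  # assumed per-motor failure probability per flight
    print(p_at_least(8, 1, p))   # any failure at all: ~8e-4
    print(p_at_least(8, 2, p))   # two or more, crash-level if one
                                 # loss is tolerated: ~2.8e-7

Tolerating a single loss cuts the dangerous-failure rate by more than three orders of magnitude in this toy model, though it does nothing about common-cause failures like software bugs or power exhaustion.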

Noise

Problem number one for VTOL is noise. Helicopters are nowhere near silent. You might crave one for yourself, but there is no way you'll accept your neighbours constantly flying helicopters in and out of their backyard, next to yours, at all hours, especially not compared to the silence of the electric car.

Even if we have VTOL cars, we might still limit their operations (especially at night) to special landing yards. Your robotaxi could get you to the landing yard so it’s not as much of a burden, but using your own yard (unless you have a large estate, or live in a high-rise building with a heliport on top) is going to be difficult.

Energy

Right now, multirotor aircraft use a lot of energy to fly, and ground cars can be much more efficient. Society as a whole is seeking to greatly improve the efficiency of our transportation, not make it worse. Unless we make the flying car super efficient, it will be relegated to specialty uses where the ground car just won't work.

Fixed-wing aircraft can be more efficient. Jets are very wasteful, but lower-speed aircraft can be quite efficient once past takeoff.

Crowded skies

If personal flight becomes very popular, we would face the prospect of a sky seriously crowded with vehicles over urban spaces. Computer systems could probably handle management of the traffic, since in three dimensions you get extra room, though you would want much longer headways than cars use. In addition to being a visual blight and a noise source, there will be safety concerns. Even a tiny number of these vehicles falling out of the sky and hitting things (or people) on the ground would cause more concern than cars do, even though cars regularly leave the road and hit people. And all of this would come on top of heavy cargo-drone traffic.
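A crude capacity sketch of that extra room, with assumed headways, lane counts, and altitude layers:

    # Vehicles per hour through one urban corridor, roughly
    # (speed / headway) per lane, times lanes, times altitude layers.
    # Every number here is an assumption for illustration.
    speed_kmh = 150.0
    headway_km = 1.0    # much longer spacing than road cars, as noted above
    layers = 5          # usable altitude bands over a city, assumed
    lanes = 2           # parallel tracks per band, assumed

    per_lane_per_hour = speed_kmh / headway_km   # 150 vehicles/h per lane
    print(per_lane_per_hour * layers * lanes)    # 1500 vehicles/h total

For comparison, a single freeway lane moves on the order of 2,000 vehicles per hour, so the third dimension adds room without making it unlimited.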

The traffic management is non-trivial, but I believe it can be solved. There are still issues even after it’s solved.

Tourism

One of the places we might see radical change quickly is in tourism. If it’s cheap and easy, tourists will want to see everything from a flying car, especially one that can hover. Every amazing view, every scene, every architectural wonder, every city, will probably be best viewed from the air, or certainly desirable to view from the air as well as the ground. Every hiking trail you’ve not taken to some interesting sight will become a potential place people would like to go in their flying car.

Outside the cities, the problems of flying cars are less pressing. The flights will be short and slow. You can travel to special locations for takeoff and landing, and make noise there. The territory will in many cases be rural or parkland, with more modest crowds and nobody to fall on in the event of rare safety failures.

Public transport

Since we can't yet make a practical multirotor for a single person, talking about group vehicles is even more premature, but we already have lots of public-transit aviation today. Right now it happens at airports, and it is never used for short distances because you spend far more time going through the airport than in the air. With a robocar airport, it's possible to make a much more efficient airport even for traditional planes. It would be great to go further and imagine the “flying bus”: an automated vehicle for a small group that is less like an airplane and more like a van. Travel would be coordinated, and 10-20 single-person robocars would converge within a minute of one another next to the autonomous flying bus. Passengers would quickly get in (no security screening for something this small and fast) and within one minute be taking off down the runway.

Such a service might be better than things like high-speed rail for travel within big city regions. Because it can go from any airport to any airport, or with VTOL from any landing yard to any landing yard, such vehicles would offer superior travel times, free from congestion. If a flying bus service took you from Silicon Valley to San Francisco's ferry terminal in 15 minutes at a decent price, it would be quite popular and would displace car traffic.
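As a sanity check on that 15-minute figure, a quick sketch with assumed distance and cruise speed:

    # Sanity check for the Silicon Valley to SF ferry-terminal example.
    # Distance, speed, and overhead are rough assumptions.
    distance_km = 55.0    # roughly Palo Alto to the Embarcadero
    cruise_kmh = 250.0    # plausible small fixed-wing or lift-plus-cruise speed
    overhead_min = 3.0    # takeoff, climb, and approach, assumed

    flight_min = distance_km / cruise_kmh * 60.0 + overhead_min
    print(round(flight_min, 1))   # ~16.2 minutes, in line with the claim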

Specialty uses

If we don’t let everybody fly all the time, who will be the special cases we let fly? Will it simply be the rich who pay a high fee for the opportunity? (The fee can’t be so high as to match the cost of a helicopter today.)

  • The flying ambulance is an obvious win, though we're not yet at the level of building electric multirotors that can fly something that heavy. Taking emergency vehicles off the regular roads would also improve traffic for everyone else.
  • Some delivery will go by drone, though perhaps only the light and urgent packages.
  • There could be a lottery or other allocation, letting people fly some days, but not most.
  • Government officials will certainly want to claim they are important enough to justify this. In some cases (like VIPs so prominent that roads are closed for their motorcades) this is a win for all.
  • The police will clearly do this, as will some portion of fire crews (those not carrying heavy gear), and anybody else who uses helicopters today.
  • People who live or work in remote country locations who can make what noise they want at their home, and mostly fly over uninhabited country.
  • People populating mountainsides in crowded cities, though possibly only to transfer to a robotaxi in the flatlands.
  • People living on islands in seaside cities, though possibly only to transfer to a robotaxi on shore.
  • Flying carpooling (above and beyond the flying-bus transit described above). This requires multi-person flying cars.

Robocars:

Example: Kitty Hawk, a prototype flying car.

This is yet another piece of evidence that making a personal flying drone is certainly doable and going to happen. We even think the air-traffic-control problems can be solved.

 ------------------------------------------------------------------------------------------------------------------------------------------------------- SKY WINDOWS STREET to be come on Gen. Mac Tech break through the time window timeline SIGN IN AMNIMARJESLOW GOVERNMENT 2020 / SIGNATURE 

  
 --------------------------------------------------------------------------------------------------------------------------------------------------------