Thursday, 10 December 2020
" STATION KEEPING " Advances in satellite technology " SATCOM " AMNIMARJESLOW GOVERNMENT 17th to be relaxed, polite, pure, one hundred percent ; Repositioning , Refreshing , Reborn and Revalue added ****
As the world becomes easier, more transparent, and more dynamic, communication between people across great distances has become possible thanks to advances in electronics and sensing. One of the most important of these advances is satellite communication, known as SATCOM. SATCOM combines work in electronic theory, materials, and control instrumentation, including the development of oscilloscopes, power supplies, and network analyzers, together with machine learning, or so-called artificial intelligence. In the future, SATCOM concepts may be extended to long-range navigation and communication for spacecraft travelling far beyond our solar system, and satellite networks may one day act as guidance and relay systems that accompany such vehicles along their trajectories.
Let's look at the basic concepts of network analysis using electronic sensors, as applied in surface-mount technology (SMT) machines, which already rely on highly precise integrated communication systems.
*** SATCOM ***
_________________
All space vehicles will require some source of electrical power for operation of communication equipment, instrumentation, environment controls, and so forth. In addition, vehicles using electrical propulsion systems like ion rockets will have very heavy power requirements.
Current satellites and space probes have relatively low electrical power requirements, of the order of a few watts. Bolder and more sophisticated space missions will lead to larger power needs. For example, a live television broadcast from the Moon may require kilowatts of power. Over the distance from Earth to Mars at close approach, even a low-capacity instrumentation link might easily require hundreds of kilowatts of power. The net power needs of men in space vehicles are less clearly defined, but can probably be characterized as "large." Electrical propulsion systems will consume power at the rate of millions of watts per pound of thrust.
Power supply requirements cannot be based on average power demands alone. A very important consideration is the peak demand. For example, a radio ranging device may have an average power of only 2 watts, but it may also require a 600-watt peak. Unfortunately most foreseeable systems are severely limited in their ability to supply high drain rates; consequently they must be designed with a continuous capacity nearly equal to the peak demand.
A third important consideration is the voltage required. Voltage demand may be low for motors or high for various electronic applications. Furthermore, alternating current may be required or may be interchangeable with direct current. Transformation of voltage and/or conversion from direct to alternating current can be effected, but with a weight penalty.
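To make the sizing argument concrete, here is a small Python sketch (with purely illustrative load figures, not values from the text) that compares the average and peak demand of a few hypothetical spacecraft loads; because most space power sources cannot deliver large surges, the prime source ends up being rated near the peak figure.

# Hypothetical power-budget sketch: illustrative loads only.
loads = {
    # name: (average watts, peak watts)
    "radio ranging device": (2.0, 600.0),
    "instrumentation": (15.0, 20.0),
    "thermal control": (10.0, 10.0),
}

average_demand = sum(avg for avg, _ in loads.values())
# Summing peaks assumes worst-case simultaneous peaks, a conservative choice.
peak_demand = sum(peak for _, peak in loads.values())

print(f"Average demand: {average_demand:.0f} W")
print(f"Peak demand:    {peak_demand:.0f} W")
# Unless a battery buffer covers the short peaks, the continuous capacity of
# the prime source must be sized near the peak demand, not the average.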
BATTERIES
___________
The power supply most readily available is the battery, which converts chemical to electrical energy. The theoretical performance figures refer to cells in which all of the cell material enters completely into the electrochemical reaction. These theoretical limits are, of course, unobtainable in practice because of the necessity for separators, containers, connectors, etc. The hydrogen-oxygen (H2-O2) system refers to a fuel cell.
Hydrogen and oxygen, stored under pressure, take the place of standard electrodes in a battery reaction, and about 60 percent of the heat of combustion is available as electrical energy. The figures listed under "Currently available performance" in table 1 refer to long discharge rates (in excess of 24 hours) and normal temperatures.
Batteries as prime energy sources do not give really long lifetimes; they do not operate well at low ambient temperatures or under heavy loads. Batteries are best suited to be storage devices to supply peak loads to supplement some other prime source of energy.
Other factors to be considered with regard to chemical batteries are: (1) They are essentially low-voltage devices, a battery pack being limited to about 10 kilovolts by reliability limitations; (2) a high-vacuum environment and some forms of solar radiation may have deleterious effects; and (3) many batteries form gas during charge and have to be vented, which is in conflict with the need for hermetic sealing to eliminate loss of electrolyte in the vacuum environment of space.
SOLAR POWER
___________
Solar energy arrives in the neighborhood of the Earth at the rate of about 1.35 kilowatts per square meter. This energy can be tapped by direct conversion into electricity through the use of solar cells (solar batteries), or collected to heat a working fluid which can then be used to run some sort of engine to deliver electrical energy.
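As a rough illustration of that 1.35 kW per square meter figure, the following Python sketch estimates the collecting area a solar array would need for a given electrical load; the 10 percent conversion efficiency is an assumption chosen for illustration, not a figure from the text.

# Rough solar-array sizing sketch; efficiency value is an assumption.
SOLAR_FLUX_W_PER_M2 = 1350.0   # near-Earth solar flux quoted in the text
CELL_EFFICIENCY = 0.10         # assumed overall conversion efficiency

def array_area_m2(required_power_w):
    # Area needed so that flux * efficiency * area = required power.
    return required_power_w / (SOLAR_FLUX_W_PER_M2 * CELL_EFFICIENCY)

for power_w in (100.0, 1000.0, 5000.0):
    print(f"{power_w:6.0f} W  ->  {array_area_m2(power_w):6.1f} m^2 of cells")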
Solar cells are constructed of specially treated silicon wafers, and are very expensive to manufacture. They cost about $100 per watt of power capacity.
It has been estimated that solar cells should survive for many years in a solar environment. Surface cooling may be necessary for use of solar cells.
For satellite applications of solar cells there is a need to store energy for use during periods of darkness. This storage is the sort of application for which batteries are appropriate. As a rough indication of total weight, a combined installation of solar cells and storage batteries can be expected to weigh about 700 pounds per kilowatt of capacity.
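The 700-pounds-per-kilowatt figure lends itself to a quick back-of-the-envelope estimate; this short sketch simply scales that quoted figure (converting to kilograms) for a few hypothetical power levels.

# Mass estimate from the quoted 700 lb per kilowatt of combined
# solar-cell-plus-battery capacity; power levels are illustrative.
LB_PER_KW = 700.0
LB_TO_KG = 0.4536

def installation_mass_kg(power_kw):
    return power_kw * LB_PER_KW * LB_TO_KG

for kw in (0.1, 1.0, 10.0):
    print(f"{kw:5.1f} kW  ->  about {installation_mass_kg(kw):7.0f} kg")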
There are a number of possible future improvements in solar cells. Of all the energy striking a solar cell, part is effectively used in producing electrical energy, part is reflected (about 50 percent), and part is actually transmitted through the cell, particularly in the lower wavelength end of the spectrum. Therefore one improvement would be to reduce the reflectivity of the cell; another would be to make the cells thinner. Another possibility might be to actually concentrate solar energy through a lightweight plastic lens. It would be desirable to develop cells for high-temperature operation. Temperature control problems for solar cells have been investigated in the U.S.S.R. for satellite applications.
The second possibility for utilizing solar energy is through heating of a working fluid. A half-silvered inflated Mylar plastic sphere (about 1 mil thick), 8.5 feet in radius, might serve as a collector, at a weight cost of only about 8 pounds per 30 kilowatts of collected thermal energy. An installation of this size would require roughly 100 square feet of radiator to reject waste heat (assuming a 10-percent overall conversion efficiency). This kind of system is roughly similar to a solar cell system as to weight for a given power capacity. Again, meteorite effects present an unknown factor, in this case with respect to puncture of the collecting sphere and/or the radiator. The greater the actual meteorite hazard turns out to be, the thicker and consequently the heavier the radiator will have to become. The development potential of solar energy sources seems good.
**** Network and Protocol ****
_______________________________
A network protocol analyzer is a tool used to monitor data traffic and analyze captured signals as they travel across communication channels. Vector network analyzers, by contrast, are used to test component specifications and verify design simulations to make sure systems and their components work properly together. Today, the term "network analyzer" is used to describe tools for a variety of "networks." For network performance measurement, throughput is defined as the amount of data or number of data packets that can be delivered in a pre-defined time frame. Bandwidth, usually measured in bits per second, is a characterization of the amount of data that can be transferred over a given time period.
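The distinction between throughput and bandwidth can be shown with a few lines of Python; the link speed and transfer size below are made-up values used only to illustrate the definitions.

# Throughput vs. bandwidth: illustrative numbers only.
def throughput_bps(bytes_delivered, seconds):
    # Data actually delivered per unit time, in bits per second.
    return bytes_delivered * 8 / seconds

LINK_BANDWIDTH_BPS = 100e6                    # assumed 100 Mbit/s link capacity
measured = throughput_bps(bytes_delivered=600_000_000, seconds=60.0)

print(f"Throughput:  {measured / 1e6:.1f} Mbit/s")
print(f"Utilisation: {100 * measured / LINK_BANDWIDTH_BPS:.0f}% of nominal bandwidth")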
Electronic distribution of information is becoming increasingly important, and the complexity of the data exchanged between systems is increasing at a rapid pace. Computer networks today carry all kinds of data, voice, and video traffic. Network applications require full availability without interruption or congestion. As the information systems in a company grow and develop, more networking devices are deployed, resulting in large physical ranges covered by the networked system. It is crucial that this networked system operates as effectively as possible, because downtime is both costly and an inefficient use of available resources. Network and/or protocol analysis is a range of techniques that network engineers and technicians use to study the properties of networks, including connectivity, capacity, and performance. Network analysis can be used to estimate the capacity of an existing network, look at performance characteristics, or plan for future applications and upgrades.
A network analyzer is a device that gives you a very good idea of what is happening on a network by allowing you to look at the actual data that travels over it, packet by packet. A typical network analyzer understands many protocols, which enables it to display conversations taking place between hosts on a network.
Network analyzers typically provide the following capabilities:
•Capture and decode data on a network
•Analyze network activity involving specific protocols
•Generate and display statistics about the network activity
•Perform pattern analysis of the network activity.
Packet capture and protocol decoding is sometimes referred to as "sniffing." This term came about because of the network analyzer's ability to "sniff" traffic on the network and capture it.
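As a minimal sketch of such sniffing, the Python snippet below uses the third-party scapy library (an assumption: it must be installed, and capturing normally requires administrative privileges) to capture a handful of TCP packets and print a one-line summary of each, much like the summary pane described further on.

# Minimal packet-capture sketch using scapy (pip install scapy).
from scapy.all import sniff

def show(packet):
    # summary() gives a one-line, highest-layer description of the packet.
    print(packet.summary())

# Capture 10 packets matching a BPF filter and decode them on the fly.
sniff(filter="tcp", count=10, prn=show)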
A network analyzer is a troubleshooting tool that is used to find and solve network communication problems, plan network capacity, and perform network optimization. Network analyzers can capture all the traffic that is going across your network and interpret the captured traffic to decode the different protocols in use. The decoded data is shown in a format that makes it easy to understand. A network analyzer can also capture only the traffic that matches the selection criteria defined by a filter. This allows a technician to capture only traffic that is relevant to the problem at hand. A typical network analyzer displays the decoded data in three panes:
▪Summary Displays a one-line summary of the highest-layer protocol contained in the frame, as well as the time of the capture and the source and destination addresses.
▪Detail Provides details on all the layers inside the frame.
▪Hex Displays the raw captured data in hexadecimal format.
A network professional can easily use this type of interface to analyze this data.
Network analyzers further provide the ability to create display filters so that a network professional can quickly find what he or she is looking for.
Advanced network analyzers provide pattern analysis capabilities. This feature allows the network analyzer to go through thousands of packets and identify problems. The network analyzer can also provide possible causes for these problems and hints on how to resolve them.
The key to successful troubleshooting is knowing how the network functions under normal conditions. This knowledge allows a network professional to quickly recognize abnormal operations. Using a strategy for network troubleshooting, the problem can be approached methodically and resolved with minimum disruption to customers. Unfortunately, sometimes even network professionals with years of experience have not mastered the basic concept of troubleshooting; a few minutes spent evaluating the symptoms can save hours of time lost chasing the wrong problem.
A good approach to problem resolution involves these steps:
1.Recognizing symptoms and defining the problem
2.Isolating and understanding the problem
3.Identifying and testing the cause of the problem
4.Solving the problem
5.Verifying that the problem has been resolved
A very important part of troubleshooting is performing research. The Internet can be a valuable source of information on a variety of network topics and can provide access to tutorials, discussion forums, and reference materials. As a part of your troubleshooting methodology, you can use the Internet as a tool to perform searches on errors or symptoms that you see on your network.
The first step toward trying to solve a network issue is to recognize the symptoms. You might hear about a problem in one of many ways: an end user might complain that he or she is experiencing performance or connectivity issues, or a network management station might notify you about it. Compare the problem to normal operation. Determine whether something was changed on the network just before the problem started. In addition, check to make sure you are not troubleshooting something that has never worked before. Write down a clear definition of the problem.
Once the problem has been confirmed and the symptoms identified, the next step is to isolate and understand the problem. When the symptoms occur, it is your responsibility to gather data for analysis and to narrow down the location of the problem. The best approach to reducing the problem's scope is to use divide-and-conquer methods. Try to figure out if the problem is related to a segment of the network or a single station. Determine if the problem can be duplicated elsewhere on the network.
The third step in problem resolution is to identify and test the cause of the problem and test your hypothesis. You can use network analyzers and other tools to analyze the traffic. After you develop a theory about the cause of the problem, you must test it.
Once a resolution to the problem has been determined, it should be put in place. The solution might involve upgrading hardware or software. It may call for increasing LAN segmentation or upgrading hardware to increase capacity. The final step is to ensure that the entire problem has been resolved by having the end customer test for the problem. Sometimes a fix for one problem creates a new problem. At other times, the problem you repaired turns out to be a symptom of a deeper underlying problem. If the problem is indeed resolved, you should document the steps you took to resolve it. If, however, the problem still exists, the problem-solving process must be repeated from the beginning.
When a network analyzer reads data from the network, it needs to know how to interpret what it is seeing and display the output in an easy-to-read format. This is known as protocol decoding. The number of protocols a sniffer can read and display often determines its strength; most commercial sniffers can support several hundred protocols. Ethereal is very competitive in this area with its current support of over 480 protocols. New protocols are constantly being added by various contributors to the Ethereal project. Protocol decodes, also known as dissectors, can be added directly into the code or included as plugins.
How is network quality measured?
Some common metrics used to measure network performance include latency, packet loss indicators, jitter, bandwidth, and throughput.
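The short sketch below shows, with made-up ping-style samples, how three of these metrics can be derived from a series of round-trip-time measurements: latency as the mean RTT, jitter as the mean variation between consecutive RTTs, and packet loss as the fraction of probes that received no reply.

# Deriving latency, jitter, and packet loss from hypothetical RTT samples.
rtts_ms = [31.2, 29.8, None, 30.5, 34.1, 30.0, None, 29.9]   # None = lost probe

received = [r for r in rtts_ms if r is not None]
loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
latency_ms = sum(received) / len(received)
jitter_ms = sum(abs(a - b) for a, b in zip(received, received[1:])) / (len(received) - 1)

print(f"Latency (mean RTT): {latency_ms:.1f} ms")
print(f"Jitter:             {jitter_ms:.1f} ms")
print(f"Packet loss:        {loss_pct:.0f} %")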
------------------------------------------------------------------------------------------------------------------------------
***** Oscilloscope *****
__________________________
Oscilloscopes are used in the sciences, medicine, engineering, automotive and the telecommunications industry. General-purpose instruments are used for maintenance of electronic equipment and laboratory work. Special-purpose oscilloscopes may be used to analyze an automotive ignition system or to display the waveform of the heartbeat as an electrocardiogram, for instance.
An oscilloscope, previously called an oscillograph, and informally known as a scope or o-scope, CRO (for cathode-ray oscilloscope), or DSO (for the more modern digital storage oscilloscope), is a type of electronic test instrument that graphically displays varying signal voltages, usually as a calibrated two-dimensional plot of one or more signals as a function of time. The displayed waveform can then be analyzed for properties such as amplitude, frequency, rise time, time interval, distortion, and others. Originally, calculation of these values required manually measuring the waveform against the scales built into the screen of the instrument. Modern digital instruments may calculate and display these properties directly.
The oscilloscope can be adjusted so that repetitive signals can be observed as a persistent waveform on the screen. A storage oscilloscope can capture a single event and display it continuously, so the user can observe events that would otherwise appear too briefly to see directly.
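To illustrate the kind of automatic measurement a digital storage oscilloscope performs, the sketch below generates a synthetic 10 kHz test tone with NumPy and estimates its amplitude and frequency from the samples; a real instrument would apply the same idea to its ADC data, and all parameters here are assumptions.

# Estimating amplitude and frequency from sampled data (synthetic signal).
import numpy as np

fs = 1_000_000.0                                  # assumed sample rate: 1 MS/s
t = np.arange(0, 0.01, 1 / fs)                    # 10 ms capture window
signal = 2.5 * np.sin(2 * np.pi * 10_000 * t)     # 10 kHz, 2.5 V peak test tone

amplitude_v = (signal.max() - signal.min()) / 2   # peak amplitude
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
frequency_hz = freqs[np.argmax(spectrum[1:]) + 1] # dominant non-DC bin

print(f"Amplitude: {amplitude_v:.2f} V")
print(f"Frequency: {frequency_hz / 1e3:.1f} kHz")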
Most modern oscilloscopes are lightweight, portable instruments compact enough for a single person to carry. In addition to portable units, the market offers a number of miniature battery-powered instruments for field service applications. Laboratory grade oscilloscopes, especially older units that use vacuum tubes, are generally bench-top devices or are mounted on dedicated carts. Special-purpose oscilloscopes may be rack-mounted or permanently mounted into a custom instrument housing.
Automotive use
First appearing in the 1970s for ignition system analysis, automotive oscilloscopes are becoming an important workshop tool for testing sensors and output signals on electronic engine management systems, braking and stability systems. Some oscilloscopes can trigger and decode serial bus messages, such as the CAN bus commonly used in automotive applications.
---------------------------------------------------------------------------------------------------------------------------------
****** SATCOM ADVANCES FOR STATION KEEPING ******
___________________________________________________
Advances in satellite technology have given rise to a healthy satellite services sector that provides various services to broadcasters, Internet service providers (ISPs), governments, the military, and other sectors. There are three types of communication services that satellites provide: telecommunications, broadcasting, and data communications. Telecommunication services include telephone calls and services provided to telephone companies, as well as wireless, mobile, and cellular network providers.
Broadcasting services include radio and television delivered directly to the consumer and mobile broadcasting services. DTH, or satellite television, services (such as the DirecTV and DISH Network services in the United States) are received directly by households. Cable and network programming is delivered to local stations and affiliates largely via satellite. Satellites also play an important role in delivering programming to cell phones and other mobile devices, such as personal digital assistants and laptops.
Satellite communications use the very high-frequency range of 1–50 gigahertz (GHz; 1 gigahertz = 1,000,000,000 hertz) to transmit and receive signals. The frequency ranges or bands are identified by letters: (in order from low to high frequency) L-, S-, C-, X-, Ku-, Ka-, and V-bands. Signals in the lower range (L-, S-, and C-bands) of the satellite frequency spectrum are transmitted with low power, and thus larger antennas are needed to receive these signals. Signals in the higher end (X-, Ku-, Ka-, and V-bands) of this spectrum have more power; therefore, dishes as small as 45 cm (18 inches) in diameter can receive them. This makes the Ku-band and Ka-band spectrum ideal for direct-to-home (DTH) broadcasting, broadband data communications, and mobile telephony and data applications.
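A simple lookup makes the band lettering concrete; the band edges below are approximate (they vary a little between references, and the K-band between Ku and Ka is omitted here because the text does not list it).

# Approximate satellite frequency bands, following the lettering in the text.
BANDS = [
    ("L", 1, 2), ("S", 2, 4), ("C", 4, 8), ("X", 8, 12),
    ("Ku", 12, 18), ("Ka", 26, 40), ("V", 40, 75),   # edges in GHz, approximate
]

def band_of(frequency_ghz):
    for name, lo, hi in BANDS:
        if lo <= frequency_ghz < hi:
            return name + "-band"
    return "outside the listed bands"

for f_ghz in (1.5, 6.0, 14.0, 30.0):
    print(f"{f_ghz:5.1f} GHz -> {band_of(f_ghz)}")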
The International Telecommunication Union (ITU), a specialized agency of the United Nations, regulates satellite communications. The ITU, which is based in Geneva, Switzerland, receives and approves applications for use of orbital slots for satellites. Every two to four years the ITU convenes the World Radiocommunication Conference, which is responsible for assigning frequencies to various applications in various regions of the world. Each country’s telecommunications regulatory agency enforces these regulations and awards licenses to users of various frequencies. In the United States the regulatory body that governs frequency allocation and licensing is the Federal Communications Commission.
Satellites operate in three different orbits: low Earth orbit (LEO), medium Earth orbit (MEO), and geostationary or geosynchronous orbit (GEO). LEO satellites are positioned at an altitude between 160 km and 1,600 km (100 and 1,000 miles) above Earth. MEO satellites operate from 10,000 to 20,000 km (6,300 to 12,500 miles) from Earth. (Satellites do not operate between LEO and MEO because of the inhospitable environment for electronic components in that area, which is caused by the Van Allen radiation belt.) GEO satellites are positioned 35,786 km (22,236 miles) above Earth, where they complete one orbit in 24 hours and thus remain fixed over one spot. It takes only three GEO satellites to provide global coverage, while it takes 20 or more satellites to cover the entire Earth from LEO and 10 or more in MEO. In addition, communicating with satellites in LEO and MEO requires tracking antennas on the ground to ensure seamless connection between satellites.
A signal that is bounced off a GEO satellite takes approximately 0.22 second to travel at the speed of light from Earth to the satellite and back. This delay poses some problems for applications such as voice services and mobile telephony. Therefore, most mobile and voice services usually use LEO or MEO satellites to avoid the signal delays resulting from the inherent latency in GEO satellites. GEO satellites are usually used for broadcasting and data applications because of the larger area on the ground that they can cover.
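The figures above can be sanity-checked with a few lines of Python: Kepler's third law gives the orbital period at GEO altitude, and dividing twice the altitude by the speed of light gives the minimum round-trip delay (assuming, as a simplification, a ground station directly beneath the satellite; slant paths and processing make the real delay somewhat longer).

# Back-of-the-envelope GEO period and round-trip delay.
import math

C = 299_792_458.0          # speed of light, m/s
MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m
GEO_ALT = 35_786_000.0     # GEO altitude quoted in the text, m

round_trip_s = 2 * GEO_ALT / C
period_s = 2 * math.pi * math.sqrt((R_EARTH + GEO_ALT) ** 3 / MU)

print(f"Minimum GEO round-trip delay: {round_trip_s:.2f} s")      # roughly a quarter second
print(f"GEO orbital period:           {period_s / 3600:.2f} h")   # close to 24 hours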
Satellites operate in extreme temperatures from −150 °C (−238 °F) to 150 °C (300 °F) and may be subject to radiation in space. Satellite components that can be exposed to radiation are shielded with aluminium and other radiation-resistant material. A satellite’s thermal system protects its sensitive electronic and mechanical components and maintains it in its optimum functioning temperature to ensure its continuous operation. A satellite’s thermal system also protects sensitive satellite components from the extreme changes in temperature by activation of cooling mechanisms when it gets too hot or heating systems when it gets too cold.
The tracking telemetry and control (TT&C) system of a satellite is a two-way communication link between the satellite and TT&C on the ground. This allows a ground station to track a satellite’s position and control the satellite’s propulsion, thermal, and other systems. It can also monitor the temperature, electrical voltages, and other important parameters of a satellite.
The main components of a satellite consist of the communications system, which includes the antennas and transponders that receive and retransmit signals, the power system, which includes the solar panels that provide power, and the propulsion system, which includes the rockets that propel the satellite. A satellite needs its own propulsion system to get itself to the right orbital location and to make occasional corrections to that position. A satellite in geostationary orbit can deviate up to a degree every year from north to south or east to west of its location because of the gravitational pull of the Moon and Sun. A satellite has thrusters that are fired occasionally to make adjustments in its position. The maintenance of a satellite’s orbital position is called “station keeping,” and the corrections made by using the satellite’s thrusters are called “attitude control.” A satellite’s life span is determined by the amount of fuel it has to power these thrusters. Once the fuel runs out, the satellite eventually drifts into space and out of operation, becoming space debris.
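Because propellant for those thruster firings limits a satellite's life, a rough life-span estimate can be made with the Tsiolkovsky rocket equation; every number in the sketch below (masses, specific impulse, annual delta-v) is an illustrative assumption, not data from the text.

# Hedged station-keeping life-span sketch; all inputs are assumptions.
import math

DRY_MASS_KG = 2000.0       # assumed spacecraft mass without propellant
PROPELLANT_KG = 300.0      # assumed station-keeping propellant load
ISP_S = 300.0              # assumed thruster specific impulse
DV_PER_YEAR = 50.0         # assumed annual station-keeping delta-v, m/s
G0 = 9.80665

# Tsiolkovsky rocket equation: total delta-v available from the propellant.
total_dv = ISP_S * G0 * math.log((DRY_MASS_KG + PROPELLANT_KG) / DRY_MASS_KG)

print(f"Total station-keeping delta-v: {total_dv:.0f} m/s")
print(f"Approximate service life:      {total_dv / DV_PER_YEAR:.1f} years")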
A satellite in orbit has to operate continuously over its entire life span. It needs internal power to be able to operate its electronic systems and communications payload. The main source of power is sunlight, which is harnessed by the satellite’s solar panels. A satellite also has batteries on board to provide power when the Sun is blocked by Earth. The batteries are recharged by the excess current generated by the solar panels when there is sunlight.
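A similarly rough sketch shows how the eclipse requirement drives battery sizing; the bus power, worst-case eclipse duration, and allowable depth of discharge used here are assumptions for illustration only.

# Illustrative eclipse battery sizing; all figures are assumptions.
BUS_POWER_W = 5000.0          # load the batteries must carry during eclipse
ECLIPSE_HOURS = 70 / 60       # assumed worst-case GEO eclipse, about 70 minutes
DEPTH_OF_DISCHARGE = 0.8      # assumed allowable depth of discharge

required_wh = BUS_POWER_W * ECLIPSE_HOURS / DEPTH_OF_DISCHARGE
print(f"Required battery capacity: about {required_wh:.0f} Wh")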
Data communications involve the transfer of data from one point to another. Corporations and organizations that require financial and other information to be exchanged between their various locations use satellites to facilitate the transfer of data through the use of very small-aperture terminal (VSAT) networks. With the growth of the Internet, a significant amount of Internet traffic goes through satellites, making ISPs one of the largest customers for satellite services.
Satellite communications technology is often used during natural disasters and emergencies when land-based communication services are down. Mobile satellite equipment can be deployed to disaster areas to provide emergency communication services.
One major technical disadvantage of satellites, particularly those in geostationary orbit, is an inherent delay in transmission. While there are ways to compensate for this delay, it makes some applications that require real-time transmission and feedback, such as voice communications, not ideal for satellites.
Satellites face competition from other media such as fibre optics, cable, and other land-based delivery systems such as microwaves and even power lines. The main advantage of satellites is that they can distribute signals from one point to many locations. As such, satellite technology is ideal for “point-to-multipoint” communications such as broadcasting. Satellite communication does not require massive investments on the ground—making it ideal for underserved and isolated areas with dispersed populations.
Satellites and other delivery mechanisms such as fibre optics, cable, and other terrestrial networks are not mutually exclusive. A combination of various delivery mechanisms may be needed, which has given rise to various hybrid solutions where satellites can be one of the links in the chain in combination with other media. Ground service providers called “teleports” have the capability to receive and transmit signals from satellites and also provide connectivity with other terrestrial networks.
The future of satellite communication
In a relatively short span of time, satellite technology has developed from the experimental (Sputnik in 1957) to the sophisticated and powerful. Mega-constellations of thousands of satellites designed to bring Internet access to anywhere on Earth are in development. Future communication satellites will have more onboard processing capabilities, more power, and larger-aperture antennas that will enable satellites to handle more bandwidth. Further improvements in satellites’ propulsion and power systems will increase their service life to 20–30 years from the current 10–15 years. In addition, other technical innovations such as low-cost reusable launch vehicles are in development. With increasing video, voice, and data traffic requiring larger amounts of bandwidth, there is no dearth of emerging applications that will drive demand for the satellite services in the years to come. The demand for more bandwidth, coupled with the continuing innovation and development of satellite technology, will ensure the long-term viability of the commercial satellite industry well into the 21st century.
****** STATION KEEPING *******
Station keeping may refer to: Orbital station-keeping, manoeuvres used to keep a spacecraft in an assigned orbit. Nautical stationkeeping, maintaining a seagoing vessel in a position relative to other vessels or a fixed point.
The orbit control process required to maintain a stationary orbit is called station-keeping. Station-keeping is necessary to offset the effect of perturbations, principally due to the Earth's triaxiality and lunar and solar attractions, which tend to precess the orbit normal and alter the orbital energy.
Current station keeping equipment (SKE) relies on radio waves for formation flying. A significant limitation of conventional SKE signals is that they are detectable at long distances by adversaries.
In astrodynamics, orbital station-keeping refers to the orbital maneuvers, made by thruster burns, that are needed to keep a spacecraft in a particular assigned orbit.
For many Earth satellites, the effects of the non-Keplerian forces, i.e. the deviations of the gravitational force of the Earth from that of a homogeneous sphere, gravitational forces from Sun/Moon, solar radiation pressure and air drag, must be counteracted.
The deviation of Earth's gravity field from that of a homogeneous sphere and gravitational forces from the Sun and Moon will in general perturb the orbital plane. For a sun-synchronous orbit, the precession of the orbital plane caused by the oblateness of the Earth is a desirable feature that is part of mission design but the inclination change caused by the gravitational forces of the Sun and Moon is undesirable. For geostationary spacecraft, the inclination change caused by the gravitational forces of the Sun and Moon must be counteracted by a rather large expense of fuel, as the inclination should be kept sufficiently small for the spacecraft to be tracked by non-steerable antennae.
For spacecraft in a low orbit, the effects of atmospheric drag must often be compensated for, in many cases to avoid re-entry; for missions requiring the orbit to be accurately synchronized with the Earth's rotation, this is also necessary to prevent a shortening of the orbital period.
Solar radiation pressure will in general perturb the eccentricity (i.e. the eccentricity vector); see Orbital perturbation analysis (spacecraft). For some missions, this must be actively counteracted with manoeuvres. For geostationary spacecraft, the eccentricity must be kept sufficiently small for the spacecraft to be tracked with a non-steerable antenna. For Earth observation spacecraft, for which a very repetitive orbit with a fixed ground track is desirable, the eccentricity vector should also be kept as fixed as possible. A large part of this compensation can be done by using a frozen orbit design, but thrusters are often needed for fine control manoeuvres.
For spacecraft in a halo orbit around a Lagrange point, station-keeping is even more fundamental, as such an orbit is unstable; without active control with thruster burns, the smallest deviation in position or velocity would result in the spacecraft leaving the orbit completely.
Station-keeping at libration points
Orbits of spacecraft are also possible around Lagrange points—also referred to as libration points—gravity wells that exist at five points in relation to two larger solar system bodies. For example, there are five of these points in the Sun-Earth system, five in the Earth-Moon system, and so on. Small spacecraft may orbit around these gravity wells with a minimum of propellant required for station-keeping purposes. Two orbits that have been used for such purposes include halo and Lissajous orbits.
Orbits around libration points are dynamically unstable, meaning small departures from equilibrium grow exponentially over time. As a result, spacecraft in libration point orbits must use propulsion systems to perform orbital station-keeping.
The attitude & orbit control system (AOCS) controls the attitude and position of a complete space vehicle or satellite. Based on this function, the spacecraft orients its solar generators, thermal radiators, thrusters and particularly its payload units, optical instruments and antennas.
---------------------------------------------------------------------------------------------------------------------------------
*********** We Look at FUTURE TECHNOLOGY *************
++ SATELLITE COMMUNICATION AND STATION KEEPING FOR ELECTRONIC SENSING, CONTROL, AND MACHINE LEARNING ++
________________________________________________________________________________________________________________________________