MARIA PREFER in the discovery and development of electronic hardware systems, closing software, and telecommunications that are integrated into the earth's space and time, and in the discovery of material beyond the earth, namely on other planets: to fill in the components and identification of electronic systems beyond the atomic structures found on earth, so that they can become components of an integrated function in the synchronization of earth and extraterrestrial space-time. AMNIMARJESLOW GOVERNMENT 91220017 (Japanese: "The world of electronics becomes a functional component integrated into the synchronization of terrestrial and extraterrestrial space-time.") 02096010014 LJBUS__TMWU TRIP X O X O Center Hole__ Thanks to Lord Jesus. About: The Lord is my strength and my psalm, because the world does not see Him and does not know Him who is the Father in heaven. If you were of the world, the world would love you as its own; but because you are not of the world, and I have chosen and loved you, love is a response to love__ Gen. Mac Tech Zone: MARIA PREFER in the development of electronic hardware contacts in integrated electronic telecommunications, not as a component but as a structured function.
MARIA PREFER in the development of electronic hardware contacts in integrated electronic telecommunications, not as a component but as a structured function in an IN system. The development of electronics began with research and discovery:
1. The discovery of the conductor cable as a barrier and current-carrying component for electrons.
2. The discovery of magnetic theory, yielding the basic and most important components in electronic circuits, especially for electric power generation and telecommunications equipment.
3. The discovery of the theory of electric fields and electric currents in transmission and delivery.
4. The discovery of semiconductor materials.
5. Diodes and transistors produced in the form of tubes.
6. Diodes and transistors produced in the form of silicon and germanium.
7. The theories and concepts of EINSTEIN applied to hardware systems and electronic closing software. EINSTEIN & AMNIMARJESLOW
( Gen. Mac Tech Zone Electronic Contact Inside and Outside Earth )
A basic telecommunication system
consists of three primary units that are always present in some form: a
transmitter that takes information and converts it to a signal; a
transmission medium that carries the signal; and a receiver that takes
the signal from the channel and converts it back into usable information.
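The three units above can be sketched as a pipeline in code. This is an illustrative model only: the function names and the noise-free channel are simplifications, not part of any real transmission standard.

```python
# Transmitter -> channel -> receiver, modelled as three small functions.

def transmitter(message: str) -> list[int]:
    """Convert text into a bit stream (the 'signal')."""
    bits = []
    for byte in message.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def channel(signal: list[int]) -> list[int]:
    """An ideal, noise-free transmission medium."""
    return list(signal)

def receiver(signal: list[int]) -> str:
    """Convert the bit stream back into usable information."""
    data = bytearray()
    for i in range(0, len(signal), 8):
        byte = 0
        for bit in signal[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode("utf-8")

print(receiver(channel(transmitter("hello"))))  # hello
```

A real channel would add noise and attenuation between the two ends; here the round trip simply returns the original message.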
Examples of telecommunications
systems are the telephone network, the radio broadcasting system,
computer networks and the Internet. The nodes in the system are the devices we use to communicate with, such as a telephone or a computer.
Telecommunications refers to the exchange of information by electronic and electrical means over a significant distance. Telecommunications devices include telephones, telegraph, radio, microwave communication arrangements, fiber optics, satellites and the Internet. Important telecommunications
technologies include the telegraph, telephone, radio, television,
video telephony, satellites, closed computer networks and the public
internet.

A network consists of two or more computers that are linked in order to share resources (such as printers and CDs), exchange files, or allow electronic communications. Two very common types of networks are the Local Area Network (LAN) and the Wide Area Network (WAN).

Some of the more notable and familiar examples of optical telecommunication systems include navigation lights, flares, semaphore communication and smoke signals. Fiber optics and infrared sensors are also types of optical telecommunication.
Telecommunication
is the transmission of signs, signals, messages, words, writings,
images and sounds or information of any nature by wire, radio, optical
or other electromagnetic systems. Telecommunication occurs when the exchange of information between communication participants includes the use of technology.
Communication device examples
Bluetooth devices.
Infrared devices.
Modem (over phone line)
Network card (using Ethernet)
Smartphone.
Wi-Fi devices (using a Wi-Fi router)
The 7 characteristics of effective communication:
Completeness. Effective communications are complete, i.e. the receiver gets all the information he needs to process the message and take action.
Conciseness. Conciseness is about keeping your message to a point.
Consideration.
Concreteness.
Courtesy.
Clearness.
Correctness.
What type of telecommunication hardware allows you to access the Web? A network interface card, or NIC, is a piece of hardware that allows an individual computer to physically connect to a network. A NIC contains the electronic circuitry required for a wired connection (Ethernet) or a wireless connection (Wi-Fi).

Communication and telecommunication contrasted: telecommunication is (uncountable) the science and technology of the communication of messages over a distance using electric, electronic or electromagnetic impulses, while communication is the act or fact of communicating anything; transmission.
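As an aside on the NIC mentioned above: the operating system exposes each installed network interface under a name, and Python's standard library can enumerate them. Availability and naming vary by platform; names like "lo" or "eth0" are typical Linux examples, not guaranteed values.

```python
# List the network interfaces (NICs) the operating system knows about.
import socket

for index, name in socket.if_nameindex():
    print(index, name)  # e.g. "1 lo" for the loopback interface on Linux
```

On most systems at least the loopback interface is present, so the list is never empty.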
Communication is used in families, amongst friends, in schools, and in government. The advancement of technology
has helped to advance the ways in which we communicate with each other.
Cell phones, social networking websites, email, and faxes are a few
examples of electronic communication devices.
"The telephone network is made of three major components: local loops, trunks, and switching offices."
A basic telecommunication system consists of three elements:
A transmitter that takes information and converts it to a signal;
A transmission medium that carries the signal; and
A receiver that receives the signal and converts it back into usable information.
The role and function of telecommunication is to provide an exchange of communication or information at a distance between people, satellites or computers.
Telecommunications equipment refers to hardware used mainly for telecommunications
such as transmission lines, multiplexers and base transceiver stations.
It encompasses different types of communication technologies including
telephones, radios and even computers.
Telecommunication companies collect massive amounts of data
from call detail records, mobile phone usage, network equipment, server
logs, billing, and social networks, providing lots of information about
their customers and network, but how can telecom companies use this data to improve their business?
Switching is the method used to establish connections between nodes within a network. Once a connection has been made, information can be sent. Telephone switching usually refers to the switching of voice channels; subscriber loops connect to the local switch in their area.

Some communications are face to face, but others use some type of technology. Telecommunication is communication at a distance using electrical signals or electromagnetic waves. Examples of telecommunications systems are the telephone network, the radio broadcasting system, computer networks and the Internet.

Electronics and Communication and Electronics and Telecommunication are two different branches of engineering. They are largely similar to each other; Electronics and Communication is the discipline more often preferred by students than Electronics and Telecommunication.
Information and communications technology (ICT) is an extensional term for information technology (IT) that stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals) and computers, as well as necessary enterprise software, middleware, and storage.
What are the systems of communication?
In telecommunication, a communications system or communication system is a collection of individual communications networks, transmission systems, relay stations, tributary stations, and data terminal equipment (DTE) usually capable of interconnection and interoperation to form an integrated whole.
Types of Communication Technology. Technology has enabled a plethora of ways for humans to communicate
with each other and broadcast information to vast audiences. Early
inventions like radio and the telephone have evolved into vast,
worldwide networks of undersea cables and satellites.
New technological trends accompanying microelectronics include diminution, digitalization, computerization, globalization of communication, instantization, customization, automation, robotization, and leisurization. Each of these has a profound and extensive effect on both work and social relations.

ICT and telecommunication contrasted: ICT refers to technologies that provide access to information through telecommunications. Data communications refers to the transmission of this digital data between two or more computers, and a computer network or data network is a telecommunications network that allows computers to exchange data.
The simplest form of telecommunications
takes place between two stations, but it is common for multiple
transmitting and receiving stations to exchange data among themselves.
Such an arrangement is called a telecommunications network. The internet is the largest example of a telecommunications network.
What types of switching are used in telecommunications? There are basically three switching methods. Of the three, circuit switching and packet switching are commonly used, while message switching has largely been phased out of general communication procedures but is still used in some networking applications.
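The contrast between circuit and packet switching can be sketched in a few lines: circuit switching reserves one path that carries the whole message, while packet switching cuts the message into independently carried chunks that are reassembled at the far end. The function names and the 4-character packet size below are illustrative simplifications, not a real protocol.

```python
# Circuit switching: one reserved channel carries the whole message.
def circuit_switch(message: str) -> list[str]:
    return [message]

# Packet switching: the message is cut into packets of at most `mtu`
# characters, each of which could travel independently through the network.
def packet_switch(message: str, mtu: int = 4) -> list[str]:
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

# The receiver reassembles the packets into the original message.
def reassemble(packets: list[str]) -> str:
    return "".join(packets)

print(packet_switch("HELLOWORLD"))              # ['HELL', 'OWOR', 'LD']
print(reassemble(packet_switch("HELLOWORLD")))  # HELLOWORLD
```

Packet switching shares the network's links among many conversations, which is why it won out for data traffic.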
What is information and communications technology? Examples are: software applications and operating systems; web-based information and applications such as distance learning; telephones and other telecommunications products; video equipment and multimedia products that may be distributed on videotapes, CDs, DVDs, email, or the World Wide Web; and office products.
In 2019, there is going to be a convergence of Artificial Intelligence, Machine Learning, and Deep Learning in business applications. As AI and learning technologies work together to reach better results, AI will have greater accuracy at all levels.

What are the three types of metadata? NISO distinguishes among three types of metadata: descriptive, structural, and administrative. Descriptive metadata is typically used for discovery and identification, as information to search for and locate an object, such as title, author, subjects, keywords, publisher.
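NISO's descriptive metadata can be pictured as a small record of searchable fields. The record and the lookup function below are invented for illustration; they are not part of any NISO tooling.

```python
# A descriptive-metadata record: fields used to discover and identify an
# object (title, author, subjects, keywords, publisher).
book = {
    "title": "Introduction to Telecommunications",
    "author": "A. Example",
    "subjects": ["telecommunication", "networks"],
    "keywords": ["switching", "fiber"],
    "publisher": "Example Press",
}

def matches(record: dict, term: str) -> bool:
    """Discovery: does any descriptive field mention the search term?"""
    term = term.lower()
    for value in record.values():
        items = value if isinstance(value, list) else [value]
        if any(term in item.lower() for item in items):
            return True
    return False

print(matches(book, "fiber"))  # True
```

Structural metadata would instead describe how the object's parts fit together (e.g. chapter order), and administrative metadata how it is managed (e.g. creation date, rights).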
Top 10 Trending Technologies
Artificial Intelligence.
Blockchain.
Augmented Reality and Virtual Reality.
Cloud Computing.
Angular and React.
DevOps.
Internet of Things (IoT)
Intelligent Apps (I – Apps)
Modern technology is simply an advancement of old technology. The impact of technology in modern life is immeasurable; we use technology in different ways, and sometimes the way we implement various technologies does more damage than good. We use technology on a daily basis to accomplish specific tasks or interests.

What are examples of switching protocols? Examples of link-layer protocols include Ethernet frame switching (connectionless), ATM cell switching (connection-oriented), and multiprotocol label switching (MPLS; connection-oriented).
Here are the top five technology trends you need to know to work in any industry.
Internet of Things (IoT). One of the biggest tech trends to emerge in recent years is the Internet of Things.
Machine learning.
Virtual reality (VR).
Touch commerce.
Cognitive technology.
10 Upcoming Technologies That May Change the World
Google Glass. Augmented Reality has already gotten into our lives in the form of simulated experiments and education apps, but Google is taking it several steps higher with Google Glass.
Form 1.
Oculus Rift.
Leap Motion.
Eye Tribe.
SmartThings.
Firefox OS.
Project Fiona.
What are the 7 types of technology?
Terms in this set (7):
Agriculture and Bio-Technology. Developing and using devices and systems to plant, grow, and harvest crops.
Energy and Power Technology.
Construction Technology.
Manufacturing Technology.
Transportation Technology.
Medical Technology.
Information and Communication Technology.
6 Types of Construction Technology You Will Use in the Future
Mobile Technology. Mobile technology isn't just for games anymore.
Drones. Drones are the most widely used emerging construction technology.
Building Information Modeling (BIM).
Virtual Reality and Wearables.
3D Printing.
Artificial Intelligence.
Digital technology is primarily used with new physical communications media, such as satellite and fiber-optic transmission. Digital technology may also refer to using new algorithms or applications to solve a problem, even if computers were used to develop solutions in the past. Digital technologies also include digital currencies.
10 technologies that could change virtually everything:
Space-Based Solar Power.
Mind Uploading.
Weather Control.
Molecular Assemblers.
Geoengineering.
Mind-to-Mind Communication.
Fusion Power.
Artificial Lifeforms.
Ten types of equipment that small businesses now need:
An Internet Modem.
A Router.
A Network Switch.
An Uninterruptable Power Supply (UPS).
VoIP Phones.
Desktop and Notebook Computers.
Headsets.
Servers.
The Types of Technology
Mechanical.
Electronic.
Industrial and manufacturing.
Medical.
Communications.
Every technological system makes use of seven types of resources: people, information, materials, tools and machines, energy, capital, and time. Technology comes from the needs of people and people's needs drive technology.
In the spirit of keeping technology relevant, here are 5 examples of technology you can use right now for aging in place:
Smart phones.
Automatic lights.
Activity and health monitoring.
Tablet computers.
Automated cabinets.
A quick look through history at vintage technologies that we no longer use:
"Super 8/8mm" Handheld Video Cameras. Kodak invented the Super 8/8mm film format in 1965.
Betamax.
VHS Format.
Laser Disc Players.
Phonograph.
Turntables.
Ham Radio.
Reel to Reel.
The four different types of innovation mentioned here – Incremental, Disruptive, Architectural and Radical – help illustrate the various ways that companies can innovate; there are more ways to innovate than these four.

Nine major trends that will define technological disruption:
5G Networks.
Artificial Intelligence (AI).
Autonomous Devices.
Blockchain.
Augmented Analytics.
Digital Twins.
Enhanced Edge Computing.
Digital technologies are electronic tools, systems, devices and resources that generate, store or process data. Well known examples include social media, online games, multimedia and mobile phones. Digital learning is any type of learning that uses technology.
Soon, these and the other exciting technologies described below will go mainstream, changing the world in the process.
Robot assistants.
Augmented and mixed reality.
Regenerative medicine.
Driverless vehicles.
Reusable rockets.
Cryptocurrency.
Quantum computing.
Artificial intelligence and automation.
What is the most common form of technology? Traditional laptops and desktops are both used in 82 percent of learning environments, making them the most common form of instructional tech in the classroom, according to Campus Technology's first-ever Teaching with Technology survey. ======================================================================
Telecommunications engineering
Telecommunications engineering is an engineering discipline centered on electrical and computer engineering which seeks to support and enhance telecommunication systems. The work ranges from basic circuit design
to strategic mass developments. A telecommunication engineer is
responsible for designing and overseeing the installation of
telecommunications equipment and facilities, such as complex electronic switching systems, and other plain old telephone service facilities, optical fiber cabling, IP networks, and microwave transmission systems. Telecommunication engineering also overlaps with broadcast engineering.
Telecommunication is a diverse field of engineering connected to electronic, civil and systems engineering.
Telecom engineers also help estimate the costs of different types of computing and telecommunications equipment. Ultimately, telecom engineers are responsible for providing high-speed data transmission services. They use a variety of equipment and transport media to design the telecom network infrastructure; the most common media used by wired telecommunications today are twisted pair, coaxial cables, and optical fibers. Telecommunications engineers also provide solutions revolving around wireless modes of communication and information transfer, such as wireless telephony services, radio and satellite communications, and internet and broadband technologies.
Telecommunication systems are generally designed by telecommunication
engineers, a profession that sprang from technological improvements in the
telegraph industry in the late 19th century and the radio and telephone
industries in the early 20th century. Today, telecommunication is
widespread and devices that assist the process, such as the television,
radio and telephone, are common in many parts of the world. There are
also many networks that connect these devices, including computer
networks, public switched telephone network (PSTN),
radio networks, and television networks. Computer communication across
the Internet is one of many examples of telecommunication.
Telecommunication plays a vital role in the world economy, and the
telecommunication industry's revenue has been placed at just under 3% of
the gross world product.
Telegraph and telephone
Alexander
Graham Bell's big box telephone, 1876, one of the first commercially
available telephones - National Museum of American History
Samuel Morse
independently developed a version of the electrical telegraph that he
unsuccessfully demonstrated on 2 September 1837. Soon after, he was
joined by Alfred Vail
who developed the register — a telegraph terminal that integrated a
logging device for recording messages to paper tape. This was
demonstrated successfully over three miles (five kilometres) on 6
January 1838 and eventually over forty miles (sixty-four kilometres)
between Washington, D.C. and Baltimore on 24 May 1844. The patented invention proved lucrative and by 1851 telegraph lines in the United States spanned over 20,000 miles (32,000 kilometres).
The first successful transatlantic telegraph cable
was completed on 27 July 1866, allowing transatlantic telecommunication
for the first time. Earlier transatlantic cables installed in 1857 and
1858 only operated for a few days or weeks before they failed. The international use of the telegraph has sometimes been dubbed the "Victorian Internet".
The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London. Alexander Graham Bell
held the master patent for the telephone that was needed for such
services in both countries. The technology grew quickly from this point,
with inter-city lines being built and telephone exchanges in every major city of the United States by the mid-1880s.
Despite this, transatlantic voice communication remained impossible for
customers until January 7, 1927 when a connection was established using
radio. However no cable connection existed until TAT-1 was inaugurated on September 25, 1956 providing 36 telephone circuits.
In 1880, Bell and co-inventor Charles Sumner Tainter conducted the world's first wireless telephone call via modulated lightbeams projected by photophones.
The scientific principles of their invention would not be utilized for
several decades, when they were first deployed in military and fiber-optic communications.
Over several years starting in 1894, the Italian inventor Guglielmo Marconi built the first complete, commercially successful wireless telegraphy system based on airborne electromagnetic waves (radio transmission). In December 1901, he went on to establish wireless communication between Britain and Newfoundland, earning him the Nobel Prize in Physics in 1909 (which he shared with Karl Braun). In 1900, Reginald Fessenden was able to wirelessly transmit a human voice.

On March 25, 1925, Scottish inventor John Logie Baird publicly demonstrated the transmission of moving silhouette pictures at the London department store Selfridges. In October 1925, Baird succeeded in obtaining moving pictures with halftone shades, which were by most accounts the first true television pictures. This led to a public demonstration of the improved device on 26 January 1926, again at Selfridges. Baird's first devices relied upon the Nipkow disk and thus became known as mechanical television. They formed the basis of the semi-experimental broadcasts made by the British Broadcasting Corporation beginning September 30, 1929.
Symbolic representation of the Arpanet as of September 1974
On 11 September 1940, George Stibitz was able to transmit problems using teleprinter to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer or mainframe computer
with remote "dumb terminals" remained popular throughout the 1950s and
into the 1960s. However, it was not until the 1960s that researchers
started to investigate packet switching
— a technology that allows chunks of data to be sent between different
computers without first passing through a centralized mainframe. A
four-node network emerged on 5 December 1969. This network soon became
the ARPANET, which by 1981 would consist of 213 nodes.
ARPANET's development centered around the Request for Comment process and on 7 April 1969, RFC 1
was published. This process is important because ARPANET would
eventually merge with other networks to form the Internet, and many of
the communication protocols that the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
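The TCP/IP protocols introduced by RFC 791 and RFC 793 are what Python's standard socket API exposes. The sketch below opens a TCP connection on localhost and echoes a message through it; binding to port 0 asks the OS for any free port, and the echo server is a deliberately minimal illustration.

```python
# Echo a message over a real TCP (RFC 793) connection on localhost.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))     # IPv4 loopback; port 0 = OS-chosen port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))  # echo the received bytes back
    conn.close()

threading.Thread(target=serve).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, internet")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello, internet'
```

IP carries the packets between the two endpoints; TCP provides the reliable, ordered byte stream that the application reads and writes.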
Optical fiber can be used as a medium for telecommunication and computer networking
because it is flexible and can be bundled into cables. It is especially
advantageous for long-distance communications, because light propagates
through the fiber with little attenuation compared to electrical
cables. This allows long distances to be spanned with few repeaters.
In 1966, Charles K. Kao and George Hockham proposed optical fibers at STC Laboratories (STL) at Harlow,
England, when they showed that the losses of 1000 dB/km in existing
glass (compared to 5-10 dB/km in coaxial cable) were due to contaminants,
which could potentially be removed.
Optical fiber was successfully developed in 1970 by Corning Glass Works, with attenuation low enough for communication purposes (about 20 dB/km), and at the same time GaAs (gallium arsenide) semiconductor lasers were developed that were compact and therefore suitable for transmitting light through fiber-optic cables over long distances.
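The dB/km figures above translate directly into surviving signal power: attenuation in decibels is logarithmic, so the remaining fraction after a run of fiber is 10^(-dB/10). The 30 dB loss budget in the example is a hypothetical value chosen for illustration.

```python
def surviving_fraction(db_per_km: float, km: float) -> float:
    """Fraction of optical power remaining after `km` of fiber."""
    return 10 ** (-(db_per_km * km) / 10)

def max_span_km(db_per_km: float, loss_budget_db: float) -> float:
    """Longest run before the link's loss budget is exhausted."""
    return loss_budget_db / db_per_km

# At 1966's ~1000 dB/km, almost no light survives even a short run of glass;
# at Corning's 20 dB/km, about half the power remains after 150 m.
print(surviving_fraction(20, 0.15))  # ≈ 0.5
print(max_span_km(20, 30))           # 1.5 km on a hypothetical 30 dB budget
```

Modern fiber is far better still (well under 1 dB/km), which is why repeater spacings grew from kilometres to hundreds of kilometres.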
After a period of research starting from 1975, the first
commercial fiber-optic communications system was developed, which
operated at a wavelength around 0.8 µm and used GaAs semiconductor
lasers. This first-generation system operated at a bit rate of 45 Mbps
with repeater spacing of up to 10 km. Soon after, on 22 April 1977, General
Telephone and Electronics sent the first live telephone traffic through
fiber optics at a 6 Mbit/s throughput in Long Beach, California.
The first wide area network fibre-optic cable system in the world
seems to have been installed by Rediffusion in Hastings, East Sussex,
UK in 1978. The cables were placed in ducting throughout the town and
had over 1000 subscribers. They were used at that time for the
transmission of television channels that were not otherwise available
because of local reception problems.
The first transatlantic telephone cable to use optical fiber was TAT-8, based on Desurvire optimized laser amplification technology. It went into operation in 1988.
In the late 1990s through 2000, industry promoters, and research
companies such as KMI, and RHK predicted massive increases in demand for
communications bandwidth due to increased use of the Internet, and commercialization of various bandwidth-intensive consumer services, such as video on demand. Internet protocol data traffic was increasing exponentially, at a faster rate than integrated circuit complexity had increased under Moore's Law.
Transmitter (information source): takes information and converts it to a signal for transmission. In electronics and telecommunications, a transmitter or radio transmitter is an electronic device which, with the aid of an antenna, produces radio waves. In addition to their use in broadcasting, transmitters are necessary components of many electronic devices that communicate by radio, such as cell phones.
Transmission medium: the medium over which the signal is transmitted. For example, the transmission medium for sound is usually air, but solids and liquids may also act as transmission media. Many transmission media are used as communications channels. One of the most common physical media used in networking is copper wire.
Copper wire is used to carry signals to long distances using relatively
low amounts of power. Another example of a physical medium is optical fiber,
which has emerged as the most commonly used transmission medium for
long-distance communications. Optical fiber is a thin strand of glass
that guides light along its length.
The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves.
Receiver
Receiver (information sink): receives and converts the signal back into the required information. In radio communications, a radio receiver is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna. The information produced by the receiver may be in the form of sound (an audio signal), images (a video signal) or digital data.
Wired communications make use of underground communications cables
(less often, overhead lines), electronic signal amplifiers (repeaters)
inserted into connecting cables at specified points, and terminal
apparatus of various types, depending on the type of wired
communications used.
Wireless communication
Wireless communication involves the transmission of information over a
distance without help of wires, cables or any other forms of electrical
conductors.
Wireless operations permit services, such as long-range communications,
that are impossible or impractical to implement with the use of wires.
The term is commonly used in the telecommunications
industry to refer to telecommunications systems (e.g. radio
transmitters and receivers, remote controls etc.) which use some form of
energy (e.g. radio waves, acoustic energy, etc.) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances.
Roles
Telecom equipment engineer
A
telecom equipment engineer is an electronics engineer that designs
equipment such as routers, switches, multiplexers, and other specialized
computer/electronics equipment designed to be used in the
telecommunication network infrastructure.
Network engineer
A network engineer is a computer engineer in charge of designing, deploying and maintaining computer networks. In addition, network engineers oversee network operations from a network operations center, design backbone infrastructure, and supervise interconnections in data centers.
Central-office engineer
Typical Northern Telecom DMS100 Telephone Central Office Installation
A central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange.
A CO engineer is responsible for integrating new technology into the
existing network, assigning the equipment's location in the wire center,
and providing power, clocking (for digital equipment), and alarm
monitoring facilities for the new equipment. The CO engineer is also
responsible for providing more power, clocking, and alarm monitoring
facilities if there are currently not enough available to support the
new equipment being installed. Finally, the CO engineer is responsible
for designing how the massive amounts of cable will be distributed to
various equipment and wiring frames throughout the wire center and
overseeing the installation and turn up of all new equipment.
Sub-roles
As structural engineers,
CO engineers are responsible for the structural design and placement of
the racking and bays in which equipment is installed, as well as for the
plant placed on them.
As electrical engineers, CO engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure that telephone service is clear and crisp and that data service is clean and reliable. Attenuation, or gradual loss in intensity, and loop-loss calculations are required to determine the cable length and size required to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the wire center.
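The loop-loss calculation mentioned above can be sketched in a few lines: given a cable's loss per kilometre and a maximum acceptable loop loss, the engineer can check a proposed loop or find the longest loop the service tolerates. The 8 dB limit and 1.5 dB/km figure below are illustrative values, not engineering standards.

```python
def loop_loss_db(length_km: float, db_per_km: float) -> float:
    """Total attenuation of a subscriber loop of the given length."""
    return length_km * db_per_km

def max_loop_km(db_per_km: float, limit_db: float = 8.0) -> float:
    """Longest loop that stays within the assumed loss limit."""
    return limit_db / db_per_km

print(loop_loss_db(3.0, 1.5))  # 4.5 dB over a 3 km loop at 1.5 dB/km
print(max_loop_km(1.5))        # ≈ 5.33 km before the 8 dB budget is spent
```

In practice the budget also has to cover splices, connectors, and aging margins, so real plant design works with per-gauge loss tables rather than a single constant.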
Overall, CO engineers have seen new challenges emerging in the CO environment. With the advent of Data Centers, Internet Protocol
(IP) facilities, cellular radio sites, and other emerging-technology
equipment environments within telecommunication networks, it is
important that a consistent set of established practices or requirements
be implemented.
Installation suppliers or their sub-contractors are expected to
provide requirements with their products, features, or services. These
services might be associated with the installation of new or expanded
equipment, as well as the removal of existing equipment.
Several other factors must be considered such as:
Regulations and safety in installation
Removal of hazardous material
Commonly used tools to perform installation and removal of equipment
Outside plant
(OSP) engineers are also often called field engineers because they
frequently spend much time in the field taking notes about the civil
environment, aerial, above ground, and below ground.
OSP engineers are responsible for taking plant (copper, fiber, etc.)
from a wire center to a distribution point or destination point
directly. If a distribution point design is used, then a cross-connect box is placed in a strategic location to feed a determined distribution area.
The cross-connect box, also known as a serving area interface,
is then installed to allow connections to be made more easily from the
wire center to the destination point and ties up fewer facilities by not
having dedicated facilities from the wire center to every destination
point. The plant is then taken directly to its destination point or to
another small closure called a terminal, where access can also be gained
to the plant if necessary. These access points are preferred as they
allow faster repair times for customers and save telephone operating
companies large amounts of money.
The plant facilities can be delivered via underground facilities,
either direct buried or through conduit or in some cases laid under
water, via aerial facilities such as telephone or power poles, or via
microwave radio signals for long distances where either of the other two
methods is too costly.
Sub-roles
Engineer (OSP) climbing a telephone pole
As structural engineers,
OSP engineers are responsible for the structural design and placement
of cellular towers and telephone poles as well as calculating pole
capabilities of existing telephone or power poles onto which new plant
is being added. Structural calculations are required when boring under
heavy traffic areas such as highways or when attaching to other
structures such as bridges. Shoring also has to be taken into
consideration for larger trenches or pits. Conduit structures often
include slurry encasements that need to be designed to support the
structure and withstand the surrounding environment (soil type,
high-traffic areas, etc.).
As electrical engineers,
OSP engineers are responsible for the resistance, capacitance, and
inductance (RCL) design of all new plant to ensure that telephone service is
clear and crisp and that data service is clean and reliable. Attenuation (the gradual loss of signal intensity)
and loop loss calculations are required to determine the cable length and
size required to provide the service called for. In addition, power
requirements have to be calculated and provided to power any electronic
equipment being placed in the field. Ground potential has to be taken
into consideration when placing equipment, facilities, and plant in the
field to account for lightning strikes, high voltage intercept from
improperly grounded or broken power company facilities, and from various
sources of electromagnetic interference.
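On the powering side, a first check is often the DC loop resistance. The sketch below uses assumed round numbers throughout (battery voltage, minimum off-hook current, per-kilofoot resistances, equipment resistance); real values depend on temperature and the governing specification:

```python
# Sketch of a DC loop-resistance check for line powering. All constants
# are assumed round numbers; actual ohms-per-kft, battery voltage, and
# current requirements depend on temperature and the governing spec.
OHMS_PER_KFT_LOOP = {26: 83.3, 24: 51.9, 22: 32.4, 19: 16.1}  # both conductors

BATTERY_V = 48.0             # nominal central-office battery
MIN_LOOP_CURRENT_A = 0.020   # assumed 20 mA minimum off-hook current
EQUIPMENT_R = 430.0          # assumed phone set + CO termination resistance

def loop_current(gauge: int, length_kft: float) -> float:
    """Off-hook DC current for a uniform loop of the given gauge."""
    r_total = OHMS_PER_KFT_LOOP[gauge] * length_kft + EQUIPMENT_R
    return BATTERY_V / r_total

# 15 kft of 26-gauge still delivers ~28.6 mA; 24 kft drops below 20 mA.
print(loop_current(26, 15.0) >= MIN_LOOP_CURRENT_A)  # True
print(loop_current(26, 24.0) >= MIN_LOOP_CURRENT_A)  # False
```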
As civil engineers, OSP engineers are responsible for drafting plans, either by hand or using Computer-aided design
(CAD) software, for how telecom plant facilities will be placed. Often
when working with municipalities trenching or boring permits are
required and drawings must be made for these. Often these drawings
include about 70% of the detailed information required to pave a
road or add a turn lane to an existing street. Structural calculations
are required when boring under heavy traffic areas such as highways or
when attaching to other structures such as bridges. As civil engineers,
telecom engineers provide the modern communications backbone for all
technological communications distributed throughout civilizations today.
Unique to telecom engineering is the use of air-core cable which
requires an extensive network of air handling equipment such as
compressors, manifolds, regulators and hundreds of miles of air pipe per
system that connects to pressurized splice cases all designed to
pressurize this special form of copper cable to keep moisture out and
provide a clean signal to the customer.
As a political and social ambassador,
the OSP engineer is a telephone operating company's face and voice to
the local authorities and other utilities. OSP engineers often meet with
municipalities, construction companies and other utility companies to
address their concerns and educate them about how the telephone utility
works and operates.
Additionally, the OSP engineer has to secure real estate in which to
place outside facilities, such as an easement to place a cross-connect
box.
========================================================================
The Importance of Telecommunications and
Telecommunication Research
How important is telecommunications? How important is telecommunications research?
TELECOMMUNICATIONS—AN EVOLVING DEFINITION
Before the emergence of the Internet and other
data networks, telecommunications had a clear meaning: the telephone
(and earlier the telegraph) was an application of technology that
allowed people to communicate at a distance by voice (and earlier by
encoded electronic signals), and telephone service was provided by the
public switched telephone network (PSTN). Much of the U.S. network was
owned and operated by American Telephone & Telegraph (AT&T); the
rest consisted of smaller independent companies, including some served
by GTE.
Then in the 1960s, facsimile and data services
were overlaid on the PSTN, adding the ability to communicate documents
and data at a distance—applications still considered telecommunications
because they enabled new kinds of communication at a distance that were
also carried over the PSTN.
More recently, telecommunications has expanded to include data transport, video
conferencing, e-mail, instant messaging, Web browsing, and various forms
of distributed collaboration, enabled by transmission media that have
also expanded (from traditional copper wires) to include microwave,
terrestrial wireless, satellite, hybrid fiber/coaxial cable, and
broadband fiber transport.
Today consumers think of telecommunications in
terms of both products and services. Starting with the Carterfone
decision by the Federal Communications Commission in 1968,
it has become permissible and increasingly common for consumers to buy
telecommunications applications or equipment as products as well as
services. For example, a customer-owned and customer-installed WiFi
local area network may be the first access link supporting a voice over
Internet Protocol (VoIP) service, and a consumer may purchase a VoIP
software package and install it on his or her personally owned and
operated personal computer that connects to the Internet via an Internet
service provider.
The technologies used for telecommunications have
changed greatly over the last 50 years. Empowered by research into
semiconductors and digital electronics in the telecommunications
industry, analog representations of voice, images, and video have been
supplanted by digital representations. The biggest consequence has been
that all types of media can be represented in the same basic form (i.e.,
as a stream of bits) and therefore handled uniformly within a common
infrastructure (most commonly as Internet Protocol, or IP, data
streams). Subsequently, circuit switching was supplemented by, and will
likely ultimately be supplanted by, packet switching. For example,
telephony is now routinely carried at various places in the network by
the Internet (using VoIP) and cable networks. Just as the PSTN is within
the scope of telecommunications, so also is an Internet or cable TV
network carrying a direct substitute telephony application.
Perhaps the most fundamental change, both in terms
of technology and its implications for industry structure, has occurred
in the architecture of telecommunications networks. Architecture in
this context refers to the functional description of the general
structure of the system as a whole and how the different parts of the
system relate to each other. Previously the PSTN, cable, and data
networks coexisted as separately owned and operated networks carrying
different types of communications, although they often shared a common
technology base (such as point-to-point digital communications) and some
facilities (e.g., high-speed digital pipes shared by different
networks).
How are the new networks different? First, they
are integrated, meaning that all media— be they voice, audio, video, or
data—are increasingly communicated over a single common network. This
integration offers economies of scope and scale in both capital
expenditures and operational costs, and also allows different media to
be mixed within common applications. As a result, both technology
suppliers and service providers are increasingly in the business of
providing telecommunications in all media simultaneously rather than
specializing in a particular type such as voice, video, or data.
Second, the networks are built in layers, from the
physical layer, which is concerned with the mechanical, electrical and
optical, and functional and procedural means for managing network
connections to the data, network, and transport layers, which are
concerned with transferring data, routing data across networks between
addresses, and ensuring end-to-end
connections and reliability of data transfer to
the application layer, which is concerned with providing a particular
functionality using the network and with the interface to the user.
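The layering idea can be illustrated with a toy encapsulation sketch. The layer names and header tags here are generic placeholders, not any real protocol's field layout:

```python
# Toy illustration of layering: each layer wraps the data handed down
# from the layer above with its own header. The tags are generic
# placeholders, not any real protocol's format.
def encapsulate(app_payload: bytes) -> bytes:
    transport = b"TPT|" + app_payload   # end-to-end reliability information
    network = b"NET|" + transport       # addressing and routing information
    physical = b"PHY|" + network        # framing for the physical medium
    return physical

def decapsulate(frame: bytes) -> bytes:
    # Each layer strips only its own header, ignoring the layers above it.
    for tag in (b"PHY|", b"NET|", b"TPT|"):
        assert frame.startswith(tag), "malformed frame"
        frame = frame[len(tag):]
    return frame

print(encapsulate(b"hello"))               # b'PHY|NET|TPT|hello'
print(decapsulate(encapsulate(b"hello")))  # b'hello'
```

Because each layer touches only its own header, a supplier can specialize in one layer and interoperate with the others, which is precisely what enables the horizontal industry structure described next.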
Both technology (equipment and software) suppliers
and service providers tend to specialize in one or two of these layers,
each of which seeks to serve all applications and all media. As a
consequence, creating a new application may require the participation
and cooperation of a set of complementary layered capabilities. This
structure results in a horizontal industry structure, quite distinct
from the vertically integrated industry structure of the Bell System
era.
All these changes suggest a new definition of telecommunications: Telecommunications is the suite of technologies, devices, equipment, facilities, networks, and applications that support communication at a distance.
The range of telecommunications applications is
broad and includes telephony and video conferencing, facsimile,
broadcast and interactive television, instant messaging, e-mail,
distributed collaboration, a host of Web- and Internet-based
communication, and data transmission.
Of course many if not most software applications communicate across the
network in some fashion, even if it is for almost incidental purposes
such as connecting to a license server or downloading updates. Deciding
what is and is not telecommunications is always a judgment call.
Applications of information technology range from those involving almost
no communication at all (word processing) to simple voice
communications (telephony in its purest and simplest form), with many
gradations in between.
As supported by the horizontally homogeneous
layered infrastructure, applications of various sorts increasingly
incorporate telecommunications as only one capability among many. For
example telephony, as it evolves into the Internet world, is beginning
to offer a host of new data-based features and integrates other elements
of collaboration (e.g., visual material or tools for collaborative
authoring). Another important trend is machine-to-machine communication
at a distance, and so it cannot be assumed that telecommunications
applications exclusively involve people.
THE TELECOMMUNICATIONS INDUSTRY
Like telecommunications itself, the
telecommunications industry is broader than it was in the past. It
encompasses multiple service providers, including telephone companies,
cable system operators, Internet service providers, wireless carriers,
and satellite operators. The industry today includes software-based
applications with a communications emphasis and intermediate layers of
software incorporated into end-to-end communication services. It also
includes suppliers of telecommunications equipment and software products
sold directly to consumers and also to service providers, as well as
the telecommunications service providers themselves.
It includes companies selling
components or intellectual property predominately of a communication
flavor, including integrated circuit chip sets for cell phones and cable
and digital subscriber line (DSL) modems.
No longer a vertically integrated business, the
telecommunications industry is enabled by a complex value chain that
includes vendors, service providers, and users. The telecommunications
value chain begins with building blocks such as semiconductor chips and
software. These components are, in turn, incorporated into equipment and
facilities that are purchased by service providers and users. The
service providers then, in turn, build networks in order to sell
telecommunications services to end users. The end users include
individuals subscribing to services like telephony (landline and
cellular) and broadband Internet access, companies and organizations
that contract for internal communications networks, and companies and
organizations that operate their own networks. Some major end-user
organizations also bypass service providers and buy, provision, and
operate their own equipment and software, like a corporate local area
network (LAN) or a U.S. military battlefield information system.
Software suppliers participate at multiple points in the value chain,
selling directly not only to equipment vendors but also to service
providers (e.g., operational support systems) and to end users (e.g.,
various PC-based applications for communications using the Internet).
An implication of defining telecommunications
broadly is that every layer involved in communication at a distance
becomes, at least partially, part of the telecommunications industry.
The broad range and large number of companies that contribute to the
telecommunications industry are evident in the following list of
examples:
Networking service providers across
the Internet and the PSTN, wireless carriers, and cable operators.
Examples include AT&T, Comcast, Verizon, and DirecTV.
Communications equipment suppliers that are the primary suppliers to service providers. Examples include Cisco, Lucent, and Motorola.
Networking equipment suppliers
selling products to end-user organizations and individuals. Examples
include Cisco’s Linksys division and Hewlett-Packard (local area
networking products).
Semiconductor manufacturers,
especially those supplying system-on-a-chip solutions for the
telecommunications industry. Examples include Texas Instruments,
Qualcomm, Broadcom, and STMicroelectronics.
Suppliers of operating systems that include a networking stack. Microsoft is an example.
Software suppliers, especially those
selling infrastructure and applications incorporating or based on
real-time media. Examples include IBM, RealNetworks (streaming media),
and BEA (application servers).
Utility or on-demand service providers
selling real-time communications-oriented applications. Examples
include AOL and Microsoft (instant messaging) and WebEx (online
meetings).
Consumer electronics suppliers with
communications-oriented customer-premises equipment and handheld
appliances. Examples include Motorola and Nokia (cell phones), Research
in Motion (handheld e-mail appliances), Polycom (videoconferencing
terminals), Microsoft and Sony (networked video games), and Panasonic
(televisions).
What is striking about this list is how
broad and inclusive it is. Even though many of these firms do not
specialize solely in telecommunications, it is now quite common for
firms in the
larger domain of information technology to offer
telecommunications products or to incorporate telecommunications
capability into an increasing share of their products.
THE IMPORTANCE OF TELECOMMUNICATIONS
Telecommunications and Society
The societal importance of telecommunications is
well accepted and broadly understood, reflected in its near-ubiquitous
penetration and use. Noted below are some of the key areas of impact:
Telecommunications provides a technological foundation for societal communications.
Communication plays a central role in the fundamental operations of a
society—from business to government to families. In fact, communication
among people is the essence of what distinguishes an organization,
community, or society from a collection of individuals.
Communication—from Web browsing to cell phone calling to instant
messaging—has become increasingly integrated into how we work, play, and
live.
Telecommunications enables participation and development.
Telecommunications plays an increasingly vital role in enabling the
participation and development of people in communities and nations
disadvantaged by geography, whether in rural areas in the United States
or in developing nations in the global society and economy.
Telecommunications provides vital infrastructure for national security.
From natural disaster recovery, to homeland security, to communication
of vital intelligence, to continued military superiority,
telecommunications plays a pivotal role. When the issue is countering an
adversary, it is essential not only to preserve telecommunications
capability, but also to have a superior capability. There are potential
risks associated with a reliance on overseas sources for innovation,
technologies, applications, and services.
It is difficult to predict the future
impact of telecommunications technologies, services, and applications
that have not yet been invented. For example, in the early days of
research and development into the Internet in the late 1960s, who could
have foreseen the full impact of the Internet’s widespread use today?
Telecommunications and the U.S. Economy
The telecommunications industry is a major
direct contributor to U.S. economic activity. The U.S. Census Bureau
estimates that just over 3 percent of the U.S. gross domestic income
(GDI) in 2003 was from communications services (2.6 percent) and
communications hardware (0.4 percent)—categories that are narrower than
the broad definition of telecommunications offered above. At 3 percent,
telecommunications thus represented more than a third of the total
fraction of GDI spent on information technology (IT; 7.9 percent of GDI)
in 2003. In fact, the fraction attributable to telecommunications is
probably larger relative to that of IT than these figures suggest, given
that much of the GDI from IT hardware (particularly semiconductors)
could apply to any of several industries (computing, telecommunications,
media, and electronics, for example). If one assumes IT to be the sum
of computers (calculating), computers (wholesale), computers (retail),
and software and services, the total GDI for IT.
Today, however, new wireless applications, low-cost
manufacturing innovations, and handset design are some of the areas in
which the Asian countries are outinvesting the United States in R&D
and are seeing resulting bottom-line impacts to their economies. For the
United States to compete in the global marketplace—across industries—it
needs the productivity that comes from enhancements in
telecommunications. If the telecommunications infrastructure in the
United States were to fall significantly behind that of the rest of the
world, the global competitiveness of all other U.S. industries would be
affected. Conversely, the growth in U.S. productivity has been based in
part on a telecommunications infrastructure that is the most advanced in
the world.
U.S. leadership in telecommunications did not come
by accident—success at the physical, network, and applications levels
was made possible by the U.S. investment in decades of research and the
concomitant development of U.S. research leadership in
communications-related areas. Telecommunications has been and likely
will continue to be an important foundation for innovative new
industries arising in the United States that use telecommunications as a
primary technological enabler and foundation. Recent examples of
innovative new businesses leveraging telecommunications include Yahoo!,
Amazon, eBay, and Google.
Finally, telecommunications is an important
component of the broader IT industry, which is sometimes viewed as
having three technology legs:
processing (to transform or change information), storage (to allow
communication of information from one time to another), and
communications (to transmit information from one place to another). The
boundaries between these areas are not very distinct, but this
decomposition helps illustrate the breadth of IT and the role that
telecommunications plays. Increasingly IT systems must incorporate all
three elements to different degrees,
and it is increasingly common for companies in any sector of IT to
offer products with a communications component, and often with a
communications emphasis. The IT industry’s overall strength depends on
strength across communications, processing, and storage as well as
strength in all layers of technology—from the physical layer (including
communications hardware, microprocessors, and magnetic and optical
storage), to the software infrastructure layers (operating systems and
Web services), to software applications.
Telecommunications and Global Competitiveness
In this era of globalization, many companies are
multinational, with operations—including R&D—conducted across the
globe. For example, IBM, HP, Qualcomm, and Microsoft all have research
facilities in other countries, and many European and Asian companies
have research laboratories in the United States. Increasing numbers of
businesses compete globally. Every company and every industry must
assess the segments and niches in which it operates to remain globally
competitive.
Telecommunication is the transmission of signs, signals, messages, words, writings, images and sounds or information of any nature by wire, radio, optical or other electromagnetic systems.
Telecommunication occurs when the exchange of information between communication participants includes the use of technology. Information is transmitted through a transmission medium, such as a physical medium (for example, an electrical cable) or electromagnetic radiation through space (such as radio or light). Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing. Since the Latin term communicatio is considered the social process of information exchange, the term telecommunications is often used in its plural form because it involves many different technologies.
Homing pigeons have occasionally been used throughout history by different cultures. Pigeon post had Persian roots, and was later used by the Romans to aid their military. Frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul.
The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Java and Sumatra. And in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed.
In the Middle Ages, chains of beacons
were commonly used on hilltops as a means of relaying a signal. Beacon
chains suffered the drawback that they could only pass a single bit of
information, so the meaning of the message such as "the enemy has been
sighted" had to be agreed upon in advance. One notable instance of their
use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London.
In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris.
However semaphore suffered from the need for skilled operators and
expensive towers at intervals of ten to thirty kilometres (six to
nineteen miles). As a result of competition from the electrical
telegraph, the last commercial line was abandoned in 1880.
Telegraph and telephone
On 25 July 1837 the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the [existing] electromagnetic telegraph", not as a new device.
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on 27 July 1866, allowing transatlantic telecommunication for the first time.
The conventional telephone was patented by Alexander Graham Bell in 1876. Elisha Gray
also filed a caveat for it in 1876. Gray abandoned his caveat and
because he did not contest Bell's priority, the examiner approved Bell's
patent on March 3, 1876. Gray had filed his caveat for the variable
resistance telephone, but Bell was the first to write down the idea and
the first to test it in a telephone.[88] Antonio Meucci
invented a device that allowed the electrical transmission of voice
over a line nearly thirty years before in 1849, but his device was of
little practical value because it relied on the electrophonic effect requiring users to place the receiver in their mouths to "hear".
The first commercial telephone services were set-up by the Bell
Telephone Company in 1878 and 1879 on both sides of the Atlantic in the
cities of New Haven and London.
Radio and television
Starting in 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the then newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean. This was the start of wireless telegraphy by radio. Voice and music were demonstrated in 1900 and 1906, but had little early success.
Millimetre wave communication was first investigated by Bengali physicist Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.
World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated development of radio for the wartime purposes of aircraft and land communication, radio navigation and radar. Development of stereo FM broadcasting
of radio took place from the 1930s on-wards in the United States and
displaced AM as the dominant commercial standard by the 1960s, and by
the 1970s in the United Kingdom.
On 25 March 1925, John Logie Baird was able to demonstrate the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning 30 September 1929. However, for most of the twentieth century televisions depended upon the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927. After World War II,
the experiments in television that had been interrupted were resumed,
and it also became an important home entertainment broadcast medium.
On 11 September 1940, George Stibitz transmitted problems for his Complex Number Calculator in New York using a teletype, and received the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer (mainframe) with remote dumb terminals remained popular well into the 1970s. However, already in the 1960s, researchers started to investigate packet switching, a technology that sends a message in portions to its destination asynchronously without passing it through a centralized mainframe. A four-node network emerged on 5 December 1969, constituting the beginnings of the ARPANET, which by 1981 had grown to 213 nodes. ARPANET eventually merged with other networks to form the Internet. While Internet development was a focus of the Internet Engineering Task Force (IETF), which published a series of Request for Comments documents, other networking advancements occurred in industrial laboratories, such as the local area network (LAN) developments of Ethernet (1983) and the token ring protocol (1984).
Realization and demonstration, on 29 October 2001, of the first digital cinema transmission by satellite in Europe of a feature film by Bernard Pauchon, Alain Lorentz, Raymond Melwig and Philippe Binant.
Key concepts
Modern
telecommunication is founded on a series of key concepts that
experienced progressive development and refinement in a period of well
over a century.
Basic elements
Telecommunication technologies may primarily be divided into wired and wireless methods. Overall though, a basic telecommunication system consists of three main parts that are always present in some form or another:
A transmitter that takes information and converts it to a signal.
A transmission medium, also called the physical channel, that carries the signal.
A receiver that takes the signal from the channel and converts it back into usable information for the recipient.
For example, in a radio broadcasting station the station's large power amplifier is the transmitter; and the broadcasting antenna
is the interface between the power amplifier and the "free space
channel". The free space channel is the transmission medium; and the
receiver's antenna is the interface between the free space channel and
the receiver. Next, the radio receiver is the destination of the radio signal, and this is where it is converted from electricity to sound for people to listen to.
Sometimes, telecommunication systems are "duplex" (two-way systems) with a single box of electronics working as both the transmitter and a receiver, or a transceiver. For example, a cellular telephone is a transceiver.
The transmission electronics and the receiver electronics within a
transceiver are actually quite independent of each other. This can be
readily explained by the fact that radio transmitters contain power
amplifiers that operate with electrical powers measured in watts or kilowatts, but radio receivers deal with radio powers that are measured in the microwatts or nanowatts.
Hence, transceivers have to be carefully designed and built to isolate
their high-power circuitry and their low-power circuitry from each
other, so as not to cause interference.
Telecommunication over fixed lines is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous low-power but sensitive radio receivers.
Telecommunications in which multiple transmitters and multiple
receivers have been designed to cooperate and to share the same physical
channel are called multiplex systems.
The sharing of physical channels using multiplexing often gives very
large reductions in costs. Multiplexed systems are laid out in
telecommunication networks, and the multiplexed signals are switched at
nodes through to the correct destination terminal receiver.
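Time-division multiplexing, one common way of sharing a physical channel, can be sketched as simple interleaving of samples from equal-rate sources:

```python
# Minimal sketch of time-division multiplexing: samples from several
# equal-rate sources are interleaved onto one shared channel, then
# separated again at the far end.
def tdm_mux(streams):
    """Interleave equal-length streams into one channel sequence."""
    return [s[i] for i in range(len(streams[0])) for s in streams]

def tdm_demux(channel, n_streams):
    """Recover the original streams by taking every n-th sample."""
    return [channel[k::n_streams] for k in range(n_streams)]

voice = ["v0", "v1", "v2"]
data = ["d0", "d1", "d2"]
line = tdm_mux([voice, data])
print(line)                                 # ['v0', 'd0', 'v1', 'd1', 'v2', 'd2']
print(tdm_demux(line, 2) == [voice, data])  # True
```

Real systems add framing so the receiver knows where each round of samples begins, but the cost saving is already visible: two sources, one physical channel.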
Analog versus digital communications
Communications signals can be sent either by analog signals or digital signals. There are analog communication systems and digital communication
systems. For an analog signal, the signal is varied continuously with
respect to the information. In a digital signal, the information is
encoded as a set of discrete values (for example, a set of ones and
zeros). During the propagation and reception, the information contained
in analog signals will inevitably be degraded by undesirable physical noise.
(The output of a transmitter is noise-free for all practical purposes.)
Commonly, the noise in a communication system can be expressed as
adding or subtracting from the desirable signal in a completely random way. This form of noise is called additive noise,
with the understanding that the noise can be negative or positive at
different instants of time. Noise that is not additive noise is a much
more difficult situation to describe or analyze, and these other kinds
of noise will be omitted here.
On the other hand, unless the additive noise disturbance exceeds a
certain threshold, the information contained in digital signals will
remain intact. Their resistance to noise represents a key advantage of
digital signals over analog signals.
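The additive-noise model and the digital advantage described above can be sketched in a few lines. The sample values, disturbance values, and 0.5 decision threshold below are illustrative assumptions, not part of any standard:

```python
def add_noise(signal, noise):
    """Additive-noise channel: each received sample is the transmitted
    sample plus a disturbance that may be positive or negative."""
    return [s + n for s, n in zip(signal, noise)]

# Hypothetical disturbance values, both positive and negative.
noise = [0.1, -0.2, 0.15, -0.05]

# Analog: the disturbance permanently corrupts the sample values,
# and the receiver cannot tell signal from noise.
analog_sent = [0.30, 0.70, 0.50, 0.90]
analog_received = add_noise(analog_sent, noise)   # e.g. 0.30 becomes 0.40

# Digital: thresholding restores the bits exactly, as long as each
# disturbance stays below the 0.5 decision threshold.
digital_sent = [0.0, 1.0, 1.0, 0.0]
digital_received = add_noise(digital_sent, noise)
decoded = [1 if v >= 0.5 else 0 for v in digital_received]  # [0, 1, 1, 0]
```

The analog samples remain corrupted forever, while the digital message is recovered bit-for-bit.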
Telecommunication networks
A telecommunications network is a collection of transmitters, receivers, and communications channels that send messages to one another. Some digital communications networks contain one or more routers that work together to transmit information to the correct user. An analog communications network consists of one or more switches that establish a connection between two or more users. For both types of network, repeaters may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from the noise.
Another advantage of digital systems over analog is that their output is
easier to store in memory, i.e. two voltage states (high and low) are
easier to store than a continuous range of states.
Communication channels
The
term "channel" has two different meanings. In one meaning, a channel is
the physical medium that carries a signal between the transmitter and
the receiver. Examples of this include the atmosphere for sound communications, glass optical fibers for some kinds of optical communications, coaxial cables for communications by way of the voltages and electric currents in them, and free space for communications using visible light, infrared waves, ultraviolet light, and radio waves.
Coaxial cable types are classified by RG type or "radio guide",
terminology derived from World War II. The various RG designations are
used to classify the specific signal transmission applications.
This last channel is called the "free space channel". The sending of
radio waves from one place to another has nothing to do with the
presence or absence of an atmosphere between the two. Radio waves travel
through a perfect vacuum just as easily as they travel through air, fog, clouds, or any other kind of gas.
The other meaning of the term "channel" in telecommunications is seen in the phrase communications channel,
which is a subdivision of a transmission medium so that it can be used
to send multiple streams of information simultaneously. For example, one
radio station can broadcast radio waves into free space at frequencies
in the neighborhood of 94.5 MHz
(megahertz) while another radio station can simultaneously broadcast
radio waves at frequencies in the neighborhood of 96.1 MHz. Each radio
station would transmit radio waves over a frequency bandwidth of about 180 kHz (kilohertz), centered at frequencies such as the above, which are called the "carrier frequencies".
Each station in this example is separated from its adjacent stations by
200 kHz, and the difference between 200 kHz and 180 kHz (20 kHz) is an
engineering allowance for the imperfections in the communication system.
In the example above, the "free space channel" has been divided into communications channels according to frequencies,
and each channel is assigned a separate frequency bandwidth in which to
broadcast radio waves. This system of dividing the medium into channels
according to frequency is called "frequency-division multiplexing". Another term for the same concept is "wavelength-division multiplexing", which is more commonly used in optical communications when multiple transmitters share the same physical medium.
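The FM-band arithmetic above (180 kHz occupied bandwidth, 200 kHz carrier spacing, 20 kHz engineering allowance) can be checked directly. The helper names and the nine-channel count are illustrative assumptions:

```python
def fdm_channels(first_carrier_hz, spacing_hz, count):
    """Carrier frequencies for equally spaced FDM channels."""
    return [first_carrier_hz + i * spacing_hz for i in range(count)]

CHANNEL_BW = 180_000              # usable bandwidth per station (Hz)
SPACING    = 200_000              # carrier-to-carrier spacing (Hz)
GUARD      = SPACING - CHANNEL_BW # 20 kHz engineering allowance

carriers = fdm_channels(94_500_000, SPACING, 9)
# 96.1 MHz sits 8 spacings above 94.5 MHz: 1.6 MHz / 200 kHz = 8.
assert carriers[8] == 96_100_000

def overlaps(f1, f2, bw=CHANNEL_BW):
    """True if the two stations' occupied bands would overlap."""
    return abs(f1 - f2) < bw

# Adjacent carriers are 200 kHz apart, more than the 180 kHz band,
# so neighbouring stations never interfere in this scheme.
assert not overlaps(carriers[0], carriers[1])
```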
Another way of dividing a communications medium into channels is
to allocate each sender a recurring segment of time (a "time slot", for
example, 20 milliseconds
out of each second), and to allow each sender to send messages only
within its own time slot. This method of dividing the medium into
communication channels is called "time-division multiplexing" (TDM),
and is used in optical fiber communication. Some radio communication
systems use TDM within an allocated FDM channel. Hence, these systems
use a hybrid of TDM and FDM.
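A round-robin time-slot assignment of the kind just described (20 ms slots per sender) can be sketched as follows; the sender names and the `slot_owner` helper are hypothetical:

```python
def slot_owner(t_ms, senders, slot_ms=20):
    """In round-robin TDM, return which sender owns the channel at
    time t_ms, given a slot of slot_ms milliseconds per sender."""
    slot_index = int(t_ms // slot_ms)
    return senders[slot_index % len(senders)]

senders = ["A", "B", "C"]
# With 20 ms slots, the 60 ms frame repeats: A owns 0-19 ms,
# B owns 20-39 ms, C owns 40-59 ms, then A again from 60 ms.
assert slot_owner(5, senders) == "A"
assert slot_owner(25, senders) == "B"
assert slot_owner(45, senders) == "C"
assert slot_owner(65, senders) == "A"
```

Each sender transmits only inside its own recurring slot, so all three share one physical channel without colliding.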
Modulation
The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analog waveform. This is commonly called "keying"—a term derived from the older use of Morse Code in telecommunications—and several keying techniques exist (these include phase-shift keying, frequency-shift keying, and amplitude-shift keying). The "Bluetooth" system, for example, uses phase-shift keying to exchange information between various devices.
In addition, there are combinations of phase-shift keying and
amplitude-shift keying, called (in the jargon of the field) "quadrature amplitude modulation" (QAM), that are used in high-capacity digital radio communication systems.
Modulation can also be used to transmit the information of
low-frequency analog signals at higher frequencies. This is helpful
because low-frequency analog signals cannot be effectively transmitted
over free space. Hence the information from a low-frequency analog
signal must be impressed into a higher-frequency signal (known as the "carrier wave") before transmission. There are several different modulation schemes available to achieve this [two of the most basic being amplitude modulation (AM) and frequency modulation
(FM)]. An example of this process is a disc jockey's voice being
impressed into a 96 MHz carrier wave using frequency modulation (the
voice would then be received on a radio as the channel "96 FM"). In addition, modulation has the advantage that it may use frequency division multiplexing (FDM).
Optical fiber provides cheaper bandwidth for long distance communication.
In a broadcast system, the central high-powered broadcast tower transmits a high-frequency electromagnetic wave
to numerous low-powered receivers. The high-frequency wave sent by the
tower is modulated with a signal containing visual or audio information.
The receiver is then tuned so as to pick up the high-frequency wave and a demodulator
is used to retrieve the signal containing the visual or audio
information. The broadcast signal can be either analog (signal is varied
continuously with respect to the information) or digital (information
is encoded as a set of discrete values).
The broadcast media industry
is at a critical turning point in its development, with many countries
moving from analog to digital broadcasts. This move is made possible by
the production of cheaper, faster and more capable integrated circuits.
The chief advantage of digital broadcasts is that they avoid a number
of problems common to traditional analog broadcasts. For television,
this includes the elimination of problems such as snowy pictures, ghosting
and other distortion. These occur because of the nature of analog
transmission, which means that perturbations due to noise will be
evident in the final output. Digital transmission overcomes this problem
because digital signals are reduced to discrete values upon reception
and hence small perturbations do not affect the final output. In a
simplified example, if a binary message 1011 was transmitted with signal
amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9
0.2 1.1 0.9], it would still decode to the binary message 1011, a perfect
reproduction of what was sent. From this example, a problem with
digital transmissions can also be seen in that if the noise is great
enough it can significantly alter the decoded message. Using forward error correction
a receiver can correct a handful of bit errors in the resulting message
but too much noise will lead to incomprehensible output and hence a
breakdown of the transmission.
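The worked 1011 example above can be reproduced directly; the `decode` helper and the 0.5 threshold are assumptions for illustration:

```python
def decode(amplitudes, threshold=0.5):
    """Recover a binary message by reducing each received amplitude
    to the nearest discrete value (0 or 1)."""
    return [1 if a >= threshold else 0 for a in amplitudes]

sent     = [1.0, 0.0, 1.0, 1.0]   # binary message 1011
received = [0.9, 0.2, 1.1, 0.9]   # after small channel perturbations
assert decode(received) == decode(sent) == [1, 0, 1, 1]

# With heavy enough noise a decision flips and the message is
# corrupted; forward error correction would be needed to repair it.
heavy = [0.4, 0.2, 1.1, 0.9]      # first bit pushed below threshold
assert decode(heavy) == [0, 0, 1, 1]
```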
In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these standards thus far is presented in the captioned map. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2.
The choice of modulation also varies between the schemes. In digital
audio broadcasting, standards are much more unified with practically all
countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to "piggyback" on normal AM or FM analog transmissions.
However, despite the pending switch to digital, analog television
continues to be transmitted in most countries. An exception is the United
States that ended analog television transmission (by all but the very
low-power TV stations) on 12 June 2009
after twice delaying the switchover deadline. Kenya also ended analog
television transmission in December 2014 after multiple delays. For
analog television, there were three standards in use for broadcasting
color TV. These are known as PAL (German designed), NTSC (American designed), and SECAM
(French designed). For analog radio, the switch to digital radio is
made more difficult by the higher cost of digital receivers. The choice of modulation for analog radio is typically between amplitude (AM) or frequency modulation (FM). To achieve stereo playback, an amplitude modulated subcarrier is used for stereo FM, and quadrature amplitude modulation is used for stereo AM or C-QUAM.
The Internet is a worldwide network of computers and computer networks that communicate with each other using the Internet Protocol (IP). Any computer on the Internet has a unique IP address
that can be used by other computers to route information to it. Hence,
any computer on the Internet can send a message to any other computer
using its IP address. These messages carry with them the originating
computer's IP address allowing for two-way communication. The Internet
is thus an exchange of messages between computers.
It is estimated that 51% of the information flowing through
two-way telecommunications networks in the year 2000 was flowing
through the Internet (most of the rest (42%) through the landline telephone).
By the year 2007 the Internet clearly dominated and captured 97% of all
the information in telecommunication networks (most of the rest (2%)
through mobile phones). As of 2008,
an estimated 21.9% of the world population has access to the Internet
with the highest access rates (measured as a percentage of the
population) in North America (73.6%), Oceania/Australia (59.5%) and
Europe (48.1%). In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) led the world.
The Internet works in part because of protocols
that govern how the computers and routers communicate with each other.
The nature of computer network communication lends itself to a layered
approach where individual protocols in the protocol stack run
more-or-less independently of other protocols. This allows lower-level
protocols to be customized for the network situation while not changing
the way higher-level protocols operate. A practical example of why this
is important is that it allows an Internet browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi
connection. Protocols are often talked about in terms of their place in
the OSI reference model (pictured on the right), which emerged in 1983
as the first step in an unsuccessful attempt to build a universally
adopted networking protocol suite.
For the Internet, the physical medium and data link protocol can
vary several times as packets traverse the globe. This is because the
Internet places no constraints on what physical medium or data link
protocol is used. This leads to the adoption of media and protocols that
best suit the local network situation. In practice, most
intercontinental communication will use the Asynchronous Transfer Mode
(ATM) protocol (or a modern equivalent) on top of optical fiber. This is
because for most intercontinental communication the Internet shares the
same infrastructure as the public switched telephone network.
At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logical addressing. For the World Wide Web, these "IP addresses" are derived from the human readable form using the Domain Name System
(e.g. 72.14.207.99 is derived from www.google.com). At the moment, the
most widely used version of the Internet Protocol is version four but a
move to version six is imminent.
At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol
(UDP). TCP is used when it is essential that every message sent is received
by the other computer whereas UDP is used when it is merely desirable.
With TCP, packets are retransmitted if they are lost and placed in order
before they are presented to higher layers. With UDP, packets are not
ordered nor retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports,
network administrators can manipulate traffic to suit particular
requirements. Examples are to restrict Internet access by blocking the
traffic destined for a particular port or to affect the performance of
certain applications by assigning priority.
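A port-based filter of the kind just described might be sketched as follows. The packet representation and the blocking policy are hypothetical, though ports 80/443 (HTTP/HTTPS) and 110 (POP3) are well-known assignments:

```python
# Hypothetical policy: restrict web browsing while allowing e-mail,
# by filtering on well-known destination ports.
BLOCKED_PORTS = {80, 443}   # HTTP and HTTPS

def allow(packet):
    """Return True if the packet may pass the filter."""
    return packet["dst_port"] not in BLOCKED_PORTS

web_packet  = {"src_port": 50000, "dst_port": 80}    # HTTP request
mail_packet = {"src_port": 50001, "dst_port": 110}   # POP3 request

assert not allow(web_packet)   # web traffic is dropped
assert allow(mail_packet)      # e-mail traffic passes
```

A real network device would match on more fields (addresses, protocol, direction), but the principle of steering traffic by port number is the same.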
Above the transport layer, there are certain protocols that are
sometimes used and loosely fit in the session and presentation layers,
most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that data transferred between two parties remains confidential. Finally, at the application layer are many of the protocols Internet users would be familiar with, such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and XMPP (instant messaging).
Voice over Internet Protocol (VoIP) allows data packets to be used for synchronous
voice communications. The data packets are marked as voice type packets
and can be prioritized by the network administrators so that the
real-time, synchronous conversation is less subject to contention with
other types of data traffic that can be delayed (e.g. file transfer or
email) or buffered in advance (e.g. audio and video) without detriment.
That prioritization works well when the network has sufficient capacity
for all the VoIP calls taking place at the same time and is enabled for
prioritization (e.g. a private corporate-style network), but the public
Internet is not generally managed in this way, so there can be a
big difference in the quality of VoIP calls over a private network and
over the public Internet.
Local area networks and wide area networks
Despite the growth of the Internet, the characteristics of local area networks
(LANs)—computer networks that do not extend beyond a few
kilometers—remain distinct. This is because networks on this scale do
not require all the features associated with larger networks and are
often more cost-effective and efficient without them. When they are not
connected with the Internet, they also have the advantages of privacy
and security. However, purposefully lacking a direct connection to the
Internet does not provide assured protection from hackers, military
forces, or economic powers. These threats exist if there are any methods
for connecting remotely to the LAN.
Wide area networks
(WANs) are private computer networks that may extend for thousands of
kilometers. Once again, some of their advantages include privacy and
security. Prime users of private LANs and WANs include armed forces and
intelligence agencies that must keep their information secure and
secret.
In the mid-1980s, several sets of communication protocols emerged
to fill the gaps between the data-link layer and the application layer
of the OSI reference model. These included AppleTalk, IPX, and NetBIOS, with the dominant protocol set during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point, but it was typically only used by large government and research facilities.
As the Internet grew in popularity and its traffic was required
to be routed into private networks, the TCP/IP protocols replaced
existing local area network technologies. Additional technologies, such
as DHCP,
allowed TCP/IP-based computers to self-configure in the network. Such
functions also existed in the AppleTalk/IPX/NetBIOS protocol sets.
Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label
Switching (MPLS) are typical data-link protocols for larger networks
such as WANs, Ethernet and Token Ring
are typical data-link protocols for LANs. These protocols differ from
the former in that they are simpler (e.g., they omit features
such as quality-of-service guarantees) and offer collision prevention. Both of these differences allow for more economical systems.
Despite the modest popularity of IBM Token Ring in the 1980s and
1990s, virtually all LANs now use either wired or wireless Ethernet
facilities. At the physical layer, most wired Ethernet implementations
use copper twisted-pair cables (including the common 10BASE-T
networks). However, some early implementations used heavier coaxial
cables and some recent implementations (especially high-speed ones) use
optical fibers. When optic fibers are used, the distinction must be made between multimode fibers and single-mode fibers. Multimode fibers
can be thought of as thicker optical fibers that are cheaper to
manufacture devices for, but that suffer from less usable bandwidth and
worse attenuation, implying poorer long-distance performance.
New media are forms of media that are native to computers,
computational in nature, and reliant on computers for redistribution. Some examples of
new media are telephones, computers, virtual worlds, single media, website games, human-computer interfaces, computer animation and interactive computer installations.
New media are often contrasted to "old media",
such as television, radio, and print media, although scholars in
communication and media studies have criticized rigid distinctions based
on oldness and novelty. New media does not include television programs (only analog broadcast), feature films, magazines, or books, unless they contain technologies that enable digital generative or interactive processes.
Wikipedia, an online encyclopedia, is a good example of New Media, combining Internet
accessible digital text, images and video with web-links, creative
participation of contributors, interactive feedback of users and
formation of a participant community of editors and donors for the
benefit of non-community readers. Facebook is another type of New Media, belonging to the category of social media model, in which most users are also participants. Another type of New Media is Twitter
which also belongs to the social media category, through which users
interact with one another and make announcements to which the public
receives. Both Facebook and Twitter have risen in usage in recent years
and have become an online resource for acquiring information.
In the 1950s, connections between computing and radical art began to grow stronger. It was not until the 1980s that Alan Kay and his co-workers at Xerox PARC began to give the computability of a personal computer
to the individual, rather than have a big organization be in charge of
this. "In the late 1980s and early 1990s, however, we seem to witness a
different kind of parallel relationship between social changes and computer design. Although causally unrelated, conceptually it makes sense that the Cold War and the design of the Web took place at exactly the same time."
Writers and philosophers such as Marshall McLuhan were instrumental in the development of media theory during this period. His now famous declaration in Understanding Media: The Extensions of Man (1964) that "the medium is the message"
drew attention to the too often ignored influence media and technology
themselves, rather than their "content," have on humans' experience of
the world and on society broadly.
Until the 1980s, media relied primarily upon print and analog broadcast models, such as those of television and radio.
The last twenty-five years have seen the rapid transformation into
media which are predicated upon the use of digital technologies, such as
the Internet and video games. However, these examples are only a small representation of new media. The use of digital computers has transformed the remaining 'old' media, as suggested by the advent of digital television and online publications. Even traditional media forms such as the printing press have been transformed through the application of technologies such as image manipulation software like Adobe Photoshop and desktop publishing tools.
Andrew L. Shapiro
(1999) argues that the "emergence of new, digital technologies signals a
potentially radical shift of who is in control of information,
experience and resources" (Shapiro cited in Croteau and Hoynes 2003:
322). W. Russell Neuman
(1991) suggests that whilst the "new media" have technical capabilities
to pull in one direction, economic and social forces pull back in the
opposite direction. According to Neuman, "We are witnessing the
evolution of a universal interconnected network of audio, video, and
electronic text communications that will blur the distinction between
interpersonal and mass communication and between public and private
communication" (Neuman cited in Croteau and Hoynes 2003: 322). Neuman
argues that new media will:
Alter the meaning of geographic distance.
Allow for a huge increase in the volume of communication.
Provide the possibility of increasing the speed of communication.
Provide opportunities for interactive communication.
Allow forms of communication that were previously separate to overlap and interconnect.
Consequently, it has been the contention of scholars such as Douglas Kellner and James Bohman that new media, and particularly the Internet, provide the potential for a democratic postmodern
public sphere, in which citizens can participate in well informed,
non-hierarchical debate pertaining to their social structures.
Contradicting these positive appraisals of the potential social impacts
of new media are scholars such as Edward S. Herman and Robert McChesney, who have suggested that the transition to new media has seen a handful of powerful transnational telecommunications corporations achieve a level of global influence that was hitherto unimaginable.
Scholars, such as Lister et al. (2003), have highlighted both the
positive and negative potential and actual implications of new media
technologies, suggesting that some of the early work into new media
studies was guilty of technological determinism –
whereby the effects of media were determined by the technologies
themselves, rather than through tracing the complex social networks
which governed the development, funding, implementation and future
development of any technology.
Based on the argument that people have a limited amount of time
to spend on the consumption of different media, displacement theory
argues that the viewership or readership of one particular outlet leads
to the reduction in the amount of time spent by the individual on
another. The introduction of New Media, such as the internet, therefore
reduces the amount of time individuals would spend on existing "Old"
Media, which could ultimately lead to the end of such traditional media.
Definition
Although there are several ways that new media may be described, the following perspectives are commonly used:
New Media versus Cyberculture –
Cyberculture is the various social phenomena that are associated with
the Internet and network communications (blogs, online multi-player
gaming), whereas New Media is concerned more with cultural objects and
paradigms (digital to analog television, iPhones).
New Media as Computer Technology Used as a Distribution Platform –
New Media are the cultural objects which use digital computer
technology for distribution and exhibition, e.g. (at least for now)
the Internet, Web sites, computer multimedia, Blu-ray disks, etc. The problem
with this is that the definition must be revised every few years. The
term "new media" will not be "new" anymore, as most forms of culture
will be distributed through computers.
New Media as Digital Data Controlled by Software – The
language of New Media is based on the assumption that, in fact, all
cultural objects that rely on digital representation and computer-based
delivery do share a number of common qualities. New media is reduced to
digital data that can be manipulated by software as any other data. Now
media operations can create several versions of the same object. An
example is an image stored as matrix data which can be manipulated and
altered according to the additional algorithms implemented, such as
color inversion, gray-scaling, sharpening, rasterizing, etc.
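The image-as-matrix idea above can be sketched directly. The tiny 2x2 image, the 8-bit value range, and the helper names are hypothetical:

```python
def invert(image, max_val=255):
    """Color inversion: replace each 8-bit pixel value v with max_val - v."""
    return [[max_val - v for v in row] for row in image]

def grayscale(rgb_image):
    """Gray-scaling: average the three color components of each pixel."""
    return [[sum(px) // 3 for px in row] for row in rgb_image]

# A tiny hypothetical 2x2 grayscale image stored as matrix data.
img = [[0, 128],
       [255, 64]]
assert invert(img) == [[255, 127], [0, 191]]

# A one-row RGB image reduced to gray values.
gray = grayscale([[(255, 0, 0), (0, 0, 0)]])   # [[85, 0]]

# Each operation yields a new version of the same object; the
# original matrix is untouched.
assert img == [[0, 128], [255, 64]]
```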
New Media as the Mix Between Existing Cultural Conventions and the Conventions of Software –
New Media today can be understood as the mix between older cultural
conventions for data representation, access, and manipulation and newer
conventions of data representation, access, and manipulation. The "old"
data are representations of visual reality and human experience, and the
"new" data is numerical data. The computer is kept out of the key
"creative" decisions and is delegated to the position of a technician.
For example, in film, software is used in some areas of production,
while others are created using computer animation.
New Media as the Aesthetics that Accompanies the Early Stage of Every New Modern Media and Communication Technology –
While ideological tropes indeed seem to be reappearing rather
regularly, many aesthetic strategies may reappear two or three times ...
In order for this approach to be truly useful it would be insufficient
to simply name the strategies and tropes and to record the moments of
their appearance; instead, we would have to develop a much more
comprehensive analysis which would correlate the history of technology
with social, political, and economic histories of the modern period.
New Media as Faster Execution of Algorithms Previously Executed Manually or through Other Technologies –
Computers are a huge speed-up of what were previously manual
techniques, e.g. calculators. Dramatically speeding up the execution
makes possible previously non-existent representational techniques. This
also makes possible many new forms of media art, such as interactive
multimedia and video games. While on one level a modern digital computer is
just a faster calculator, we should not ignore its other identity: that
of a cybernetic control device.
New Media as the Encoding of Modernist Avant-Garde; New Media as Metamedia – Manovich declares that the 1920s are more relevant to New Media than any other time period. Metamedia coincides with postmodernism
in that they both rework old work rather than create new work. New
media avant-garde is about new ways of accessing and manipulating
information (e.g. hypermedia, databases, search engines, etc.).
Meta-media is an example of how quantity can change into quality, as
new media technology and manipulation techniques can recode modernist
aesthetics into a very different postmodern aesthetics.
New Media as Parallel Articulation of Similar Ideas in Post-WWII Art and Modern Computing –
Post WWII Art or "combinatorics" involves creating images by
systematically changing a single parameter. This leads to the creation
of remarkably similar images and spatial structures. This illustrates
that algorithms, an essential part of new media, do not depend on
technology, but can be executed by humans.
Globalization
The
rise of new media has increased communication between people all over
the world via the Internet. It has allowed people to express themselves
through blogs, websites, videos, pictures, and other user-generated
media.
Flew (2002) stated that, "as a result of the evolution of new media technologies, globalization
occurs." Globalization is generally stated as "more than expansion of
activities beyond the boundaries of particular nation states".
Globalization shortens the distance between people all over the world
through electronic communication (Carely 1992 in Flew 2002), and
Cairncross (1998) expresses this great development as the "death of
distance". New media "radically break the connection between physical
place and social place, making physical location much less significant
for our social relationships" (Croteau and Hoynes 2003: 311).
However, the changes in the new media environment create a series of tensions in the concept of "public sphere".
According to Ingrid Volkmer, "public sphere" is defined as a process
through which public communication becomes restructured and partly
disembedded from national political and cultural institutions. This
trend of the globalized public sphere is not only a geographical
expansion from a nation to worldwide, but also changes the relationship
between the public, the media and state (Volkmer, 1999:123).
"Virtual communities" are being established online and transcend geographical boundaries, eliminating social restrictions. Howard Rheingold
(2000) describes these globalised societies as self-defined networks,
which resemble what we do in real life. "People in virtual communities
use words on screens to exchange pleasantries and argue, engage in
intellectual discourse, conduct commerce, make plans, brainstorm,
gossip, feud, fall in love, create a little high art and a lot of idle
talk" (Rheingold cited in Slevin 2000: 91). For Sherry Turkle
"making the computer into a second self, finding a soul in the machine,
can substitute for human relationships" (Holmes 2005: 184). New media
has the ability to connect like-minded others worldwide.
While this perspective suggests that technology drives, and therefore
is a determining factor in, the process of globalization,
arguments involving technological determinism are generally frowned upon by mainstream media studies.
Instead academics focus on the multiplicity of processes by which
technology is funded, researched and produced, forming a feedback loop
when the technologies are used and often transformed by their users,
which then feeds into the process of guiding their future development.
While commentators such as Castells espouse a "soft determinism"
whereby they contend that "Technology does not determine society. Nor
does society script the course of technological change, since many
factors, including individual inventiveness and entrepreneurialism,
intervene in the process of scientific discovery, technical innovation
and social applications, so the final outcome depends on a complex
pattern of interaction. Indeed the dilemma of technological determinism
is probably a false problem, since technology is society and society
cannot be understood without its technological tools." (Castells 1996:5)
This, however, is still distinct from stating that societal changes are
instigated by technological development, which recalls the theses of Marshall McLuhan.
Manovich and Castells
have argued that whereas mass media "corresponded to the logic of
industrial mass society, which values conformity over individuality,"
(Manovich 2001:41) new media follows the logic of the postindustrial or
globalized society whereby "every citizen can construct her own custom
lifestyle and select her ideology from a large number of choices. Rather
than pushing the same objects to a mass audience, marketing now tries
to target each individual separately." (Manovich 2001:42).
As a tool for social change
Social movement media has a rich and storied history that has changed at a rapid rate since New Media became widely used. The Zapatista Army of National Liberation of Chiapas,
Mexico were the first major movement to make widely recognized and
effective use of New Media for communiques and organizing in 1994.
Since then, New Media has been used extensively by social movements to
educate, organize, share cultural products of movements, communicate,
coalition build, and more. The WTO Ministerial Conference of 1999 protest activity
was another landmark in the use of New Media as a tool for social
change. The WTO protests used media to organize the original action,
communicate with and educate participants, and was used as an
alternative media source. The Indymedia
movement also developed out of this action, and has been a great tool
in the democratization of information, which is another widely discussed
aspect of new media movement.
Some scholars even view this democratization as an indication of the
creation of a "radical, socio-technical paradigm to challenge the
dominant, neoliberal and technologically determinist model of
information and communication technologies."
A less radical view along these same lines is that people are taking
advantage of the Internet to produce a grassroots globalization, one
that is anti-neoliberal and centered on people rather than the flow of
capital.
Chanelle Adams, a feminist blogger for the Bi-Weekly webpaper The Media
says that in her "commitment to anti-oppressive feminist work, it seems
obligatory for her to stay in the know just to remain relevant to the
struggle." For Adams and other feminists who work to spread their
messages to the public, new media is crucial to that task, allowing
people to access a movement's information instantaneously. Of course,
some are also skeptical of the
role of New Media in Social Movements. Many scholars point out unequal
access to new media as a hindrance to broad-based movements, sometimes
even oppressing some within a movement. Others are skeptical about how democratic or useful it really is for social movements, even for those with access.
New Media has also found a use with less radical social movements such as the Free Hugs Campaign,
which uses websites, blogs, and online videos to demonstrate the
effectiveness of the movement itself. Along with this example, the use of
high volume blogs has allowed numerous views and practices to be more
widespread and gain more public attention. Another example is the
ongoing Free Tibet Campaign, which has been seen on numerous websites as well as having a slight tie-in with the band Gorillaz in their Gorillaz Bitez clip featuring the lead singer 2D
sitting with protesters at a Free Tibet protest. Another social change
seen coming from New Media is trends in fashion and the emergence of
subcultures such as Text Speak, Cyberpunk, and various others.
Following trends in fashion and Text Speak, New Media also makes way for "trendy" social change. The Ice Bucket Challenge is a recent example of this. All in the name of raising money for ALS (the lethal neurodegenerative disorder also known as Lou Gehrig's disease), participants are nominated by friends via Facebook and Twitter
to dump a bucket of ice water on themselves, or donate to
the ALS Foundation. This became a huge trend through Facebook's tagging
tool, allowing nominees to be tagged in the post. The videos appeared
on more people's feeds, and the trend spread fast. This trend raised
over 100 million dollars for the cause and increased donations by 3,500
percent.
National security
New Media has also recently become of interest to the global espionage community as it is easily accessible electronically in database format and can therefore be quickly retrieved and reverse engineered by national governments. Particularly of interest to the espionage community are Facebook and Twitter,
two sites where individuals freely divulge personal information that
can then be sifted through and archived for the automatic creation of
dossiers on both people of interest and the average citizen.
New media also serves as an important tool for both institutions
and nations to promote their interest and values (The contents of such
promotion may vary according to different purposes). Some communities
consider it an approach of "peaceful evolution" that may erode their own
nation's system of values and eventually compromise national security.
Interactivity
Interactivity has become a term for a number of new media use options evolving from the rapid dissemination of Internet access points, the digitalization of media, and media convergence.
In 1984, Rice defined new media as communication technologies that
enable or facilitate user-to-user interactivity and interactivity
between user and information. Such a definition replaces the "one-to-many" model of traditional mass communication with the possibility of a "many-to-many"
web of communication. Any individual with the appropriate technology
can now produce his or her online media and include images, text, and
sound about whatever he or she chooses.
Thus the convergence of new methods of communication with new
technologies shifts the model of mass communication, and radically
reshapes the ways we interact and communicate with one another. In "What
is new media?" Vin Crosbie
(2002) described three different kinds of communication media. He saw
Interpersonal media as "one to one", Mass media as "one to many", and
finally New Media as Individuation Media or "many to many".
When we think of interactivity and its meaning, we tend to assume that
it is prominent only in the conversational dynamics of individuals who
are face to face. This narrow view overlooks its existence in mediated
communication forums. Interactivity is present in
some programming work, such as video games. It's also viable in the
operation of traditional media. In the mid-1990s, filmmakers started
using inexpensive digital cameras to create films, and moving-image
technology developed to the point where full-motion video could be
viewed on computer desktops. This development of new media technology
gave artists a new method for sharing their work and interacting
with the wider world. Other settings of interactivity include radio and
television talk shows, letters to the editor, listener participation in
such programs, and computer and technological programming.
Interactive new media has become a real benefit to everyone, because
people can express their artwork in more than one way with the
technology we have today, and there is no longer a limit to what we
can do with our creativity.
Interactivity can be considered a central concept in understanding new media, but different media forms possess, or enable, different degrees of interactivity, and some forms of digitized and converged media are not in fact interactive at all. Tony Feldman considers digital satellite television
as an example of a new media technology that uses digital compression
to dramatically increase the number of television channels that can be
delivered, and which changes the nature of what can be offered through
the service, but does not transform the experience of television from
the user's point of view, and thus lacks a more fully interactive
dimension. It remains the case that interactivity is not an inherent
characteristic of all new media technologies, unlike digitization and
convergence.
Terry Flew
(2005) argues that "the global interactive games industry is large and
growing, and is at the forefront of many of the most significant
innovations in new media" (Flew 2005: 101). Interactivity is prominent
in these online video games such as World of Warcraft, The Sims Online and Second Life.
These games, which are developments of "new media," allow users to
establish relationships and experience a sense of belonging that
transcends traditional temporal and spatial boundaries (such as when
gamers logging in from different parts of the world interact). These
games can be used as an escape or to act out a desired life. Will Wright, creator of The Sims, "is fascinated by the way gamers have become so attached to his invention, with some even living their lives through it". New media have created virtual realities that are becoming virtual extensions of the world we live in. With the creation of Second Life and Active Worlds
before it, people have even more control over this virtual world, a
world where anything that a participant can think of can become a
reality.
New Media changes continuously because it is constantly modified
and redefined by the interaction between users, emerging technologies,
cultural changes, etc.
New forms of New Media are emerging, such as the Web 2.0 tools Facebook
and YouTube, along with video games and the consoles they are played
on, which are branching out into New Media as well. Gamers on YouTube
post videos of themselves playing the video games they like and that
people want to watch, and cultural changes follow as people upload
their gaming experiences to Web 2.0 platforms for the world to see.
Consoles like the Xbox One and the PlayStation 4 have WiFi connectivity
and chat rooms on most of their video games that allow gamer-to-gamer
conversations around the world; they also connect to YouTube, so a
streamed or recorded session can easily be uploaded. Even the older
video game consoles are becoming new media, because YouTube can display
walkthroughs and let's plays of their games; these older titles draw
popularity from the community's nostalgia and from the old-school
graphics and gameplay that show how good earlier technology was in its
time. YouTube gaming is evolving as some YouTubers earn substantial
money from their videos: the more people become YouTube members, the
more popular YouTube becomes and the more it emerges as a new source of
media, along with video games and consoles. Consoles with chat rooms,
online gaming, and WiFi are seeing the largest increase in popularity,
not only because they are the most advanced but because the newest
video games are the ones most of the gaming community wants to buy,
play and watch. Facebook, a much larger website with many more users,
helps these video games and consoles gain popularity as well, since
people can upload the videos they create there to spread their gaming
content.
Industry
The new media industry shares an open association with many market segments in areas such as software/video game design, television, radio, mobile and particularly movies, advertising and marketing, through which industry seeks to gain from the advantages of two-way dialogue with consumers primarily through the Internet. As a device to source the ideas, concepts, and intellectual properties of the general public, the television
industry has used new media and the Internet to expand their resources
for new programming and content. The advertising industry has also
capitalized on the proliferation of new media with large agencies
running multimillion-dollar interactive advertising subsidiaries. Interactive websites and kiosks have become popular. In a number of cases advertising agencies have also set up new divisions to study new media. Public relations firms are also taking advantage of the opportunities in new media through interactive PR practices. Interactive PR practices include the use of social media[34] to reach a mass audience of online social network users.
With the rise of the Internet, many new career paths were
created. Before this rise, many technical jobs were seen as nerdy, but
the Internet led to creative work seen as laid-back and diverse across
sex, race, and sexual orientation. Web design, gaming design,
webcasting, blogging, and animation are all creative career paths that
came with this rise. At first glance, the field of new media may seem
hip, cool, creative and relaxed. What many don't realize is that
working in this field is tiresome. Many of the people that work in this
field don't have steady jobs. Work in this field has become
project-based. Individuals work project to project for different
companies. Most people are not working on one project or contract, but
multiple ones at the same time. Despite working on numerous projects,
people in this industry receive low payments, which is highly contrasted
with the techy millionaire stereotype. It may seem like a carefree life
from the outside, but it is not. New media workers work long hours for
little pay and spend up to 20 hours a week looking for new projects to
work on.
The ideology of new media careers as an egalitarian, stress-free
environment is a myth; the field is a game of networking and excelling
at what you are capable of. Many workers face job instability, and
inequality within the field persists due to the informality and
flexibility of this career path.
Within the industry, many companies have emerged or been transformed
to adapt to the fast-moving opportunities that new media offers. The
following companies are examples of the changing landscape of
companies and agencies that have redeveloped, added, or changed
services to offer new media services.
Brand New Media
Adcore Creative
Seven West Media
Youth
Based on nationally representative data, a study conducted by Kaiser Family Foundation
in five-year intervals in 1998–99, 2003–04, and 2008–09 found that with
technology allowing nearly 24-hour media access, the amount of time
young people spend with entertainment media has risen dramatically,
especially among Black and Hispanic youth.
Today, 8- to 18-year-olds devote an average of 7 hours and 38 minutes
(7:38) to using entertainment media in a typical day (more than 53 hours
a week) – about the same amount most adults spend at work per day.
Since much of that time is spent 'media multitasking' (using more than
one medium at a time), they actually pack a total of 10 hours and 45
minutes' worth of media content into those 7½ hours per day.
According to the Pew Internet & American Life Project,
96% of 18- to 29-year-olds and three-quarters (75%) of teens now own a
cell phone, 88% of whom text, with 73% of wired American teens using
social networking websites, a significant increase from previous years.
A survey of over 25,000 9- to 16-year-olds from 25 European countries
found that many underage children use social media sites despite the
site's stated age requirements, and many youth lack the digital skills
to use social networking sites safely.
The development of new digital media demands a new educational model from parents and educators. Parental mediation has become a way to manage children's experiences with the Internet, chat, video games and social networks.
A recent Internet trend is the YouTuber generation. YouTubers are
young people who offer free videos on their personal YouTube channels.
There are videos on games, fashion, food, cinema and music, in which
they offer tutorials or commentary.
The role of cellular phones, such as the iPhone, has made it nearly
impossible to be in social isolation, and has the potential to ruin
relationships. The iPhone activates the insular cortex of the brain,
which is associated with feelings of love. People show similar feelings
toward their phones as they would toward their friends, family and
loved ones. Countless people spend more time on their phones while in
the presence of other people than they spend with the people in the
same room or class.
Observational Research
One of the major issues for observational research is whether a
particular project is considered to involve human subjects. A human
subject is "defined by federal regulations as a living individual
about whom an investigator obtains data through interaction with the
individual or identifiable private information".
Moreno et al. (2013) note that if access to a social media site is
public, information is considered identifiable but not private, and
information gathering procedures do not require researchers to interact
with the original poster of the information, then this does not meet the
requirements for human subjects research. Research may also be exempt
if the disclosure of participant responses outside the realm of the
published research does not subject the participant to civic or criminal
liability, damage the participant's reputation, employability or
financial standing.
Given these criteria, however, researchers still have considerable
leeway when conducting observational research on social media. Many
profiles on Facebook, Twitter, and LinkedIn are public and
researchers are free to use that data for observational research.
Users have the ability to change their privacy settings on most
social media websites. Facebook, for example, provides users with the
ability to restrict who sees their posts through specific privacy
settings.
There is also debate about whether requiring users to create a username
and password is sufficient to establish whether the data is considered
public or private. Historically, Institutional Review Boards considered
such websites to be private,[50]
although newer websites like YouTube call this practice into question.
For example, YouTube only requires the creation of a username and
password to post videos and/or view adult content, but anyone is free to
view general YouTube videos and these general videos would not be
subject to consent requirements for researchers looking to conduct
observational studies.
Interactive Research
According
to Romano et al. (2013), interactive research occurs when "a researcher
wishes to access the [social media website] content that is not
publicly available" (p. 710). Because researchers have limited ways of
accessing this data, this could mean that a researcher sends a Facebook
user a friend request, or follows a user on Twitter in order to gain
access to potentially protected tweets (p. 711). While it could be
argued that such actions would violate a social media user's expectation
of privacy, Ellison, Steinfield and Lampe (2007) argued that actions
like "friending" or "following" an individual on social media
constitute a "loose tie" relationship and are therefore not sufficient
to establish a reasonable expectation of privacy, since individuals
often have friends or followers they have never even met.
Survey and Interview Research
Because
research on social media occurs online, it is difficult for researchers
to observe participant reactions to the informed consent process. For
example, when collecting information about activities that are
potentially illegal, or recruiting participants from stigmatized
populations, this lack of physical proximity could potentially
negatively impact the informed consent process.
Another important consideration regards the confidentiality of
information provided by participants. While information provided over
the internet might be perceived as lower risk, studies that publish
direct quotes from study participants might expose them to the risk of
being identified via a Google search.
The Basic Concept of the Electronic Telephone Exchange (PABX = the basic logic circuit of the Internet)
A telephone exchange or telephone switch is a telecommunications system used in the public switched telephone network or in large enterprises. It interconnects telephone subscriber lines or virtual circuits of digital systems to establish telephone calls between subscribers.
In historical perspective, telecommunication terms have been used with different semantics over time. The term telephone exchange is often used synonymously with central office, a Bell System term. Often, a central office is defined as a building used to house the inside plant
equipment of potentially several telephone exchanges, each serving a
certain geographical area. Such an area has also been referred to as the
exchange or exchange area. In North America, a central office location
may also be identified as a wire center, designating a facility from which a telephone obtains dial tone.[1] For business and billing purposes, telephony carriers define rate centers,
which in larger cities may be clusters of central offices, to define
specified geographical locations for determining distance measurements.
In the United States and Canada, the Bell System established in
the 1940s a uniform system of identifying central offices with a
three-digit central office code that was used as a prefix to subscriber
telephone numbers. All central offices within a larger region,
typically aggregated by state, were assigned a common numbering plan area code.
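The three-digit office code and the area code give North American numbers their familiar 3-3-4 shape. A minimal sketch of splitting such a number into its parts (the function name and sample number are hypothetical, for illustration only):

```python
# Split a 10-digit North American number into its numbering plan area
# (NPA) code, central office (NXX) prefix, and subscriber line number.
def split_nanp(number: str):
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10:
        raise ValueError("expected a 10-digit number")
    return digits[:3], digits[3:6], digits[6:]

npa, nxx, line = split_nanp("212-555-0123")
# npa is the area code, nxx the central office prefix, line the station number
```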
With the development of international and transoceanic telephone
trunks, especially driven by direct customer dialing, similar efforts of
systematic organization of the telephone networks occurred in many
countries in the mid-20th century.
For corporate or enterprise use, a private telephone exchange is often referred to as a private branch exchange (PBX), when it has connections to the public switched telephone network.
A PBX is installed in enterprise facilities, typically collocated with
large office spaces or within an organizational campus to serve the
local private telephone system and any private leased line circuits.
Smaller installations might deploy a PBX or key telephone system in the office of a receptionist.
A telephone operator manually connecting calls with cord pairs at a telephone switchboard.
A modern central office, equipped for voice communication and broadband data.
1903 manual switch for four subscriber lines (top) with four cross-bar
talking circuits (horizontal), and one bar to connect the operator (T).
The lowest cross-bar connects idle stations to ground to enable the
signaling indicators (F) .
Technologies
Many terms used in telecommunication technology differ in meaning and usage among the various English speaking regions.
For the purpose of this article the following definitions are made:
Manual service is a condition in which a human telephone operator routes calls inside an exchange without the use of a dial.
Dial service is when an exchange routes calls by a switch interpreting dialed digits.
A telephone switch is the switching equipment of an exchange.
A concentrator is a device that concentrates traffic, be it remote or co-located with the switch.
An off-hook condition represents a circuit that is in use, e.g., when a phone call is in progress.
An on-hook condition represents an idle circuit, i.e. no phone call is in progress.
A wire center is the area served by a particular switch or central office.
Central office originally referred to switching equipment and
its operators; it is also used generally for the building that houses
switching and related inside plant equipment. In United States telecommunication jargon, a central office (C.O.) is a common carrier switching center or Class 5 telephone switch in which trunks and local loops are terminated and switched.
In the UK, a telephone exchange means an exchange building, and is also the name for a telephone switch.
Early automatic exchanges
A rural telephone exchange building in Australia
Automatic exchanges, or dial service, came into existence in the early 20th century. Their purpose was to eliminate the need for human switchboard operators who completed the connections required for a telephone call.
Automation replaced human operators with electromechanical systems and
telephones were equipped with a dial by which a caller transmitted the
destination telephone number to the automatic switching system.
A telephone exchange automatically senses an off-hook condition of the telephone when the user removes the handset from the switchhook or cradle. The exchange provides dial tone at that time to indicate to the user that the exchange is ready to receive dialed digits. The pulses or DTMF
tones generated by the telephone are processed and a connection is
established to the destination telephone within the same exchange or to
another distant exchange.
The exchange maintains the connection until one of the parties hangs up. This monitoring of connection status is called supervision. Additional features, such as billing equipment, may also be incorporated into the exchange.
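The off-hook, dial-tone, digit-collection and supervision sequence described above can be sketched as a small state machine. This is a toy model, not real exchange software; the class and method names are invented, and a seven-digit local number is assumed:

```python
# Toy model of call progress at an exchange: sense off-hook, return
# dial tone, collect digits, connect, then supervise until hang-up.
class SubscriberLine:
    def __init__(self):
        self.state = "on-hook"        # idle circuit
        self.digits = ""

    def off_hook(self):
        self.state = "dial-tone"      # exchange ready to receive digits

    def dial(self, digit):
        self.digits += digit
        if len(self.digits) == 7:     # assume a 7-digit local number
            self.state = "connected"  # call established

    def hang_up(self):
        self.state = "on-hook"        # supervision tears down the call

line = SubscriberLine()
line.off_hook()
for d in "5550123":
    line.dial(d)
# line.state is now "connected"; hang_up() returns it to "on-hook"
```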
The Bell System dial service implemented a feature called automatic number identification (ANI) which facilitated services like automated billing, toll-free 800-numbers, and 9-1-1
service. In manual service, the operator knows where a call is
originating by the light on the switchboard jack field. Before ANI, long
distance calls were placed into an operator queue and the operator
asked the calling party's number and recorded it on a paper toll ticket.
Early exchanges were electromechanical systems using motors, shaft drives, rotating switches and relays. Some types of automatic exchanges were the Strowger switch or step-by-step switch, All Relay, X-Y, panel switch, Rotary system and the crossbar switch.
Electronic switches
Electronic switching systems gradually evolved in stages from electromechanical hybrids with stored program control to fully digital systems. Early systems used reed-relay-switched metallic paths under digital control. Equipment testing, phone number reassignments, circuit lockouts and similar tasks were accomplished by data entry on a terminal.
Examples of these systems included the Western Electric 1ESS switch, Northern Telecom SP1, Ericsson AXE, Philips PRX/A, ITT Metaconta, the British GPO/BT TXE
series, and several other similar designs. Ericsson also developed a
fully computerized version of their ARF crossbar exchange called ARE.
These used a crossbar switching matrix with a fully computerized control
system and provided a wide range of advanced services. Local versions
were called ARE11 while tandem versions were known as ARE13. They were
used in Scandinavia, Australia, Ireland and many other countries in the
late 1970s and into the 1980s when they were replaced with digital
technology.
These systems could use the old electromechanical signaling
methods inherited from crossbar and step-by-step switches. They also
introduced a new form of data communications: two 1ESS exchanges could
communicate with one another using a data link called Common Channel Interoffice Signaling, (CCIS). This data link was based on CCITT 6, a predecessor to SS7.
In European systems R2 signalling was normally used.
Digital switches
A typical satellite PABX with front cover removed
Digital switches work by connecting two or more digital circuits, according to a dialed telephone number or other instruction. Calls are set up between switches. In modern networks, this is usually controlled using the Signalling System 7 (SS7) protocol, or one of its variants. Many networks around the world are now transitioning to voice over IP technologies which use Internet-based protocols such as the Session Initiation Protocol (SIP). These may have superseded TDM and SS7 based technologies in some networks.
The concepts of digital switching were developed by various labs
in the United States and in Europe from the 1930s onwards. The first
prototype digital switch was developed by Bell Labs
as part of the ESSEX project while the first true digital exchange to
be combined with digital transmission systems was designed by LCT
(Laboratoire Central de Telecommunications) in Paris. The first digital switch to be placed into a public network was the Empress Exchange in London, England which was designed by the General Post Office research labs. This was a tandem switch that connected three Strowger exchanges in the London area. The first commercial roll-out of a fully digital local switching system was Alcatel's E10 system which began serving customers in Brittany in Northwestern France in 1972.
Prominent examples of digital switches include:
Ericsson's AXE telephone exchange
is the most widely used digital switching platform in the world and can
be found throughout Europe and in most countries around the world. It
is also very popular in mobile applications. This highly modular system
was developed in Sweden in the 1970s as a replacement for the very
popular range of Ericsson crossbar switches ARF, ARM, ARK and ARE used by many European networks from the 1950s onwards.
Alcatel developed the E10 system in France during the late 1960s
and 1970s. This widely used family of digital switches was one of the
earliest TDM switches to be widely used in public networks. Subscribers
were first connected to E10A switches in France in 1972. This system is
used in France, Ireland, China, and many other countries. It has been
through many revisions and current versions are even integrated into all-IP networks.
Alcatel also acquired the ITT System 12
when it bought ITT's European operations. The S12 system and E10
systems were merged into a single platform in the 1990s. The S12 system
is used in Germany, Italy, Australia, Belgium, China, India, and many
other countries around the world.
Finally, when Alcatel and Lucent merged, the company acquired Lucent's 5ESS and 4ESS systems used throughout the United States of America and in many other countries.
NEC NEAX used in Japan, New Zealand and many other countries.
Marconi System X, originally developed by GPT and Plessey, is a type of digital exchange used by BT Group in the UK public telephone network.
A digital exchange (Nortel DMS-100) used by an operator to offer local and long distance services in France. Each switch typically serves 10,000–100,000+ subscribers depending on the geographic area.
Digital switches encode the speech in progress in 8,000 time slices per second. At each time slice, a digital PCM
representation of the sound is made. The digits are then sent to the
receiving end of the line, where the reverse process occurs to produce
the sound for the receiving phone. In other words, when someone uses a
telephone, the speaker's voice is "encoded" and then reconstructed for
the person on the other end. The speaker's voice is not "live" but
reconstructed, delayed in the process by only a small fraction of one
second.
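The encode-and-reconstruct round trip can be sketched in a few lines. Real exchanges use logarithmic (mu-law or A-law) companding per ITU-T G.711; plain linear 8-bit quantization is used here only to keep the sketch short:

```python
import math

RATE = 8000  # time slices (samples) per second, as described above

def encode(samples):
    # map each sample in [-1.0, 1.0] to an 8-bit code 0..255
    return [round((s + 1.0) * 127.5) for s in samples]

def decode(codes):
    # reverse process at the receiving end of the line
    return [c / 127.5 - 1.0 for c in codes]

# one hundredth of a second of a 440 Hz tone
tone = [math.sin(2 * math.pi * 440 * n / RATE) for n in range(RATE // 100)]
rebuilt = decode(encode(tone))
# every reconstructed sample is within one quantization step of the original
assert all(abs(a - b) <= 1 / 127.5 for a, b in zip(tone, rebuilt))
```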
Individual local loop telephone lines are connected to a remote concentrator.
In many cases, the concentrator is co-located in the same building as
the switch. The interface between remote concentrators and telephone
switches has been standardised by ETSI as the V5
protocol. Concentrators are used because most telephones are idle most
of the day, hence the traffic from hundreds or thousands of them may be
concentrated into only tens or hundreds of shared connections.
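The arithmetic behind concentration can be illustrated with the classical Erlang B formula from teletraffic theory (not taken from the text above; the traffic figures are made up for illustration):

```python
# Erlang B: probability that a call arriving at a group of trunks
# finds all of them busy, computed with the standard recursion.
def erlang_b(traffic_erlangs: float, trunks: int) -> float:
    b = 1.0
    for k in range(1, trunks + 1):
        b = (traffic_erlangs * b) / (k + traffic_erlangs * b)
    return b

# 1,000 subscriber lines, each off-hook about 3% of the busy hour,
# offer roughly 30 erlangs of traffic; 40 shared trunks then suffice.
blocking = erlang_b(30.0, 40)
# blocking stays well under 5%, so ~1,000 lines concentrate onto 40 trunks
```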
Some telephone switches do not have concentrators directly
connected to them, but rather are used to connect calls between other
telephone switches. These complex machines (or a series of them) in a
central exchange building are referred to as "carrier-level" switches or
tandem switches.
Some telephone exchange buildings in small towns now house only remote or satellite
switches, and are homed upon a "parent" switch, usually several
kilometres away. The remote switch is dependent on the parent switch for
routing and number plan information. Unlike a digital loop carrier, a remote switch can route calls between local phones itself, without using trunks to the parent switch.
Telephone switches are usually owned and operated by a telephone service provider or carrier
and located in their premises, but sometimes individual businesses or
private commercial buildings will house their own switch, called a PBX,
or Private branch exchange.
Map of the Wire Center locations in the US
Map of the Central Office locations in the US
The switch's place in the system
Telephone
switches are a small component of a large network. A major part, in
terms of expense, maintenance, and logistics of the telephone system is outside plant,
which is the wiring outside the central office. While many subscribers
were served with party lines in the middle of the 20th century, the
goal was for each subscriber telephone station to be connected to an
individual pair of wires from the switching system.
A typical central office may have tens of thousands of pairs of wires that appear on terminal blocks called the main distribution frame
(MDF). A component of the MDF is protection: fuses or other devices
that protect the switch from lightning, shorts with electric power
lines, or other foreign voltages. In a typical telephone company, a
large database tracks information about each subscriber pair and the
status of each jumper. Before computerization of Bell System records in
the 1980s, this information was handwritten in pencil in accounting
ledger books.
To reduce the expense of outside plant, some companies use "pair gain"
devices to provide telephone service to subscribers. These devices are
used to provide service where existing copper facilities have been
exhausted, or, by being sited within a neighborhood, to reduce the length of
copper pairs, enabling digital services such as Integrated Services Digital Network (ISDN) and digital subscriber line (DSL).
Pair gain or digital loop carriers
(DLCs) are located outside the central office, usually in a large
neighborhood distant from the CO. DLCs are often referred to as Subscriber Loop Carriers (SLCs), after a Lucent proprietary product.
DLCs can be configured as universal (UDLCs) or integrated (IDLCs). Universal DLCs
have two terminals, a central office terminal (COT) and a remote
terminal (RT), that function similarly. Both terminals interface with
analog signals, convert to digital signals, and transport to the other
side where the reverse is performed.
Sometimes, the transport is handled by separate equipment. In an Integrated DLC,
the COT is eliminated. Instead, the RT is connected digitally to
equipment in the telephone switch. This reduces the total amount of
equipment required.
Switches are used in both local central offices and in long distance centers. There are two major types in the Public switched telephone network (PSTN), the Class 4 telephone switches designed for toll or switch-to-switch connections, and the Class 5 telephone switches
or subscriber switches, which manage connections from subscriber
telephones. Since the 1990s, hybrid Class 4/5 switching systems that
serve both functions have become common.
Another element of the telephone network is time and timing.
Switching, transmission and billing equipment may be slaved to very high
accuracy 10 MHz standards
which synchronize time events to very close intervals. Time-standards
equipment may include Rubidium- or Caesium-based standards and a Global Positioning System receiver.
Switch design
Face of a 1939 rotary dial showing the telephone number LA-2697 which includes the first two letters of Lakewood, New Jersey.
Long distance switches may use a slower, more efficient switch-allocation algorithm than local central offices,
because they have near 100% utilization of their input and output
channels. By contrast, central offices have more than 90% of their channel capacity
unused.
Traditional telephone switches connected physical circuits (e.g.,
wire pairs) while modern telephone switches use a combination of space- and time-division switching. In other words, each voice channel is represented by a time slot
(say 1 or 2) on a physical wire pair (A or B). In order to connect two
voice channels (say A1 and B2) together, the telephone switch
interchanges the information between A1 and B2. It switches both the
time slot and physical connection. To do this, it exchanges data between
the time slots and connections 8,000 times per second, under control of
digital logic that cycles through electronic lists of the current
connections. Using both types of switching makes a modern switch far
smaller than either a space or time switch could be by itself.
The structure of a switch
is an odd number of layers of smaller, simpler subswitches. Each layer
is interconnected by a web of wires that goes from each subswitch, to a
set of the next layer of subswitches. In some designs, a physical
(space) switching layer alternates with a time switching layer. The
layers are symmetric, because in a telephone system callers can also be called. Other designs use time-switching only, throughout the switch.
A time-division subswitch reads a complete cycle of time slots
into a memory, and then writes it out in a different order, also under
control of a cyclic computer memory. This causes some delay in the
signal.
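A minimal sketch of such a time-slot interchange, with a hypothetical four-slot frame and connection map (real switches handle far more slots and repeat this 8,000 times per second):

```python
# A time-division subswitch: read one frame of time slots into memory,
# then write them out in the order given by the connection map.

def time_slot_interchange(frame, connection_map):
    """Return a new frame with samples moved to their mapped output slots.

    frame: list of PCM samples, one per incoming time slot
    connection_map: output_slot -> input_slot
    """
    return [frame[connection_map[out_slot]] for out_slot in range(len(frame))]

# Frame of 4 time slots; connect the channel in slot 0 to slot 2 and
# vice versa, leaving slots 1 and 3 unchanged (a hypothetical example).
frame = ['A1', 'B1', 'A2', 'B2']
cmap = {0: 2, 1: 1, 2: 0, 3: 3}
out = time_slot_interchange(frame, cmap)   # ['A2', 'B1', 'A1', 'B2']
```

Because a whole frame must be buffered before it can be written out in the new order, the interchange introduces the small delay the text mentions.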
A space-division subswitch switches electrical paths, often using some variant of a nonblocking minimal spanning switch, or a crossover switch.
Switch control algorithms
Fully connected mesh network
One way is to have enough switching fabric to assure that the pairwise allocation will always succeed by building a fully connected mesh network. This is the method usually used in central office switches, which have low utilization of their resources.
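The quadratic cost of full connectivity is easy to see from the link count; a small sketch (the node counts are arbitrary examples):

```python
def mesh_links(n):
    """Number of point-to-point links in a fully connected mesh of n nodes.

    Every node connects directly to every other node: n * (n - 1) / 2 links.
    """
    return n * (n - 1) // 2

# Link count grows quadratically with the number of nodes, which is why
# full meshes are only practical where utilization of each link is low.
four_node = mesh_links(4)      # 6 links
hundred_node = mesh_links(100) # 4950 links
```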
Clos's nonblocking switch algorithm
The connections between layers of subswitches of telephone switching
system are scarce resources, allocated by special control logic in a fault tolerant manner. Clos networks are often used.
Fault tolerance
Composite
switches are inherently fault-tolerant. If a subswitch fails, the
controlling computer can sense it during a periodic test. The computer
marks all the connections to the subswitch as "in use". This prevents
new calls, and does not interrupt old calls that remain working. As
calls in progress end, the subswitch becomes unused, and new calls avoid
the subswitch because it's already "in use." Some time later, a
technician can replace the circuit board. When the next test succeeds,
the connections to the repaired subsystem are marked "not in use", and
the switch returns to full operation.
To prevent frustration with unsensed failures, all the connections between layers in the switch are allocated using first-in-first-out lists
(queues). As a result, if a connection is faulty or noisy and the
customer hangs up and redials, they will get a different set of
connections and subswitches. A last-in-first-out (stack) allocation of connections might cause a continuing string of very frustrating failures.
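The FIFO allocation behaviour described above can be sketched with a queue; the link names are hypothetical:

```python
from collections import deque

# Free inter-layer links are handed out first-in-first-out, so a redial
# after a noisy call gets a different link instead of the one just freed.
free_links = deque(['link-1', 'link-2', 'link-3'])

def allocate():
    return free_links.popleft()     # take the oldest free link

def release(link):
    free_links.append(link)         # freed links go to the back of the queue

first = allocate()                  # 'link-1'
release(first)                      # customer hangs up on a noisy connection
retry = allocate()                  # 'link-2' -- a different path, not 'link-1'
```

With a stack (LIFO) instead of a queue, `retry` would be the same faulty `link-1` again, producing the string of repeated failures the text warns about.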
The central exchange, due to the system's design, is almost always a single point of failure for local calls. As the capacity of individual switches and the optical fibre
which interconnects them increases, potential disruption caused by
destruction of one local office will only be magnified. Multiple fibre
connections can be used to provide redundancy to voice and data
connections between switching centres, but careful network design is
required to avoid situations where a main fibre and its backup both go
through the same damaged central office as a potential common mode failure.
PBX is an acronym for "private branch exchange," an in-house telephone
switching system used to interconnect both internal and external
telephone extensions and networks. Its functions include least-cost
routing for external calls, conference calling, call forwarding and call
accounting. A PBX switchboard is a telephone system that uses switches,
indicators and a controlling apparatus for electric circuits to monitor
telephone lines and networks.
History
The Private Manual Branch Exchange (PMBX), the
earliest PBX switchboard, was first used in Richmond, Virginia in 1882.
The PMBX was operated entirely by hand, and lawyers in the area used it for
switching calls. In 1888, electromechanical and later electronic switching
systems began to replace the PMBX. Electronic switching systems, also known as
private automatic branch exchanges (PABX), slowly gained
popularity, and by 1910 police patrols began using these electronic
systems.
Components
All PBX switchboard systems
include an internal switching network -- a microcontroller for data
processing, control and logic. System cards include switching and control cards,
a logic card, power cards and related devices that power switchboard
operations. Telephone lines and exterior Telco trunks for signal
delivery must be available for proper operations. An uninterruptible power supply
(UPS) is also crucial in case of a power shortage or interruption. Other
smaller, but vital, components include wiring, closets, vaults, cabinets and housings.
Function
PBX
switchboards are used primarily for processing connections according to
user requirements, such as to establish circuit connections between the
telephones of two or more users. The PBX switchboard is also used to
maintain all connections for as long as the users need them,
by channeling voice signals from one user to another. It also provides
information necessary for accounting purposes, including metering calls.
Other call capabilities include automatic attendant and dialing,
automatic call distributor and directory services, automatic call ring
back, blocking, waiting, and call park and transfer.
Advantages
Physical
PBX hubs are typically low-profile, taking up only a small space.
Further, PBX switchboard systems reduce communication costs by
incorporating emerging technologies such as VoIP to replace expensive
hardware. The systems are programmable, and, therefore, can support
complicated installation and integration requirements; this means that
you can expand your system as your company grows. Newer PBX switchboard
systems contain a number of improved features including fax-to-mail,
caller ID and music on hold. Although PBX systems are typically
proprietary, hosted PBX systems -- switchboards managed by external
companies -- are also available.
Hard-Wired Phone Service
Telephone communication is achieved through a variety of platforms,
formats, media and devices in the digital age, sometimes even without
the use of a telephone. Though cellular phones, smartphones and even
computers are used to conduct voice conversations involving two or more
parties, the standard upon which telephone calls have been based since
the inception of the technology is a hard-wired connection and service.
Definition
A
hard-wired telephone service is a connection for telephone
communications in which the telephone is directly connected to the
wiring that transmits the audio from the call to its recipient and
allows the user to receive incoming calls. The user attaches a telephone
cord from the telephone to its incoming/outgoing point, which can be a
standard telephone jack, an Internet modem or other capable device, to
have access to the activated phone line.
Landline Technology
A
hard-wired telephone connection is also known as a landline, as it
relies on terrestrial-based cables running to specific locations through
the telecommunications networks in place to connect the call from its
source phone to its endpoint. Traditional hard-wired landline phone
services utilize twisted copper wires as the core of the cables that
transmit the voice information, though newer, higher quality conduits
such as coaxial and fiber-optic cable are also used in certain areas
with certain providers.
Providers
Hard-wired
telephone users rely on service providers to receive access to the
telephone lines. The user selects a specific calling plan, which
generally includes local calling access as a base, with optional
long-distance and other usage plans and a variety of features that can
be added, such as caller ID, call waiting and three-way calling. A
provider can be a traditional telephone company or other multimedia
entity such as a cable TV and/or Internet provider. The telephone can be
bundled with other selected services.
Variations
A
hard-wired connection does not mean that the entire phone must be
connected with wires, as a cordless phone can also be considered a
hard-wired phone, provided the base/receiver is connected to the access
point via telephone cord. Also, audio telephone signals can be
transmitted in an analog manner, through standard telephone cables, or
in a format known as Voice over Internet Protocol (VoIP), in which the sound is converted into packets of digital data for transmission through the home Internet connection.
What Is Digital Voice Phone Service?
Telephony has evolved since Alexander Graham Bell's acclaimed cry,
"Watson, come here. I want to see you," in 1876. For over 100 years, the
traditional method of telephone service has consisted of analog sound
waves transmitting across copper wires strung from telephone to central
switchboard to telephone. Digital phone service, commonly called Voice over Internet Protocol, or VoIP, is telephone service transmitted over the Internet. VoIP features many benefits over traditional telephone networks, but it has a few disadvantages as well.
Background
The concept of digital phone service
developed with the advent of high-speed broadband Internet service, but
progress was initially slow as a result of the scarcity of broadband
service. In 1995, Israeli company Vocaltec introduced the first VoIP software for computer-to-computer connections. Five years later, only 3 percent of telephone calls were made using VoIP
technology. As broadband Internet service became available and as
technology afforded better equipment to produce clearer audio, VoIP has become an increasingly essential commodity, especially for businesses.
Function
Voice
over Internet Protocol digitizes analog voice data into Internet
capable TCP/IP data packets. The data packets are routed to a call
handler, or VoIP server, maintained by a VoIP service provider. Special VoIP
equipment, such as an IP phone or telephone adapter that converts
digital data to analog data, transmits and distributes audio data to the
listener. Some Internet service providers (ISPs) combine VoIP service
with their high-speed Internet service, while other companies, such as
Skype and Vonage, charge customers for the use of their VoIP servers.
Advantages
Advantages abound, which explains why demand for VoIP service is soaring. VoIP
offers the same high quality audio for a fraction of the cost of
traditional telephone service. Because data and voice transmissions use
the same digital network, VoIP users experience reduced infrastructure costs and unlimited long-distance calling. VoIP
incorporates special features such as caller ID, conference calling and
combined voice and data transmissions at no extra charge, in contrast to the
costly equivalents offered with traditional phone service. Specialized VoIP transmissions allow encryption of calls to provide secure data transmission when necessary.
Disadvantages
When first developed, quality of sound and latency issues hindered clear VoIP transmissions. Improved VoIP devices have solved the problem, although older or inferior VoIP devices may still experience transmission difficulty. VoIP requires special VoIP phone equipment and an active, broadband Internet connection. In the case of a power outage, VoIP
telephone service is lost, unlike traditional telephone service, which
continues operating because it draws power from the telephone line itself rather than from the local power grid.
As with all digital data transmissions, unsecured VoIP data can be intercepted or the equipment hacked.
How Does Telephone Call Routing Work?
Introduction
Call routing is a basic phone system
service that is available for personal or professional communication
needs. Most telecommunication providers have made the service available
for mobile, land line and business phone systems for a nominal charge,
and in some cases for free, as part of a product service plan. To
implement the service, a user must know the phone number or extension of
the person receiving the routed call. Additionally, some mobile
companies allow users to temporarily route all incoming calls (a start
and end date and time must be specified prior to activation).
This period can be limited to a lunch break, vacation period, or leave
of absence from a job or residence. During the specified time, all calls
would be routed from the mobile phone, to a land line, for example,
where the call would be received by a secretary or other person able to
receive it.
Professional Routing Services
For business needs,
consider a professional routing service. Many of these professional
routing services do not require hardware. Acting as a hosting
administrator, professional routing services use a computer to connect
and route missed or scheduled calls to a second phone number without the
need for an internal or third party. The system functions independently
to route specified phone numbers, all phone calls received during a
specified "transfer" period, and phone calls that are scheduled
to be received from a particular residence, business, area code, or
phone number.
Never Miss Another Phone Call
Transfer faxes or incoming calls so you never miss another important call.
Transfer from within a business, or route calls from a land line to a
mobile, or vice versa. Contact your mobile provider when routing calls
from a mobile number to a second number. See the manufacturer's
instruction book to determine if this feature is available with your phone service, and to learn how to initiate the routing feature.
Determine
if all calls will be automatically routed or only after a specified
number of rings or attempts. When leaving for a vacation, route all
calls from a land line to a mobile or hotel phone to guarantee all calls
are received. Notify your service providers if you require additional
assistance or want to stop the routing service.
Telecommunication Protocols
In telecommunications terminology, the word protocol refers to a
specified set of rules and regulations that governs the process of
digital information exchange between two distant entities. Further, as
telecommunications is a vast field that embodies many communication
technologies, multiple protocols work in different areas of digital
communication to serve the process of data exchange. Some major areas
include PSTN (public switched telephone network) communication
protocols, cellular communication protocols, data networking protocols
and hybrid communication protocols.
PSTN Protocol
PSTN
protocols (or telephonic communication protocols) are the oldest
communication protocols in action, and generally regulate the process of
communication between local exchanges, home telephones and gateway
exchanges. Some major protocols include SS7 (signaling system 7), SCTP
(stream control transmission protocol), V5.1/V5.2 protocols and ISDN
(integrated services digital network) protocols. These protocols
primarily serve the voice communication procedures within a PSTN.
Cellular Communication Protocols
Cellular
communication involves transmitting data to mobile units
roaming within the coverage area. This communication procedure is a
prevalent method of data communications, and it has many different
specially designed protocols for its data exchanging and transmission
controlling procedures. Some common cellular communication protocols
include BSMAP (base station management application part), BSSMAP (base
station subsystem management application part), BSSAP (base station
subsystem application part), DTAP (direct transfer application part),
SMSTP (short message service transfer layer protocol), BTSM (base
transceiver station management) and MAP (mobile application part).
Data Communication Protocols
The
domain of data communication includes all the local, private and public
networks that employ computers for exchanging data among users. Some
examples of these are local area networks, metropolitan area networks,
wide area networks and the network of networks -- the Internet. These
networks use various sets of communication protocols to regulate and
control the process of data communication, and some major ones include
TCP (transmission control protocol), IP (Internet protocol),
FTP (file transfer protocol), HTTP (hypertext transfer protocol), POP
(post office protocol), SMTP (simple mail transfer protocol), DHCP
(dynamic host configuration protocol), BGP (border gateway protocol), UDP
(user datagram protocol), RTP (real-time transport protocol) and RSVP
(resource reservation protocol).
VoIP Protocols
VoIP (Voice over Internet Protocol)
is a communication technology that provides solutions for transferring
voice, multimedia, and text-based data simultaneously over a single
carrier. The technology was developed by amalgamating data networks and
the PSTN, and for this reason it uses many protocols from both of the
categories mentioned above. It has some specific protocols
developed for its core operations as well, which mainly include MGCP
(media gateway control protocol), SIP (session initiation protocol),
H.323, H.248, SDP (session description protocol), SAP (session
announcement protocol), MIME (multipurpose Internet mail extensions) and
SGCP (signaling gateway control protocol).
Types of Mainframe Computers
Mainframe computers are large-scale systems designed for processing and
storing huge amounts of data that smaller systems such as PCs can't
handle. They are frequently used by extremely large companies, banks and
government agencies that have enormous processing and storing needs,
and can handle hundreds of users at the same time. Named for the
framework on which the computers used to be hung, mainframes are also
known as "big iron" and have continued to adapt and evolve beyond their
original limitations to keep pace with technological advancement.
Size
A
mainframe computer's size depends primarily on its age. Most mainframes
produced before 2000 are sprawling leviathans, consisting of upwards of
10,000 square feet of rack-hung computers spanning one or more floors
in a company's offices or off-site facility. With the miniaturization of
computing elements, the modern mainframe is considerably smaller --
often approximately the size of a large refrigerator. Depending on the
scale of the mainframe, the needs of the company and the associated
costs, mainframes can see many years of service before they're unable to
handle the workload.
Purpose
Mainframe
computers were designed to handle large-scale processing, data storage
and other tasks too resource-intensive for the average computer or
small-scale network to handle -- for example, a bank's transactions. The
exact processes handled tend to vary based on the users, but mainframes
generally shift huge amounts of data that would tax smaller systems
beyond the breaking point -- and they do it quickly and reliably in
order to facilitate the needs of users on an enterprise scale.
Primary Manufacturers
Because
of the prohibitive cost of development and deployment, only a handful
of manufacturers make and develop mainframes. Primary producers of
mainframe equipment and software include IBM, Hewlett-Packard, Unisys,
Fujitsu, Hitachi and NEC. These manufacturers supply mainframe equipment
to clients around the world, in both the public and private
sectors. Mainframes are an extremely costly investment -- in 2012, IBM
released a "lower-price" mainframe system starting at $75,000.
Typically, mainframes cost even more, with prices varying based on
equipment-type purchased and the scale of the mainframe.
Terminals
Mainframe
computers are primarily accessed and controlled via terminals --
computer workstations that superficially resemble standard computers but
typically have no CPU of their own. Instead, they are networked to the
mainframe and act as an access point for users.
Operating Systems
The
operating system installed on a mainframe varies depending on the
manufacturer. Most mainframes use Unix, Linux variants or versions of
IBM's z/OS operating system. These operating systems are often configured
specifically for the mainframe on which they run and offer users any
necessary interface capabilities.
Centralized vs. Distributed Computing
Traditional
mainframe computers use a "centralized" computing scheme -- the
mainframe is an insular system in which only directly connected
terminals are capable of accessing information. As Internet
communication and operation has gained prevalence, centralized
mainframes have become increasingly open to a "distributed"
computing scheme. Distributed mainframes can be accessed by computers
outside of the mainframe itself, allowing users to access material from
their homes or on the go via the Internet.
The Difference Between PDA Cell Phones & Smartphones
When shopping for handheld technology, it can be confusing when you come
across industry terms like "PDA" and "smartphone." What is the
difference and which type of device better suits your needs? There are
no standard definitions that completely separate the two; in fact, they
are becoming more alike as the technology evolves. To best understand
what differences do exist, you can examine the origin of the two terms
and what roles they have played in the industry.
Traditional PDAs
PDA
stands for "personal digital assistant." They function like small
computers, running operating systems and software; most are also capable
of Internet access. Traditionally they require the use of a stylus to
operate, but many now have keyboards or touchscreens. Originally they
were not designed as phones but primarily as pocket computers. PDAs with
phone call capabilities came later.
Smartphones
Smartphones
are primarily cell phones with some computing capabilities. You need a
wireless provider to use one. They are usually smaller than PDAs and
have less computing power. Users direct operations with a keypad or
touchscreens. Smartphones generally do not have the same software
capabilities and computing power as PDAs -- such as the ability to view
and edit documents and spreadsheets.
PDA Cell Phones
As
the industry has evolved and consumers demand powerful,
multi-functional and compact devices, PDAs have changed to operate more
like smartphones. While many models traditionally did not have phone
capabilities, most new models do. Their primary function remains
computing, however, so PDA cell phones are generally bigger in size and
more powerful than smartphones.
Merging Definitions
As
technology advances, smartphones are becoming more powerful and PDAs
are becoming more compact. Sometimes the terms are used interchangeably.
In the future there will likely be little to no difference between the
two devices.
What Is an Intel Chipset?
An Intel chipset is a computer component housed on motherboards
compatible with Intel brand processors. The chipset consists of two
chips, the northbridge and the southbridge, that control communication
between the processor, memory, peripherals and other components attached
to the motherboard. According to Ron White, author of “How Computers
Work,” the chip set is second only to the processor in determining the
performance and capabilities of a PC.
Northbridge Function
The northbridge, sometimes
referred to as the Graphics and Memory Controller Hub (GMCH) on some Intel
machines, works with the graphics card to relieve the processor of some
of the burden of high-demand operations associated with video editing
and gaming software. It also links the processor to the Random Access
Memory (RAM) modules installed on the motherboard, thus providing the
processor with the data it needs to execute the instructions needed by
any application in use. The northbridge synchronizes the operation of
all the above-mentioned devices by controlling the main pathway between
them, the “front side bus.”
Southbridge Function
Unlike
the northbridge, the southbridge, or, in some cases, the I/O Controller
Hub (ICH), is not directly connected to the processor. It coordinates
the function of slower hardware devices, enabling the hard drive, mouse,
keyboard, USB and FireWire devices to communicate with the northbridge
as needed by software demands. The southbridge also controls fans,
temperature sensors and the BIOS. The main difference between a
traditional southbridge and the ICH used in some Intel chipsets is that
the ICH controls the PCI bus, a pathway used to communicate with
hardware devices (controlled by the northbridge in other
configurations). Also, the bus used to transfer data to and from the ICH
is twice as fast as that of a conventional southbridge.
Considerations
The
chipset is the most limiting factor in a computer’s upgrading
potential. It determines what models and speeds of CPU and the type and
amount of RAM that can be installed. It also dictates the user’s options
in terms of graphics cards, sound cards and number of USB ports. Lower
end Intel motherboards often include an integrated graphics card that
cannot be changed. Higher end motherboards include graphics card slots
and chipsets designed to work with different cards.
Data Protection
For
those using more than one hard drive, some of Intel’s Express Chipsets
include a “Matrix Storage Technology” that stores copies of data on
multiple drives. Thus, a drive can fail without data loss.
High Definition Audio
As of mid-2010,
the latest Intel Express Chipsets utilize a High Definition audio
interface for decoding and encoding digital and analog sound signals.
The interface is capable of handling up to eight channels of audio,
resulting in higher quality surround sound with multiple channels.
Intel’s HD Audio also allows one computer to play two or more different
audio streams in different locations simultaneously.
Security
Certain
Express chipsets allow users to enable and disable individual USB ports
and SATA hard drive ports. This feature helps prevent malicious use of
USB ports and the unauthorized addition or removal of data to or from
the hard drives.
P2
It is used to denote the Intel Pentium II line of processors, introduced in 1997. See FSB.
P2P
This is an abbreviation for peer-to-peer, a particular type of networking protocol.
P3
1. P3 is AOL's data transfer protocol. It is comparable to ZMODEM. The same technology is used, as of 1996, by other ISPs also.
2. It is also often used to denote the Intel Pentium III line of processors; it was introduced in 1999. See FSB.
P4
P4 is used to denote the new Intel Pentium IV line of processors, released to the public in November 2000. See FSB. There are several different versions of the processor line.
PABX
An acronym for Private Automatic Branch eXchange; telephony jargon. A
phone system used to switch telephones between extensions and to outside
lines, for both incoming and outgoing (dial 9) calls. Sometimes just called a PBX, though a PBX system does not have to be automatic and may not even have the capability for total automation.
packet
A unit of data sent across a network. When a large block of data is to
be sent over a network, it is broken up into several packets, sent, and
then reassembled at the other end. Packets often include checksum codes to detect transmission errors. The exact layout of an individual packet is determined by the protocol
and network architecture being used. In many cases, it could be also
called a sub-unit of a data stream; a grouping of information that
includes a header (containing information like address destination) and,
in most cases, user data. This is not to be confused with "Pack It!", a
term many arrogant programmers have heard from many supervisors over
the years.
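A toy illustration of a packet with a destination header, a checksum, and user data; the field layout here is invented for illustration and does not match any real protocol:

```python
import zlib

def make_packet(destination, payload):
    """Build a toy packet: 4-byte destination, 4-byte CRC32 checksum, payload.

    The layout is invented for illustration; real protocols (IP,
    Ethernet, ...) define their own headers and checksums.
    """
    header = destination.to_bytes(4, 'big')
    checksum = zlib.crc32(header + payload).to_bytes(4, 'big')
    return header + checksum + payload

def check_packet(packet):
    """Return (destination, payload) if the checksum matches, else None."""
    header, checksum, payload = packet[:4], packet[4:8], packet[8:]
    if zlib.crc32(header + payload).to_bytes(4, 'big') != checksum:
        return None                        # transmission error detected
    return int.from_bytes(header, 'big'), payload

pkt = make_packet(42, b'hello')
ok = check_packet(pkt)                     # (42, b'hello')
corrupted = check_packet(pkt[:-1] + b'X')  # None: checksum mismatch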
packet reflection
This error message is received when a packet of data could not be transmitted properly and was sent back (reflected) to the origin. This is a network based error.
packet sniffing
The intentional and usually illegal act of intercepting packets
of data being transmitted over the Internet and searching them for
information. This can be done without the sender's or recipient's
knowledge. It is the equivalent of line-tapping.
packet switched network or PSN
A network sub-architecture that does not establish a dedicated path
through the network for the duration of a session, opting instead to
transmit data in units called packets in a connectionless manner; data
streams are broken into packets
at the front end of a transmission, sent over the best available
network connection of possibly many, and then reassembled in their
original order at the destination endpoint.
packet switching
A switching system that uses a physical communications connection only
long enough to transit a data message; data messages are disassembled
into packets
and reassembled at the receiving end of the communication link; packets
may travel over many diverse communications links to get to the common
endpoint. This is most often contrasted with circuit switching in data
communications, where all data messages transmitted during a session are
transmitted over the same path for the duration of the session.
pad
1. A specially surfaced material that gives mouse users a smooth area for optimal operation.
2. A graphics tablet for data input into programs such as CAD/CAM. They are often mouse-like in function but stationary with a pointer that moves over them.
3. An acronym for Packet Assembler/Disassembler. The hardware device
used to connect simple devices (like character mode terminals) that do
not support the full functionality of a particular protocol to a
network. PADs buffer data and assemble and disassemble packets sent to such end devices.
4. A digitizer.
5. A place where many nerdy type programmers lived in the '60s.
pad character
1. A character used to fill empty space. (In some cases, it could be considered "education"...) Many applications
have fields that must be a particular length. For example, in a
database application, you may have a field that is ten characters in
length. If you use only four of the allotted characters, the program
itself must fill in the remaining six characters with pad
characters. Some applications allow you to choose the character to be
used as padding. Most padding by default is done with a space character,
as issued by the spacebar.
2. A "home body" from the 1960's.
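The ten-character field example above can be sketched directly in Python, which exposes the pad character as a parameter:

```python
# Padding a 10-character field, as a fixed-length database field might.
field = "Dave"
padded = field.ljust(10)          # pad with spaces (the usual default)
assert padded == "Dave      " and len(padded) == 10

# Some applications let you choose the pad character:
assert "42".rjust(6, "0") == "000042"   # zero-padded numeric field
```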
pagination
The process, in most word processors, of calculating the properties of a
page in order to assign page breaks and page characteristics.
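At its simplest, pagination is just dividing a stream of lines into fixed-size pages; a minimal sketch (function name ours, and real word processors also account for fonts, margins, widows and orphans):

```python
# Minimal pagination: assign lines to pages of a fixed length, roughly
# what a word processor does when it inserts automatic page breaks.
def paginate(lines, lines_per_page=50):
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

doc = [f"line {n}" for n in range(120)]
pages = paginate(doc)
assert len(pages) == 3 and len(pages[-1]) == 20   # 50 + 50 + 20 lines
```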
PAL
Excite's online instant message service (as of mid-2001, now defunct). See others like PAL here.
Palmtop
Palmtops are a class of portable personal computers (generally with PDA
software) that fit in the palm of your hand. One of the most well-known
palmtops is the Pilot, developed by Palm Computing (maker of the Palm OS) and marketed originally by
US Robotics, now 3COM.
Panasonic
A leading manufacturer and supplier in all commercial electronics
industries worldwide. They have several International manufacturing and
operations locations. See them at HTTP://WWW.PANASONIC.COM.
PANS
A telephone industry slang jargon acronym term for Pretty Amazing New Stuff. This fits with the industry term POTS.
PAP
1. An acronym for Password Authentication Protocol. A means of
authenticating passwords which is defined in RFC 1334. PAP uses a
two-way handshaking procedure. The validity of the password is checked
at login. See also CHAP, a more secure, challenge-based alternative to PAP.
2. A sometimes used acronym for Plug And Play, though the most often used is PNP.
Paper Mail
Some E-Mail services offer this service so that you can send Internet E-Mail to people who don't even own computers. A form of snail-mail. A loose reference to the US Mail service. See mail.
paradigm
A paradigm is a pattern or an example of something. The word also
connotes the ideas of a mental picture and pattern of thought. The
entire concept of computers is a paradigm in that computers always
follow programming. See logic.
paradox
1. A paradox is a statement or concept that contains conflicting ideas.
Some people think computers themselves are a paradox in concept. In logic,
a paradox is a statement that contradicts itself; for example, the
statement "I never tell the truth" is a paradox because if the statement
is true (T), it must be false (F) and if it is false (F), it must be
true (T). In everyday language, a paradox is a concept that seems absurd
or contradictory, yet is true. In a Windows environment, for instance,
it is a paradox that when a user wants to shut down their computer, it
is necessary to click "start".
2. In nautical terms, two places to put your boat.
parallel
1. A form of data transfer and communications, most often used with printers. It is the opposite of and an alternative to serial
communications. Printers and other devices are said to be either
parallel or serial. Parallel means the device is capable of receiving
more than one bit at a time (that is, it receives several bits in
parallel). Most modern printers are parallel or USB. Here is the pin information for PC parallel printers.
2. A type of bus connection, transferring data in a similar means as a parallel printer connection.
3. In electronics, two components can be connected together in two different ways, series
and parallel. Each component has two different ends or poles. They can
be positive and negative but may not be. For identification, they are
known as Alpha and Beta. While the nomenclature is not exactly original,
it serves the purpose. If similar components, such as a resistor and another resistor, or a capacitor
and another capacitor, are in parallel in a circuit, the alpha pole of
one is connected to the alpha pole of the other, directly, while the
beta pole and other beta pole also connect directly. See our Parallel Resistance Calculator and our Parallel Capacitance Calculator to resolve values for either resistance or capacitance.
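The arithmetic behind those calculators is standard circuit theory: parallel resistances combine by reciprocals, while parallel capacitances simply add. A sketch in Python (function names are ours):

```python
# Parallel combinations: resistances combine by reciprocals,
# capacitances add directly.
def parallel_resistance(*r):
    """1/R_total = 1/R1 + 1/R2 + ..."""
    return 1 / sum(1 / x for x in r)

def parallel_capacitance(*c):
    """C_total = C1 + C2 + ..."""
    return sum(c)

assert parallel_resistance(100, 100) == 50.0  # two equal resistors halve
assert parallel_capacitance(10, 22) == 32     # capacitances add in parallel
```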
parameter
A guideline or limitation of software or process functions. In the case of search software, parameters are the Boolean factors, the words or letters you are trying to find, where you want the search to look, and the like.
Parental Control
Parental Control is an ISP feature that allows a parent to restrict a child's access to certain areas of ISP provided services and the Internet. Such control is also available in most modern browsers and is available as a separate application. While it is not foolproof, it is a good thing. There are also some standards being set on the web. See Net Nanny, SafeSurf or Recreational Software Advisory Council for more information. These are certainly not all involved but a representative group. This site, CSGNETWORK.COM, is rated for users of all age groups and is devoted to keeping unfit material off the Internet, at least filtering it.
parity
A method of data integrity checking that adds a single bit to each byte
of data. The parity bit is responsible for checking for errors in the
other 8 bits (or less, depending on the arrangement). Parity is logic
that detects the presence of an error in memory. Generally, a single
parity bit is used for each byte (8 bits) of data. The most commonly
used forms of parity are even parity, odd parity, and checksums.
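As a sketch, computing an even-parity bit for one byte: the bit is chosen so the total number of 1 bits (data plus parity) comes out even (the function name is ours, for illustration):

```python
# Even parity: the parity bit makes the total count of 1 bits even.
def even_parity_bit(byte: int) -> int:
    return bin(byte).count("1") % 2

assert even_parity_bit(0b1011_0010) == 0  # four 1s: already even
assert even_parity_bit(0b1011_0011) == 1  # five 1s: parity bit makes six
```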
parse
To search through a stream of text and either break it up into useful chunks of information or reformat it in some other manner.
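A small example of parsing, breaking a "key=value" text stream into a useful structure (the format and function name are ours, for illustration):

```python
# Parse "key=value" pairs separated by semicolons into a dictionary.
def parse_pairs(text: str) -> dict:
    result = {}
    for chunk in text.split(";"):
        if "=" in chunk:
            key, value = chunk.split("=", 1)   # split on first '=' only
            result[key.strip()] = value.strip()
    return result

assert parse_pairs("host=example.com; port=80") == {
    "host": "example.com", "port": "80"}
```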
partition
1. A portion or a segment of memory or storage memory, such as a hard
disk. Most commonly used as a section of a hard drive. Hence the name
partition. When you format a hard drive, you can assign the number of
partitions you want on a physical hard disk. The computer will recognize
each partition as a separate drive, and they will show up as different
drives under most operating systems; a logical drive.
2. The act of creating a logical (as opposed to physical) drive.
3. To break into smaller sections, such as a hard drive.
Most often, smaller but multiple partitions can improve the efficiency
of your hard drive. On larger drives, the cluster, or block sizes (the
minimum amount of space a file can take up), are also very large, which
can result in a waste of disk space. So multiple partitions can actually
give you more usable space.
4. Partitioning can also be used to allow multiple operating systems on
the same drive of a given computer. Most of the 32 bit file structures
do not allow that, for native operation, but earlier OS software, such
as Windows 95, 98 (original), OS2, DOS and early NT did.
5. A form of computer office and work area segregation so that hardware
people can be isolated from software people, for example. Often referred
to as cubicle world.
PASC
An acronym for Portable Applications Standards Committee; a group within
the IEEE. PASC is the group that has and continues to develop the POSIX family of standards. See them at HTTP://WWW.PASC.ORG.
password
Your password is like a key to your home. It is one of the primary forms
of online security. It is needed for you to get online, and to change
your billing information. A different password is often needed for
applications that are online. NEVER GIVE YOUR PASSWORD(s) TO ANYONE.
Most ISP's staff will NEVER ask you for your password. If someone does ask for that password, challenge them as to who they really are.
password surfing
Like any large community, all ISP
services have their share of undesirable characters. On most services,
they make themselves known by masquerading as ISP employees. Frequently
they will send a new member an instant message or E-Mail
claiming that the system has lost their information, or offering them
free service for a year. This is called password surfing. These people
are NOT ISP employees, and they are trying to steal from you. To protect
yourself, NEVER give your password or billing information to anyone.
Most ISP employees will never ask you for your password or billing
information.
PAT
An acronym for Port Address Translation, a technique used to share a single IP address to provide Internet access to a LAN,
from the outside. The process is usually handled by a router, firewall
or another computer, usually a server of some type. PAT associates an
internal network address to an appropriate outside published network IP,
followed by a port number, which is the key to routing communications
into the LAN. For instance, a computer on a LAN has the dynamic or, most
often, static address of 192.168.0.10 internally but is seen as
38.111.28.5:10 by the outside world. See NAT.
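Following the example above, a toy translation table (addresses illustrative, names ours; a real router also tracks protocol, timeouts and return traffic) could be sketched as:

```python
# Toy PAT table: internal address -> published address:port.
pat_table = {}
next_port = 10

def translate_out(internal_ip, public_ip="38.111.28.5"):
    """Assign each internal host a unique port on the published address."""
    global next_port
    if internal_ip not in pat_table:
        pat_table[internal_ip] = f"{public_ip}:{next_port}"
        next_port += 1
    return pat_table[internal_ip]

assert translate_out("192.168.0.10") == "38.111.28.5:10"
assert translate_out("192.168.0.11") == "38.111.28.5:11"
assert translate_out("192.168.0.10") == "38.111.28.5:10"  # mapping is stable
```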
patch
1. A software fix for a bug in a program, often called a zap.
It is usually a section of code that an installed program places in the
patch area of generated code so that you do not have to install an
entire program or library. See kluge.
2. A particular type of cable for networking that is a straight-through cable; pin 1 to pin 1, pin 2 to pin 2 and so on.
path
1. The hierarchical description of where a directory, folder, or file is located on your computer or on a network.
2. The marketing version of the Yellow Brick Road, where many computer sales people lead you; often preceded by Primrose.
payload
1. A telephony term that describes the portion of a frame or cell that
carries user traffic. It is effectively what remains in the frame or
cell if you take out all headers or trailers.
2. In computer virus jargon, the virus itself, after having been deposited by a trojan horse.
PBX
1. A telephony acronym denoting Private Branch Exchange, a physical private telephone network.
2. A small telephone switching system (exchange) usually on a customer's
premises that also connects to the public switched telephone network.
See PABX.
PC100
In roughly the middle of 1998, Intel introduced the BX chip set to their
motherboard designs. One element in this new architecture included an
increase in the PC main memory bus speed (Host bus) from 66 to 100 MHz,
called PC100. To match the 100MHz bus speed, 100MHz SDRAM modules are
used. These modules are often referred to as PC100 compliant. See personal computer (PC).
PC133
In roughly the middle of 2000, Intel upgraded all current chip sets in
their motherboard designs to this standard. One element in this new
architecture included an increase in the PC main memory bus speed (Host
bus) up to 133 MHz, called PC133. To match the 133MHz bus speed, 133MHz
SDRAM modules are used. These modules are often referred to as PC133
compliant. A further extension of the PC100
specification, the PC133 specification details the requirements for
SDRAM used on 133MHz FSB motherboards. PC133 SDRAM can be used on 100MHz
FSB motherboards but will not yield a performance advantage over PC100
memory at 100MHz. See personal computer (PC).
PCAV
1. An acronym for Partial Constant Angular Velocity. See (CAV) and (CLV).
2. An acronym for Personal Computer Anti-Virus.
PCB
A component made up of layers of copper and fiberglass; the surface of a
PCB features a pattern of copper lines, or "traces," that provide
electrical connections for chips and other components that mount on the
surface of the PCB. A term used in the electronics industry to denote a
RAW (non-populated) Printed Circuit Board. Once components are populated
on it, the board is sometimes called a card. See motherboard, systemboard or mainboard.
PCI
Acronym for Peripheral Component Interconnect, a local bus computer
standard developed by Intel Corporation and others associated with the
industry. Most modern PCs include a bus that is only PCI capable; early
PCI designs incorporated the PCI bus in addition to a more general ISA
expansion bus. Those are now called Legacy capable motherboards. Many
analysts, however, believe that PCI will eventually supplant ISA
entirely (as of April 2002, it has not completely but is well on the
way); it appears that non ISA systems are now the norm rather than the
exception in the year 2000. PCI is also used on newer versions of the
Macintosh computer. PCI is a 64-bit bus, though it is usually
implemented as a 32-bit bus. It can run at clock speeds of 33, 66, 100
and 133 MHz. At 32 bits and 33 MHz, it yields a throughput rate of 133
MBps. Board
pin density is also greater and, to avoid confusion, boards will
not interchange between ISA and PCI slots. Although it was initially
primarily developed by Intel,
PCI is not tied to any particular family of microprocessors. As of
March 2000, the current specification is 3.0 and it is an evolutionary
release of the PCI specification that includes edits to provide better
readability and incorporate Engineering Change Notices (ECNs) that have
been developed since the release of version 2.3. The Conventional PCI
3.0 specification also completes the migration to 3.3V-only slots by
removing support for 5.0V-only keyed add-in cards. Version 3.0 is the
current standard for Conventional PCI, to which vendors should be
developing products. All PCI variations and specifications can be
reviewed at PCI-SIG.
PCI-E
Acronym for Peripheral Component Interconnect - Express, a local bus computer standard extension of PCI.
Designed in 2002, and surfaced for retail in 2004, the design was not
accepted well, even though well engineered for control and video
purposes. It was initially supposed to be backward compatible with PCI
but it appears that not all offerings are that. The main claim to fame
is that the link is not a shared parallel bus, as with PCI and PCI-X, but a point-to-point serial connection using one or more lanes.
All PCI variations and specifications can be reviewed at PCI-SIG.
PCI-X
Acronym for Peripheral Component Interconnect - Extended, a local bus computer standard extension of PCI.
It was engineered, beginning in 1996 but not reaching popularity until
1999, primarily for video characteristics that could be faster than PCI
or AGP. All PCI variations and specifications can be reviewed at PCI-SIG.
PCL
The acronym for Printer Control Language, a product of HP. This was originally designed by HP
for the LaserJet+. It is now the base, in revision 6, of all of the
printers in the HP line. It is an interpreted language, similar to but
more intelligent than PostScript. See the history of PCL here.
PCM
A telephony term describing a particular type of modulation, Pulse Code Modulation.
PCMCIA
An acronym meaning Personal Computer Memory Card Industry Association. A
standard that allows interchangeability of various computing components
on the same connector. The PCMCIA standard is designed to support
input/output (I/O) devices, including memory, Fax/modem, SCSI, and
networking products. Many laptop computers use these devices as modems or network
interfaces. It is an organization consisting of some 500 companies that
has developed a standard for small, credit card-sized devices, called
PC Cards.
Originally designed for adding memory to portable computers, the
somewhat loose PCMCIA standard has been expanded several times and is
now suitable for many types of devices. There are in fact three types of
PCMCIA cards. All three have the same rectangular size (85.6 by 54
millimeters), but different widths:
Type I cards can be up to 3.3 mm thick, and are used primarily for adding additional ROM or RAM to a computer.
Type II cards can be up to 5.5 mm thick. These cards are often used for NIC, modem and fax modem cards.
Type III cards can be up to 10.5 mm thick, which is sufficiently large for portable disk drives.
As with the cards, PCMCIA slots also come in three sizes:
A Type I slot can hold one Type I card
A Type II slot can hold one Type II card or two Type I cards
A Type III slot can hold one Type III card or a Type I and Type II card.
A full house beats three of a kind! So much for the details.
In general, though there are always exceptions, you can exchange PCMCIA
Cards on the fly, without rebooting your computer. For example, you can
slip in a Fax
modem card when you want to send a fax and then, when you're done,
replace the Fax modem card with a memory card. You can also fit (and
use) smaller cards into larger slots but not the reverse. They are
currently (as of mid-1999) just known as PC Cards.
PCOS
Personal Computer (IBM PC) Operating System, a coined shorthand name for the DOS
only software from several companies running a low level platform on
compatibles. This was originally the name, though also known as PCDOS,
given to IBM's version of the first IBM produced operating system for PCs. They
soon gave way to Microsoft produced DOS. They have made several other
efforts at PC operating systems but have not been able to produce one
that was competitive to Microsoft.
PCS
1. An acronym for Personal Communications Service, often called personal
cellular service, though incorrectly. It is a wireless phone service
very similar to cellular phone service, but with an emphasis on personal
service and extended mobility. The term "PCS" is often used in place of
"digital cellular," but true PCS means that other services like paging,
caller ID and e-mail are bundled into the service. PCS phones use
frequencies between 1.85 and 1.99 GHz.
2. A telephony term describing wireless communications technology that
operates between 1850 and 1990 MHz. A loosely defined, currently
nascent, ubiquitous telecommunication service that will
allow "anytime, anywhere" voice and data communication with personal
communications devices.
PDA
1. An acronym for Personal Digital Assistant. A small, totally portable device that combines computing, telephone/fax, and networking features. A typical PDA can function as a cellular phone, Fax
sender, and personal organizer. Unlike portable computers, most PDAs
use a pen-like stylus rather than a keyboard for input. This means that
they also feature handwriting recognition. Some PDAs also make use of
voice recognition technologies. Apple Computer
pioneered the field of PDA by introducing the Newton MessagePad in
1993. Shortly thereafter, several other manufacturers offered similar
products. To date, PDAs have had only modest success in the marketplace,
due to their high price tags and limited applications. However, many
experts believe that PDAs will eventually become common gadgets.
2. PDA is a term for any small mobile hand-held device that provides
computing and information storage and retrieval capabilities for
personal or business use, often for keeping schedule calendars and
address book information handy. Most PDAs have a small keyboard. Some
PDAs have an electronically sensitive pad on which handwriting can be
received. Some PDAs offer a variation of the Microsoft Windows operating
system called Windows CE. Other products have their own or another
operating system.
pdf files
Adobe's Portable Document Format (pdf) is a translation format used primarily for distributing files, such as documentation, across a network,
or on a web site. This is an inexpensive way for CD distributors to
include documentation with a CD based program or suite of programs.
Files with a .pdf extension have been created in another application and
then translated into .pdf files so they can be viewed by anyone,
regardless of platform. The Adobe Acrobat PDF Reader software is
necessary to view these files, and can be obtained free at many sites,
provided by Adobe, or get it here, Acrobat PDF Reader central. Adobe can be accessed through HTTP://WWW.ADOBE.COM.
PDL
The acronym for Pure Dumb Luck, a programmer's or system engineer's best friend when it comes to fixing things.
PDM
The acronym for Pure Digital Monitor, one that has no analog capability.
PDN
1. The acronym for Public Data Network. Network operated either by a
government (as in Europe) or by a private organization or association to
provide computer communications to the public, usually for a fee. PDNs
enable small organizations to create a WAN without the equipment costs
of long distance circuits.
2. An acronym for Packet Data Network. This can be either a public or
private packet based network, such as an IP or X.25 network.
peer-to-peer
A simple, small type of network in which each workstation has equivalent capabilities and responsibilities. Each station can be a server and each can be a client at the same time. This differs from client/server
architectures, in which some computers are dedicated to serving the
others. Peer-to-peer networks are generally simpler and less expensive,
but they usually do not offer the same performance under heavy loads. In
fact, they are a compromise at best in either way they are used. But
they are efficient with high enough horsepower hardware and with a good
network integrated operating system. A major player in the early LAN
days, offering this type technology, was Artisoft's LANtastic.
Currently, all Windows operating systems of 95 and up offer this
technology built in. It is just referred to as the Microsoft network
when used with NetBEUI.
Pentium
One of Intel's family of microprocessors; introduced in 1993. A class of microprocessor made by Intel.
The series has come from the early Pentium (1993) at 60 MHz, with bus speeds of
66 MHz and a 64 KB cache, to the Pentium II (1997) series which began at
233 MHz to 450 MHz with bus speeds of 100 MHz and an L2 cache of 512 KB
with full speed capability, to the Pentium III series (1999), from 450
MHz well into the GHz speeds, 133 MHz bus with a 512 KB to 2 MB L2 cache
with full speed capability. (As of August 2000, a 1.5 GHz Pentium IV had
been announced for November release. Early chips were problematic.
As of February 2002, the P4 was in production of 2.2 GHz versions. As
of January 2006, a 3.6 GHz version is out.) There has also been a
Pentium Pro (1995) with speeds in the 166 MHz to 266 MHz range, and the
Pentium XEON
(1998) to add to the group; a revised XEON version was made in 1999 in
Pentium III configuration with speeds in the 800+ MHz range, and another
in P4 form in 2001. There have also been low power consumption versions
for laptops. In 1999, a lower performance version was released, called
the Celeron; it was designed to lower the overall cost of computers that
did not need ultra high performance. The Celeron has a slower bus,
smaller cache and a less efficient (slower) decision making path. The
Pentium series CPUs were designed to run Windows but will obviously run other OS software as well. See FSB. The Itanium series was released in 2001.
people connection
The People Connection, or similar name, is most ISP's main chatting forum. There are always hundreds or thousands of people chatting about something.
perigee
A term describing the point of closest approach in an orbit. A typical
orbit of a body around a planet, for instance the moon around the
Earth, is an ellipse. The point at which the moon is closest to the Earth is called the perigee. The opposite is the apogee.
Perl
A programming language whose acronym stands for "Practical Extraction
and Report Language". Perl is a powerful, yet unstructured language that
is especially good for writing quick and dirty programs that process
text files. Because of these abilities, Perl is a common choice of
programmers for writing CGI scripts to automate input and output from web pages. It is one of the very few languages still used today that is based on an interpreter rather than a compiler.
Perl was invented in 1986 by Larry Wall and is available to anyone at
no charge. The library is now Perl5. We strongly suggest that you may
want to visit the site of a company in Colorado, SPADS at HTTP://WWW.SPADS.COM,
that deals entirely in Perl scripts; talk to Vince and tell him we sent
you. They are great people and seem to know their stuff. We also write
custom CGI scripts in Perl.
Here is the Perl version of "Hello World!":
print "Hello World\n";
permanent virtual circuit or PVC
1. A PVC is a permanent channel connection between two ATM
devices. PVCs allow network transmissions to be started without having
to first establish a connection with the end point ATM device. When a
PVC is constructed, the end points of the connection will agree upon a
path in which data will travel, and therefore agree upon the route that
data will travel to reach its destination.
2. A type of conduit pipe made of plastic used to tunnel or route network and phone cables in some installations.
personal computer - PC
The original personal computer model introduced by IBM in 1981. Because
IBM was late to enter the desktop computer field, it created the PC with
an "open architecture" so that it could compete with the then popular
Apple II computers. This open architecture meant that any computer
manufacturer could legally manufacture PC-compatible machines that could
run the same software as IBM's PC. Since IBM purchased its CPU chips
from Intel and its operating system (DOS) from Microsoft, makers of
PC-compatibles (called clones at the time) were able to utilize the same
chips and OS as IBM. As a result, PCs became the most popular home
computer, IBM's fortunes dropped, and Microsoft and Intel became the
multi-million dollar companies that they are today. Current popular
usage of the term PC refers to both IBM produced personal computers and
PC-compatible computers produced by other manufacturers.
Personal Preferences
Personal choices and preferences is your ISP's
online preference utility. All services have some such device. With it
you can change your multimedia preferences or change your screen names,
as well as other things. What you select is what you get!
Personal Filing Cabinet or PFC
Your ISP's file organization tool, through your browser or service.
Personal Finance & Management
A channel of most ISP online services that is dedicated to your money, helping you to keep it and making it grow.
petabyte
A number that is the literal equivalent of 2 to the 50th power
(1,125,899,906,842,624) bytes; it is a quadrillion in the American
system. A petabyte is equal to 1,024 terabytes. It is a noun and not an action (it has nothing to do with pets that bite). Don't know your KB from your MB? Try our memory and storage converter. (Also see powers of ten, kilobyte, megabyte, gigabyte, exabyte, zettabyte and yottabyte.)
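The powers-of-two relationship between the storage units can be checked directly:

```python
# Powers-of-two storage units: each step up multiplies by 2**10 (1,024).
KB, MB, GB, TB, PB = (2 ** (10 * n) for n in range(1, 6))

assert PB == 2 ** 50 == 1_125_899_906_842_624  # bytes in a petabyte
assert PB // TB == 1024                        # 1,024 terabytes per petabyte
```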
phase
1. An individual process in a group of processes, a portion of an
overall job. Software projects are often broken down into phases.
2. In electricity, the type of electrical service, most often single
phase for residential use or three phase for industrial use. In the case
of single phase, it is relating to a circuit energized by a single
alternating electromotive force; in three phase, it is relating to, or
operating by means of a combination of three circuits energized by
alternating electromotive forces (EMF) that differ in shift phase by one third of a cycle.
Philips
A leading manufacturer and supplier in all commercial electronics
industries worldwide. They have several International manufacturing and
operations locations. See them at HTTP://WWW.PHILIPS.COM.
Phish
1. Trying to illegally obtain someone's password by false
representations. Frequently, "phishers" will send instant messages or
E-Mail to new members claiming that they are ISP employees and need the
password because of a system problem. GENERALLY, NO ISP STAFF MEMBER
WILL EVER ASK FOR YOUR PASSWORD. If you are asked for your password, the
person asking you is not an ISP staff member, and they should be reported or ignored.
2. The technique is also often used to secure credit card numbers, bank
account information, brokerage information, and generally anything that
could yield a financial gain in line with fraud operations. This is a
tremendous source of identity theft. See this site for the latest scams and advice.
Phlash
The "phunny pharm" name for Phlash Phelps, of XMRadio (if you are not familiar with what XMRadio is, check out the information here...);
Phlash is the "on the air" personality and DJ extraordinaire, certainly
our phavorite XM host. (You know that you have "made it" when you get
into the CSG Glossary!) His 60s shows are a wonderful contradiction of
the technology of satellite radio and the yesteryear of music in the era
that I enjoyed youth. His knowledge of the era appears to be second to
none. We chat with him from time to time from the CSG Phlying Machine, listening to XM. Take a look at his website HTTP://WWW.PHLASHPHELPS.COM, view the HTTP://WWW.XMRADIO.COM site or send him a note from the XMRadio webmail function on the Sixties Page. Be sure to say Hi from CSG in warm and sunny Southern California....
Phoenix
The industry name for Phoenix Technologies, an industry pioneer and giant in the BIOS business for computers, hand held devices and phones; best known for BIOS work. See them at HTTP://WWW.PHOENIX.COM.
photoconductor
This is a special type of resistor and is sometimes called
an LDR. Photoconductors are made so that their resistance decreases as the level of light falling on them increases.
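A common way to use an LDR is in a voltage divider whose output voltage rises as light increases (because the LDR's resistance drops). A sketch with illustrative component values (the function name is ours):

```python
# Voltage divider: LDR on the supply side, output taken across the
# fixed resistor. More light -> lower LDR resistance -> higher output.
def divider_out(v_supply, r_fixed, r_ldr):
    return v_supply * r_fixed / (r_fixed + r_ldr)

bright = divider_out(5.0, 10_000, 1_000)    # low LDR resistance in light
dark = divider_out(5.0, 10_000, 100_000)    # high LDR resistance in dark
assert bright > dark
```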
photodiode
This type of diode reacts to light. It is mounted so that
the cathode is more 'positive' than the anode.
It relies on the fact that all diodes leak a small amount of current
back out of the anode. The amount of current leaking through depends on
the amount of light falling onto the diode. It is not the same as an LED.
phototransistor
This type of transistor has only two pins, the collector and the emitter. Because they have no base pin, they can sometimes be
mistaken for a normal diode or an LED.
The amount of light falling on them acts instead of the base current
and once a certain level is reached, the transistor is switched on.
PHP
Personal Home Page is a server-side (SSI), HTML
embedded scripting language used to create dynamic Web pages. In an
HTML document, PHP script (with syntax similar to that of Perl or C) is
enclosed within special PHP tags. Because PHP is embedded within tags,
the author can jump between HTML and PHP (similar to ASP and Cold
Fusion) instead of having to rely on heavy amounts of code to output
HTML. And, because PHP is executed on the server, the client cannot view
the PHP code. PHP can perform any task any CGI program can do, but its
strength lies in its compatibility with many types of databases. Also,
PHP can talk across networks using IMAP, SNMP, NNTP, POP3, or HTTP. PHP
was created sometime in 1994 by Rasmus Lerdorf. During mid 1997, PHP
development entered the hands of other contributors. Two of them, Zeev
Suraski and Andi Gutmans, rewrote the parser from scratch to create PHP
version 3 (PHP3). Today, PHP is shipped standard with a number of Web
servers and operating system distributions, including RedHat Linux.
physical layer
Layer 1 of the OSI reference model. The physical layer defines the
electrical, mechanical, procedural, and functional specifications for
activating, maintaining, and deactivating the physical link between end
systems. Corresponds with the physical control layer in the SNA model.
PI
Pi, denoted by the Greek letter π,
is the most famous (and controversial) ratio in mathematics, and is one
of the most ancient numbers known to humanity. Pi is approximately
3.14, by definition, the number of times that a circle's diameter will
fit around the circle. Pi goes on forever, and can't be calculated to
perfect precision. On our site and in all of our calculators, 3.14159 is
the value we use. Please see our Piece Of Pi information, and our Historical Computation Of Pi Table. (Please don't leave crumbs on the table.) You can also calculate it yourself using our Pi Calculator. Dividing the Pi(e) has always been a problem but sometimes multiplying with it is also. You can do either here.
pico
A metric prefix that denotes the equivalent of one trillionth, 10 to the
-12th power, in the American system. See the inverse represented by terabyte.
Picture Studio
Picture Studio is a place where you can learn about chat events, search
for pictures of your favorite chat partners, or catch up on the latest
hot community news. Most ISP services provide such a service under various names.
PINE
Acronym for Program for Internet News and E-Mail, a character-based E-Mail client for UNIX
systems. Developed at the University of Washington, PINE replaces an
older E-Mail program called elm. They were somewhat similar but not
exactly the same. Both are somewhat antiquated now.
PING
1. Abbreviation for Packet InterNet Groper. A connection testing program
that sends a self-returning packet to a host and times how long it
takes to return.
2. The actual name of a program to measure network latency.
3. Great golf clubs.
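What ping measures can be sketched in Python. Real ping sends ICMP echo packets (which usually require raw-socket privileges), so this sketch simply times an arbitrary round-trip callable to show the idea.

```python
# A minimal sketch of what ping measures: do something, time the
# round trip. Real ping uses ICMP echo packets; the callable here
# stands in for the network round trip.
import time

def round_trip_ms(action):
    """Return how long `action()` takes, in milliseconds."""
    start = time.perf_counter()
    action()
    return (time.perf_counter() - start) * 1000.0

# Example: time a trivial local "echo" instead of a real network host.
latency = round_trip_ms(lambda: None)
```

On a real network, the timed action would be sending a packet and waiting for its return from the host.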
PIO
1. Acronym for Programmed Input/Output, a method of transferring data
between two devices that uses the computer's main processor as part of
the data path. ATA uses PIO and defines the speed of the data transfer
in terms of the PIO mode implemented, as shown in the information below:
PIO Mode    Data Transfer Rate (MBps)    Standard
   0                  3.3                 ATA
   1                  5.2                 ATA
   2                  8.3                 ATA
   3                 11.1                 ATA-2
   4                 16.6                 ATA-2
ATA-3, ATA/33 and ATA/66 do not have a PIO mode assignment as of yet,
although ATA-3 is often used in PIO4 since it is really a correction to
ATA-2.
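As rough arithmetic on the PIO rates above, here is a short Python sketch of the best-case time to move a payload at each mode's peak rate (real transfers run slower than the peak).

```python
# Peak transfer rates per PIO mode, from the table above.
PIO_MBPS = {0: 3.3, 1: 5.2, 2: 8.3, 3: 11.1, 4: 16.6}

def seconds_to_transfer(megabytes, pio_mode):
    """Best-case seconds to move `megabytes` at the mode's peak rate."""
    return megabytes / PIO_MBPS[pio_mode]

# 100 MB at PIO mode 4: about 6 seconds at the theoretical peak.
t = seconds_to_transfer(100, 4)
```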
pipeline
In DRAMs and SRAMs (memory),
a method for increasing the performance using multistage circuitry to
stack or save data while new data is being accessed. The depth of a
pipeline varies from product to product. For example, in an EDO DRAM,
one bit of data appears on the output while the next bit is being
accessed. In some SRAMs, pipelines may contain one or more bits of data.
pixel
A pixel is the smallest unit of space on a computer screen; one pixel is
the smallest area that can be manipulated by the computer. Each little
dot is a pixel. Resolution
is a measure of how many pixels you can fit on your screen. The greater
the resolution, the smaller the images but the more you see on the
screen at one time. The greater the resolution, the longer the screen
refresh time and the slower the overall operation. 640x480, 800x600 and
1024x768 are the most common. Resolution and numbers of colors available
are determined by your computer's video card capability. More often than not, the larger the number of colors, the slower the operation. Most Internet
operations are limited to 256 colors and numbers set for greater than
that do not usually help; however, many sites now are turning to more
intense contrast and color since newer computers have that capability
and also have higher speed Internet connections. Older equipment may
only support 16, 64 or 256 colors.
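The relationship between resolution, color depth and video memory is simple arithmetic, sketched below; the figures assume an uncompressed framebuffer (8 bits per pixel gives 256 colors, 4 bits gives 16).

```python
# Back-of-envelope framebuffer sizes for the common resolutions above.
def framebuffer_bytes(width, height, bits_per_pixel):
    """Uncompressed video memory needed for one full screen."""
    return width * height * bits_per_pixel // 8

# 1024x768 at 256 colors (8 bits per pixel) needs 768 KB of video memory.
kb = framebuffer_bytes(1024, 768, 8) / 1024
```

This is why higher resolutions and more colors demand a more capable video card: every extra pixel and every extra bit of color depth costs memory and refresh time.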
PKI
An acronym for Public Key Infrastructure. PKI enables users of a
basically unsecure public network such as the Internet to securely and
privately exchange data and money through the use of a public and a
private cryptographic key pair that is obtained and shared through a
trusted authority. The public key infrastructure provides for a digital
certificate that can identify an individual or an organization and
directory services that can store and, when necessary, revoke the
certificates. Although the components of a PKI are generally understood,
a number of different vendor approaches and services are emerging.
Meanwhile, an Internet standard for PKI is being worked on. The public
key infrastructure assumes the use of public key cryptography, which is
the most common method on the Internet for authenticating a message
sender or encrypting a message. Traditional cryptography has usually
involved the creation and sharing of a secret key for the encryption and
decryption of messages. This secret or private key system has the
significant flaw that if the key is discovered or intercepted by someone
else, messages can easily be decrypted. For this reason, public key
cryptography and the public key infrastructure is the preferred approach
on the Internet. (The private key system is sometimes known as
symmetric cryptography and the public key system as asymmetric
cryptography.) A public key infrastructure consists of:
1. A certificate authority (CA) that issues and verifies digital
certificates. A certificate includes the public key or information about
the public key
2. A registration authority (RA) that acts as the verifier for the
certificate authority before a digital certificate is issued to a
requestor
3. One or more directories where the certificates (with their public keys) are held
4. A certificate management system.
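The asymmetric idea behind PKI can be illustrated with textbook RSA and deliberately tiny primes. Real systems use vetted cryptographic libraries and far larger keys; the numbers below are for demonstration only.

```python
# Toy textbook RSA with tiny primes -- demonstration only, not secure.
p, q = 61, 53
n = p * q          # modulus, part of both keys
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def encrypt(m):
    """Anyone can encrypt with the public key (n, e)."""
    return pow(m, e, n)

def decrypt(c):
    """Only the private-key holder (n, d) can decrypt."""
    return pow(c, d, n)

ciphertext = encrypt(65)
plaintext = decrypt(ciphertext)   # recovers 65
```

The asymmetry is the whole point: knowing (n, e) lets anyone encrypt or verify, but without d the message cannot practically be recovered, which avoids the shared-secret flaw described above.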
PJL
The acronym for Printer Job Language, a printer-dependent language released by HP. See more information on PJL here.
PKUnzip
PKUnzip is a standard DOS
decompression utility used to extract files from .ZIP archives. There
are also Windows versions of this architecture; not all are from the
original PK company. PKZip is the compression utility. The resulting files are called .ZIP files.
The initials PK are from the company and architecture founder, Phil
Katz. Phil was a friend of mine and a business associate. We often
discussed the viability of the compression technology in the early
1980s. He felt it would be big; I didn't, but we were still close in
sharing technology. Phil passed away in April of 2000. He is missed
already, especially by me. The PK company is at HTTP://WWW.PKWARE.COM.
PKZip
PKZip is a standard DOS
compression utility used to create .ZIP archives. There are also Windows
versions of this architecture; not all are from the original PK
company. PKUnzip is the decompression utility. The resulting files are called .ZIP files. The PK company is at HTTP://WWW.PKWARE.COM.
PLA
An acronym for Programmable Logic Array, a chip (IC) based programmed
logical program. PLAs are members of a broad category of chips called PLDs.
plasma display
A technology for both TV and video monitors for computers. Often called a
gas plasma display, it works by layering neon gas between two plates.
Each plate is coated with a conductive print. The print on one plate
contains vertical conductive lines and the other plate has horizontal
lines. Together, the two plates form a grid. When electric current is
passed through a horizontal and vertical line, the gas at the
intersection glows, creating a point of light, or pixel. You can think
of a gas-plasma display as a collection of very small neon bulbs. Images
on gas-plasma displays generally appear as orange objects on top of a
black background. Although plasma displays produce very sharp monochrome
images, they require much more power than the more common and less expensive LCD
displays. 2001 and up technology has introduced spectacular color images
from plasma, with far fewer drawbacks. Current plasma displays are
bright, with a wide color gamut, and can be produced in fairly large
sizes, up to 60 inches diagonally. While very thin, usually less than 4
inches, plasma displays use twice as much power as a comparable CRT
television, thus limiting their use in laptop computers. While spectacular
in viewing perception, there is a prohibitive cost factor compared to
other flat panel technology. The use of phosphors, as in CRTs, limits
their useful life to less than that of conventional CRTs or other flat
panels.
platform
1. A platform is a version of interface software meant for a specific
computer. Examples of such software are for the DOS, Windows, Windows95,
AS400, Data General, Unix, DEC, Magic Link, Casio Zoomer, and Macintosh
platforms; there are many more. Many ISPs only support two or three platforms; some only one.
2. Something that short (height challenged) programmers put chairs on so that they can reach the keyboard on top of the desk.
PLC
An acronym for Programmable Logic Controller.
PLD
An acronym for Programmable Logic Device. While often just called a logic chip, it is an integrated circuit (IC)
that can be programmed, with proper equipment, to perform complex
functions. A PLD consists of arrays of internal AND and OR gates (see Boolean).
A system designer implements a logic design with a device programmer
that blows fuses on the PLD to control gate operation. The logic is
based on which gates are open and which gates are closed. System
designers generally use development software that converts basic code
into programmatic instructions a device programmer needs to create a
design and put it into operation. PLD types can be classified into three
groups (PROMs, PALs or GALs, and PLAs) and two classifications, Simple PLDs (SPLDs) and Complex PLDs (CPLDs).
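The AND/OR array idea can be sketched in Python as a sum-of-products evaluator; the "fuse map" below is a hypothetical example, not an image of any real device.

```python
# Sketch of a PLD's AND/OR array: each product term ANDs a chosen
# subset of inputs, and the output ORs selected product terms.
def product(inputs, term):
    """AND together the inputs selected by `term` (a tuple of indices)."""
    return all(inputs[i] for i in term)

def pld_output(inputs, terms):
    """OR together the selected product terms (sum-of-products)."""
    return any(product(inputs, t) for t in terms)

# A hypothetical fuse map implementing f = (A AND B) OR C
# for inputs (A, B, C).
f = lambda a, b, c: pld_output((a, b, c), [(0, 1), (2,)])
```

In a real PLD, the `terms` list corresponds to which fuses are left intact after programming; blowing fuses removes inputs from product terms.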
plug-and-play or PnP
Plug and Play is at best a hopeful name. Long time industry people
renamed the term to Plug and Pray. (It seems there were many clergy
involved in the original development.) PnP is the acronym (there is
always an acronym...) that also has a counterpart, TnT, indicating how
unstable the early PnP cards really were. Since R2 of W95, things have
been better. W2000 was supposed to really have a grip of sorts on PnP
with an entire section of System devoted to it. The theory is that OS
and card, working together, have the ability within a computer system to
automatically configure expansion boards and other devices. You should
be able to plug in a device and play with it, without worrying about
setting DIP switches, jumpers, and other configuration elements. Since
the introduction of the NuBus, the Apple Macintosh has been a
plug-and-play computer. The players involved and the varied options are
limited by Apple's resistance to sharing technology. The Plug and Play
(PnP) specification has made PCs more plug-and-play, although it doesn't
always work as advertised.
plug-in
Plug-ins are software programs that extend the usability of a program
you are using, most often browsers. There are plug-ins for playing real
time audio clips, video clips, animation, and more. Internet plug-ins
work with your ISP service or with your browser.
PNP
1. A class of transistor PNP or NPN, indicating the layers (each connected to a pin) of semi-conductor material polarity of positive-negative-positive. The middle layer is the base. The others are the collector and emitter.
2. See plug-and-play.
podcast
A coined term to define a broadcast of multimedia files to an Apple iPod
or other appropriate receiver. A podcast is an audio or video file that
subscribers can hear or view online. The advantage to a podcast is that
you don't need to remember to go back and get the newest information
from your favorite online source. Once you subscribe to the podcast it
will automatically show up in your reader. The readers are usually free
or at a very low cost. The majority of podcasts are available as audio
files in MP3 format, syndicated through an RSS (XML) file. Other formats
and other types of files, such as video, can also be podcasted. The
content is downloaded to your desktop PC or mobile device. It is not a
streaming format, so you can access the content when you want.
POE
An acronym from the phrase Power Over Ethernet. In this scenario, the
power for a particular device is supplied as DC voltage on pins
4, 5, 7 and 8 of a typical (or special) 8-pin Ethernet cable. The power
is from a central point and usually powers something where conventional
ability to have power is unavailable; more often than not, it is a
wireless bridge or access point.
Point of Presence - POP
A site that has a collection of telecommunications equipment, usually
refers to ISP or telephone company sites. This is not to be confused
with POP3, a particular mail server technology supporting Post Office
Protocol.
Point To Point Connection
A data network circuit with one control and one tributary. Also see PPP.
polarity
A term referring to the direction of electron flow, and the condition
that creates it. A polar, or polarized circuit, usually has ground as
the negative (-) reference point in the circuit. The positive side (+)
of the circuit is based on the reference to ground, as in the poles of a
battery. There is a positive side and a negative side. See reverse polarity.
PONS
PONS is an acronym for Passive Optical Network. This is a high bandwidth
point to multipoint optical fiber network based on the asynchronous
transfer mode protocol, (ATM). PONs generally consist of an OLT (Optical
Line Termination), which is connected to ONUs (Optical Network Units),
more commonly known as subscriber terminals, using only fiber cables,
optical splitters and other passive components (do not transmit signals
using electricity). At present, a maximum of 32 ONUs can be connected to
any one OLT but OLTs can be cascaded. The OLT is located at a local
exchange (CO), and the ONU is located either on the street, in a
building, or even in a user's home. PONs rely on lightwaves instead of
copper wire for data transfer. In a PON, signals are routed over the
local link with all signals along that link going to all interim
transfer points. Optical splitters route signals through the network;
optical receivers at intermediate points and subscriber terminals tuned
for specific wavelengths of light direct signals intended for their
groups of subscribers. At the final destination, a specific residence or
business can detect its own and only its own, specified signal. PONs
are capable of delivering high volumes of upstream and downstream
bandwidth (up to 622 Mbps downstream and 155 Mbps upstream), which can
be changed "on-the-fly" depending on an individual user's needs. This
type of tuning of the bandwidth is a technology that will be very
popular in the near future.
POP
See Point of Presence.
Also a protocol used for E-Mail functions, now in its 3rd revision,
POP3. Most E-Mail applications (sometimes called an E-Mail client) use
the POP protocol, although some can use the newer IMAP (Internet Message
Access Protocol). There are two versions of POP. The first, called POP2
(why did it start with a 2? Why ask why?), became a standard in the
mid-80's and requires SMTP to send messages. The newer version, POP3,
can be used with or without SMTP. Most ISPs still use SMTP for
transmission and only a few do NOT use POP3.
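A minimal POP3 session can be sketched with Python's standard poplib module. The host name and credentials below are placeholders; substitute your ISP's actual POP3 server and account details.

```python
# Sketch of a POP3 mailbox check using the standard library.
# Host, user and password are placeholders, not real values.
import poplib

def fetch_message_count(host, user, password):
    """Log in, ask how many messages are waiting, and log out."""
    mailbox = poplib.POP3(host)        # port 110; use POP3_SSL for 995
    try:
        mailbox.user(user)
        mailbox.pass_(password)
        count, total_bytes = mailbox.stat()
        return count
    finally:
        mailbox.quit()
```

An E-Mail client does essentially this on your behalf, then retrieves each message with a RETR command; SMTP handles the sending side.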
port
1. A physical address on a computer or computer device. This may often
be associated with a mapped memory location to allow certain
types of connections or may also be associated with a physical
connecting device. This is often used in conjunction with Input/Output
devices.
2. A location we all look for in a storm; any will do!
3. A programming beverage for sophisticated programmers, usually enjoyed
with cheese and crackers; most veteran (real!) programmers drink Pepsi
and have moon pies.
portable
1. A term used to describe hardware that is small and lightweight, and
can be battery powered for at least an hour. A portable computer is a
computer small enough to carry. Portable computers include, ranging from
largest to smallest, laptops, notebook and subnotebook computers,
hand-held computers, palmtops, and PDAs.
2. An ambiguous term used to describe software that has the ability to run on
a variety of computers. Portable and machine independent mean the same
thing; the software does not depend on a particular type of hardware.
Java is a language that creates such software although there are other
languages that do the same thing. The software may require compiling for
a platform but the native code is the same.
portal
A Web site or service that offers a broad array of resources and
services, most of which, but not all, are on-line, such as e-mail,
forums, search engines, and on-line shopping malls. The first Web
portals were online services, such as AOL and Compuserve, that provided
access to the Web, but by now most of the traditional search engines
have transformed themselves into Web portals to attract and keep a
larger audience. Typically, this sort of service also yields the user a
central place to find what he needs. See vortal.
portfolios
Portfolios are an ISP feature that allows you to keep track of your stocks.
POS
An acronym for Point Of Sale. POS is both the time and place in which a
transaction is made and it describes a special terminal used in
computerized accounting systems. POS computer systems include cash
registers, optical scanners, BAR code equipment, special printers,
magnetic card readers, and special monitors or terminals. Reading
merchandise tags, updating inventory, checking credit and directly or
indirectly interfacing with an accounting system are some of the
operations performed by the point of sale system.
POSIX
An acronym for Portable Operating System Interface for UNIX, a group of
IEEE and ISO standards that more or less define an interface between
programs and hypothetical operating systems. (This has nothing to do
with portable devices.) By designing their programs to conform to POSIX
standards and requirements, software developers have some degree of
assurance that their software can be easily ported to POSIX compliant
operating systems, now and in the future. At present, this includes most
flavors and offerings of UNIX as well as Windows NT. The standard is
loose at the moment but will be more stringent in the future. The POSIX
standards are now maintained by a division of the IEEE called the
Portable Applications Standards Committee (PASC). Considering the impact of portability of operating systems, this may well be an important factor in the future of computing.
post
1. To send a message to a public area like a BBS or newsgroup where it can be read by many others.
2. A programmer's work area; Man your post!
Post Master
The name given to the person in charge of dealing with E-Mail for a
particular site. In the case of mail, it is postmaster (all one word,
lower case). According to convention, mail sent to postmaster@your.com
should be read by a real live person, if you have one.
Post Office
The ISP post office is an area that helps new members acclimate themselves to the world of E-Mail. There are many forms of E-Mail and the exact protocol is different from one ISP to another.
POTS
Short for plain old telephone service (also see PANS),
which refers to the standard telephone service that most homes use. In
contrast, telephone services based on high-speed, digital communications
lines, such as ISDN and FDDI, are not POTS. The main distinctions between POTS and non-POTS services are speed and bandwidth. POTS is generally restricted to about 52 Kbps (52,000 bits per second). The POTS network is also called the public switched telephone network (PSTN).
POTS splitter
A frequency splitting device used on standard POTS lines to invoke
operations involved with other services, such as DSL operations. In the
case of ADSL, the splitter divides the total bandwidth of the line into
three channels, one for fairly high speed downloading, one for medium
speed uploading and one for standard voice. All can take place on the
same standard dialup line at the same time. Each uses a different
frequency.
power
1. A math term designating a number multiplied by itself, x number of times,
where x is the power. The power of 2 is referred to as squared and the
power of 3 is referred to as cubed.
2. A general term with the implication of volts present. For example,
when testing an electrical circuit, turning on the power means to apply voltage, sometimes called juice.
3. An indication of work, measured in watts. See our Ohm's Law Calculations With Power.
4. A designation of authority.
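Senses 1 and 3 above can be shown in a few lines of Python; sense 3 uses P = V x I from Ohm's law.

```python
# Sense 1: exponents.
squared = 4 ** 2     # 16: "4 to the power of 2", or "4 squared"
cubed = 4 ** 3       # 64: "4 cubed"

# Sense 3: electrical power in watts, P = V * I.
def watts(volts, amps):
    return volts * amps

p = watts(12.0, 0.5)   # a 12 V supply at 0.5 A dissipates 6 W
```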
power newbie
An enthusiastic newbie (network newcomer) who takes advantage of
educational resources in an effort to become a knowbie. Power newbies
share their knowledge with other newbies both face-to-face and in
bulletin boards and chat rooms. See also newbie and knowbie.
powers of ten
We offer a wonderful page we found at Cal Tech as an understandable
source of information on the powers of ten as related to data. That page
was taken down for some reason but see the general information from it here. (Also see kilobyte, megabyte, gigabyte, terabyte, exabyte, petabyte, zettabyte and yottabyte.)
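The prefixes cross-referenced above map to powers of ten as sketched below (decimal convention; the binary convention based on powers of 2^10 differs slightly).

```python
# Decimal powers of ten for the common data-size prefixes.
PREFIX_POWER = {
    "kilo": 3, "mega": 6, "giga": 9, "tera": 12,
    "peta": 15, "exa": 18, "zetta": 21, "yotta": 24,
    "pico": -12,   # the inverse of tera; see the `pico` entry
}

def scale(prefix):
    """Multiplier for a prefix, e.g. scale('kilo') -> 1000."""
    return 10 ** PREFIX_POWER[prefix]
```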
PowerQuest
PowerQuest Corporation, by self-definition, is a leading software
developer and technology pioneer, providing solutions to simplify
complex storage management issues. We think that is modest. We define
them as producing some of the best software available to do things with
disk drives that DOS, Windows, Novell and Linux can probably do, but
take much longer, in many more steps and have far less acceptable
results. We have found that our business cannot get along without them.
See them at WWW.SYMANTEC.COM as they have been taken over; hopefully the software will not go the way of so many others that Symantec has acquired.
power supply
1. The component that supplies power to a computer or other electrical
device. Most personal computers can be plugged into standard electrical
outlets. The power supply then pulls the required amount of electricity
and converts the AC current to DC current. It also regulates the voltage to eliminate voltage or current
spikes and surges common in most electrical systems. Not all power
supplies, however, do an adequate voltage-regulation job, so a computer
is always susceptible to large voltage fluctuations. Power supplies are
rated in terms of the number of watts they generate. The more powerful
the computer, the more watts it can provide to components. In general, 200 watts should be sufficient. See UPS.
2. The term given to an electrical generator, used where power is not always available, or not available at all.
PPP
Point to Point Protocol, one of two standard methods of connecting to
the Internet. With a PPP account, you can connect to some generally
direct-connect services over the Internet. As the name implies, it is a protocol.
PPTP
An acronym for Point to Point Tunneling Protocol, a new technology for
creating Virtual Private Networks (VPNs) , developed jointly by
Microsoft Corporation, U.S. Robotics (now 3COM), and several remote
access vendor companies, known collectively as the PPTP Forum. A VPN
is a private network of computers that uses the public Internet to
connect some nodes. Because the Internet is essentially an open network,
the Point to Point Tunneling
Protocol (PPTP) is used to ensure that messages transmitted from one
VPN node to another are secure. With PPTP, users can dial in to their
corporate network via the Internet. Although PPTP has been submitted to
the IETF for standardization, it is currently available only on networks served by a Windows NT 4.0 server and Linux. See L2F and L2TP, two competing but similar technologies.
PQ
An acronym for Priority Queuing. It is the assignment of order of operation.
PRAM
An acronym for Programmable Random Access Memory. A device that has a
stored routine, such as BIOS, that is moved to and executed from RAM for
speed.
precharge
1. On a DRAM (memory),
the amount of time required between a control signal's (such as RAS)
transition to an inactive state and its next transition to an active
state.
2. With your children, it is a time before you allow them to use your
credit cards. The skill of charging is usually taught by the wife in the
family.
Preferences
An ISP software feature that allows you to customize such features as sound and text size. A group of options controlled by you.
presentation layer
Layer 6 of the OSI reference model. This layer ensures that information
sent by the application layer of one system will be readable by the
application layer of another. The presentation layer is also concerned
with the data structures used by programs and therefore negotiates data
transfer syntax for the application layer. Corresponds roughly with the
presentation services layer of the SNA model. See also application layer, LLC, MAC, network layer, physical layer, PQ, session layer, and transport layer.
Pretty Good Privacy - PGP
A program, developed by Phil Zimmerman, that uses cryptography to
protect files and electronic mail from being read by others. PGP also
includes a feature which allows users to digitally "sign" a document or
message, in order to provide non-forgeable proof of authorship. New
technology is under consideration by the government to allow such
actions to be legal and binding.
PRI
An acronym for Primary Rate Interface, an ISDN service providing users
with twenty-three 64-Kbps bearer (B) channels for message information and
one 64-Kbps data (D) channel for signaling and control over an existing
telephone line. This service has been antiquated with the advent of DSL
variations.
print
1. If you have a printer connected to your computer, you can use the
PRINT option under the FILE menu to print text and some pictures.
2. The fine stuff you didn't bother to read when you signed up for 50
years of Internet service at $50 a month because you thought it was a
great deal!
printer
A hardware device to put text or graphics on paper rather than on the monitor or system display.
program
A series of instructions that tell a computer what to do. Also, as a verb, to create or revise a program. See programmer.
programmer
1. An individual who creates or revises a program on any sort of device that responds to structured instructions as the control for operations.
2. A device that places instructions into a PROM, ROM or other chip for use in a computerized device.
programming language
A computer language that programmers
utilize to create programs. C, Perl, Java, BASIC, and COBOL are
examples of programming languages. In essence, programming languages are
translators that take words and symbols and convert them to binary
codes that the CPU can understand. A few others are Ada, APL,
AppleScript, assembly language, awk, C++, CODASYL, cxml, Delphi, Eiffel,
FORTRAN, GW-BASIC, MBASIC, NetBASIC, MuBASIC, JavaScript, JScript,
LISP, machine language, P-Code, microcode, Modula-2, K-Man, MUMPS,
Pascal, Prolog, pseudocode, Python, QBASIC, VBASIC, query language, RPG,
Smalltalk, Turtle, BasicA, SQL, Tcl, UML, VBScript, Visual Basic and
Visual C++.
progressive rendering
Progressive rendering is a download method where the file begins to
display itself before the download is completed. Most current browsers
and some ISPs' latest software use this technique when downloading graphics. It is also called Smart Art, streamers, quick grafix and similar "catchy" names.
progressive scan
Progressive scanning is a technology process used in describing how the
electronics of such devices work, but also defines the process used by
image processors and also decodes MPEG-2 formats. Standard NTSC
televisions have been using the "interlaced" technique, breaking each
frame image (480 viewable lines) into two sections called fields
(each actually 240 viewable alternating lines), so that each pass of
the electron beams scans 240 viewable lines. The beams run at 60
cycles per second. Because this process occurs at such rapid speed, the
human eye sees a full frame picture. This NTSC standard has been used
since the inception of television. While it is acceptable on a size of a
set up to about 27" viewable, the images start to degrade quickly as
the screen size increases. The introduction of Digital/High-Definition
TV brings the progressive scan technology which has been used in
computer monitors for years. Today's television can scan at double the
frequency of the standard NTSC television. Because much of today's
analog broadcasts are displayed in the interlaced format, manufacturers
of these sets often include a "line doubling" chip, which repeats the
alternating lines to fill the gaps between scan lines, giving the
impression of a brighter image. Digital broadcasts bring new terms,
480p, 1080i, and 720p to the TV specifications. The ideal is to provide
more lines of resolution for better details in the image quality. As of
the 2002 technology for digital TV, these terms are:
480p - Upconverted material from the standard NTSC 480 lines interlaced video.
1080i - 1080 alternating interlaced lines, accepted as the most common
high definition standard with the most line count, and available on
virtually all HD capable and HDTV units.
720p - 720 progressive lines translates to less resolution, but all of
the lines are drawn on screen in a single pass, with the
intent of eliminating interlacing artifacts. 720p DTVs use a higher
frequency, and therefore are more difficult and costly to build.
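The scanning arithmetic above can be sketched as follows: interlaced NTSC draws half a frame's lines in each 1/60-second pass, while progressive scan draws them all.

```python
# Lines drawn per second for interlaced vs. progressive scanning,
# assuming the NTSC rate of 60 passes (fields) per second.
FIELDS_PER_SECOND = 60

def lines_per_second(frame_lines, interlaced):
    """Viewable lines scanned per second for a given frame height."""
    lines_per_pass = frame_lines // 2 if interlaced else frame_lines
    return lines_per_pass * FIELDS_PER_SECOND

i480 = lines_per_second(480, interlaced=True)    # 240 * 60 = 14400
p480 = lines_per_second(480, interlaced=False)   # 480 * 60 = 28800
```

The doubled line throughput is why progressive displays need higher-frequency (and costlier) electronics.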
proportional amplifier
A particular type of operational amplifier where the output
voltage is in proportion to the difference between the inputs. Unlike the comparator, which can be based on exactly the same IC, the Prop-Amp has two individual inputs instead of one input and one reference value. This use is sometimes called a differential amplifier or subtractor.
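The relationship can be sketched in a line of Python; the gain value below is an illustrative assumption, not a property of any particular IC.

```python
# Difference-amplifier relationship: output is the gain times the
# difference of the two inputs. Gain of 10 is an assumed example value.
def prop_amp(v_plus, v_minus, gain=10.0):
    return gain * (v_plus - v_minus)

out = prop_amp(2.5, 2.0)   # 10 x (2.5 - 2.0) = 5.0 volts
```

With equal inputs the output is zero, which is the behavior that distinguishes a differential amplifier from a comparator's all-or-nothing output.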
protocol
A set of rules that governs how information is to be exchanged between computer systems. See TCP/IP, SLIP or PPP as an example of a protocol used to connect to the Internet. Also used in certain structured chat rooms to refer to the order in which people may speak.
PROM
1. An acronym for Programmable Read-Only Memory. A type of read-only memory (ROM) that allows data to be written into the device with hardware called a PROM programmer,
often termed a burner. After a PROM has been programmed, it is
dedicated to that data, and it cannot be reprogrammed. PROMs are part of
the PLD family of chips.
2. A wonderful social event of the 50's, 60's and 70's.
proxy
A server (actual hardware and software) that sits between a client
application, such as a Web browser, and a real server. It intercepts all
or designated requests to the real server, local or distant, to see if
it can fulfill the requests itself. If not, it forwards the request to
the real server. It is also a first line for privacy.
Proxy servers have two main purposes:
1. Improve Performance: Proxy servers can dramatically improve
performance for groups of users. This is because it saves the results of
all requests for a certain amount of time, in memory buffers of its
own. Consider the case where both user X and user Y access the World
Wide Web through a proxy server. First user X requests a certain Web
page, which we'll call Page 1. Sometime later, user Y requests the same
page. Instead of forwarding the request to the Web server where Page 1
resides, which can be a time-consuming operation, the proxy server
simply returns the Page 1 that it already fetched for user X. Since the
proxy server is often on the same network as the user, this is a much
faster operation than pulling the same information more than once. Real
proxy servers support hundreds or thousands of users. The major online
services such as Compuserve and America Online, for example, employ an
array of proxy servers.
2. Filter Requests: Proxy servers can also be used to filter requests,
usually for security. For example, a company might use a proxy server to
prevent its employees from accessing a specific set of Web sites. Those
types of applications are often used with FIREWALL functions to give
company LANs and servers protection both ways on the Web. See ANALOGX.
PSTN
Short for Public Switched Telephone Network, which refers to the
international telephone system based on copper wires carrying analog
voice data. This is in contrast to newer telephone networks based on
digital technologies, such as ISDN and FDDI. Telephone service carried by the PSTN is often called plain old telephone service (POTS).
Most telephone companies are trying to move data and streaming
services onto these newer networks and leave the PSTN for mostly voice usage.
P-type
A semi-conductor
which has a shortage of conduction electrons, or an excess of "holes",
making it more positive. A semi-conductor can be made into P-type by
adding trace amounts of another element to the original semiconductor
crystal. Virtually all modern transistors and diodes require sections of both P-type and N-type semi-conductors.
PTV
An acronym for Projection TeleVision. This was the original projection
technology, projecting onto the screen from the front. Current projection
technology is moving toward RPTV (rear-projection TV). Please see SDTV for more information.
punt
Another phrase for being disconnected during your online session. (e.g. -
"I was punted offline last night - probably for good reason!")
purge the cache
The effort to delete the files the web browser has stored (cached) on
your disk. These files were stored on your disk so they could be
retrieved quickly if you returned to the same web sites. Sometimes when
purging the cache, cookies
are also deleted. This usually requires that you again fill out certain
information at key sites you have previously visited and have
authorization to visit regularly. This is not to be confused with "purge
the cash", a term often used and associated with the need to upgrade.
PWS
An abbreviation for one of the many Microsoft products directed
at making the distance from your desktop to the Internet seem smaller,
Personal Web Server. It is also the acronym for Peer Web Services which
is more or less the same thing only based on NT. PWS is the baby brother
of IIS, Internet Information Server. Both products are hybrid
compilations and substitutes for an Internet capable web server. PWS
runs on the local operating system, on the local hard disk, simulating a
separate computer to pseudo-serve pages to an Intranet or LAN, or
possibly the Internet under the most controlled of conditions. PWS has
virtually no security and is an invitation to trouble if used in the
"real world". PWS is a simple HTML server used in a local office peer-to-peer
network that does support Microsoft's Front Page activities and
extensions. It was originally introduced for W95, later migrated to NT4
and works with upward compatible products from Microsoft. There is also a
MAC version. The product has never been terribly popular, probably
because it is far more efficient (and probably far less trouble) to set
up a regular server. Only a couple of pages on Microsoft's vast array of
servers are designated for information about the PWS freebie as far as
making it available to you. Roughly 340 pages are dedicated to
troubleshooting it. Is there a clue there?
Electronic ringers
The ringer circuits in modern telephones have the same basic idea,
but the coil-controlled bell is replaced by a modern electronic ringer
chip and a small speaker. The capacitor is still used in series with the
ring IC input so that only AC passes to the ring chip. The electronic
ringing circuits are not sensitive to the exact ringing voltage and they
easily ring with ring signal frequencies between 16 Hz and 60 Hz.
Ring detection circuits in modems
In computer modems, a logic signal indicating ringing is needed instead
of a ringing tone. The ring circuit must pass the ring signal information
to the modem electronics and still provide electrical isolation between
the telephone line and the modem electronics. This ring detection is
usually done using one optoisolator circuit, which replaces the
traditional ring circuit.
The optoisolator output can be easily connected to digital electronics,
but the optoisolator input side needs more components: one capacitor
to block DC from passing through the optoisolator, one resistor to
limit the current passing through the optoisolator LED, and one
reverse-connected diode in parallel with the optoisolator LED to prevent
negative voltages from damaging the LED. This is the basic ring detection
circuit. Usually there are also two zener diodes (usually 10-20V models)
to make sure that the ring detection
circuit does not detect too-small AC signals on the line as a ring signal.
In the picture below you see a very typical ring detector circuit
for modems. The circuit just gives an idea of how a modem ring detector
circuit works. The actual component values must be selected so that
the circuit meets the national telephone regulations (this can
usually be done easily by using suitable zener diodes and maybe changing
the resistor value a little).
Component list:
C1 470 nF 250V AC
R1 10 kohm 1W
D1,D2 10-20V zener diode (any value in this range), 400 mW power rating
D3 1N4148 diode or equivalent
U1 4N27 optoisolator or similar
NOTE: You can get the circuit to work by taking out D1 and D2 and
replacing them with a short circuit. The circuit still works then,
but it is possible that in this case some low-voltage noise on the
line causes the circuit to detect ringing. Different countries have
different specifications on how low a voltage must not cause a
telephone to ring at all.
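The effect of the zener diodes discussed in the note can be sketched as a simple threshold test. The zener and LED voltages below are assumed typical values, not measurements or a circuit simulation:

```python
# Behavioral sketch of the zener-protected ring detector above.

ZENER_V = 15.0   # D1/D2: any 10-20 V zener per the component list
LED_V = 1.2      # optoisolator LED forward drop (typical assumption)

def detector_fires(peak_ac_volts, with_zeners=True):
    # The LED only conducts once the signal peak exceeds the series
    # threshold; shorting out the zeners lowers that threshold, so
    # small line noise can then falsely register as ringing.
    threshold = (ZENER_V + LED_V) if with_zeners else LED_V
    return peak_ac_volts > threshold
```

A real ring signal (tens of volts peak) trips the detector either way, but a few volts of line noise only trips the unprotected version.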
Another approach to ring detection is to use a full-wave rectifier
circuit to convert the AC ring signal to DC suitable for the
optoisolator, and then put a current-limiting resistor and zener diode
at the rectifier output.
Component list:
C1 470 nF 250V AC
R1 10 kohm 1W
D1 10-20V zener diode (any value in this range), 400 mW power rating
RECT1 Rectifier bridge, 200 V voltage rating, at least 0.1 A current rating
U1 4N27 or CNY17 optoisolator
Other ideas to detect telephone ringing
One idea proposed in many sources is to use a small neon bulb
(like those used as indicator lights in some mains switches) for
detecting the ring signal. The proposed circuit connects one neon bulb
and a 47 kohm resistor in series across the telephone line.
The neon bulb needs about a 60V trigger voltage to start conducting, so
the standard 48V telephone battery voltage does not light it. When the AC
ring signal is added to that voltage, the voltage is enough to light the
neon bulb. The neon bulb can be used as a visual indicator, or
electronics can sense it with an LDR photoresistor or phototransistor.
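The neon-bulb trigger logic above amounts to a simple threshold test; the ~60 V striking voltage and 48 V battery voltage come straight from the text, while the ring peak value used below is an assumed example:

```python
# Threshold test behind the neon-bulb ring indicator described above.

NEON_STRIKE_V = 60.0   # approximate neon striking voltage (from text)
BATTERY_V = 48.0       # standard telephone battery voltage (from text)

def neon_lights(ring_ac_peak_volts):
    # The bulb strikes only when battery plus ring peak exceeds ~60 V,
    # so an idle line (ring peak 0) keeps the bulb dark.
    return BATTERY_V + ring_ac_peak_volts > NEON_STRIKE_V
```

With no ring signal the line sits at 48 V and the bulb stays off; a typical ring signal of some tens of volts peak pushes the total well past the striking voltage.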
If you don't want to build your own circuit from a neon bulb and
resistor, there is an even easier solution: go down to the hardware
store and get a "pigtail" tester. It has two nice leads that one
normally pokes into the wall outlet to test for voltage. Wire it instead
to the phone line. This saves the hassle of trying to find a container
for the neon lamp and the resistor (which is VERY necessary, take my
word for it).
One modem schematic I have seen used quite a special method for
detecting ringing signals: it had a small capacitor in parallel with the
on-hook/off-hook control relay contacts. This capacitor lets a small
part of the sound and ring signals pass to the telephone transformer. In
this way the ring signals can be detected as small signal pulses on the
transformer secondary (and this circuit can also be used for Caller ID
signal detection). The capacitor was so small that the impedance seen
from the telephone line stays high enough not to disturb other equipment
on the same telephone line when the modem is not on-line.
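The "small capacitor keeps the line impedance high" claim follows from the capacitor impedance formula Z = 1/(2*pi*f*C). The 1 nF value below is an assumed example, since the schematic's actual value is not given:

```python
import math

# Impedance of a small coupling capacitor, Z = 1 / (2*pi*f*C).

def cap_impedance_ohms(freq_hz, cap_farads):
    return 1.0 / (2.0 * math.pi * freq_hz * cap_farads)

# At a 25 Hz ring frequency, an assumed 1 nF capacitor presents
# several megaohms, which is why the on-hook line load is negligible.
z_ring = cap_impedance_ohms(25.0, 1e-9)
```

At higher audio and Caller ID frequencies the impedance drops, so enough signal still leaks through to be detected on the transformer secondary.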
Normal audio amplifier and transformer
A very nice variable-amplitude ring generator can be built from an
audio amplifier designed for driving 4 or 8 ohm speakers with an output
power of 3 W or more, a 10 ohm 10 W resistor, a 220V-to-12V transformer
(a few watts), a 1000 ohm 3 W resistor, and a function generator.
___________ 10 ohm 1000 ohm
| |----/\/\/\--+ ||(---/\/\/\---
| | | ||(
Sinewave----| Amplifier | )||( Ring voltage out
| | | ||(
|___________|------------+ ||(------------
Transformer
12V:220V
The circuit is easy to build. Connect the 10 ohm resistor in series with
the transformer's 12V winding and the 1000 ohm resistor in series with
the 220V winding. Connect the 12V winding side of the transformer to the
amplifier's speaker output. Connect the telephone to the 220V side.
The resistors are in the circuit to limit the current and to keep
the impedance high enough for the amplifier.
When you have done this, connect your function generator to the
amplifier's input and set it to generate a 20-25 Hz sine wave at a
suitable level for the amplifier's input. Turn down the volume of the
amplifier. Turn the amplifier on. Turn the volume up until you hear the
telephone ringing well. You can check the ringing voltage with a
multimeter if you want to set it to exactly the right level.
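As a rough check of the ring voltage you should expect, the 12V:220V transformer steps the amplifier output up by about 220/12. Resistor drops and transformer losses are ignored in this back-of-envelope sketch, and the 5 V example level is an assumption:

```python
# Back-of-envelope ring voltage for the amplifier + transformer circuit.

TURNS_RATIO = 220.0 / 12.0   # 12 V winding driven, 220 V winding out

def ring_voltage_vrms(amp_output_vrms):
    # Ideal transformer scaling; real output will be somewhat lower
    # because of the series resistors and transformer losses.
    return amp_output_vrms * TURNS_RATIO
```

Around 5 V RMS at the amplifier output already corresponds to roughly 90 V RMS on the telephone side, which is in the usual ring voltage range.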
Modified power inverter circuit
It is possible to make 17-25 Hz AC from DC. A simple multivibrator will
do it. You then need a power transistor or similar to give the
high-current output. A suitable circuit can be modified from a typical
power inverter
circuit by changing the timing components to move the frequency
into the 20-25 Hz range. Then the transformer needs to be selected so
that it matches this application (for 12V operation take a mains
transformer with a centre-tapped 60V (30+30V) secondary and 230V primary).
Generating ring pattern
The normal telephone ringing signal the central office sends is not a
continuous signal, but follows a pattern.
The pattern could be, for example, ring 2 seconds on, 4 seconds off,
then again 2 seconds on, 4 seconds off, etc.
The patterns used can vary somewhat from country to country.
If you want to generate this kind of pattern you need
a timer circuit that generates a 2-seconds-on, 4-seconds-off
output signal. That signal is then used to control
a relay that switches the power from the power source going
to the telephone on and off.
A 555 timer and one relay can do this nicely.
Basically, take a 555 timer in normal astable mode and
select the values of the two resistors and one capacitor.
Then connect a relay to the 555 output, and that should do it.
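Using the standard 555 astable formulas (t_high = 0.693*(R1+R2)*C, t_low = 0.693*R2*C), the 2-seconds-on/4-seconds-off pattern can be solved for component values. Since t_high is always longer than t_low in this mode, the relay is assumed to be wired so the phone rings during the LOW period; the 100 uF capacitor is an assumed starting value:

```python
# Standard 555 astable timing; component values below are assumed.

def t_high_s(r1_ohms, r2_ohms, c_farads):
    return 0.693 * (r1_ohms + r2_ohms) * c_farads

def t_low_s(r2_ohms, c_farads):
    return 0.693 * r2_ohms * c_farads

C = 100e-6                       # 100 uF timing capacitor (assumed)
R2 = 2.0 / (0.693 * C)           # ~28.9 kohm -> 2 s LOW (ring on)
R1 = 4.0 / (0.693 * C) - R2      # ~28.9 kohm -> 4 s HIGH (ring off)
```

In practice you would pick the nearest standard resistor values (e.g. 27 kohm plus a trimmer) and adjust by ear or with a stopwatch.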