Tuesday, 05 December 2017




 

Electronics Logistics

Moving delicate, high-value electronics requires choosing packaging and transport modes with tender loving care. Revenue from sales of consumer electronics was forecast to reach an estimated $209.6 billion in 2013, according to the Consumer Electronics Association. All those smartphones, e-readers, HDTVs, and other popular consumer devices represent just a fraction of the high-tech electronics sector. The category includes just about any product that processes data, whether it's used to produce music, diagnose cancer, keep planes aloft, or perform thousands of other tasks.
Such a vast variety of products relies on an equally vast variety of supply chains. Here's how smart shippers and their partners handle the specialized needs of electronics logistics.
High-tech electronics manufacturers face one looming challenge: the growing need to manage cross-border trade. Seventy-four percent of U.S.-based senior-level high-tech decision makers responding to a 2012 UPS-sponsored survey expect to increase exports of their companies' products during the next two years.

"The increase has been driven by changes in sourcing strategies, and demand for high-tech products among the growing middle class in emerging markets," says Alantria Harris, high-tech segment marketing manager at UPS in Atlanta. U.S.-based electronics manufacturers expect greater demand for their products in India, the Middle East, Africa, Brazil, other parts of South America, Eastern Europe, Korea, China, and elsewhere in the Asia-Pacific region, according to the report, Change in the (Supply) Chain Survey.
Electronics manufacturers already participate in a global marketplace. Given an even greater need to ship product internationally in the future, these companies will have to ensure they have reliable operations in each market where they source components, manufacture products, and distribute to customers.
"Businesses need to establish a presence in their current markets, and examine new markets," says Harris. "Ensuring they have the right transportation providers and suppliers can help minimize risk."
A third-party logistics provider with global capabilities—including access to transportation in many markets, connections to customs brokerage networks, and the ability to optimize transportation modes—may also help an electronics firm meet the challenges of operating in a global marketplace.

Speed vs. Cost

In addition to the challenges of operating in a global market, the speed with which newer, more-capable products push older ones out of the market also puts pressure on supply chains.
"Research and development drives the electronics business," says Jason Cook, managing director in the operations consulting group at Accenture. Electronics manufacturers focus on generating revenue by developing innovative products and getting them into customers' hands fast.
"It's a time-sensitive supply chain," he adds.
Shelf life plays a big role in shippers' transportation decisions. Mobile phone manufacturers, for example, typically keep a handset model on the market for just six to nine months. If a company transports handsets from the manufacturing site via ocean carrier, those products could spend 10 to 20 percent of their product lifecycle in transit.
"Time to market becomes a premium on those products," Cook says. "As a result, shippers tend to use more air freight."
Shipping products from Asian manufacturers to the United States often takes 30 days by ocean—about 10 times as long as the two to three days it takes by air. "But air can cost up to 10 times as much as ocean," Cook explains.
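To make the trade-off concrete, consider a rough back-of-the-envelope model. The short Python sketch below compares total cost by mode using the transit times and the 10-times cost ratio cited above; the freight figure, the shipment value, and the 25-percent annual inventory carrying rate are illustrative assumptions, not figures from the companies quoted in this article.

```python
# Illustrative mode-selection arithmetic; all figures assumed except the
# transit times and the "air costs up to 10x ocean" ratio cited above.

def total_transit_cost(freight_cost, shipment_value, transit_days,
                       annual_carrying_rate=0.25):
    """Freight cost plus the cost of capital tied up while in transit.

    annual_carrying_rate is an assumed inventory carrying rate (25%/year).
    """
    carrying_cost = shipment_value * annual_carrying_rate * transit_days / 365
    return freight_cost + carrying_cost

shipment_value = 10_000_000   # a $10-million shipment, per the example above
ocean_freight = 50_000        # assumed ocean freight cost for the shipment
air_freight = ocean_freight * 10  # "air can cost up to 10 times as much"

ocean_total = total_transit_cost(ocean_freight, shipment_value, transit_days=30)
air_total = total_transit_cost(air_freight, shipment_value, transit_days=3)

print(f"Ocean: ${ocean_total:,.0f}  Air: ${air_total:,.0f}")
# Carrying cost alone does not justify air in this toy case; in practice the
# decision also weighs lost shelf life, channel fill, and payment-on-delivery
# terms, as the paragraphs that follow explain.
```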
In addition to the need for speed, several other factors may prompt shippers to choose air freight, despite the cost. One such factor is the shipment's value. When introducing a new product, a consumer electronics manufacturer might need to ship freight worth $10 million, $20 million, or more. That's a lot of inventory to tie up for a month.
"Products that have a large channel fill requirement tend to favor air freight because of the inventory investment that original equipment manufacturers (OEMs) make while their product is in transit," Cook notes.
Business arrangements between OEMs and retailers often favor expedited transport as well. OEMs don't usually get paid for electronics until those products arrive in customer distribution centers. That could make air freight well worth the extra expense.
Also, the right pricing strategy can remove the sting of that cost. "If freight terms in their customer contracts allow manufacturers to cover the costs of these transportation services, they're more likely to expedite," Cook says. "They get the benefit of accelerating their revenue cycles, and they recover their investment."
They also give customers extra value by getting the product to market faster.
On the other hand, OEMs that use postponement strategies might opt for ocean freight. These are companies that, for example, don't load software onto their products, or don't do the final assembly, until it's time to ship units to specific customers.
Because these manufacturers don't need to forecast how many units with a specific configuration their customers will need, postponement makes supply chains more flexible.
Companies that ship product into a market, then hold it for final assembly or configuration, often can afford longer transit times, letting them enjoy the cost advantages of ocean transport.
"If they're not postponing—if they're putting product straight into the market—they tend to use air," says Cook.
The choices aren't always simple. Many electronics OEMs use ocean carriers, many use air, and some mix the two.
"For example, they might use air freight to fill the channels, but bring replenishment product into the market by ocean," Cook says.

Better Boxes

Whatever mode companies use to ship high-tech electronics, they must make sure delicate products arrive intact. That makes handling and packaging important concerns.
One company that has given much thought to packaging in recent months is Cox Communications, the third-largest provider of cable and broadband services in the United States. A quest for packaging to better protect its products in transit was a major component of a recent 18-month supply chain transformation initiative, says Ian Burgar, director of customer premise equipment (CPE) operations at the Atlanta-based company.
The term CPE refers to set-top boxes, cable modems, gateways, and other products cable companies provide to end users, plus business-oriented equipment such as phone systems. Some equipment customers receive is new, but much of it is returned by other customers when they no longer need it, then refurbished by the cable company. This "churn inventory" gives cable company supply chains a large reverse logistics component.
Cox buys its products from major OEMs such as Cisco, Motorola, and Arris. New equipment and product returned by customers flow into four regional distribution hubs in Chesapeake, Va.; Baton Rouge; Wichita; and Phoenix.
Employees in the DCs test and refurbish the used equipment, repackage it, and distribute much of it to primary distribution centers (PDCs) in the markets that Cox serves. The company operates a private fleet to move those units. It also uses package carriers to ship units directly to customers who do their own installations, rather than rely on the company's service technicians.
Before the transformation initiative, Cox had no packaging standards for product shipped into or out of the PDCs, either in bulk or via package carrier. Cox collaborated with the International Safe Transit Association (ISTA), a standards organization that works on packaging for transportation. "We designed bulk and direct consumer packaging to pass ISTA specifications," Burgar says.

Bulking Up

For bulk transportation, Cox developed a carton that holds four units of any product. "Foam padding lines the bottom of the boxes," explains Burgar. "Each divider has foam padding to lock the units in place, so they don't bounce around in transit." The company also bags each product to prevent cosmetic damage.
For direct-to-customer shipments, Cox developed a single-unit carton, which it held to even higher ISTA standards than the bulk carton. Testers at an ISTA certification lab dropped loaded cartons from different angles, compressed them, subjected them to high humidity, and put them on a shaker table to simulate the vibration cargo may experience during a long truck ride.
Since implementing the new packaging, Cox spends less money on cosmetic touch-ups and certain repairs.
For the future, Burgar hopes to persuade OEMs to change their own packaging practices. Today, OEMs box and palletize their products for shipment from overseas manufacturing sites to the United States; then Cox repacks the units into smaller cartons for delivery to individual markets. Cox would like its vendors to use smaller cases that can flow through the supply chain.
Burgar also would like to replace much of the company's cardboard packaging with plastic corrugate—a greener and more cost-effective material than cardboard—because shippers can re-use it for years.

Quest for Perfection

Packaging is also an important issue in business-to-business (B2B) high-tech electronics sales. Many businesses insist that packaging arrive in perfect condition.
"If they see damage to the packaging, they believe there's possible damage to the product," says Danny Stephens, global vice president of transportation at Phoenix-based Avnet. Some refuse product on that basis, without even taking it out of the box.
With a presence in more than 70 countries, Avnet sells electronic components to OEMs, and integrated computer systems to resellers and end users. It operates two major DCs in Chandler, Ariz.—one for each product line—plus several smaller U.S. facilities.
While Avnet experiences few problems with damage in transit, greater customer focus on packaging is making the company strive even harder for perfection. "We keep carrier performance metrics, and work with our carriers and engineering team on packaging," Stephens says.
Protecting product while controlling packaging costs is a continual balancing act. So is mode selection.
For companies such as Avnet that sell components to companies that favor just-in-time manufacturing, choosing a mode for international transportation is easy. "Ninety-nine percent of our international freight comes by air," Stephens says.
But for domestic transportation, the company keeps costs down by moving about 85 percent of its shipments by truck. Exceptions occur when product arrives late from one of Avnet's suppliers, or when lack of components threatens to shut an OEM's production line.
"We are a customer-driven team, so if a customer has an immediate need, the shipment moves by air," Stephens says.

High-Value Handling

The high value of many products Avnet ships, combined with its frequent use of parcel carriers, creates another challenge: insuring these valuable products.
"We have no easy, cost-effective way to insure that type of product because of the volumes we ship," Stephens explains.
The carriers that handle Avnet's bulk shipments offer liability coverage more suited to the company's needs. Luckily, Avnet rarely sees product go astray, no matter whose truck it's on.
For its bulk shipments, Avnet has cultivated relationships with carriers whose standard operating procedures align well with its own requirements.
"We ensure they're all certified by the Customs-Trade Partnership Against Terrorism (C-TPAT), and that they have secured facilities wherever they move our product," Stephens says. "We verify that they have handling procedures to support moving either electronic components or integrated computer systems."
LG Electronics—a Korea-based vendor of consumer electronics, mobile communications products, and appliances—also works with carriers to ensure that its products stay safe and secure.
"Trailer theft continues to rise in the United States, so in 2011, LG began taking steps to reduce its transit losses through a layered security approach with its service providers," says Michael Fahey, director of transportation and security C-TPAT at LG Electronics U.S.A. in Englewood Cliffs, N.J.
"We have worked over the years on detailed processes for trailer entry at our DCs, and we've created more stringent guidelines for vetting carriers," he says.
LG scans the licenses of all drivers who serve its DCs. "We also require each driver to present a valid commercial driver's license and load number, or the driver will be turned away," Fahey notes.
The varied needs of retailers that receive electronics from LG also pose shipping challenges. "All customers require different stack heights, pallets, floor loading, and number of labels," Fahey says. "This can be labor-intensive for the DC, because the shipment must be compliant, yet it must utilize the space within each trailer."
Complying with the guidelines established by large retailers such as Walmart, Target, and Best Buy is simply business as usual, and LG has always succeeded in this area. "But now even regional retailers have compliance guidelines we must follow," Fahey says.
To meet this growing demand, LG creates a checklist of customer requirements for each order. The company measures its performance monthly to ensure it is meeting customer requirements while also maintaining its own cost objectives.
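As a loose illustration of how such a per-order checklist might be automated, the short Python sketch below validates a shipment against one retailer's rules. The retailer name, fields, and limits are hypothetical, not LG's or any retailer's actual requirements.

```python
# Hypothetical per-retailer compliance check; all names and limits invented.
RETAILER_RULES = {
    "BigBoxCo": {"max_stack_in": 84, "pallet": "GMA 48x40", "labels": 2},
}

def check_compliance(retailer, shipment):
    """Return a list of compliance issues; empty means the order passes."""
    rules = RETAILER_RULES[retailer]
    issues = []
    if shipment["stack_in"] > rules["max_stack_in"]:
        issues.append("stack height exceeds limit")
    if shipment["pallet"] != rules["pallet"]:
        issues.append("wrong pallet type")
    if shipment["labels"] != rules["labels"]:
        issues.append("label count mismatch")
    return issues

print(check_compliance("BigBoxCo",
                       {"stack_in": 90, "pallet": "GMA 48x40", "labels": 2}))
# -> ['stack height exceeds limit']
```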
As information technology takes over more aspects of our lives, the electronics sector will only continue to grow. The special savvy and TLC it takes to ship these products safely and efficiently will become more important than ever. 

Relocation Logistics: Handle with Care

While high-tech electronics manufacturers and vendors tailor logistics strategies to meet the needs of delicate and high-value products, customers that buy those products might need detailed logistics plans of their own. Moving the contents of a data center to a new location, for example, requires a great deal of planning, and specialized knowledge and equipment.
Companies that perform these moves must know how to uninstall equipment at the origin, pack it properly, and secure it in the vehicle—then reverse the process at the other end. They must have well-trained drivers, and operate the right kind of vehicles for the climate.
"In a very humid environment, you need a climate-controlled vehicle, with air conditioning or a humidifier," says Bruce Cardos, principal consultant in the data center relocation practice at Data Link, Eden Prairie, Minn. Equipment moving from a hot location to a cold, wet one needs climate control as well. The key is to keep condensation from forming on equipment during the move.
Because companies rely on data centers for crucial functions such as payroll, time windows for relocations tend to be narrow. When the window is especially small, the equipment might travel by plane.
To move data centers for its clients, Data Link works with Hi-Tech Transportation Inc. of Charlotte, N.C., a carrier that specializes in electronics and medical imaging equipment. Companies moving high-tech electronics often create problems when they fail to develop a complete inventory of their equipment.
"They may pay to move outdated devices, or equipment scheduled for decommissioning," says Samuel Lolla, managing partner at Hi-Tech Transportation. "Properly identifying those devices during the inventory process is key to the future relocation design plan."
Along with good planning, a data center move requires experienced personnel and appropriate vehicles. "Vehicle breakdowns can cause huge delays, as well as unplanned, extended network outages, which ultimately cause operational problems and, in many situations, lost revenue," Lolla says.
Although specialized carriers for high-tech loads tend to be more expensive than common carriers, they also offer significant savings by averting damage. Using a carrier with equipment that offers a gentler ride can also save the shipper money on specialized crating or packaging.
"Companies realize financial gains when aftermarket packaging or crating does not need to be manufactured, or purchased and shipped," notes Lolla. Shippers also save on labor, because they don't need to repack their equipment before shipping. All these precautions add up to a smooth move.



Electronic Money Order

The International Financial System (IFS) is a standalone application that handles the electronic transfer of international money orders. IFS uses electronic data interchange (EDI) to send international money order data electronically, using sophisticated data encryption techniques to ensure the integrity of the data sent over the postal network. IFS also helps Posts provide electronic domestic money order services.

IFS is a complete management tool that responds to the needs of Posts in the electronic transfer of money orders, especially at a time when the money transfer market is becoming increasingly competitive.

Its Gross National Income (GNI)-based costing structure and low transaction fees enable a fast return on investment. The goal of the IFS system is to provide postal enterprises with reliable, secure, and timely electronic financial services, which in turn allows Posts to be more competitive in the global marketplace. The IFS system not only handles all phases of international and domestic money order processing but also provides advanced features that facilitate cash management and accounting.
   
Features

IFS has a wide range of functions (a minimal data-model sketch follows this list), including:
- Money order data collection, from registration through any processing up to final payment, reimbursement or cancellation of the transaction
- Tracking and tracing of individual transactions or groups of transactions
- Money order service monitoring and production of statistics
- Quality control measurements
- Bilateral agreement definition and automatic validation of transactions against the agreements
- Production of UPU standard international accounting documents
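To make these functions concrete, here is a minimal, hypothetical data-model sketch in Python covering the transaction lifecycle, the tracking trail, and bilateral-agreement validation. The states, field names, and limits are illustrative only and are not the actual IFS schema.

```python
# Hypothetical sketch of money order lifecycle tracking and bilateral
# agreement validation; states and fields are illustrative, not IFS's.
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    REGISTERED = auto()
    IN_TRANSIT = auto()
    PAID = auto()
    REIMBURSED = auto()
    CANCELLED = auto()

@dataclass
class BilateralAgreement:
    origin: str
    destination: str
    currency: str
    max_amount: float  # illustrative per-order ceiling

    def validate(self, order: "MoneyOrder") -> bool:
        return (order.origin == self.origin
                and order.destination == self.destination
                and order.currency == self.currency
                and order.amount <= self.max_amount)

@dataclass
class MoneyOrder:
    order_id: str
    origin: str
    destination: str
    currency: str
    amount: float
    status: Status = Status.REGISTERED
    history: list = field(default_factory=list)  # tracking/tracing trail

    def transition(self, new_status: Status) -> None:
        self.history.append((self.status, new_status))
        self.status = new_status

agreement = BilateralAgreement("FR", "SN", "EUR", max_amount=5_000.0)
order = MoneyOrder("MO-0001", "FR", "SN", "EUR", amount=250.0)
assert agreement.validate(order)     # automatic validation against agreement
order.transition(Status.IN_TRANSIT)
order.transition(Status.PAID)        # final payment closes the lifecycle
```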
 
IFS also supports a variety of money order and fund transfer services, from ordinary cash-to-cash orders to urgent wire transfers. On the IFS network, any transfer of data is protected by strong software encryption techniques. Network members are part of a Public Key Infrastructure (PKI) operated by the PTC.
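As a rough illustration of PKI-style message protection (not the PTC's actual scheme), the following Python sketch signs a money order payload with an RSA private key and verifies it with the matching public key, using the third-party `cryptography` package. The key pair, payload format, and field layout are all assumptions made for the example.

```python
# Illustrative PKI-style message integrity for a money order payload;
# NOT the actual IFS/PTC scheme. Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In a real PKI, each network member's key pair is certified by the
# operator; here we simply generate one for demonstration.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"MO-0001|FR|SN|EUR|250.00"  # hypothetical EDI-style payload

signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The receiving Post verifies origin and integrity; verify() raises an
# exception if the message was tampered with in transit.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature valid")
```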

The IFS operational front-end runs as a Web application, specifically adapted for easy deployment over an intranet or the Internet. The protocol between the Web browser and the application server can be secured (for example, with HTTPS/TLS).

IFS can easily be interfaced with an existing application, such as a counter operations system. In addition, the system user interface can be localised into any language.
  
   
Advantages

The advantages of the International Financial System include:
- IFS can be used as a simple gateway for accessing the postal network or as a complete money order management tool
- IFS complies with all UPU regulations
- IFS is easy to deploy, and staff training can be provided during the installation phase; in most cases, no additional hardware is required
- IFS is an end-to-end solution and can be used from the point of sale (at the origin) to the point of final payment (at the destination)
- IFS supports multiple international and domestic services, and service definitions are unrestricted: each network member is free to negotiate and configure service conditions, rules and prices bilaterally with each of its partners
- IFS is scalable, so the same software supports very low to very high transaction volumes
- IFS functions are constantly enhanced, based on requests from all network members
- IFS offers various options for connecting to the postal network, including leased lines, dial-up, the Internet, or SITA lines



Information Technologies in International Finance and the Role of Cities

The emergence of new information technologies is often said to reduce the importance of cities as financial centres. In this paper, several arguments against this view are developed. First, in the financial services industry, electronic information transmission, data processing and trading are not an entirely new phenomenon. Second, because the electronic grids of financial institutions spanning the globe are inherently nodal, location still matters even in the age of virtual finance. Third, the myth of the "dissolution" of cities is based on the assumption of a perfect separation of virtual and real activities that, at a closer look, does not hold. Financial decisions are made in an "experiential continuum" between the materiality of geographic space and the virtuality of cyberspace.
Keywords: Information technologies; international finance; world cities.

INTRODUCTION

Information transmission in international finance certainly did not start with Nathan Rothschild, but when he was the first in London to hear the news of Napoleon's defeat at Waterloo, allegedly making a fortune from it, the episode soon became one of the industry's favourite legends grounded in information technology. In the early 19th century, information technology in the financial industry already worked according to basically the same principles we know today, such as digital codes, data compression, error recovery and encryption. This refers not to computers and the internet, but to what one author calls "the mother of all networks": the optical telegraph. The optical telegraph was followed by the electric telegraph, the telephone, the fax machine and, eventually, the computer. As the following chapter shows, each time, financial institutions all around the world eagerly jumped at the opportunity to use the new medium for speeding up communication and trade. The recent mania about electronic exchanges and e-commerce firms offering financial information and the opportunity to buy and sell stocks, bonds, derivatives and other financial instruments online to a broad public sometimes makes one forget that this has been the usual way of doing business for large parts of the financial industry for many years.
The emergence of new information technologies and the "virtualisation" of activities are often said to reduce the importance of location for the financial industry and diminish the role of world cities such as London, Paris and Frankfurt as financial centres. This would have severe consequences for economic growth. At the beginning of the 21st century, cities are generally struggling to cope with the effects of two phenomena: the transition from an industrial, manufacturing-based economy to one in which services and information and communication technologies play a dominant role, and the competitive pressures of increasing globalisation. In this situation, financial services are one of the few sectors promising continuing growth, employment and tax income to their communities. A shrinking importance of location would threaten to reduce these benefits.
In the literature, the view that virtualisation and new information technologies reduce the role of cities and lessen discrepancies between 'cores' and 'peripheries' in the world economy is widespread (Graham and Marvin, 1996; Moss and Townsend, 1998). But mapping internet traffic and activity shows the surprising result that, in contrast to this belief, hierarchies apparently become even more pronounced. This holds in particular for financial centres. Manhattan has some of the highest densities of domain names in the world, Singapore and Hong Kong have become critical nodes in an Asian-Pacific financial network geography, and in Europe, after far-reaching deregulation in the 1980s, London developed into a centre for all kinds of information-rich financial services. In the literature, explanations for this phenomenon stress the role of headquarter and control functions, of a reliance on producer services, and of cities providing a platform for global operations (Daniels, 1993; Sassen, 1999). While some financial services are decentralising with the rise of information and communication technologies, the overall need for centralisation of control and coordination in the sector is rather reinforced, further enhancing financial agglomeration and concentration. But the relations between information technology, financial services and city growth are more complex, as the study of concepts of the nature of virtual reality, and of the linkages between real and virtual processes, shows. In order to explore the implications for the world's financial landscapes, an answer has to be found to the following questions: What characterises virtual activities in general, and in which ways do they differ from traditional ones? What is lost, and what is won, in the process of virtualisation of financial services and functions? To what extent can electronic flows replace physical activities, and to what extent and in which ways do they complement them, in the world of international finance?
The paper will proceed as follows. Section 2 will present a short chronology of information technologies in the world of finance from ancient to modern times. Since most of the discussion on the relation between finance and IT is limited to the aspects of communication and networking, section 3 will show in detail the implications of the recent convergence of information transmission and processing technologies covering a wide range of areas and activities both in traditional financial markets and new market segments and structures. It will describe developments in wholesale and retail banking and securities trading, trace the industry's process from internationalisation to globalisation and study the implications of the virtual dimension added by electronic linkages and the internet focusing on activities such as customer services, trading, risk management and settlement. Then, section 4 will look at the nature of virtual reality, and the relation between real and virtual spaces and places in general, and analyse how these are altering our view of the world's financial landscapes of centres and peripheries. Section 5 will draw some preliminary conclusions about the lessons for the future prospects of the growth of cities in the age of electronic finance.

IT IN THE WORLD OF INTERNATIONAL FINANCE - A SHORT CHRONOLOGY

The use of information technology to collect, generate and record financial data is nearly as old as the technology itself, spanning thousands of years from the earliest forms of pictorial representation and writing to the latest advances of electronic storage and transmission. From the early beginnings, financial activities were not restricted to particular locations but extended over regions and continents, and great efforts were made to overcome the limits of time and space in long-distance communication in order to transmit news about latest political and economic developments, natural catastrophes and other unforeseen events affecting markets and prices and signalling business opportunities.
Prior to the invention of telegraphy in the 19th century, information was bound to move at the same speed, and over the same distance, as the prevailing transport system would allow (Dicken, 1998, p. 151). Financial activities were largely determined by personal knowledge of people and circumstances (Favier, 1992, pp. 24 ff.). As one author put it:
Success in money and banking operating in a number of countries ... required having a large number of brothers or cousins, with a single combined interest and thinking more or less alike, to solve the agency problem. (Kindleberger, 1993, p. 258)
The most common method of communicating over long distances was to hire a person to deliver a message as fast as possible, either a human runner or a rider on horseback. Safety considerations made early rulers place guards at regular distances along the roads; these became the forerunners of the relay systems. References to messenger systems have been found dating back almost 4,000 years, to ancient Egypt and Babylon. The Romans had relay stations which kept a reserve of 40 horses and riders. And for Asia, Marco Polo reported a system of post-horses used by the Mongol ruler Kublai Khan (1215-1294), with about 200 horses per post, which reached considerable speed - in this it has been compared to the Pony Express, which operated much later (from April 1860 to October 1861) in the United States, covering the 3,200 km distance from Missouri to California in about ten days (Holzmann and Pehrson, 1994). Other means of transmission in history were homing pigeons, mirrors, flags, fire beacons and semaphores. For example, it was a pigeon that brought Nathan Rothschild the news of Napoleon's defeat. Mail was delivered by stage coach, caravans and merchant vessels. Travellers were routinely asked to take messages with them. When young Pierpont Morgan left England to go to school in Vevey, Switzerland, travelling via Calais and Paris in 1854, he was asked by the American minister in London, James Buchanan, to deliver a packet of government papers to Paris, which was not unusual (Strouse, 1999, p. 54).

Telegraph and Ticker Tape

The arrival of the telegraph made all the difference, allowing messages to be sent with great speed over very large distances. The first optical telegraph line started operating in France between Paris and Lille in May 1794. Soon other European countries followed, and in 1830 "lines of telegraph towers stretched across much of western Europe, forming a sort of mechanical Internet of whirling arms and blinking shutters" (Standage, 2000, p. 18). But the system also had its drawbacks. It was expensive to run, requiring shifts of skilled operators at each station and towers built all over the place. Besides, optical telegraphs would not work in the dark or in fog and mist.
Eventually, it was electric inventions that changed the world. The invention of the electric telegraph in the 1830s, and of the telephone in the 1870s, marked a distinct new era. Developed simultaneously in Great Britain and the United States, the telegraph made possible the first efficient direct control of operations in one distant place from another. It became the first agent of instant communication between countries and continents, coordinating international financial and commodities markets (Mokyr, 1990, p. 124). For example, prior to its introduction between New York and Philadelphia, the transmission of price data from the distant market to the local one had taken one day. The telegraph enabled investors to obtain the price information before their own market closed. Before telegraph lines were established between New York and New Orleans, dealers in foreign exchange in one city received quotes from the other with a delay of four days to a week. The telegraph reduced this to a day or less (Garbade and Silber, 1994). The telegraph fundamentally changed America's financial landscape. In 1850, there were 250 stock exchanges. Fifty years later, New York had become the dominant exchange, standing out as the only financial centre of national importance (Edwards, 1998).
However, it was not until after 1848 that the impact of the new technology on financial markets became really visible. Its expansion was fastest in the United States. In New York, in these years, there were eleven separate lines, and "it was not uncommon for some bankers to send or receive six or ten messages each day" (Standage, 2000, p. 58). In contrast, in Britain the telegraph was at first more associated with the railways. Nevertheless, when London reported the first congestion problems in the early 1850s, half of all telegraph messages were already related to the Stock Exchange. Soon there were telegraphic money transfers. However, one big problem was security. Banks had their own sophisticated private codes for money transmission but, up to the implementation of a scheme developed by Western Union (the then dominant US telegraph company) in 1872, transferring money was considered highly insecure, demanding a high level of trust between both parties and telegraph operators.
Financial market participants were among the early users of telegraph facilities, but for them the real watershed was the submarine cable. London became linked to Paris in 1851, to New York in 1866 and to Melbourne in 1872 (Inwood, 1998, p. 480). The transatlantic cable was used to arbitrage the London and New York securities markets immediately after its opening. Some days after the event, the New York Evening Post started publishing price quotations from the London market (Garbade and Silber, 1994). In those years, English investors held a substantial volume of US Treasury debt, which traded in London as well as New York. Before the establishment of the submarine link, information travelled with a time delay equal to the duration of an ocean crossing, or about three weeks. And since purchase and sale orders directed to the foreign market had to cross the Atlantic too, execution took the same time once again. After the opening of the transatlantic cable, those delays were reduced to one day. By the 1890s, telegrams between the London and New York Stock Exchanges took three minutes from sender to receiver (Headrick, 1988, p. 104).
Britain became the leader in the world submarine cable business. British firms laid most British and non-British cables in the world and owned 24 of the world's 30 cable ships, which earned the country an overwhelming military and economic advantage. London became not only the centre for financing most of the international submarine cable business but also the major communications node, reinforcing Britain's position as the foremost naval, commercial and financial power in the world. French journalists, for example, had to learn that "news of commercial importance - commodity prices, contracts, ships' arrivals and departures, etc. - passed through London before reaching Paris. British newspapers and the Reuters news agency received reports of world conditions sooner and in more detail than their French counterparts." (Headrick, 1988, p. 114)
The telegraph changed the newspaper business. Prior to its invention, newspapers mostly covered a small locality, with news of distant places often taking weeks to reach readers. Only a few larger newspapers had correspondents in foreign countries, and their letters took weeks to arrive as well. With the telegraph, news from distant places became available instantly, and the question arose of who ought to be doing the reporting (Standage, 2000, pp. 140 ff.). For newspapers, one obvious solution was to form groups and cooperate, establishing networks of reporters. The advantage was a far greater reach without the expense of maintaining their own reporters in distant places. In the United States, the first news agency, the New York Associated Press, a syndicate of New York newspapers, was set up in 1848. In Europe, there had been a correspondents' network even before the arrival of the telegraph. Established by Paul Julius Reuter, it used carrier pigeons to supply business information and prices of bonds, stocks and shares. Initially, it operated between Aix-la-Chapelle and Brussels. When England and France were linked by telegraph, Reuter "followed the cable", moving to London as the financial capital of the world and the centre of the telegraphic network.
Communication facilities provided by the telegraph were often incomplete. For example, when telegraph communication began between New York and Philadelphia - where securities were dually listed - the line at first reached only from Newark, New Jersey to Philadelphia. Then, within a few months, the line was extended to Jersey City, with messages ferried across the Hudson River to Wall Street (Garbade and Silber, 1994). In Europe, the picture was similar. For instance, Paul Julius Reuter supplemented incomplete telegraphic communications systems on the continent by using carrier pigeons to bridge the gaps (Hall and Preston, 1988, p. 45).
In those years, telegraph companies started offering the business community regular bulletins containing a digest of the morning papers or a summary of the most recent market prices. But, in contrast to other businesses, for the financial industry daily or twice-daily reports soon proved not enough. Demand for more frequently updated information led to the development of the stock ticker, a machine producing a continuous record of stock price fluctuations printed on a paper tape. The name of the machine was derived from the characteristic chattering sound it made. The ticker was a great success, immediately finding hundreds of subscribers throughout New York's financial district.
These days, the telegraph is often compared to the internet:
During Queen Victoria's reign, a new communications technology was developed that allowed people to communicate almost instantly across great distances, in effect shrinking the world faster and further than ever before. A worldwide communications network whose cables spanned continents and oceans, it revolutionised business practice, gave rise to new forms of crime, and inundated its users with a deluge of information. Romances blossomed over the wires. Secret codes were devised by some users, and cracked by others. The benefits of the network were relentlessly hyped by its advocates, and dismissed by the sceptics. Governments and regulators tried and failed to control the new medium. Attitudes to everything from newsgathering to diplomacy had to be completely rethought. Meanwhile, out on the wires, a technological subculture with its own customs and vocabulary was establishing itself. (Standage, 2000, p. 1)
And, the first intranets emerged as well: From the 1870s onwards big companies with several offices began leasing private lines for internal communication. The advantage was that in this way internal messages could be sent for free allowing organisations to be controlled from a head office. This laid the foundations of today's large hierarchical companies and financial conglomerates.

Telephone and Telex

With the invention of the telephone in the late 1870s, the telegraph increasingly lost out to the new technology, although it remained the main form of long-distance international communication until well into the 20th century (Hall and Preston, 1988, p. 44). Initially, the telephone was widely seen as another, improved kind of telegraph - a "speaking" telegraph - not a wholly distinct technology (Standage, 2000, p. 185). Again, business use started in the United States, where by 1887, only ten years after commercial introduction, "there were 743 main and 444 branch exchanges connecting over 150 000 subscribers with about 146 000 miles of wire" (Hall and Preston, 1988, p. 50). In Europe, mainly due to political reasons, progress was much slower. In 1885, for example, the United Kingdom had only about 10,000 subscribers. Bell and Edison actively engaged in the transfer of their systems to European cities. Bell himself came to Europe in 1877. Promoters of Edison's patents were closely linked to American banks in London that had the necessary financial resources, organisations and foreign contacts to move technology across national boundaries (Hall and Preston, 1988, p. 51).
The telephone facilitated all kinds of financial and foreign exchange transactions making it much easier for the financial community to take advantage of discrepancies in rates prevailing at the same time in different locations. As a consequence, those differences tended to narrow down considerably. But, the process was handicapped by the inadequacy of long-distance telephone communications. Even after the second World War, long delays for lines for trunk calls to correspondents abroad were frequent. "It was not until the 'fifties that the improvement of long-distance telephone service and the adoption of the 'telex' system made it virtually impossible for such discrepancies to continue for any length of time between markets with identical business hours. By the 'fifties it could be said with very little exaggeration that it was almost as easy to transact business with a bank in a foreign centre as with one just across the road." (Einzig, 1970, p. 239)
In the early postwar years the telex - an international system of sending written messages where the text is typed on a machine in one place and immediately printed out by a machine in another place - became the major medium of telecommunications (Dicken, 1998, p. 155) and the single most important determinant of the world structure of financial centres. This had two aspects. First, the greater the number of direct links between centres provided by local and foreign banks via telex, the greater the quality of unfiltered information, i.e. information transmitted not through relatively public networks but through direct banking links. Second, since bankers used international telex facilities to the exclusion of nearly all other communications systems, the volume of international telex activity became an indicator of a place's international influence as a financial centre (Reed, 1994).
In the 1960s and 1970s, telex and ticker tape became of decisive importance for the growth of a new kind of financial markets, external markets or so-called euromarkets. Euromarkets, unlike conventional financial markets, occupy no fixed location but from the start relied on international telecommunications networks linking financial centres throughout the world (Langdale, 1994). Their beginnings date back to the postwar period when the widespread use of the US dollar as a vehicle currency for making payments in international transactions, the easing of exchange restrictions in major European countries and the overall growth in European business after the formation of the Common Market contributed to the need for external markets for currency deposits and loans (Dufey and Giddy, 1994). But, their true take-off is often related to the 1960s, to the restraints on foreign portfolio investment in the United States (the interest equalization tax) and on US bank lending abroad.

The Beginnings of Electronic Dealing

In the 1970s, another information technology revolutionised financial trading: videotext. The technique allowed data to be recorded on magnetic tape and displayed on a television screen, making possible video conferencing - the meeting of people in various places around the world by seeing and hearing each other on a screen. But for the world of international finance, what became even more important was screen trading. A series of companies emerged providing equipment that placed information directly on the desks of dealers, thereby threatening the traditional role of the trading floor as the centre of activity. At the beginning of the 1970s, the terminals of Telerate and Quotron in the United States, and Reuters, Extel and Datastream in Europe, displayed prices fed to them by banks, brokers and dealers, and allowed the information to be changed live, or online, at the originator's request (Hamilton, 1986, pp. 41 ff.). Another revolution took place in the stock markets. In response to a crisis in OTC securities dealings in the late 1960s in the United States, the National Association of Securities Dealers developed NASDAQ, an electronic dealing system. Installed in 1971, NASDAQ consisted of 20,000 miles of leased telephone lines connecting dealers' terminals with a central computing system that recorded prices, deals and other information. Trading volume soon rose dramatically and by 1985 reached more than 16 billion shares, with a value of some $200 billion, making NASDAQ the third-largest stock exchange in the world behind those of New York and Tokyo.
Another electronic system, Instinet, started in 1969. This system, backed by Merrill Lynch and other groups, aimed at providing a low-cost trading network, along the lines of foreign exchange trading, for institutions buying and selling shares in bulk; it later quoted not only US stocks but also foreign stocks and options on stocks and currencies from the CBOE (Chicago Board Options Exchange). The latter was a first step towards the automation of derivatives trading - a market which traditionally had been considered most resistant to being removed from the exchange floors because of the large sums involved and the volume brought by "locals", or independent floor traders. The same tendency became visible in Europe when the London International Financial Futures Exchange (LIFFE) was founded in 1982. Although it kept the open-outcry system of floor trading, from the beginning LIFFE had a high degree of automation in quotation and settlement. Later, many traditional exchanges followed this trend and invested in modern technology such that everything but the final order execution became automated (OECD, 2001).
Increased automation was accompanied by a growing tendency towards globalisation. One step in this direction was the establishment of a trading link between the Singapore International Monetary Exchange (SIMEX) and the Chicago Mercantile Exchange (CME) in 1984. This was the first of several networks and systems of increasingly globalised automated securities trading and a forerunner of Globex, the system jointly developed by the CME and Reuters which could electronically match buy and sell orders from computer terminals around the world as soon as the Chicago markets were closed (Dubofsky, 1992). The first fully electronic exchange in Europe was the Swiss Options and Financial Futures Exchange (SOFFEX), founded in 1988. Through its merger with Deutsche Terminbörse (DTB) in 1998, it became EUREX which, in 2001, was Europe's biggest derivatives market measured by the number of contracts traded. These days, outside of Switzerland and Germany, EUREX has access points in Amsterdam, Chicago, New York, Helsinki, London, Madrid, Paris, Hong Kong and Tokyo.
In the 1990s, the era of the telex came to an end; it was largely displaced by the fax, a machine used to copy documents by scanning information electronically along a telephone line and to receive copies sent in this way, and, more recently, by electronic mail, which uses the internet to send written messages and all kinds of information and data electronically from one computer to another. But fax and e-mail were only two of the small but highly significant changes occurring in a much larger transformation of transmission channels in global communications brought about by the development of satellite technology and optical fibres. The use of satellites for commercial telecommunications dates back to the 1960s. The Early Bird, the first geostationary satellite, launched in 1965, was able to carry 240 telephone conversations or two television channels simultaneously. The Intelsat V launched in 1995 by a multi-nation consortium of 122 countries carried 22,500 two-way telephone circuits and three television channels. When privatised in 2001, Intelsat had 450 customers in 215 countries. This and other global satellite operators now offer broadcast, video, broadband, multimedia, internet and telecommunications services. There are regional systems as well, such as Eutelsat, and even privately owned ones like those of Citibank, Merrill Lynch, Prudential Bache and others (McGahey et al., 1994, p. 131). Transmission by satellite has the advantage that costs are largely insensitive to distance. Within the satellite beam, whether transmitting five hundred or five thousand miles, to two points on earth close together or far apart, makes no difference. But the brief heyday of satellite communications is already being challenged by a new technology: optical fibre cables. These have a still far higher carrying capacity, transmitting information at very high speed and with great signal strength. Again, large financial firms are installing private networks to "bypass" public ones (McGahey et al., 1994, p. 131).

CONVERGENT TECHNOLOGIES, MARKETS AND STRUCTURES

The 1970s and 1980s saw a convergence of two initially distinct technologies: communications technology, concerned with the transmission of information, and computer technology, concerned with the processing of information. Computers and telecommunications became integrated into a single system of information processing and exchange (Dicken, 1998, p. 151), affecting a wide range of areas and activities such as management information systems, professional databases, integrated text and data processing, professional problem solving, transaction clearing systems, online enquiry and electronic mail. The changes for the financial industry were substantial. Together with a far-reaching liberalisation of financial markets and capital flows in many parts of the world, the new technologies allowed financial institutions the transition from internationalisation to globalisation - from the central operation and control of worldwide activities to the dispersion of central functions to all major nodes of the world economy and their constant interaction within large networks. At the same time, they revolutionised not only the way in which financial instruments are traded but a wide spectrum of activities, from information gathering, price discovery and trading, over portfolio and risk management, to clearing and settlement and mergers and acquisitions.
Before the advent of the internet and the rise of fibre-optic networks, only very large organisations and firms fully utilised the new technologies. Transnational corporations (TNCs) became the main users of international leased telecommunications lines for Electronic Data Interchange (EDI) and Electronic Funds Transfer (EFT), which are regarded as a key factor in speeding innovation, mobility of capital and competitive advantage within organisations. In general, international EFT encompasses a wide range of payment systems and services differing, among other things, by ownership, user access and geographical extent. For interbank transactions, the SWIFT (Society for Worldwide Interbank Financial Telecommunication) network became a cheap, reliable and secure alternative to public services (Langdale, 1994, p. 437). SWIFT is a private international telecommunications service for member banks and qualified participants. It provides an international network for a large range of interbank communications, including money transfers, letters of credit and many more. SWIFT was founded in 1973 as a cooperative nonprofit organisation with headquarters in Brussels. In the beginning, it had 239 member banks from 15 countries. Operation started in May 1977 with 15 banks in Belgium, France and Britain. Today there are over 7,000 members from 194 countries.
For intraorganisational communications, large TNCs and transnational banks alternatively use their own networks, with lines leased from PTT (Postal, Telephone and Telegraph) authorities. They are motivated by reliability and availability concerns as well as cost and control considerations (Langdale, 1994, pp. 437 f.). The banks' networks have long been an integral part of competitive strategies to attract large customers with sophisticated transmission requirements, such as TNCs, and they traditionally compete strongly with one another and with institutions like SWIFT, PTTs and others offering EFT services. On the other hand, TNC networks themselves have become a source of competition to financial institutions by handling a substantial share of their own financial transactions, bypassing the services of banks. At the beginning of the 1990s, internal data and voice transmissions within TNCs comprised an estimated 50 per cent of all cross-border communication flows between nations (Graham and Marvin, 1996, p. 138). The spatial form of the networks linking banks and firms across time zones and spaces may be categorised into three broad types, reflecting various degrees of internationalisation and globalisation (Figure 1). In traditional centralised systems (Figure 1a), both regional and national nodes are connected directly with the firm's headquarters or centre of control of international activities by low-speed leased circuits. In a truly global network (Figure 1c), by contrast, there are linkages between national and regional hubs, with the latter connected to the global centre and to one another by high-speed circuits. Regional hub-and-spoke networks (Figure 1b) are an in-between form. TNCs and transnational banks may use any one of the three or a mixture of types. The larger and more extensive their operations, the more likely they are to use either regional hub-and-spoke or global networks.
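The three types can be made concrete with a small sketch. The Python fragment below encodes hypothetical examples of the centralised, hub-and-spoke, and global topologies as adjacency lists and counts message hops between two regional nodes; all node names are invented for illustration and do not describe any particular bank's network.

```python
# Hypothetical adjacency lists for the three network types of Figure 1;
# node names are invented for illustration.
centralised = {                      # (a) every node linked to the HQ only
    "HQ": ["NY", "LON", "TOK", "SIN", "FRA"],
}

hub_and_spoke = {                    # (b) regional hubs collect spokes
    "HQ": ["NY", "LON", "TOK"],      # hubs linked to the global centre
    "NY": ["TOR", "MEX"],
    "LON": ["FRA", "PAR"],
    "TOK": ["SIN", "HKG"],
}

global_net = {                       # (c) regional hubs also linked directly
    "HQ": ["NY", "LON", "TOK"],
    "NY": ["LON", "TOK", "TOR", "MEX"],
    "LON": ["NY", "TOK", "FRA", "PAR"],
    "TOK": ["NY", "LON", "SIN", "HKG"],
}

def hops(net, src, dst):
    """Breadth-first search over an undirected view of the adjacency list."""
    edges = {}
    for a, nbrs in net.items():
        for b in nbrs:
            edges.setdefault(a, set()).add(b)
            edges.setdefault(b, set()).add(a)
    frontier, seen, depth = {src}, {src}, 0
    while frontier:
        if dst in frontier:
            return depth
        frontier = {n for f in frontier for n in edges.get(f, ())} - seen
        seen |= frontier
        depth += 1
    return None

# A Paris-to-Singapore message needs 4 hops via headquarters in (b), 3 in (c).
print(hops(hub_and_spoke, "PAR", "SIN"), hops(global_net, "PAR", "SIN"))
```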

Retail Banking and Exchanges

The new information technologies profoundly affected customer relations in the financial industry. Traditional modes of business were first altered by automation and the possibilities of home banking. The first automated teller machines (ATMs) - unmanned terminals used to dispense cash, take instructions on fund transfers and summarise information on the state of the account, among other features - emerged during the 1970s and early 1980s. The greatest investments in this field were made in the United States, followed by France, where the nationalised banks were encouraged by the government to introduce the new technology. With respect to home banking, Scandinavia, and Finland in particular, led the development. This was explained by high labour costs and the isolation of the rural population in those countries (Hamilton, 1986).
Another novelty was credit cards, which, again, were first introduced in the USA. This time, it took a full decade before the development reached Europe or Asia. One of the reasons was same-day settlement. The US Fedwire system, a private network for transfers between financial institutions with accounts at the Federal Reserve Bank, was far ahead of comparable systems elsewhere. Founded in 1914, Fedwire was computerised in the early 1970s and modified in 1982. The system handled transfers relating to commercial as well as money market, foreign exchange and securities transactions. In comparison, in the 1970s the nearest equivalent in Britain, the Bankers' Automated Clearing Service (BACS), founded in 1968, was intended to handle only standardised payment instructions. In London, until 1984 "the actual clearing of cheques between banks was achieved by bringing the cheques, orders and instructions to a central clearing hall ... where they were redistributed to the desks of the clearing banks and the Bank of England." (Hamilton, 1986, p. 36)
The emergence of the internet, a network based on packet-switching technology and independent of command and control centres (Castells, 1996), added an additional dimension to the world of electronic retail finance. New market segments and structures emerged. Cost reductions from electronic commerce were substantial. One credit card company estimated that the cost of processing purchase orders had declined from $125 to $40. In 2000, the cost of a financial transaction for a US bank was $1.27 at a teller, $0.27 at an ATM and $0.01 for an online transaction. Costs in back-office operations and brokerage transactions were reduced, too, leading to online brokerage fees below $5, compared with those of traditional discount brokers exceeding $50 (Lucking-Reiley and Spulber, 2001, p. 57). But the most dramatic effect of the internet was the virtualisation of individual customer activities. Suddenly, everyone with a PC and telephone had access to financial information worldwide and trading opportunities never known before. This development was not restricted to the US. Estimates for Europe in January 2000 suggested that an average of 466 new online accounts were being opened in Sweden every day, 685 in Britain and 1,178 in Germany. At the same time, in Asia, South Korea had the highest proportion of online trading in the world, at around 30 per cent of stock market turnover. And in Japan, the market leader Nomura boasted that its online accounts were growing by 1,100 a day (The Economist, 2000). Banks, securities houses and non-traditional players started to set up trading platforms and offered all kinds of financial services online. Most are now facing difficult times, since online trading by individuals ground almost to a halt after the near-collapse of the market in new-economy stocks in the year 2000 and the concomitant sharp decline in profit margins. Pressures to consolidate and find partners are rising, and those pursuing a multi-channel banking approach, keeping voice-broking elements beside their electronic systems, now seem to fare best (Semler, 2001).
In recent years, in both retail and wholesale trading, brokers and exchanges have faced increasing competition from so-called electronic communications networks (ECNs). Those include professional trading systems such as Instinet, which has become the world's biggest electronic broker. These days, Instinet is trading in about 40 markets with offices in London, Frankfurt, Paris, Zurich, Hong Kong, Tokyo and Toronto. Other ECNs are order-matching services like Posit and E-Crossnet. Most are owned by traditional market participants or brokers and regulated as investment companies rather than exchanges. The spread of ECNs dates back to US regulatory changes in the mid-1990s authorising non-traditional players to set up trading platforms. There are estimates that in the United States ECNs now account for 30 per cent of trading in securities listed on NASDAQ. In contrast, in Europe, they have so far played a minor role, accounting for less than five per cent of equity trading turnover (OECD, 2001, p. 49). However, there is one area where they have started to become more important, namely foreign exchange trading. Here ECNs are held partly responsible for the recent decline in trading volumes (Bank for International Settlements, 2000, pp. 98 f.).
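To give a feel for the basic mechanism behind such order-matching services, here is a minimal sketch in Python. It is not any actual ECN's algorithm; the orders, the price-priority rule and the convention of executing at the ask price are all invented for illustration.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Order:
    price: float
    qty: int = field(compare=False)

def cross(buys, sells):
    """Match highest bids against lowest asks; return executed trades."""
    bids = [(-o.price, o.qty) for o in buys]   # negate prices for a max-heap
    asks = [(o.price, o.qty) for o in sells]
    heapq.heapify(bids)
    heapq.heapify(asks)
    trades = []
    # Orders cross as long as the best bid meets or exceeds the best ask.
    while bids and asks and -bids[0][0] >= asks[0][0]:
        (nbid, bqty), (ask, aqty) = heapq.heappop(bids), heapq.heappop(asks)
        qty = min(bqty, aqty)
        trades.append((ask, qty))              # execute at the ask price
        if bqty > qty:
            heapq.heappush(bids, (nbid, bqty - qty))   # residual bid
        if aqty > qty:
            heapq.heappush(asks, (ask, aqty - qty))    # residual ask
    return trades

# Invented orders: the aggressive bid crosses with the cheaper ask first.
print(cross([Order(101.0, 300), Order(100.0, 200)],
            [Order(99.5, 250), Order(100.5, 400)]))
# -> [(99.5, 250), (100.5, 50)]
```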
Another strongly growing segment of electronic trading is fixed-income securities. Only recently have advances in computer software made it easier to categorise and match types of outstanding bonds, and the internet gave small investors access to information that previously had been the province of broker-dealers and large institutional investors. In 1999, there were 68 electronic systems for institutional trading of bonds, compared with 11 three years before, five of which operated only in Europe. In the US, after consolidation in response to overcapacity, contraction in the internet economy and unsuccessful business models, the number had declined to 49 by 2001, while in Europe it had grown to 24 systems by the end of 2001 (Wiggins, 2001). In addition, electronic trading of bonds by retail investors via the internet is growing rapidly. Nevertheless, so far, fixed-income markets still lag behind those in equities (OECD, 2001, p. 50).
In bond and equity markets alike, competition between traditional and electronic exchanges and ECNs has intensified in recent years, reinforcing a tendency towards alliances, mergers and pan-European and worldwide 24-hour trading. One example is the merger of the bourses of Paris, Amsterdam and Brussels to form Euronext, which then won the battle for LIFFE in October 2001. Others are the (failed) hostile takeover bid for the London Stock Exchange by OM Gruppen of Sweden, and the (equally failed) plan to create iX by merging the London and Frankfurt stock exchanges and possibly Madrid and Milan. Worldwide alliances include the creation of markets in various countries with local partners using a common technology, a strategy applied, for instance, by NASDAQ, which has so far established NASDAQ Europe, NASDAQ Japan and NASDAQ Canada. Another example is the Globex Alliance which, beside the CME and SIMEX, now includes Euronext, Brazil's Bolsa de Mercadorias & Futuros (BM&F), the MEFF Renta Fija Spanish Exchange for Fixed Income Derivatives and MEFF Renta Variable Spanish Exchange for Equity Derivatives, and the Montreal Exchange. In addition, there is a CME-LIFFE Partnership (Figure 2).
In all regions of the world, traditional leaders in electronic stock trading are facing competition from smaller exchanges. In the United States, for example, NASDAQ is challenged by exchanges such as the American Stock Exchange, the Cincinnati Stock Exchange and the Boston Stock Exchange, which are all seeking to expand into NASDAQ trading, aiming to offer services to securities firms and others that trade NASDAQ stocks (Labate, 2001). In Asia, traditional trading centres such as Tokyo, Singapore and Hong Kong are experiencing competition from electronic trading in smaller places like Korea where, for example, trading volumes on Kofex have surged since its start in April 1999, reaching a cumulative four million contracts in January 2001. In Europe, smaller electronic exchanges are struggling for survival in the new financial landscape. Prominent examples are VirtX and Jiway. VirtX is a pan-European stock exchange that was formed in 2001 by a merger of Tradepoint, the UK electronic platform, and the blue-chip segment of the Swiss Stock Exchange. It specialised in cross-border trade, undercutting other exchanges with cheaper fees for trading and settlement. Jiway was launched by OM Gruppen and Morgan Stanley Dean Witter in 2000 as an online cross-border exchange for retail investors. In September 2001, it was bought out by OM in an attempt to cut costs by integrating its exchange operations with those of the OM London Exchange.

Trading, Risk Management and Settlement

IT convergence affected more than retail banking and exchanges. When information became available instantaneously, or in "real time", and new computer technologies and software made it possible to monitor and analyse huge data sets and discover previously unseen relations within them, new opportunities arose in other areas as well. This holds in particular for large banks and institutional investors. Examples are program trading and portfolio insurance: the former is a kind of arbitrage between cash and derivatives markets to exploit minor discrepancies in pricing signalled by computers; the latter is a way in which major institutional investors seek to lock in profits on their portfolios, or hedge against losses, by operations in various markets with trades initiated automatically by computer programs. Another example is options trading. Option prices are calculated according to complex formulas, which explains why, before the arrival of the necessary computing facilities, trading in those markets was rather modest compared to current volumes. The availability of those technologies also paved the way for the rise of so-called hedge funds. Those are privately subscribed funds that take highly leveraged positions. Although they have their roots in the United States, where they have a long history (Bennett and Shirreff, 1994, p. 29), nowadays most of them operate largely unregulated from offshore centres.
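To give a flavour of the "complex formulas" involved, the sketch below implements the Black-Scholes price of a European call option, the canonical pricing model of the period; the input numbers are purely illustrative.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """European call price: spot S, strike K, maturity T in years,
    risk-free rate r and volatility sigma (both annualised)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf          # standard normal cumulative distribution
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Purely illustrative inputs: a one-year at-the-money call.
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))
# -> 10.45
```

Evaluating this for every strike and maturity quoted in a market, many times a day, is trivial with cheap computing power and prohibitive without it, which is the point the text makes about pre-computer option volumes.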
But, there were also trades and activities that ceased to exist with the emergence of the new technologies. With real-time information on remote markets and prices, and the opportunity to react instantaneously to emerging discrepancies, dealers started adjusting quotations quasi-automatically, with the result that, for example, in foreign exchange trading the arbitrageur in the traditional sense of the word has largely disappeared. What is now called arbitrage generally is not aimed at taking advantage of different prices in different markets or regions but at exploiting price differences in time. Dealers buy or sell a financial instrument in the hope of making a profit by reversing the transaction at a time in the not too distant future - which in the interbank market, for instance, usually means later in the day (Reszat, 1997, pp. 60 ff.).
With the new technologies at hand, new approaches in financial research emerged, too. Stimulated by the latest developments in the natural sciences and encouraged by the advances in computer and information technologies, in the 1990s scholars of finance began to apply concepts such as chaos theory, fuzzy sets and neural networks, which require the analysis of huge data sets, to their own discipline, and developed new techniques for time series analysis of high-frequency data which almost instantly found their way into the research departments of banks and other financial institutions. At the same time, and partly as a result of these efforts, the overall portfolio and risk management of those institutions became far more sophisticated. Without the new technologies, the shift in emphasis from classical asset-and-liability management (ALM) in banking business to the value-at-risk (VAR) concept as an international standard for risk management would have been unthinkable.
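As a minimal illustration of the value-at-risk idea, the sketch below estimates VAR by historical simulation, one of several standard approaches; the return series and portfolio value are invented for illustration.

```python
# Historical-simulation value-at-risk: a minimal sketch.
# The daily return series and portfolio value are invented.
def historical_var(returns, portfolio_value, confidence=0.99):
    """Loss level not expected to be exceeded on a fraction
    `confidence` of days, taken from the empirical loss distribution."""
    losses = sorted(-r * portfolio_value for r in returns)
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

daily_returns = [0.004, -0.012, 0.007, -0.025, 0.001,
                 -0.008, 0.013, -0.017, 0.002, -0.030]
print(historical_var(daily_returns, portfolio_value=1_000_000))
# -> 30000.0 (the worst daily loss in this tiny sample)
```

In practice the same calculation runs over thousands of risk factors and positions, which is exactly the data-handling burden the text says only the new technologies made bearable.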
With the possibility of handling ever-growing flows of data, clearing and settlement of monetary and financial transactions became easier and more transparent, too. As a result, payment system risk in international banking was reduced substantially. Until recently, most large-value interbank payment and settlement systems worldwide operated on the basis of netting. In the course of the day, the systems kept track of the net positions of banks which were sending thousands of payment instructions to each other, and at the end of the day the net amounts owed were settled by means of transfers between the participants' accounts at the central bank (Borio and Van den Bergh, 1993). Meanwhile, in most G10 countries real-time gross settlement (RTGS) has been introduced, at least for some financial transactions. There are a number of variants of RTGS systems, but common to all of them is that funds transfers are settled individually as soon as the corresponding orders are sent, provided that the sending bank has sufficient cover in its account with the central bank.
Besides, there are private-sector solutions to reduce payment system risk. Not least owing to the availability of new technologies, banks have begun to improve their back-office payments processing, correspondent banking arrangements and risk management procedures. Beyond those efforts, institutions worldwide have started thinking about strategies to jointly limit payment system risks. Some are 'netting' trades bilaterally or on a multilateral basis by pooling their trades in a particular currency and cancelling out offsetting ones, with settlement at the end of the day. In August 1995, a group of European banks consisting of large British banks as well as several French, Dutch and Scandinavian institutions launched the Exchange Clearing House (Echo) in London, thereby extending the concept of netting to a multilateral basis. Participants of Echo no longer make payments to each other but to the Echo clearing house. Compared to bilateral procedures, multilateral netting reduces payment flows still further since transfers are made by all members to a single counterparty. A competing system, which received regulatory approval in December 1996, is Multinet, established by six Canadian and two US banks.
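A toy sketch of the arithmetic involved (hypothetical banks, invented amounts): it compares the funds that must move when every instruction is settled gross, as under RTGS, with the flows left after multilateral netting against a single clearing house.

```python
from collections import defaultdict

# Gross interbank payment instructions: (payer, payee, amount).
# Banks and amounts are invented for illustration.
obligations = [
    ("A", "B", 100), ("B", "A", 70),
    ("B", "C", 50),  ("C", "A", 30),
]

# Gross settlement (as under RTGS): every instruction moves in full.
gross_flow = sum(amount for _, _, amount in obligations)

# Multilateral netting: each bank settles only its net position
# against the clearing house (negative = pays in, positive = receives).
net = defaultdict(int)
for payer, payee, amount in obligations:
    net[payer] -= amount
    net[payee] += amount

net_flow = sum(v for v in net.values() if v > 0)
print(f"gross flow: {gross_flow}, net flow: {net_flow}")
# -> gross flow: 250, net flow: 20
```

Even in this four-instruction example the amounts actually transferred shrink by an order of magnitude, which is why netting was so effective at containing payment system risk.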
While settlement of interbank payments became safer, in other markets progress was much slower. This holds in particular for cross-border securities settlement, although with considerable differences between regions and markets. For instance, in Europe, the annual cost of maintaining a share clearing and settlement infrastructure amounts to between $1 billion and $1.2 billion, compared to an estimated $600 million in the United States. While for fixed-income securities the emergence of an international bond market in the 1970s led to the creation of two international clearers - Euroclear and Cedel - clearing and settling shares traded between European countries remained strongly fragmented. There are between 20 and 30 institutions in charge of it, some run as for-profit organisations, others not. But, in recent years, with mounting competitive pressures, the industry has seen some consolidation in these markets and between bond and equity markets: Cedel and Deutsche Börse Clearing merged to form Clearstream. Sicovam, the Paris settlement system, merged with Euroclear, which was then joined by CIK and Necigef, the central securities depositories of Belgium and the Netherlands, and in London the CCP, a central counterparty for stocks, was formed as a joint initiative by the London Stock Exchange, the London Clearing House and Crest. At the end of 2001, the LSE reached an agreement with the London Clearing House and Euroclear allowing its international customers to use Euroclear to settle trades. There is an initiative by the world's biggest banks to create a single central clearing counterparty for Europe's equity markets, but obstacles such as legal and tax differences, cultural and linguistic problems and differences in national supervisory systems are high.
The consequences the new technologies had for the financial industry as a whole and for location choices were far-reaching and not clear cut: on the one hand, they facilitated activities such as data processing, trading, risk monitoring and settlement and, in allowing operations to be steered from afar, loosened the ties to particular places. On the other, they made management operations and control much more difficult, calling for a growing centralisation of headquarter functions which, in turn, strengthened the importance of location. Besides, they affected firms' spatial behaviour by enabling a rising number of financial institutions to strive for market shares beyond regional borders or on a global level, or to become organised in worldwide networks in order to achieve scale economies through cooperation with others. The result was an increasing overall level of international financial activity, and a rising importance of financial services for the growth and development of cities. Competition between places rose and, at the same time, complementary linkages between them became more pronounced as well. Exchanges merged, but they did not centralise functions to an extent that made them settle down in one single location. Banks started exploiting the comparative advantages of places by buying local firms and concentrating activities like investment banking and foreign exchange trading in one place, but, at the same time, they maintained other functions and regional headquarters and control centres elsewhere. Seen as a whole, spatial relations and structures in international finance became more flexible and complex. This explains why the effects of the new technologies on world financial landscapes attract growing attention in the literature.

WORLD CITIES IN THE AGE OF ELECTRONIC FINANCE

The advent of the internet, e-mail and electronic trading platforms is often said to have opened a new dimension to the idea of location in the financial industry. Even within firms, physical closeness no longer seems necessary, and "virtual" market places are increasingly competing with real ones. The first IT revolution, with the emergence of the telegraph and telephone, largely contributed to the rise and economic prosperity of world cities through its impact on international finance. Now there are growing fears that the second one, characterised by the convergence of communications and computer technologies, may equally contribute to diminishing the cities' future role and reducing their growth prospects. In order to assess whether these fears are justified, answers have to be found to two fundamental questions: what are the determinants of the world's financial landscapes in general, and in which way do the new technologies alter the rationale for financial institutions to choose a particular location?

Centres and Peripheries

Historically, the world's financial landscapes are characterised by patterns of centres and peripheries. In the Middle Ages, Venice and Genoa in the south, and the Champagne fairs, and later Bruges and Amsterdam, in the north were the first nodes of financial activity in Europe. Later, London became the financial centre, first of Europe and then of the world, a role it kept until the First World War, then temporarily lost to New York, only to regain it in the 1960s with the advent of the euromarkets (Reszat, 2000). But, there are regional differences. For example, in the New World the first financial centres emerged only in the 19th century. In Asia, the phenomenon is still much younger, although the region is the origin of one of the oldest informal remittance and money exchange systems (Reszat, 2001). In contrast to western banks, money changers and bankers in Asia traditionally relied on a private web of finance and long had no need for public institutions such as fairs and exchanges functioning as a "centre". Besides, it is not always international finance that determined a place's rise as a financial centre in the first place. Both Tokyo and New York, for example, owe their importance for the financial industry more to their countries' economic strength, and their regional and worldwide economic dominance, than to the international activities concentrated there. Regional differences also exist with respect to the dynamics of competition between centres. While in Europe, in the process of monetary union, London's predominance is challenged by Frankfurt and Paris, and in Asia places like Singapore and Hong Kong are threatening Tokyo's leading role - with new competitors on the horizon in both regions - in the United States Chicago is still the only serious rival to New York.
Traditionally, regional and international financial centres are located in world cities. Those are characterised by high densities of economic factors such as labour, money, commodities, services and information. Their influence stretches far beyond their boundaries to places and regions worldwide. Together they form an international network that, although encompassing only a small fraction of the earth's surface and population, represents a large share of the world's total production and consumption. World cities are large urbanised regions that are characterised by dense patterns of interaction rather than by political-administrative boundaries. In recent years, the growing connections between them have inspired a new way of thinking about global economic activities, one focusing on nodes organising flows and on inter-city linkages rather than on nation states (Beaverstock et al., 2000; Taylor, 2001). Serving as control centres of the global system, they can be arranged into a hierarchy according to the economic power they represent - at least with respect to their global financial articulation and as far as the top is concerned, where New York, London and Tokyo are the undisputed leaders (Friedman, 1995; Taylor et al., 2002).

Traditional Determinants of Location

What makes financial institutions locate in centres? In economics, first approaches to answering this question can be found in the concept of the new economic geography, with its emphasis on the interplay of centripetal and centrifugal forces as determinants of agglomeration (Reszat, 1999). In principle, the concept dates back to the theory of central places developed by Walter Christaller and August Lösch in the 1930s and 1940s, which explained firms' location decisions as a tradeoff between scale economies and transportation costs. The new economic geography develops this idea further by stressing the role of interdependence and interaction. Here, a firm's decision on where to locate depends on all other firms' choices. On the one hand, firms like having other businesses nearby because those help attract customers to an area and add to the variety of local services offered. These advantages are called centripetal forces. On the other hand, they dislike the proximity of others because they compete for customers, land and workers. These motives are the centrifugal forces driving them away from one another. The interplay of centripetal and centrifugal forces then determines the process of business migration and the resulting pattern of agglomeration (Krugman et al., 1999).
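The following deliberately stylised sketch (all parameters and payoff forms invented, not a calibrated model from the literature) illustrates the interplay: each firm's payoff at a site rises with the number of co-located firms (centripetal) and falls with crowding (centrifugal), and repeated relocation produces clustering or dispersion depending on which force dominates.

```python
import random

# Toy agglomeration dynamics in the spirit of the new economic geography.
def payoff(n_others, attract=1.0, congest=0.02):
    # Centripetal benefit grows with neighbours; centrifugal cost with crowding.
    return attract * n_others - congest * n_others ** 2

def simulate(n_firms=20, n_sites=4, rounds=200, seed=1):
    random.seed(seed)
    site_of = [random.randrange(n_sites) for _ in range(n_firms)]
    for _ in range(rounds):
        f = random.randrange(n_firms)
        counts = [site_of.count(s) for s in range(n_sites)]
        counts[site_of[f]] -= 1          # exclude the moving firm itself
        site_of[f] = max(range(n_sites), key=lambda s: payoff(counts[s]))
    return sorted((site_of.count(s) for s in range(n_sites)), reverse=True)

# With weak congestion the centripetal force wins and one cluster forms;
# raising `congest` (e.g. to 0.15) makes firms disperse across sites.
print(simulate())
```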
Financial institutions partly differ from other firms in their motives for choosing a particular location. In former times, the need for physical money transfers between banks and the proximity of the stock exchange played a dominant role. With developing information and communication technologies, other arguments such as trading frictions and transaction costs became more important. In the literature, among the centripetal forces named as relevant in this context are economies of scale in the payment mechanism, informational spillovers and market liquidity; among the centrifugal ones are market access costs, policy intervention and localised information that cannot be traded globally (Gehrig, 1998). Other arguments relate to the effects of congestion, such as high rents, and to socio-institutional and cultural factors (Thrift, 1994; Grote et al., 1999; Reszat, 2000).
The study of centripetal and centrifugal forces helps explain why banks tend to form "clusters". It does not necessarily explain why they locate in world cities. Considering the environment in which large international financial firms operate, the arguments for locating in the world's big centres can be broadly summarised under three headings: risk management, dependence on producer services and innovation. Among those, reducing risk and uncertainty perhaps has the highest priority. Uncertainty is particularly high when internal and external relations, operations and management tasks are complex, when the numbers of products sold, and of countries in which they are sold, are high, and when personnel and financial resources are committed to many markets with different cultural, political, social or regulatory environments. In these cases, location decisions are strongly influenced by the need to closely monitor activities. Experience has shown that human management of the risks financial institutions run cannot be entirely replaced by electronic surveillance. This not only explains why firms seek to have the riskiest parts of their operations under one roof or, at least, nearby (Edwards, 1998). It is also the reason why they tend to locate in those places that give them access to as much information as possible, be it through contact with local businesses or by allowing them to monitor the behaviour of competitors. World cities then are an obvious choice. As Daniels (1993, p. 114) puts it: "Shadowing competitors may seem an unlikely way to capture a share of new markets but, in some services such as banking or finance, pioneering location choices are not made lightly because of the adverse signals that they may be perceived to convey to investors or businesses."
Following Daniels, the influences on location decisions can be categorised into urbanisation economies and localisation economies. The former include benefits from access to transport infrastructure, telecommunications, housing and office buildings, and the availability of large and diverse labour markets. The latter refer to the benefits of proximity to similar services or other economic activities, either as input providers or as clients. World cities offer a wide range of so-called "producer services" provided by specialised firms, on which international financial institutions with large and growing geographic and functional dispersion of activities have become more and more dependent (Sassen, 1991; 1995). There are information processing services like marketing, insurance, accountancy, property management, advertising and legal services. Others are goods-related services such as distribution, transport management, infrastructure maintenance and installation as well as repair and maintenance of communications equipment. A third category is personnel support services, including welfare, catering or personal travel and accommodation (Daniels, 1993, p. 26). The firms offering these services often have rather limited location choices because, operating under high competitive pressures, they depend on having access to the best possible information, labour and specialist advice, which are most available in the big cities, and this, in turn, attracts their clients to these places, too.
But, risk reduction and dependence on producer services are not the only explanations for the attractiveness of world cities to financial institutions. Another key factor is innovation and networking. Traditionally, cities are seen as seedbeds for innovations. The city as "creative milieu", as "networked society", where the proximity of advanced technology, high-quality education and sophisticated finance generates new ways of economic organisation, new forms of production and new industries and wealth, providing a basis for patronage of arts and culture that, in turn, again stimulates knowledge and technology, is a recurring theme in history (Hall, 1998). Another recurring theme is the role of outsiders coming into the city and creating something new which then becomes a source of economic progress and growth. Traditionally, world cities are places where many cultures come together, where the local and the global mix and often mutate into something wholly different. "It is in the juxtaposition, mutations and connections of different cultural spaces, in the overlaying of contradictory cultural landscapes over each other that creativity and vitality may emerge." (Homi Bhabha cited in Crang, 1998, p. 175).
In the history of western finance, foreign bankers always played a prominent role in contributing to the big centres' creative milieux. Technical progress, economic growth, city development and financial innovations often went together, stimulating one another. Examples are found in the environment in which finance and insurance of long-distance trade blossomed in Europe from the merchant empires of the Middle Ages onwards, in Amsterdam, which was a magnet for science, the arts and international capital in the 16th and 17th centuries, in the creation of debt instruments in 19th-century Paris to finance Haussmann's city building projects, or in the invention of bond houses during the New York boom in skyscrapers and office development in the mid-1920s - to name only a few. For a long time, it was taken for granted that, among all human activities, this interplay of innovations and the creative milieu in which they flower depended particularly on face-to-face contact. But, with the latest developments in IT this dependence has become at least arguable, and for a while it seemed as if the emergence of technopoles and large industrial complexes, ranging from Silicon Valley through Akademgorodok in Siberia to Tsukuba, Japan, indicated that the role of cities as seedbeds for innovation had undergone substantial changes reducing their importance in the world economy.

Cities in the Geography of Information Technology

A closer look at the overall position of cities in the geography of information technology puts this impression into perspective. These days, talk of the "dissolution of cities" is commonplace. It has become one of the grand metaphors of the influence of the IT revolution (Graham and Marvin, 1996, p. 8), although the argument is not new: the economic decline of locations due to electronic communications, the advance of personal computing and related developments eliminating the need for cities as centres of interaction has been conjured up since the early 1970s (Moss and Townsend, 1998). But studying, for example, the visualisation of internet traffic flows in the 1990s (Mapa Mundi, 2002), transatlantic submarine cable linkages (Telegeography, 2002) or other indicators of worldwide information activities gives the overall impression that global cities somehow managed to redefine their role in the networks and geography of the new information technologies.
Recent research confirms that the spatial distribution of those technologies shows familiar characteristics (Gorman, 2001). Domains and connectivity cluster predominantly in big cities. The large metropolitan areas of the industrialised world have the highest concentration of internet domains and, at the same time, are the gateways to internet connections in other world regions. For example, in the United States, the 20 most networked cities account for 4.3 per cent of the nation's population, but 17.1 per cent of its domains (Moss et al., 1997). Apparently, those new electronic grids, spanning the globe and capable of going anywhere, remain inherently nodal. As Graham and Marvin (1996, p. 3) wrote: "Urban areas are the dominant centres of demand for telecommunications and the nerve centres of the electronic grids that radiate from them", and financial services account for a decisive part of the traffic flooding across those networks and play a dominant role in linking the nodes. A look at the top five most networked cities in the United States shows that Manhattan has by far the highest number of domains (Table 1), Singapore and Hong Kong have become critical nodes in an Asian-Pacific network geography, and in Europe, after far-reaching deregulation in the 1980s, London developed into a centre for all kinds of information-rich service industries, ranging from finance through broadcasting and publishing to advertising and many more (Dodge and Kitchin, 2001, p. 48).
Much of the discussion about the implications of IT for the financial industry, the future role of cities and the competition of places focuses on the growing complexity of management operations and control, which is reinforcing the need for centrality. But, for the cities it is not necessarily headquarters and control functions that matter - for example, many investment banks in London have their headquarters located elsewhere - but a vast spectrum of financial services and activities. Those activities account for large numbers of people living in a place, paying taxes, consuming, using its amenities and, in contributing to the place's "crowdedness", enhancing its attractiveness to others seeking the company of the like-minded. With the rise of IT, their numbers are in decline. Even with headquarters staying in the central business districts of the world's metropoles, in other areas of finance there is an observed tendency towards dispersion. New technologies have reduced the need for proximity in sectors not affected before, such as customer relations and retail business. Call centres, "software complexes" and internet suppliers with locations far away from the world's financial metropoles blossom, and, while leaving high-order activities in urban agglomerations, financial institutions are increasingly shifting part of their low-order activities, such as back-office operations, to other places or outer districts with cheaper labour and lower rents, thereby diminishing the role of centres and central business districts.
Does this mean "the end of cities"? A closer look at the underlying nature of the phenomenon of virtualisation and its implications for the world's financial landscapes may help explore the possible extent of these tendencies, providing some insights into the general mechanisms at work. What characterises virtual activities, and in which way do they differ from traditional ones? What is lost and won in the process of virtualisation? To what extent can electronic flows replace physical activities, and to what extent and in which ways do they complement them in the world of finance?

Spaces and Places

In general, the study of real and virtual market places immediately raises the question of how to define "place", of what kind of experiences places provide and in which way those differ from their virtual counterparts. In disciplines such as sociology, cultural geography and architectural studies, the basic idea of "place" is the sense of belonging it conveys to people beyond any simple concept of location. Places differ from one another in the set of cultural characteristics they show, the patterns of behaviour and interaction developing there in the course of time, and the shared experiences they mean for people living there. People tend to define themselves through a sense of place. When asked, they emphasise being "a Londoner" or "a New Yorker". Financial communities do not differ in this respect from other groups of society. They, too, identify themselves with the places they live in. Those working in metropoles like London and New York are well aware of their special status, experiencing a certain way of life and the feeling of being part of a special culture (Reszat, 2000). Trust and familiarity are built by common ideas, behaviour patterns, norms and "rituals", and, through close contacts from constantly trading with one another and other concomitants of the globalisation process, this feeling is in part transmitted to financial communities in other places engaged in the same kinds of activities. Influences like these establish non-measurable location advantages that nonetheless do exist. Whether similar advantages are enjoyed by participants in purely "virtual" networks and communities, lacking the experience of a physical environment, is not easy to see.
Place in this sense differs from "space" by the evolution of shared experience, by having a past and future "that binds people together round them." (Crang, 1998, p. 103). The term "cyberspace" indicates that this characteristic is widely regarded as lacking in virtual relations. Virtualisation is often equated with the erosion of place, extinguishing all the specificities that matter to people. Lives are considered to become somehow diminished and reduced in their traditional dimensions, with the computer screen as one of Marc Augé's "non-places", like airports, motorways and hotels, where neither identity, nor relations, nor history exist (Augé, 1995). Individuals in those spaces face a kind of "contractual solitariness" (Crang, 1998). Alone, or in small groups, they are related to wider society through very limited and specific interactions. They are linked to their surroundings mainly through words and texts, through instructions - be they prescriptive, prohibitive or merely informative - supported by signboards, screens and posters (Augé, 1995, p. 96).
On the other hand, virtual reality itself opens new dimensions. For example, Crang et al. (1999) distinguish between four kinds of experiences. They study virtuality as simulation, as the "other" in relation, and in opposition, to the real with both depending on one another; virtuality as complexity, which refers to the multiple configurations of self and world constantly constructed and reconstructed developing characteristics of self-organisation and emergent order; virtuality as mediasation, as one stage in the history of "mediated and distanciated communication" (Crang et al., 1999, p. 10); and virtuality as spatial emphasising the "other" geography of cyberspace with its relation to existing social, political and economic geographies. In this context, the realm of international finance is, above all, concerned with two effects. First, the extent to which virtual reality replaces, or complements, face-to-face interaction. Second, the way in which virtual reality leads to an accelerated transformation, a kind of "tunnel effect" or "warping" of time and space barriers (Graham and Marvin, 1996, p. 60). Both affect the prospects of world financial landscapes.
One crucial characteristic of face-to-face interaction is that it allows tacit bargaining and communication, where each actor watches and interprets the others' behaviour, fully aware that his own decisions are being carefully watched, interpreted and anticipated, too (Schelling, 1980). Traditionally, tacit communication is an essential part of the financial business, and it matters for some activities more than for others. As a consequence, in some cases it is also easier to replace with other forms of communication. For trading securities in bulk, making interbank payments or dealing in standardised foreign exchange, the advantages may be minuscule or even non-existent. For mergers and acquisitions, the management of investors' portfolios or the lead management of syndicates, they are high. Tacit communication is an essential element in building trust which, in turn, is a major prerequisite for many financial transactions. In these cases, the impossibility of face-to-face contact is regarded as connected with a loss of "authenticity". A virtual market place is said to be no longer an "authentic" place and therefore no substitute for a real city centre.
What is won in exchange for authenticity is an acceleration of transformations, or "tunnel effect", resulting in a wider reach and greater speed and efficiency. In traditional geographies, people would experience spaces and places changing slowly in the course of history, with the hierarchy of cities, their rise and decline, undergoing slow changes as well (Braudel, 1986; Hohenberg and Lees, 1996). Under virtual reality, people experience that barriers of time and space are no longer of much importance. Cities become linked together into networks, with the spaces in between largely excluded and denied access. Market places are no longer perceived as local phenomena, local entities as parts of cities. For a wide range of financial activities and instruments, London, New York and Tokyo represent one single global market. One consequence is that people are able to be present in different markets and localities at the same time. Another is that cities in these worldwide networks may be dominant in one area, such as finance, while at the same time being inferior in other realms, so that their relations are no longer determined by a strict hierarchy but by the evolving interdependencies (Graham and Marvin, 1996, p. 61). And, to the extent that electronic networks and linkages can be compared to, and are able to replace, traditional "networked societies" within cities, they may well contribute to the decline of central places.

An Experiential Continuum

Talk of the "end of cities" as hubs of financial activities in the electronic age is usually based on the view of a complete separation of virtual and real activities. But, as a closer look reveals, this assumption does not hold. On the contrary, cyberspace and geographic space are closely interwoven. These days, there is no part of the financial industry that does without the new technologies. The bank manager preparing for a talk with a client relies on virtual networks and services for data and information from all parts of the world before entering the meeting to present the arguments as convincingly as possible. The brokers that fared best in the recent consolidation in e-business are the ones that kept voice-broking elements beside electronic systems. The investor in search of hints about future profit opportunities from tacit communication will turn to the computer and, afterwards, confront his counterpart with the latest news or results of research, observing the reactions. And when both have finished their business, they will go back to the screen, feeding in their latest findings in order to analyse and combine them with other information, laying the foundations for future decisions. On the other hand, even the age of virtual business cannot do without personal communication over large distances, and in the world of finance the jet aircraft still plays an indispensable role. The financial community worldwide is part of an international class of people brought up by globalisation. One characteristic of those people is their being constantly within reach of, and in contact with, firms, clients and one another by computer, mobile telephone and beeper (Micklethwait and Wooldridge, 2000). The bank manager in the airport lounge with a laptop on the knees is a familiar sight.
In practice, there is no virtuality without corresponding reality, and, depending on context, the virtual market place complements rather than replaces real financial activity. Identities "explored and acted out online are always contextualised within experiences offline. ... Conversely, our lives offline become embodied through our memories and experiences online, so that a recursive process exists as the virtual is realised and the real virtualised." (Dodge and Kitchin, 2001, p. 24) The financial world seen in this way is an "experiential continuum", as Dodge and Kitchin call it, between the materiality of geographic space and the virtuality of cyberspace. This calls for a revision of the virtualisation effects mentioned earlier:
On the one hand, the loss of authenticity in virtual market places is not as complete as it appears at first view, and even "authentic" trust-building and trust-requiring activities demanding personal contact are surrounded and supported by virtual realities. On the other hand, the acceleration of transformations in virtual reality is slowed down by interaction with the real world. Financial deals may be done surpassing time and space, but in the place where they are invented and decided, and in the "switches" from the virtual to the real and back again, the traditional laws of physics - and psychology - still hold. Financial risks may be calculated by sophisticated computer programs. But the time it takes to digest the results and develop a strategy in response is limited by human abilities and deficiencies. As a rule, firm managers are well aware that each attempt to shorten the process of personal communication and investigation needed to find out what really lies behind the numbers, or to cope with the remaining uncertainties from afar, may prove a fatal decision.
What are the consequences of this interweaving of the real and the virtual for the role of cities? To the extent that virtual activities cannot be separated from their real surroundings, the traditional arguments for and against centralisation still hold, setting clear limits to the diffusion process. As a result, even in the age of virtuality cities continue to compete with - and complement - one another as financial centres. Within cities, to the extent that the new technologies allow low-order activities to be shifted to outer places, the process is determined by traditional arguments such as rents and labour costs and the scarcity of space in cities' central business districts. But experience shows that, typically, even in these cases there is a tendency to locate in suburbia rather than in distant regions, which has both internal and external reasons. In firms' internal relations, one explanation is risk considerations and the experience that electronic surveillance is no substitute for human management. In external relations, it is above all trust-building activities, the need to become familiar with a special context, innovation and networking. In general, the less financial activities require spatial proximity in order to manage risks, gain access to information, establish social contacts or get impulses from the creative milieux of local communities, the more cost arguments matter, providing a reason for dispersion, at least on a limited scale.
Thus, the strongest foreseeable impact of information technology is on the spatial organisation within cities. This is reinforced by the expansion of workplaces and the spatial requirements of "intelligent buildings", for which traditional financial districts often lack space and which, therefore, in part have to locate outside the old centres. La Défense in Paris and London's Docklands are two examples. But here again there are regional differences. In the United States, the centres of major cities such as New York and Chicago have been rebuilt many times without much regard for urban infrastructure and design, and there are vast empty spaces left for rebuilding according to the requirements of technology. In contrast, in Europe, urban centres are much more protected and rarely contain significant stretches of abandoned space. Here, the financial district often has become less compact, extending into a metropolitan area in the form of a grid of nodes of business activity (Sassen, 1995).

CONCLUSIONS

From the role of IT in the history of finance, its very nature and its relation to world city growth, several conclusions can be drawn. First, unlike in many other sectors, in international financial relations electronic information transmission, data processing and trading are not a new phenomenon. The "internet revolution" here brought rather gradual change. Financial services, as forerunners of globalisation, have a long tradition of using advanced information and communication technologies to overcome the limits of time and space. Their formation of "clusters" and location in centres contributed largely to the rise and economic prosperity of cities. Second, even in the age of electronic connectedness and virtual finance, location still matters, and there are no signs that the bulk of financial services will shift away from the world's metropoles. The electronic grids of financial institutions spanning the globe are inherently nodal, and the cities have so far managed to redefine their role as nodes in the networks and geography of the new technologies. Third, the myth of the "dissolution" of cities is based on the assumption of a perfect separation of virtual and real activities that, at a closer look, does not hold. Financial decisions are made in an "experiential continuum" between the materiality of geographic space and the virtuality of cyberspace. Neither the loss of authenticity nor the acceleration of transformations as a result of virtual reality is complete. Virtual markets and processes complement rather than replace existing real ones. Fourth, the biggest impact of the new technologies so far is on the shape and spatial organisation within cities. Technological progress allowed financial institutions to shift parts of their activities to suburbia in the face of rising costs, and the lack of space meeting the requirements of an extended workforce and "intelligent buildings" has led to a spread beyond old city centres. As a consequence, cities' financial districts appear less compact than in former times, but there are limits to the diffusion process since many activities continue to require proximity.
New technologies are characterised by the fact that nobody ever knows exactly where they will lead. In the middle of the 19th century, one could perhaps foresee that the railways would change the geography of countries, "but no one anticipated the simple device of the commuter ticket which would allow suburbs to spread and finally turn cities inside out" (Hall, 1998, p. 943). However, the world of finance appears different in that, as the preceding chapters have shown, the history of information technology in the financial services industry is a far more evolutionary and less transformative process. In the literature, there is a tendency to overstate the importance of the phenomenon. But, as one author puts it: " ... associations of technologies with modernity are contingent not only historically but also geographically; for much, indeed most of the world, the telephone is still thoroughly new and modern ..." (Crang et al., 1999, p. 3). In this sense, cities remain exceptional places, and the financial industry requires the infrastructure and environment provided by these places to prosper.








EPG Model

The EPG model is an international business model comprising three dimensions – ethnocentric, polycentric and geocentric. It was introduced by Howard V. Perlmutter in the 1969 journal article "The Tortuous Evolution of the Multinational Corporation". These three dimensions allow executives to more accurately develop their firm's general strategic profile.

The EPG model is a framework for a firm to better pinpoint its strategic profile in terms of international business strategy. The authors Wind, Douglas and Perlmutter later extended the model with a fourth dimension, "Regiocentric", creating the "EPRG model".[3]
The importance of the EPG model lies mainly in the firm's awareness and understanding of its specific focus. Because a strategy based mainly on one of the three elements can mean significantly different costs or benefits to the firm, it is necessary for a firm to carefully analyze how it is oriented and make appropriate decisions moving forward. In performing an EPG analysis, a firm may discover that it is oriented in a direction that is not beneficial or that is misaligned with its corporate culture and generic strategy. In this case, it would be important for the firm to re-align its focus.
Each of the three elements of the EPG profile is briefly highlighted in the table below, showing the main focus for each element, as well as its correlating function, products, and geography.[2]
                            | Ethnocentrism         | Polycentrism                   | Geocentrism
Definition                  | Based on ethnicity    | Based on political orientation | Based on geography
Strategic Orientation/Focus | Home Country Oriented | Host Country Oriented          | Global Oriented
Function                    | Finance               | Marketing                      | R&D
Product                     | Industrial products   | Consumer goods                 | -
Geography                   | Developing countries  | US and Europe                  | -

Elements of the EPG model

Ethnocentrism

There is no international firm today whose executives will say that ethnocentrism is absent in their organization.[2] The word ethnocentrism derives from the Greek word "ethnos", meaning "nation" or "people", and the English word center or centrism.[4] A common shorthand for ethnocentrism is "tunnel vision." In this context, ethnocentrism is the view that a particular ethnic group's system of beliefs and values is morally superior to all others.[4] Ethnocentrism is characterized by or based on the attitude that one's own group is superior to others.[5] The ethnocentric attitude is found in many companies where multiple nationalities and culture groups work together. It is a natural tendency for people to act ethnocentrically because it is what they feel comfortable with.[2] It is based on past experiences and learned behaviors and norms.[2]
The ethnocentric attitude is often seen when home nationals of various countries believe they are superior to, more trustworthy than and more reliable than their foreign counterparts.[2] Ethnocentric attitudes are often expressed in determining the managerial process at home and overseas.[2] There is a tendency towards ethnocentrism in relations with subsidiaries in developing countries and in industrial product divisions.[2]
Organizations designed with an ethnocentric focus portray certain tendencies. These include a headquarters whose decision-making authority is relatively high. Home-country standards are applied to the evaluation and control of the organization;[2] these standards are meant to ensure performance and product quality. Ethnocentric attitudes can also be seen in the organization's communication process.[2] This is evident when there is constant advice and counsel from the headquarters to the subsidiary.[2] This advice usually bears the message, "This works at home; therefore it must work in your country".[2] Organizations that portray ethnocentrism usually identify themselves with the nationality of the owner.[2] For example, Wal-Mart is seen as an American company because its headquarters are located in America. A crucial aspect of ethnocentrism in international organizations is the policy that recruits from the home country are hired and trained for key executive positions in the organization.[2] The ethnocentric attitude is a centralized approach: the training originates at the headquarters, and then corporate trainers travel to the subsidiaries, often adapting to local situations.[2]
Ethnocentrism can impose many costs on an international organization. Using the centralized approach can cause inefficient staffing, because transferring staff from the home country overseas imposes high costs on the global business.[6] It can also bring inefficiency if the new staff are unable to fit in and be culturally compatible in their new location. There is often ineffective planning due to poor feedback from the international subsidiaries.[6] The organization may see a flight of valuable executives, as the best people in the foreign subsidiary will seek other employment opportunities.[6] Ethnocentric organizations may lose the ability to build a high-caliber local organization, which can lead to fewer innovations. This in turn can cause a lack of flexibility and local responsiveness.[6]

Costs and benefits of ethnocentrism

Costs                                                | Benefits
Ineffective planning due to poor feedback            | Simple organization
Flight of 'valuable' subsidiary executives           | Greater communication and control
Fewer innovations                                    |
Inability to build a high-caliber local organization |
Lack of flexibility and responsiveness               |
[6]

Polycentrism

Polycentrism is one of the three legs in the EPG framework that "identifies one of the attitudes or orientations toward internationalization that is associated with successive stages in the evolution of international operations".[7]
Polycentrism can be defined as a host-country orientation, which reflects host countries' goals and objectives with respect "to different management strategies and planning procedures with regard to international operations".[7] Under a polycentric perspective, a company's management team believes that in international business practices local preferences and techniques are usually found most appropriate to deal with local market conditions. In its most extreme form, polycentrism is the "attitude that culture of various countries are different, that foreigners are difficult to understand and should be left alone as long as their work is profitable."[8]
Although there is great benefit in taking local preferences in the host country into consideration when it comes to international business practices, a polycentric approach has its obstacles once implemented. A polycentric approach "gives rise to the problems of coordination and control."[7] Management usually loses coordination of its international subsidiaries because they are forced to operate independently of one another, and to establish separate objectives and plans which meet the host country's criteria. "Marketing of the company's products are organized on a country-by-country basis, and marketing research is conducted independently in each country."[7]
Management is unable to have total control over the company in the host country because it is found that "local nationals have a better understanding and awareness of national market conditions, more so than home office personnel."[7] This holds for several aspects of the product's delivery, including pricing, customer service and well-being, market research and channels of distribution. Therefore, the majority of control over the host country's practices is lost, and the company is forced to manage its operations from the outside. "Local nationals occupy virtually all of the key positions in their respective local subsidiaries, and they appoint and develop their own people."[8]
There are a few other drawbacks to the polycentric approach which may prevent a multinational company from fully realizing its potential in the host country. The first drawback is that the "benefits of global coordination between subsidiaries such as the development of economies of scale cannot be realized."[3] This essentially prevents the company from mass-producing its products, as it is forced to manufacture with local preferences as the priority of production.[3] Secondly, because all of the subsidiaries work independently of one another, learning is not shared across geographic regions. Knowledge that could be beneficial across all regions is therefore lost, and subsidiaries may be worse off than if they had obtained it.[3] Lastly, the "treating of each market as unique may lead to the duplication of facilities."[3] By focusing on the business practices, local preferences and techniques which pertain to the local market, the subsidiary in the host country may simply mimic local companies and appear less appealing to local consumers.
In conclusion, a polycentric approach should only be used within a company in which there is a certain amount of comfort in allowing the host country to make all major decisions, following its own procedures and objectives. It must be understood that there is limited control of, or communication between, the home and host country, and that products and distribution may vary across countries. Companies should evaluate all legs of the EPG model before implementing a strategy, as all companies differ in international strategy among industries and regions.

Costs and benefits of polycentrism

Costs                                                             | Benefits
Waste due to duplication                                          | Intense exploitation of local markets
Localization costs of "universal" products                        | Better sales due to better-informed local management
Inefficient use of home-country experience                        | More initiative for local products
Excessive regard for local traditions at expense of global growth | More host government support
                                                                  | Good local managers with high morale
[6]

Geocentrism

The third and last aspect of the EPG model is the geocentric portion, a notion that focuses on a more world-oriented approach to multinational management. The main difference between geocentrism and ethno- and polycentrism is that it does not show a bias toward either home- or host-country preferences, but rather spotlights the significance of doing whatever it takes to better serve the organization.[2] This is evident in the sense that upper management does not hire or delegate responsibility to individuals because they best exemplify the host or home country's opinions. Instead, management selects the people best suited to foster the company's goals and solve problems worldwide.[2] The purpose of this is to build an organization in which the subsidiary is not only a good citizen of the host nation but also a leading exporter from that nation in the international community, contributing such benefits as (1) an increasing supply of hard currency, (2) new skills, and (3) knowledge of advanced technology.[2]
The sole goal of geocentrism is to globally unite both headquarters and subsidiaries.[2] The firm's subsidiaries are thus neither satellites nor independent city-states, but parts of a whole whose focus is on worldwide as well as local objectives, each part making its unique contribution with its unique competence.[2] Furthermore, geocentrism boils down to product differentiation, diversifying functions in the sense that different markets require dissimilar behavior, and, lastly, geographic location.[2] Of all aspects of a business, two are predominantly geocentric: research and development and marketing.[2] As stated previously, this is because different markets, regions, and countries require distinctive ways of approaching them. For example, the standards under which the home country operates will differ greatly from how the host country operates. What is accepted as a permissible way of treating employees in the United States, the home country, may not be acceptable to Chinese employees in the host country. In addition, consumer tastes vary greatly from home country to host country, and these tastes continue to change from one host country to the next, proving that for R&D to be effective it must be world-oriented.[2]
It is the overall goal of geocentrism to form a collaborative network between headquarters and subsidiaries; this arrangement should entail a set of universal standards that can thus be used as a guideline when attacking key business decisions.[2] Such decisions include the management and start-up of new plants, ideas of how to market a product to a new consumer base, and product alterations. The most effective way to enforce geocentrism is with a formal reward system that encourages both subsidiary and headquarters managers to work for global goals rather than just defending home country values. This ideology is a great example of how today's business must manage both global and local issues in order to succeed in the end.[2]
While there are many obstacles that hinder a company's ability to become geocentric, there are also a handful of forces that drive it toward this orientation. Among the obstacles is the loss of national sovereignty when one nation is dominated by another, which can lead to a loss of economic and political nationalism.[2] There is also the feeling among host countries that they receive a disproportionate share of international profits, which only fuels the lack of trust toward big corporations felt by their political leaders.[2] On the other hand, there are forces that push an organization toward the geocentric notion of managing a multinational corporation (MNC).[2] The first and most obvious is competition: if one company enters a new country or market, it forces rivals to do the same in order to maintain pace.[2] Second, there is a large base of customers available to MNCs internationally that requires a geocentric approach to be targeted effectively. A third force is the abundance of growing world markets, reflected in income-earning-age populations, rising GDPs, and escalating disposable income in areas such as China and Korea.[2]
With that said, geocentrism is an ideology that must be accepted by any corporation operating globally in order for any sort of success and long-term stability to be attained. There are certain aspects of business life in which ethnocentrism and polycentrism are more adequate models to follow, but functional smoothness and success in both home and host countries depend upon upper management's ability to select individuals who are world-oriented as opposed to home- or host-country centered.[2]

Costs and benefits of geocentrism

Costs:
- High communication and travel costs
- Educational costs at all levels
- Time spent in consensus decision-making
- International headquarters bureaucracy
- "Too wide" distribution of power
- Personnel problems, especially those of international executive reentry

Benefits:
- Integrated global outlook
- More powerful total company throughout
- Better quality of products and services
- Worldwide use of best resources
- Improved local country management
- Greater commitment to global objectives
- Higher global profits





                                 XXX  .  V0000 Business Risk Factors 

The Group’s operations and financial results are subject to various risks and uncertainties, including those described below, that could significantly affect investors’ judgments. In addition, the following statements include matters which might not necessarily fall under such significant risks, but are deemed important for investors’ judgment from a standpoint of affirmative disclosure.
Forward-looking statements in the following are based on the Group's judgments as of December 31, 2016.

1) Market Fluctuations

Semiconductor market fluctuations, which are caused by such factors as economic cycles in each region and shifts in end-customer demand, affect the Group. Although the Group carefully monitors changes in market conditions, it is difficult to completely avoid the impact of market fluctuations caused by economic cycles in countries around the world and changes in demand for end products. Market downturns could therefore lead to a decline in product demand, an increase in production and inventory levels, and lower sales prices. Consequently, market downturns could reduce the Group's sales and lower fab utilization rates, which may in turn worsen gross margins, ultimately leading to a deterioration in profits.

2) Fluctuations in foreign exchange and interest rates

The Group engages in business activities in all parts of the world and in a wide range of currencies. The Group engages in hedging transactions and other arrangements to minimize exchange rate risk, but significant exchange rate movements can still affect our consolidated business results and financial condition, including our sales denominated in foreign currencies, our materials costs denominated in foreign currencies, and our production costs at overseas manufacturing sites. In addition, the Group's assets, liabilities, income, and costs can change greatly when foreign-currency-denominated assets and liabilities are translated into Japanese yen, and when the financial statements of our overseas subsidiaries, prepared in foreign currencies, are translated into and presented in Japanese yen.
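As a simple illustration with invented figures: a subsidiary asset of USD 1 million is carried at ¥110 million when translated at ¥110 to the dollar, but at only ¥100 million if the yen strengthens to ¥100 to the dollar, so consolidation records a ¥10 million decline even though the underlying asset is unchanged.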
Furthermore, since costs and the values of assets and debts associated with the Group’s business operation are influenced by fluctuations in interest rates, it is also possible for the Group’s businesses, performance, and financial condition to be adversely influenced by these fluctuations.

3) Natural Disasters

Natural disasters such as earthquakes, typhoons, and floods, as well as accidents, acts of terror, infection, and other factors beyond the control of the Group, could adversely affect the Group's business operations. In particular, because the Group owns key facilities and equipment in areas where earthquakes occur at a frequency higher than the global average, earthquakes and similar events could damage the Group's facilities and equipment and force a halt to manufacturing and other operations, causing severe damage to the Group's business. The Group maintains several preventive plans and a Business Continuity Plan that defines countermeasures such as contingency plans, and it also carries various types of insurance; however, these plans and policies are not guaranteed to cover all losses and damages incurred.

4) Competition

The semiconductor industry is extremely competitive, and the Group is exposed to fierce competition from rival companies around the world in areas such as product performance, structure, pricing, and quality. In particular, certain of our competitors have pursued acquisitions, consolidations, and business alliances in recent years, and such moves may continue in the future. As a result, the competitive environment surrounding the Group may further intensify. To maintain and improve competitiveness, the Group takes various measures, including development of leading-edge technologies, design standardization, cost reduction, and consideration of strategic alliances with third parties or further acquisitions; however, if the Group cannot maintain its competitiveness, its market share may decline, which may negatively impact the Group's financial results.
In addition, fierce market competition has subjected the Group's products to sharp downward pressure on prices, which measures to improve profitability, such as price negotiations and cost reduction efforts, have been unable to fully offset. This raises the possibility of a worsening of the Group's gross margin. Furthermore, in cases where customers for the Group's low-margin products have difficulty switching to other products or require a certain amount of time to secure replacements, it may be difficult for the Group to halt or reduce production in a timely manner. This may reduce the Group's profitability.

5) Implementation of Management Strategies

The Group is implementing a variety of business strategies and structural measures, including formulating its mid-term growth strategy and reforming its organizational structure, to strengthen the foundations of its profitability. Implementing these business strategies and structural measures requires a certain level of cost, and due to changes in economic conditions and the business environment, factors whose future is uncertain, and unforeseeable factors, some of these reforms may become difficult to carry out and others may not achieve the originally planned results. Furthermore, additional costs higher than originally expected may arise. These issues may adversely influence the Group's performance and financial condition.

6) Business Activities Worldwide

The Group conducts business worldwide, which can be adversely affected by factors such as barriers to long-term relationships with potential customers and local enterprises; restrictions on investment and imports/exports; tariffs; fair trade regulations; political, social, and economic risks; outbreaks of illness or disease; exchange rate fluctuations; rising wage levels; and transportation delays. As a result, the Group may fail to achieve its initial targets regarding business in overseas markets, which could have a negative impact on the business growth and performance of the Group.

7) Strategic Alliance and Corporate Acquisition

For business expansion and strengthening of competitiveness, the Group may engage in strategic alliances, including joint investments, and corporate acquisitions involving third parties in areas such as R&D on key technologies and products and manufacturing. For example, in February 2017, the Group completed the acquisition of Intersil Corporation, a provider of power management and precision analog solutions. With regard to such alliances and acquisitions, the Group examines the likely return on investment and profitability from a variety of perspectives. However, several problems can arise: there may be a mismatch with the prospective alliance partner or acquisition target in areas of management strategy such as capital procurement, technology management, and product development; there may be financial or other problems affecting the business of the prospective partner or target; and integration of business execution, technology, products, personnel, and systems, as well as responses to antitrust laws and other regulations of the relevant authorities, requires time and expense. As a result, there is a possibility that the alliance relationship or capital ties will not be sustainable or, in the case of acquisitions, that the anticipated return on investment or profitability cannot be realized.
Furthermore, there is a possibility that the anticipated synergies or other advantages cannot be realized due to an inability to retain or secure the main customers or key personnel of the prospective alliance partner or acquisition target. Thus, there is no guarantee that an alliance or acquisition will achieve the goals initially anticipated.

8) Financing

While the Group has been procuring business funds by methods such as borrowing from financial institutions and other sources, in the future it may become necessary to procure additional financing to implement business and investment plans, expand manufacturing capabilities, acquire technologies and services, and repay debts. The Group may face limitations on its ability to raise funds for a variety of reasons, including that it may not be able to acquire required financing in a timely manner, or may face increased financing costs, due to a worsening business environment in the semiconductor industry, worsening conditions in the financial and stock markets, or changes in the lending policies of lenders. In addition, some of the borrowing contracts between the Group and certain financial institutions stipulate financial covenants. If the Group breaches these covenants due to a worsened financial base or other causes, the Group may forfeit the benefit of term under the contracts, which may adversely influence the Group's business performance and financial condition.

9) Notes on Additional Financing

After implementing the allocation of new shares to a third party based on a decision at the Meeting of the Board of Directors held on December 10, 2012, we received an offer from the Innovation Network Corporation of Japan stating that it is willing to provide additional investments or loans up to an upper limit of 50 billion yen. Currently, no specific details regarding the timing of or conditions associated with these additional investments or loans have been determined, and there is no guarantee that they will actually be implemented. If investments occur based on this offer, further dilution of existing stock will occur, which may adversely impact existing shareholders. Also, if loans are made under this offer, the Group's outstanding interest-bearing debt will increase, which may impose restrictions on some of our business activities. Furthermore, if interest rates fluctuate in the future, the Group's businesses, performance, and financial condition may be adversely affected.

10) Notes on the Fact that the Largest Shareholder Possesses a Majority Share of Voting Rights

As a result of the allocation of common stock to the Innovation Network Corporation of Japan and others by way of third-party allotment on September 30, 2013, the Innovation Network Corporation of Japan now holds a majority of the voting rights associated with Renesas Electronics' shares. Thus, the business operations of the Group are potentially subject to substantial influence through the exercise of those voting rights at General Meetings of Shareholders. In addition, should the Innovation Network Corporation of Japan at some future date sell all or part of the Renesas Electronics shares it currently holds for investment purposes, this could have a substantial effect on the market value of Renesas Electronics' shares, depending on factors such as the market climate at the time of the sale.

11) Rapid Technological Evolutions and Other Issues

The semiconductor market in which the Group does business is characterized by rapid technological changes and rapid evolution of technological standards. Therefore, if the Group is not able to carry out appropriate research and development, the Group’s businesses, performance, and financial condition may all be adversely affected by product obsolescence and the appearance of competing products.

12) Product Production

a. Production Process Risk
Semiconductor products require extremely complex production processes. In an effort to increase yields (ratio of non-defective products from the materials used), the Group takes steps to properly control production processes and seeks ongoing improvements. However, the emergence of problems in these production processes could lead to worsening yields. This problem, in turn, could trigger shipment delays, reductions in shipment volume, or, at worst, the halting of shipments.
b. Procurement of Raw Materials, Components, and Production Facilities
The timely procurement of necessary raw materials, components, and production facilities is critical to semiconductor production. To avoid supply problems related to these essentials, the Group works diligently to develop close relationships with multiple suppliers. Some necessary materials, however, are available only from specific suppliers. Consequently, insufficient supply capacity amid tight demand for these materials, as well as events at suppliers such as natural disasters, accidents, worsening business conditions, and withdrawal from the business, could preclude timely procurement or result in sharply higher procurement prices. Furthermore, defects in procured raw materials or components could adversely affect the Group's manufacturing operations, and the Group may incur additional costs.
c. Risks Associated with Outsourced Production
The Group outsources the manufacture of certain semiconductor products to external foundries (contract manufacturers) and other entities. In doing so, the Group selects its trusted outsourcers, rigorously screened in advance based on their technological capabilities, supply capacity, and other relevant traits; however, the possibility of delivery delays, product defects and other production-side risks stemming from outsourcers cannot be ruled out completely. In particular, inadequate production capacity among outsourcers or operation shutdown of the outsourcers as a result of a natural disaster, could result in the Group being unable to supply enough products.
d. Maintenance of Production Capacity at an Appropriate Level
The semiconductor market is sensitive to fluctuations in the business climate, and it is difficult to predict future product demand accurately. Thus, it is not always possible for the Group to maintain production capacity at an appropriate level that matches product demand. In addition, even if the Group engages in capital investment to boost production capacity, there is generally a certain amount of time required before the actual increase in production capacity takes place.
Therefore, if demand for specific products substantially exceeds the Group’s production capacity at a certain point and the state of excess demand continues over time, there is a possibility that the Group will be unable to supply customers with the products they desire, that opportunities to sell the products in question will be lost, that the Group will lose market share as customers switch to competing products, and that the relationship of the Group and its customers will suffer.
On the other hand, if in response to a rise in demand for specific products the Group undertakes capital investment with the aim of increasing production capacity, there is no guarantee that demand for the products in question will remain strong once production capacity actually increases and afterward. There is a possibility that actual product demand may turn out to be less than anticipated, in which case it may not be possible to recover the capital investment with the anticipated earnings.

13) Product Quality

Although the Group works to improve the quality of its semiconductor products, they may contain defects, anomalies, or malfunctions that are undetectable at the time of shipment, owing to the increased sophistication of technologies, the diversity of ways in which customers use the Group's products, and defects in procured raw materials or components. Such defects, anomalies, or malfunctions may be discovered only after the Group's products have been shipped to customers, resulting in returns or exchanges, claims for compensatory damages, or discontinuation of use of the Group's products, which could negatively impact the profits and operating results of the Group. To prepare for such events, the Group carries insurance such as product liability insurance and recall insurance, but there is no guarantee that these policies would cover the full costs of reimbursement.

14) Product Sales

a. Reliance on Key Customers
The Group relies on certain key customers for the bulk of its product sales to customers. The decision by these key customers to cease adoption of the Group’s products, or to dramatically reduce order volumes, could negatively impact the Group’s operating results.
b. Changes in production plans by customers of custom products
The Group receives orders from customers for the development of specific semiconductor products in some cases. There is the possibility that, after the Group has received an order, the customer decides to postpone or cancel the launch of the end product in which the ordered product is scheduled to be embedded. There is also the possibility that the customer cancels its order if the functions and quality of the product do not meet its requirements. Further, weak sales of end products in which products developed by the Group are embedded may cause customers to reduce their orders or postpone delivery dates. Such changes in production plans, order reductions, postponements, and other actions by customers concerning custom products may cause declines in the Group's sales and profitability.
c. Reliance on Authorized Sales Agents
In Japan and Asia, the Group sells the majority of its products via independent authorized sales agents and relies on certain major agents for the bulk of these sales. If the Group is unable to provide these authorized sales agents with competitive sales incentives or margins, or to secure sales volumes that the agents consider appropriate, the agents may review their sales networks for the Group's products, including reducing them, which could cause a downturn in the Group's sales.

15) Securing Human Resources

The Group works hard to secure superior human resources for management, technology development, sales, and other areas in deploying its business operations. However, because such talented people are limited in number, competition to acquire them is fierce. Under these conditions, it may not be possible for the Group to secure the talented human resources it requires.

16) Retirement Benefit Obligations

Net defined benefit liability and net defined benefit asset are calculated based on actuarial assumptions, such as discount rates and the long-term expected rates of return on plan assets.
However, the Group's performance and financial condition may be adversely affected if discrepancies between actuarial assumptions and actual results arise, for example if changing interest rates or a fall in the stock market cause retirement benefit obligations to increase or plan assets to decrease, thereby increasing the pension funding deficit of the retirement benefit system.

17) Capital Expenditures and Fixed Cost Ratio

The semiconductor business in which the Group is engaged requires substantial capital investment. The Group undertakes capital investment in an ongoing manner, and this requires it to bear the associated amortization costs. In addition, if there is a drop in demand due to changes in the market climate and the anticipated scale of sales cannot be achieved, or if excess supply causes product prices to fall, there is a possibility that a portion or the entirety of the capital investment will not be recoverable or will take longer than anticipated to be recovered. This could have an adverse effect on the business performance and the financial condition of the Group.
Furthermore, the majority of the expenses of the Group are accounted for by fixed costs such as production costs associated with factory maintenance and R&D expenses, in addition to the abovementioned amortization costs accompanying capital investment. Even if there is a slump in sales due to a reduction in orders from the Group’s main customers or a drop in product demand, or if the factory operating rate decreases, it may be difficult to reduce fixed costs to compensate. As a result, a relatively small-scale drop in sales can have an adverse effect on the profitability of the Group.

18) Impairment Loss on Fixed Assets

The Group owns substantial fixed assets, consisting of both tangible fixed assets such as plant and equipment and intangible fixed assets such as goodwill obtained through the acquisition of Intersil Corporation. These fixed assets are amortized according to the accounting principles generally accepted in Japan (“Japanese GAAP”), but when there are indications of impairment, the Group examines the possibility of recovering the book value of assets based on the future cash flow to be generated from the fixed assets. It may be necessary to recognize impairment of such assets if insufficient cash flow is generated. Furthermore, the Group is considering the voluntary adoption of International Financial Reporting Standards (IFRS), starting with the fiscal year ending December 31, 2017. Under IFRS goodwill is not amortized, and a different method is used to determine impairment of fixed assets. As a result of the change in accounting standards, it may be necessary to recognize impairment of goodwill earlier than was the case under Japanese GAAP, and the impairment to be recognized may be larger.

19) Information Systems

Information systems are of growing importance in the Group's business activities. Although the Group strives to ensure the stable operation of its information systems, a significant problem with those systems, caused by factors such as natural disasters, accidents, computer viruses, or unauthorized access, could erode customer confidence and social trust and negatively affect the Group's performance.

20) Information Management

The Group has in its possession a great deal of confidential information and personal information relating to its business activities. While such confidential information is managed according to law and internal regulations specifically designed for that purpose, there is always the risk that information may leak due to unforeseen circumstances.
Should such an event occur, leaks of confidential information could damage the Group's competitive position, erode customer confidence and social trust, and negatively affect the Group's performance.

21) Legal Restrictions

The Group is subject to a variety of legal restrictions in the various countries and regions where it operates. These include requirements for approval of businesses and investments, antitrust laws and regulations, export restrictions, customs duties and tariffs, accounting standards and taxation, and environmental laws. Moving forward, the Group's businesses, performance, and financial condition may be adversely affected by increased costs and restrictions on business activities associated with the strengthening of local laws.
The Group makes use of an internal regulation system to ensure legal compliance and appropriate financial reporting. However, since by its nature an internal regulation system is inherently limited, there is no guarantee that it will accomplish its goals completely.
Consequently, the possibility that legal violations may occur in the future cannot be ruled out. Should a violation of law or other regulations occur, the Group could be subject to administrative penalties such as fines, legal penalties, or claims for compensatory damages, or its social standing could suffer. This could have an adverse effect on the businesses, business performance, and financial condition of the Group.

22) Environmental Factors

The Group strives to decrease its environmental impact with respect to diversified and complex environmental issues such as global warming, air pollution, industrial waste, tightening of hazardous substance regulation, and soil pollution. There is the possibility that, regardless of whether there is negligence in its pursuit of business activities, the Group could bear legal or social responsibility for environmental problems. Should such an event occur, the burden of expenses for resolution could potentially be high, and the Group could suffer erosion in social trust.

23) Intellectual Property

While the Group seeks to protect its intellectual property, it may not be adequately protected in certain countries and regions. In addition, there are cases in which the Group's products are developed, manufactured, and sold using licenses received from third parties. In such cases, there is the possibility that the Group may be unable to obtain necessary licenses from third parties, or may only be able to obtain licenses on terms and conditions less favorable than before.
With regard to the intellectual property rights related to the Group’s products, it is possible that a third party might file a lawsuit against the Group or its customers claiming patent infringement, or the like, and that as a result the manufacture and sale of the affected products might not be possible in certain countries or regions. It is also possible that the Group could be liable for damages to a third party or to a customer of the Group.

24) Legal Issues

As the Group conducts business worldwide, it is possible that the Group may become a party to lawsuits, investigation by regulatory authorities and other legal proceedings in various countries. In particular, the Group has been named in Canada as a defendant in a civil lawsuit related to possible violations of competition law involving smartcard chips brought by purchasers of such products. Also, the Company and its European subsidiary have been named in the United Kingdom as defendants in a civil lawsuit related to possible violations of competition law involving smartcard chips brought by purchasers of such products.
It is difficult to predict the outcome of the legal proceedings to which the Group is presently a party or to which it may become a party in future. The resolution of such proceedings may require considerable time and expense, and it is possible that the Group may be required to pay compensation for damages, possibly resulting in significant adverse effects to the business, performance, and financial condition of the Group.



                            XXX  .  V00000 International Finance Trends 

International trade provides unique opportunities and risks for investors. U.S. exporters looking to expand can learn how best to invest by analyzing current trends.

Monday, 04 December 2017

Modern electronic equipment for transmission and distribution, and communication in modern finance in today's world






                       Advantages and Disadvantages of Electronic Communication


Technology in Modern Communication
Communication is needed for decision-making, coordination, control, and planning. It is required for processing information in the accounting, finance, and personnel departments, for establishing public relations, and in the sales, market research, production, and purchasing departments. Communication with the government, shareholders, prospective investors, customers, and others is also required for the day-to-day functioning of a business concern. The conventional process of communication is not sufficient to meet the multidimensional needs of modern enterprises, so the need for modern communication technology has emerged to serve modern business operations. Worldwide communication has been facilitated by the electronic transmission of data, which connects individuals almost instantly, regardless of geographic location.
Definition of Electronic Communication
Communication using electronic media is known as electronic communication. Such communication allows the transmission of messages or information using computer systems, fax machines, e-mail, tele- or video conferencing, and satellite networks. People can easily share conversations, pictures, images, sounds, graphics, maps, interactive software, and thousands of other things thanks to the development of electronic communication. Because of electronic technology, jobs, working locations, and cultures are changing, and people can easily access worldwide communication without any physical movement.
L.C. Bovee and others said, “Electronic communication is the transmission of information using advanced techniques such as computer modems, facsimile machines, voice mail, electronic mail, teleconferencing, video cassettes, and private television networks.”



Advantages of Electronic Communication
The following points highlight on the advantages of electronic communication:
1. Speedy transmission: It requires only a few seconds to communicate through electronic media because it supports quick transmission.
2. Wide coverage: The world has become a global village, and communication around the globe takes only a second.

3. Low cost: Electronic communication saves time and money. For example, a text message is far cheaper than a traditional letter.

4. Exchange of feedback: Electronic communication allows the instant exchange of feedback, making communication more complete and effective.
5. Managing global operations: Thanks to the advancement of electronic media, business managers can easily control operations across the globe. Video conferencing, teleconferencing, e-mail, and mobile communication help managers in this regard.

Disadvantages of Electronic Communication
Electronic communication is not free from the below limitations:
1. The volume of data: The volume of telecommunicated information is increasing so rapidly that business people are unable to absorb it within the relevant time limit.
2. The cost of development: Electronic communication requires huge investment in infrastructure. Frequent changes in technology also demand further investment.
3. Legal status: Data or information, if faxed, may be distorted and thus carry no weight in the eyes of the law.
4. Undelivered data: Data may not be retrievable due to system error or a fault in the technology, so the required service will be delayed.

5. Dependency: Technology changes every day, and developing countries that cannot afford new or advanced technology face a problem: they must remain dependent on developed countries for access to the global network.




                                          XXX  .  V0  Modern Finance Theory

This timely text examines the role that financial markets and institutions play in modern macroeconomics. Over the last couple of decades there has been a fair amount of research on microeconomic models of market failure and the impact of such failures on business cycles and other macroeconomic phenomena, on the importance of efficient financial markets in achieving economic growth, and on the constraints on growth in the absence of a working financial system.

Outcomes are easy to measure in financial markets: you either beat the index or you don’t. And the results could not be clearer about how few people possess any real skill.
In John Burr Williams's vision, experts, not traders, would set the prices of the securities in the marketplace. “The time seems to be ripe for the publication of elaborate monographs on the investment value of all the well-known stocks and bonds listed on the exchanges,” he wrote. “The last word on the true worth of any security will never be said by anyone, but men who have devoted their whole lives to a particular industry should be able to make a better appraisal of its securities than the outsider can.”

Williams’s new science of investing featured the now-familiar dividend discount model. In short, this model quantifies the idea that the investment value of a security is equal to “the present worth of the expected future dividends.”7 Williams laid this out in simple formula form:

Armed with this formula and a desired rate of interest, investors needed only a way to forecast future dividends. Williams believed he had a straightforward solution. The investor should “make a budget showing the company’s growth in assets, debt, earnings, and dividends during the years to come.”8 Rather than using an accountant’s ledger book, however, Williams proposed that investors use algebraic formulas instead. Williams believed that growth could be modeled using a logarithmic curve:
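The growth formula is likewise missing here. The S-shaped growth Williams had in mind is conventionally captured by a logistic curve, for example:

$$ g(t) = \frac{L}{1 + e^{-k(t - t_0)}} $$

where $L$ is the saturation level of the quantity being modeled, $k$ the steepness of growth, and $t_0$ the inflection point. This is a standard modern form, not necessarily Williams's exact notation.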

With this logarithmic model for growth (the familiar “S-curve” that Bain & Company consultants so dearly love) and a few additional formulas to provide the “terminal value” once competitive forces had stalled growth, Williams believed the true value of a company could be determined.
Combining a company’s budget with these new algebraic formulas offered an approach that Williams believed was “altogether new to the accountant’s art.” Williams grew almost giddy with excitement contemplating the beauty of his new technique. “By the manipulation of algebraic symbols, the warp of development is traced through the woof of time with an ease unknown to ordinary accounting.”9 One can only imagine what literary metaphors might decorate Williams’s prose had he lived to see the modern Excel model!
Yet as he waxed philosophical about the new art of algebraic accounting, Williams acknowledged that some might be skeptical of the supposed ease of his method. “It may be objected that no one can possibly look with certainty so far into the future as the new methods require and that the new methods of appraisal must therefore be inferior to the old,” he wrote. “But is good forecasting, after all, so completely impossible? Does not experience show that careful forecasting—or foresight as it is called when it turns out to be correct—is very often so nearly right as to be extremely helpful to the investor?”10
Williams acknowledged this key limitation of his model—uncertainty about the future. But this, he argued, was a problem for others, not the fault of his beautiful mathematical models. “If the practical man, whether investment analyst or engineer, fails to use the right data in his formulas, it is no one’s fault but his own,” he wrote.11
Williams’s theories had grown out of the chaos of the Crash of 1929, and he retained a pessimism about the “practical men” that were his contemporaries. “Since market price depends on popular opinion, and since the public is more emotional than logical, it is foolish to expect a relentless convergence of market price towards investment value,” he wrote.12
But the generation of finance researchers emerging after World War II was more optimistic. The success of American industry—and American science—in defeating the Nazis inspired confidence in a new generation of researchers who believed that better math and planning could change the course of human affairs.
Markowitz, Sharpe, and the Capital Asset Pricing Model
Harry Markowitz was one of these men. Born in 1927, a teenager during the war, he studied economics at the University of Chicago in the late 1940s. After Chicago, Markowitz researched the application of statistics and mathematics to economics and business first at the Cowles Commission and then at the RAND Corporation. He was at the epicenter of the new wave of technocratic, military-scientific planning institutes.
Markowitz understood the key problem with Williams’s ideas. “The hypothesis (or maxim) that the investor does (or should) maximize discounted return must be rejected,” he wrote in his 1952 paper.13 “Since the future is not known with certainty, it must be ‘expected’ or ‘anticipated’ returns which we discount.”
Without accounting for risk and uncertainty, Markowitz observed, Williams’s theory “implies that the investor places all his funds in the security with the greatest discounted value.”14 This defied common sense, however, and so Markowitz set out to update Williams’s theory. “Diversification is both observed and sensible; a rule of behavior which does not imply the superiority of diversification must be rejected both as a hypothesis and as a maxim,” he argued.15
Markowitz’s explanation for why diversification was both observed and sensible was that investors care about both returns and about “variance.” By mixing together different securities with similar expected returns, investors could reduce variance while still achieving the desired return. Investors, he believed, were constantly balancing expected returns with expected variance in building their portfolios.
The logical next step in this argument was that the prices of different assets should have a linear relationship to their expected variance to maintain a market equilibrium where one could only obtain a higher return by taking on a higher amount of variance. This was the logic of the Capital Asset Pricing Model developed by Markowitz’s student William Sharpe.
Sharpe’s Capital Asset Pricing Model was meant as “a market equilibrium theory of asset prices under conditions of risk.”16 He believed that “only the responsiveness of an asset’s rate of return to the level of economic activity is relevant in assessing its risk. Prices will adjust until there is a linear relationship between the magnitude of such responsiveness and expected returns.”17 Sharpe measured responsiveness by looking at each security’s historical variance relative to the market, which he labeled its beta.
To be sure, not everyone from Williams’s era was so enthusiastically enamored with the prospect of experts planning the appropriate pricing of investments. At the same time as Williams was completing his 1937 dissertation, Friedrich Hayek was attacking this type of planning mindset. If society were turned over to experts and planners, he wrote:
change will be quite as frequent as under capitalism. It will also be quite as unpredictable. All action will have to be based on anticipation of future events and the expectations on the part of different entrepreneurs will naturally differ. The decision to whom to entrust a given amount of resources will have to be made on the basis of individual promises of future return. Or, rather, it will have to be made on the statement that a certain return is to be expected with a certain degree of probability. There will, of course, be no objective test of the magnitude of the risk. But who is then to decide whether the risk is worth taking?18
Alas, it would be another forty years before the notion of state planning by experts generally fell out of fashion and Hayek was given the Nobel Prize for his 1940s era work on price discovery in the broader field of economics. It is enough to note for our purposes that in financial economics many of the hangovers of the Williams/Markowitz/Sharpe era, and its religious faith in algebraic calculations and experts’ forecasting power, lasted well beyond the nightmare of the 1970s.
The Empirical Invalidation of Modern Finance Theory
The intellectual history of modern finance theory thus followed a simple logical flow. A stock is worth the net present value of future dividends. But investors must not only care about expected return—otherwise they would put all their money in one stock—they must also care about the variance of their investment portfolios. If investors want to maximize expected return and minimize expected variance, then expected variance should have a linear relationship with expected return.
This is the core idea: combining a forecast of future cash flows with an assessment of future variance based on the historical standard deviation of a security’s price relative to the market should produce an “equilibrium” asset price.
Williams, Markowitz, and Sharpe were all brilliant men, and their models have a certain mathematical elegance. But there is one big problem with their theoretical work: their models don’t work. The accumulated empirical evidence has invalidated nearly every conclusion they present.
Robert Shiller won the Nobel Prize for conclusively proving that the dividend discount model was a failure. In a paper for the Cowles Commission thirty years after Markowitz’s work was published, Shiller took historical earnings, interest rates, and stock prices, and calculated the true price at every moment of every stock in the market with the benefit of perfect hindsight. He found that doing this could explain less than 20 percent of the variance in stock prices. He concluded that changes in dividends and discount rates could “not remotely justify stock price movements.”
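The flavor of that hindsight calculation can be sketched in a few lines of Python. This is a simplification of Shiller's method, assuming a constant discount rate, a finite horizon, an assumed terminal value, and invented dividend figures:

```python
import numpy as np

def ex_post_value(dividends, rate, terminal_value=0.0):
    """Perfect-foresight value at each date: all later dividends discounted back.

    Works backward through the sample: the value at date t is that date's
    dividend plus the following date's value, discounted one period.
    """
    values = np.zeros(len(dividends))
    future = terminal_value
    for t in range(len(dividends) - 1, -1, -1):
        future = (dividends[t] + future) / (1.0 + rate)
        values[t] = future
    return values

# Invented dividend stream, discount rate, and terminal value (illustrative only).
dividends = np.array([1.00, 1.05, 1.08, 1.12, 1.15, 1.20])
print(ex_post_value(dividends, rate=0.07, terminal_value=20.0))
```

Shiller's point was that a series computed this way turns out to be far smoother than the actual price series it is supposed to explain.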
Below is a graph showing the actual future value of earnings discounted back at the actual interest rates relative to the actual stock market index over time. As you can see, the stock market index is far more volatile than could possibly be explained through the dividend discount model.

Future earnings and interest rates do not explain stock price movements as Williams thought they would. Variance also fails to explain stock prices, as Markowitz and Sharpe believed it would.
In a review of forty years of evidence, Nobel Prize winner Eugene Fama and his research partner Ken French declared in 2004 that the Capital Asset Pricing Model (CAPM) was bunk: “despite its seductive simplicity, the CAPM’s empirical problems probably invalidate its use in applications.”19 These models fail because they assume predictability: they assume that it is possible to forecast future dividends and future variance by using past data.
The core prediction of the model—that stocks with higher price volatility should produce higher returns—has failed every empirical test since 1972. Below is a graph from Fama and French showing CAPM’s predictions relative to reality.

A New Philosophy of Unpredictability
So what is the alternative? A philosophy based on unpredictability. As we have seen time and time again, experts cannot predict the future. The psychologist Philip Tetlock ran a twenty-year-long study in which he picked 284 experts on politics and economics and asked them to assess the probability that various things would or would not come to pass.20 By the end of the study, in 2003, the experts had made 82,361 forecasts. They gave their forecasts in the form of three possible futures and were asked to rate the probability of each of the three. Tetlock found that assigning a 33 percent probability to each possible alternative future would have been more accurate than relying on the experts.
Think back to 2016: The experts told us that Brexit would never happen. They told us that Trump would never win the Republican primary, much less the general election. In both cases, prominent academics and businessmen warned of terrible consequences for financial markets (consequences that, of course, never came to pass). These are only two glaring examples of the larger truth Tetlock has identified: experts are absolutely terrible when it comes to predicting the future.
Stanford economist Mordecai Kurz argues that there are a variety of rational interpretations of historical data that lead to different logical predictions about the future (just think of all the different analysts and researchers currently putting out very well-researched and very well-informed predictions for the price of oil one year from now). But only one of those outcomes will come true, and when the probability of that one outcome goes from less than 100 percent to 100 percent, every other alternative history is foreclosed.21 Kurz has developed a model that explains 95 percent of market volatility by assuming that investors make rational forecasts that future events nevertheless prove incorrect.
The forecast error arises from investors’ inability to accurately predict the future. What looks inevitable in retrospect looks contingent in the moment, the product of what Thomas Wolfe called “that dark miracle of chance that makes new magic in a dusty world.”
In his 1962 classic The Structure of Scientific Revolutions, historian of science Thomas Kuhn posited that scientists rely on simplified models, or paradigms, to understand the observed facts. These paradigms are used to guide scientific enquiry until such enquiry turns up facts that cast these paradigms into doubt. Scientific revolutions are then required to develop new paradigms to replace the old, disproven ones.
The economists Harrison Hong, Jeremy Stein, and Jialin Yu have posited that markets function in a similar way.22 Investors come up with simplified models to understand individual securities. When events unfold that suggest that these simple models are incorrect, investors are forced to revise their interpretation of historical data and develop new paradigms. Stock prices move dramatically to adjust to the new paradigm.
Ever-changing paradigms are necessary because the world is infinitely complex, and forecasting would be impossible without simplification. Think of all of the many things that affect a stock’s price: central bank policy, fund flows, credit conditions, geopolitical events, oil prices, the cross-holdings of the stocks’ owners, future earnings, executive misbehavior, fraud, litigation, shifting consumer preferences, etc. No model of a stock’s price could ever capture these dynamics, and so most investors rely instead on dividend discount models married to investment paradigms (e.g., a growth investor who looks for companies whose growth is underpriced relative to the market combines a paradigm about predicting growth with a dividend discount model that compares the rate of that growth to what is priced into the stock). But both these dividend discount models and these investment paradigms are too simple and are thus frequently proven wrong.
Market volatility results from these forecasting errors in predictable ways that fit with our empirical observations of stock prices. Stock returns are stochastic, rather than linear, reacting sharply to paradigm shifts. High-priced glamour stocks are more apt to experience unusually large negative price movements, usually after a string of negative news. The converse is true for value stocks. As a result, stocks with negative paradigms outperform stocks with positive paradigms because, when those paradigms are proven wrong, the stock prices move dramatically. These empirical truths have been widely validated.23
A more humanistic approach to finance suggests that the most intellectually honest approach—and the most profitable trading strategy—is to bet against the experts. This theory has the benefit of wide empirical support. Consider two natural conclusions that flow from a philosophy of unpredictability: preferring low-cost funds over high-cost funds and preferring contrarian strategies that buy out-of-favor value stocks. Evidence shows that there is a significant negative correlation between expenses paid to management companies and returns for investors.24
In direct contradiction to the capital asset pricing model, value stocks significantly outperform growth stocks.25 Large value stocks outperform large growth stocks by 0.26 percent per month controlling for CAPM, and small value stocks outperform small growth stocks by 0.78 percent per month controlling for CAPM. This is a significant and dramatic difference in return that stems entirely from betting on out-of-favor stocks that the experts dislike, and avoiding the glamorous, in-favor stocks that professional investors like.
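To put those monthly premiums in annual terms, simple compounding arithmetic gives roughly:

$$ (1 + 0.0026)^{12} - 1 \approx 3.2\% \qquad\text{and}\qquad (1 + 0.0078)^{12} - 1 \approx 9.8\% $$

per year for the large-cap and small-cap value premiums, respectively.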
These findings are consistent with an assumption of unpredictability. Those who try to predict fail, and the stocks of companies that are predicted to grow underperform the stocks of companies that are predicted not to grow. Growth is simply not predictable.26
Broader Implications
The man who would go on to become the youngest recipient of the Nobel Prize in economics, Kenneth Arrow, began his career during World War II in the Weather Division of the U.S. Army Air Force. The division was responsible for turning out long-range weather forecasts.
Arrow ran an analysis of the forecasts and found that his group’s predictions failed to beat the null hypothesis of historical averages. He and his fellow officers submitted a series of memos to the commanding general suggesting that, in light of this finding, the group should be disbanded and the manpower reallocated.
After months of waiting in frustration for a response, they received a terse response from the general’s secretary. “The general is well aware that your division’s forecasts are worthless. However, they are required for planning purposes.”
Finance professionals rely on discounted cash flow models and capital asset pricing assumptions not because they are correct but because they are required for planning purposes. These tools lend social power to the excellent sheep of American society, the conventionally smart but dull and unimaginative strivers: the lawyers of the baby-boom generation and the MBAs of the millennial era.
These managerial capitalists are not trying to remake the world. They are neither craftsmen, pursuing a narrow discipline for the satisfaction of mastering a métier, nor are they entrepreneurs, pursuing bold ideas to satisfy a creative or competitive id. Rather, they are seeking to extract rents from “operating businesses” by taking on roles in private equity or “product management” where their prime value add will be the creation of multi-tab Excel models and beautifully formatted PowerPoint presentations.
Ironically, these theories were invented to avoid bubbles, to replace the volatility of markets with the predictability of an academic discipline. But corporate America’s embrace of empirically disproven theories speaks instead to a hollowness at the core—the prioritization of planning over purpose, of professional advancement over craftsmanship, and rent-seeking over entrepreneurship.
Harvard Business School and the Stanford Graduate School of Business are the Mecca and Medina of this new secular faith. The curriculum is divided between courses that flatter the vanity of the students (classes on leadership, life design, and interpersonal dynamics) and courses that promote empirically invalidated theories (everything with the word finance in the course title). Students can learn groupthink and central planning tools in one fell swoop.
Indeed, the problem of the business schools is so clear that a few clever wags on Wall Street now use the “Harvard MBA Indicator” as a tool to detect bubbles. If more than 30 percent of Harvard MBAs take jobs in finance, it is an effective sell signal for stocks.
Our capitalist economy offers every possible incentive for accuracy and the achievement of excellence. Why do people keep using bad theories that have failed in practice and have been academically invalidated? Not because these theories and methods produce better outcomes for society, but because they create better outcomes for the middle manager, creating a set of impenetrably complex models that mystify the common man and thus assure the value of the expert, of the MBA, of the two-years-at-an-investment-bank savant.
Active investment management does not work for investors because it was not designed to benefit them: it was designed to benefit the managers. The last decade of sclerotic economic growth has shown that, in many cases, corporate management teams are not working to benefit the shareholders, the employees, or anyone other than themselves.


                         XXX  .  V00 DEVELOPING NEW  BUSINESS MODEL 

Developing New Business Models

Most executives assume that creating a sustainable business model entails simply rethinking the customer value proposition and figuring out how to deliver a new one. However, successful models include novel ways of capturing revenues and delivering services in tandem with other companies. In 2008 FedEx came up with a novel business model by integrating the Kinko’s chain of print shops that it had acquired in 2004 with its document-delivery business. Instead of shipping copies of a document from, say, Seattle to New York, FedEx now asks customers if they would like to electronically transfer the master copy to one of its offices in New York. It prints and binds the document at an outlet there and can deliver copies anywhere in the city the next morning. The customer gets more time to prepare the material, gains access to better-quality printing, and can choose from a wide range of document formats that FedEx provides. The document travels most of the way electronically and only the last few miles in a truck. FedEx’s costs shrink and its services become extremely eco-friendly.
Some companies have developed new models just by asking at different times what their business should be. That’s what Waste Management, the $14 billion market leader in garbage disposal, did. Two years ago it estimated that some $9 billion worth of reusable materials might be found in the waste it carried to landfills each year. At about the same time, its customers, too, began to realize that they were throwing away money. Waste Management set up a unit, Green Squad, to generate value from waste. For instance, Green Squad has partnered with Sony in the United States to collect electronic waste that used to end up in landfills. Instead of being just a waste-trucking company, Waste Management is showing customers both how to recover value from waste and how to reduce waste.