Monday, 04 December 2017

Modern electronic equipment for transmission and distribution, and communication in modern finance in today's world






                       Advantages and Disadvantages of Electronic Communication


Technology in Modern Communication
Communication is needed for decision-making, coordination, control, and planning. Communication is required for processing information in the accounting department, finance department, personnel department, establishment of public relations, sales department, market research, production department, purchase department, etc. Communication with the government, shareholders, prospective investors, customers, etc. is also required for the day-to-day functioning of the business concern. The conventional process of communication is not sufficient to meet the multidimensional needs of modern business enterprises, so the need for modern communication technology emerges to meet the needs of modern business operations. Worldwide communication has been facilitated by the electronic transmission of data, which connects individuals, regardless of geographic location, almost instantly.
Definition of Electronic Communication
Communication using electronic media is known as electronic communication. Such communication allows transmission of messages or information using computer systems, fax machines, e-mail, telex or video conferencing, and satellite networks. People can easily share conversations, pictures, images, sound, graphics, maps, interactive software, and thousands of other things thanks to the development of electronic communication. Because of electronic technology, jobs, working locations, and cultures are changing, and people can access worldwide communication without any physical movement.
L.C. Bovee and others said, “Electronic communication is the transmission of information using advanced techniques such as computer modems, facsimile machines, voice mail, electronic mail, teleconferencing, video cassettes, and private television networks.”



Advantages of Electronic Communication
The following points highlight the advantages of electronic communication:
1. Speedy transmission:  It requires only a few seconds to communicate through electronic media because it supports quick transmission.
2. Wide coverage: The world has become a global village, and communication around the globe takes only seconds.

3. Low cost: Electronic communication saves time and money. For example, a text SMS is cheaper than a traditional letter.

4. Exchange of feedback: Electronic communication allows the instant exchange of feedback, making communication far more effective.
5. Managing global operations: Due to the advancement of electronic media, business managers can easily oversee operations across the globe. Video conferencing, teleconferencing, e-mail, and mobile communication are helping managers in this regard.

Disadvantages of Electronic Communication
Electronic communication suffers from the following limitations:
1. The volume of data: The volume of telecommunication information is increasing at such a fast rate that business people are unable to absorb it within the relevant time limit.
2. The cost of development: Electronic communication requires huge investment for infrastructural development. Frequent change in technology also demands further investment.
3. Legal status: Data or information, if faxed, may be distorted and may therefore carry no value in the eyes of the law.
4. Undelivered data: Data may not be retrieved due to system error or a fault in the technology. Hence the required service will be delayed.

5. Dependency: Technology changes every day, and developing countries face a problem because they cannot afford new or advanced technology. They therefore remain dependent on developed countries for access to the global network.




                                          Modern Finance Theory

This timely text examines the role that financial markets and institutions play in modern macroeconomics. Over the last couple of decades there has been a fair amount of research on microeconomic models of market failure and the impact of such failures on business cycles and other macroeconomic phenomena, on the importance of efficient financial markets in achieving economic growth, and on the constraints on growth in the absence of a working financial system.

Outcomes are easy to measure in financial markets: you either beat the index or you don’t. And the results could not be clearer about how few people possess any real skill.
Experts, not traders, would set the prices of the securities in the marketplace, argued John Burr Williams. “The time seems to be ripe for the publication of elaborate monographs on the investment value of all the well-known stocks and bonds listed on the exchanges,” he wrote. “The last word on the true worth of any security will never be said by anyone, but men who have devoted their whole lives to a particular industry should be able to make a better appraisal of its securities than the outsider can.”

Williams’s new science of investing featured the now-familiar dividend discount model. In short, this model quantifies the idea that the investment value of a security is equal to “the present worth of the expected future dividends.”7 Williams laid this out in simple formula form:
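The formula itself did not survive in this copy of the text. A standard statement of the dividend discount model that the surrounding prose describes, in conventional modern notation rather than Williams's original symbols, is:

```latex
V_0 = \sum_{t=1}^{\infty} \frac{D_t}{(1+r)^t}
```

where V_0 is the investment value of the security today, D_t the dividend expected in year t, and r the investor's desired rate of interest.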

Armed with this formula and a desired rate of interest, investors needed only a way to forecast future dividends. Williams believed he had a straightforward solution. The investor should “make a budget showing the company’s growth in assets, debt, earnings, and dividends during the years to come.”8 Rather than using an accountant’s ledger book, however, Williams proposed that investors use algebraic formulas instead. Williams believed that growth could be modeled using a logarithmic curve:
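The growth formula is likewise missing here. A minimal sketch of the S-shaped curve the prose describes, written as a modern logistic function (the symbols are assumptions for illustration, not Williams's original notation):

```latex
g(t) = \frac{G_{\max}}{1 + a\,e^{-kt}}
```

where g(t) is the company's size (assets or earnings) at time t, G_max the ceiling imposed by competitive forces, and a and k fitted constants.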

With this logarithmic model for growth (the familiar “S-curve” that Bain & Company consultants so dearly love) and a few additional formulas to provide the “terminal value” once competitive forces had stalled growth, Williams believed the true value of a company could be determined.
Combining a company’s budget with these new algebraic formulas offered an approach that Williams believed was “altogether new to the accountant’s art.” Williams grew almost giddy with excitement contemplating the beauty of his new technique. “By the manipulation of algebraic symbols, the warp of development is traced through the woof of time with an ease unknown to ordinary accounting.”9 One can only imagine what literary metaphors might decorate Williams’s prose had he lived to see the modern Excel model!
Yet as he waxed philosophical about the new art of algebraic accounting, Williams acknowledged that some might be skeptical of the supposed ease of his method. “It may be objected that no one can possibly look with certainty so far into the future as the new methods require and that the new methods of appraisal must therefore be inferior to the old,” he wrote. “But is good forecasting, after all, so completely impossible? Does not experience show that careful forecasting—or foresight as it is called when it turns out to be correct—is very often so nearly right as to be extremely helpful to the investor?”10
Williams acknowledged this key limitation of his model—uncertainty about the future. But this, he argued, was a problem for others, not the fault of his beautiful mathematical models. “If the practical man, whether investment analyst or engineer, fails to use the right data in his formulas, it is no one’s fault but his own,” he wrote.11
Williams’s theories had grown out of the chaos of the Crash of 1929, and he retained a pessimism about the “practical men” that were his contemporaries. “Since market price depends on popular opinion, and since the public is more emotional than logical, it is foolish to expect a relentless convergence of market price towards investment value,” he wrote.12
But the generation of finance researchers emerging after World War II was more optimistic. The success of American industry—and American science—in defeating the Nazis inspired confidence in a new generation of researchers who believed that better math and planning could change the course of human affairs.
Markowitz, Sharpe, and the Capital Asset Pricing Model
Harry Markowitz was one of these men. Born in 1927, a teenager during the war, he studied economics at the University of Chicago in the late 1940s. After Chicago, Markowitz researched the application of statistics and mathematics to economics and business first at the Cowles Commission and then at the RAND Corporation. He was at the epicenter of the new wave of technocratic, military-scientific planning institutes.
Markowitz understood the key problem with Williams’s ideas. “The hypothesis (or maxim) that the investor does (or should) maximize discounted return must be rejected,” he wrote in his 1952 paper.13 “Since the future is not known with certainty, it must be ‘expected’ or ‘anticipated’ returns which we discount.”
Without accounting for risk and uncertainty, Markowitz observed, Williams’s theory “implies that the investor places all his funds in the security with the greatest discounted value.”14 This defied common sense, however, and so Markowitz set out to update Williams’s theory. “Diversification is both observed and sensible; a rule of behavior which does not imply the superiority of diversification must be rejected both as a hypothesis and as a maxim,” he argued.15
Markowitz’s explanation for why diversification was both observed and sensible was that investors care about both returns and about “variance.” By mixing together different securities with similar expected returns, investors could reduce variance while still achieving the desired return. Investors, he believed, were constantly balancing expected returns with expected variance in building their portfolios.
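A minimal numerical sketch of this point (all figures are invented for illustration): two securities with the same expected return and the same variance, mixed 50/50, keep the expected return but cut the variance whenever their correlation is less than perfect.

```python
import numpy as np

# Two hypothetical securities: identical 8% expected returns,
# identical 20% standard deviations, correlation 0.3.
mu = np.array([0.08, 0.08])
sd = np.array([0.20, 0.20])
rho = 0.3
cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

w = np.array([0.5, 0.5])            # a 50/50 portfolio
exp_return = w @ mu                 # 0.08 -- same as either security alone
variance = w @ cov @ w              # 0.026 versus 0.040 for a single security

print(exp_return, variance ** 0.5)  # 8% return, ~16.1% std dev instead of 20%
```

The expected return is a weighted average of the parts, but the variance is not; that asymmetry is the whole argument for diversification.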
The logical next step in this argument was that the prices of different assets should have a linear relationship to their expected variance to maintain a market equilibrium where one could only obtain a higher return by taking on a higher amount of variance. This was the logic of the Capital Asset Pricing Model developed by Markowitz’s student William Sharpe.
Sharpe’s Capital Asset Pricing Model was meant as “a market equilibrium theory of asset prices under conditions of risk.”16 He believed that “only the responsiveness of an asset’s rate of return to the level of economic activity is relevant in assessing its risk. Prices will adjust until there is a linear relationship between the magnitude of such responsiveness and expected returns.”17 Sharpe measured responsiveness by looking at each security’s historical variance relative to the market, which he labeled its beta.
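In modern notation, the equilibrium condition Sharpe derived is usually written as:

```latex
E[R_i] = R_f + \beta_i \, (E[R_m] - R_f),
\qquad
\beta_i = \frac{\mathrm{Cov}(R_i,\,R_m)}{\mathrm{Var}(R_m)}
```

where R_f is the risk-free rate, R_m the market return, and beta_i the "responsiveness" described above: in equilibrium, expected return is a linear function of beta and of nothing else.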
To be sure, not everyone from Williams’s era was so enthusiastically enamored with the prospect of experts planning the appropriate pricing of investments. At the same time as Williams was completing his 1937 dissertation, Friedrich Hayek was attacking this type of planning mindset. If society were turned over to experts and planners, he wrote:
change will be quite as frequent as under capitalism. It will also be quite as unpredictable. All action will have to be based on anticipation of future events and the expectations on the part of different entrepreneurs will naturally differ. The decision to whom to entrust a given amount of resources will have to be made on the basis of individual promises of future return. Or, rather, it will have to be made on the statement that a certain return is to be expected with a certain degree of probability. There will, of course, be no objective test of the magnitude of the risk. But who is then to decide whether the risk is worth taking?18
Alas, it would be another forty years before the notion of state planning by experts generally fell out of fashion and Hayek was given the Nobel Prize for his 1940s-era work on price discovery in the broader field of economics. It is enough to note for our purposes that in financial economics many of the hangovers of the Williams/Markowitz/Sharpe era, and its religious faith in algebraic calculations and experts’ forecasting power, lasted well beyond the nightmare of the 1970s.
The Empirical Invalidation of Modern Finance Theory
The intellectual history of modern finance theory thus followed a simple logical flow. A stock is worth the net present value of future dividends. But investors must not only care about expected return—otherwise they would put all their money in one stock—they must also care about the variance of their investment portfolios. If investors want to maximize expected return and minimize expected variance, then expected variance should have a linear relationship with expected return.
This is the core idea: combining a forecast of future cash flows with an assessment of future variance based on the historical standard deviation of a security’s price relative to the market should produce an “equilibrium” asset price.
Williams, Markowitz, and Sharpe were all brilliant men, and their models have a certain mathematical elegance. But there is one big problem with their theoretical work: their models don’t work. The accumulated empirical evidence has invalidated nearly every conclusion they present.
Robert Shiller won the Nobel Prize for conclusively proving that the dividend discount model was a failure. In a paper for the Cowles Commission thirty years after Markowitz’s work was published, Shiller took historical earnings, interest rates, and stock prices, and calculated the true price at every moment of every stock in the market with the benefit of perfect hindsight. He found that doing this could explain less than 20 percent of the variance in stock prices. He concluded that changes in dividends and discount rates could “not remotely justify stock price movements.”
Below is a graph showing the actual future value of earnings discounted back at the actual interest rates relative to the actual stock market index over time. As you can see, the stock market index is far more volatile than could possibly be explained through the dividend discount model.

Future earnings and interest rates do not explain stock price movements as Williams thought they would. Variance also fails to explain stock prices, as Markowitz and Sharpe thought it would.
In a review of forty years of evidence, Nobel Prize winner Eugene Fama and his research partner Ken French declared in 2004 that the Capital Asset Pricing Model (CAPM) was bunk: “despite its seductive simplicity, the CAPM’s empirical problems probably invalidate its use in applications.”19 These models fail because they assume predictability: they assume that it is possible to forecast future dividends and future variance by using past data.
The core prediction of the model—that stocks with higher price volatility should produce higher returns—has failed every empirical test since 1972. Below is a graph from Fama and French showing CAPM’s predictions relative to reality.

A New Philosophy of Unpredictability
So what is the alternative? A philosophy based on unpredictability. As we have seen time and time again, experts cannot predict the future. The psychologist Philip Tetlock ran a twenty-year-long study in which he picked 284 experts on politics and economics and asked them to assess the probability that various things would or would not come to pass.20 By the end of the study, in 2003, the experts had made 82,361 forecasts. They gave their forecasts in the form of three possible futures and were asked to rate the probability of each of the three. Tetlock found that assigning a 33 percent probability to each possible alternative future would have been more accurate than relying on the experts.
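Tetlock scored these forecasts using, in essence, the Brier score: the squared gap between stated probabilities and what actually happened, where lower is better. A toy illustration with invented numbers (not Tetlock's data) shows how the uniform 33 percent guess can beat a confidently wrong expert:

```python
import numpy as np

def brier(probs, outcome):
    """Mean squared error between forecast probabilities and
    the one-hot vector of what actually occurred (lower is better)."""
    actual = np.zeros(len(probs))
    actual[outcome] = 1.0
    return np.mean((np.asarray(probs) - actual) ** 2)

# Three-outcome question; outcome 2 is what actually happens.
uniform = [1/3, 1/3, 1/3]
confident_but_wrong = [0.80, 0.15, 0.05]      # expert bets on outcome 0

print(brier(uniform, outcome=2))              # ~0.222
print(brier(confident_but_wrong, outcome=2))  # ~0.522 -- worse than guessing
```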
Think back to 2016: The experts told us that Brexit would never happen. They told us that Trump would never win the Republican primary, much less the general election. In both cases, prominent academics and businessmen warned of terrible consequences for financial markets (consequences that, of course, never came to pass). These are only two glaring examples of the larger truth Tetlock has identified: experts are absolutely terrible when it comes to predicting the future.
Stanford economist Mordecai Kurz argues that there are a variety of rational interpretations of historical data that lead to different logical predictions about the future (just think of all the different analysts and researchers currently putting out very well-researched and very well-informed predictions for the price of oil one year from now). But only one of those outcomes will come true, and when the probability of that one outcome goes from less than 100 percent to 100 percent, every other alternative history is foreclosed.21 Kurz has developed a model that explains 95 percent of market volatility by assuming that investors make rational forecasts that future events nevertheless prove incorrect.
The forecast error arises from investors’ inability to accurately predict the future. What looks inevitable in retrospect looks contingent in the moment, the product of what Thomas Wolfe called “that dark miracle of chance that makes new magic in a dusty world.”
In his 1962 classic The Structure of Scientific Revolutions, historian of science Thomas Kuhn posited that scientists rely on simplified models, or paradigms, to understand the observed facts. These paradigms are used to guide scientific enquiry until such enquiry turns up facts that cast these paradigms into doubt. Scientific revolutions are then required to develop new paradigms to replace the old, disproven ones.
The economists Harrison Hong, Jeremy Stein, and Jialin Yu have posited that markets function in a similar way.22 Investors come up with simplified models to understand individual securities. When events unfold that suggest that these simple models are incorrect, investors are forced to revise their interpretation of historical data and develop new paradigms. Stock prices move dramatically to adjust to the new paradigm.
Ever-changing paradigms are necessary because the world is infinitely complex, and forecasting would be impossible without simplification. Think of all of the many things that affect a stock’s price: central bank policy, fund flows, credit conditions, geopolitical events, oil prices, the cross-holdings of the stocks’ owners, future earnings, executive misbehavior, fraud, litigation, shifting consumer preferences, etc. No model of a stock’s price could ever capture these dynamics, and so most investors rely instead on dividend discount models married to investment paradigms (e.g., a growth investor who looks for companies whose growth is underpriced relative to the market combines a paradigm about predicting growth with a dividend discount model that compares the rate of that growth to what is priced into the stock). But both these dividend discount models and these investment paradigms are too simple and are thus frequently proven wrong.
Market volatility results from these forecasting errors in predictable ways that fit with our empirical observations of stock prices. Stock returns are stochastic, rather than linear, reacting sharply to paradigm shifts. High-priced glamour stocks are more apt to experience unusually large negative price movements, usually after a string of negative news. The converse is true for value stocks. As a result, stocks with negative paradigms outperform stocks with positive paradigms because, when those paradigms are proven wrong, the stock prices move dramatically. These empirical truths have been widely validated.23
A more humanistic approach to finance suggests that the most intellectually honest approach—and the most profitable trading strategy—is to bet against the experts. This theory has the benefit of wide empirical support. Consider two natural conclusions that flow from a philosophy of unpredictability: preferring low-cost funds over high-cost funds and preferring contrarian strategies that buy out-of-favor value stocks. Evidence shows that there is a significant negative correlation between expenses paid to management companies and returns for investors.24
In direct contradiction to the capital asset pricing model, value stocks significantly outperform growth stocks.25 Large value stocks outperform large growth stocks by 0.26 percent per month controlling for CAPM, and small value stocks outperform small growth stocks by 0.78 percent per month controlling for CAPM. This is a significant and dramatic difference in return that stems entirely from betting on out-of-favor stocks that the experts dislike, and avoiding the glamorous, in-favor stocks that professional investors like.
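Compounded over a year, those monthly spreads are substantial; a quick back-of-the-envelope check of the figures just quoted:

```python
# CAPM-adjusted monthly value-over-growth spreads cited above.
large_value_premium = 0.0026    # 0.26% per month, large caps
small_value_premium = 0.0078    # 0.78% per month, small caps

annualized = lambda monthly: (1 + monthly) ** 12 - 1
print(f"large caps: {annualized(large_value_premium):.1%} per year")  # ~3.2%
print(f"small caps: {annualized(small_value_premium):.1%} per year")  # ~9.8%
```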
These findings are consistent with an assumption of unpredictability. Those who try to predict fail, and the stocks of companies that are predicted to grow underperform the stocks of companies that are predicted not to grow. Growth is simply not predictable.26
Broader Implications
The man who would go on to become the youngest recipient of the Nobel Prize in economics, Kenneth Arrow, began his career during World War II in the Weather Division of the U.S. Army Air Force. The division was responsible for turning out long-range weather forecasts.
Arrow ran an analysis of the forecasts and found that his group’s predictions failed to beat the null hypothesis of historical averages. He and his fellow officers submitted a series of memos to the commanding general suggesting that, in light of this finding, the group should be disbanded and the manpower reallocated.
After months of waiting in frustration for a response, they received a terse response from the general’s secretary. “The general is well aware that your division’s forecasts are worthless. However, they are required for planning purposes.”
Finance professionals rely on discounted cash flow models and capital asset pricing assumptions not because they are correct but because they are required for planning purposes. These tools lend social power to the excellent sheep of American society, the conventionally smart but dull and unimaginative strivers: the lawyers of the baby-boom generation and the MBAs of the millennial era.
These managerial capitalists are not trying to remake the world. They are neither craftsmen, pursuing a narrow discipline for the satisfaction of mastering a métier, nor are they entrepreneurs, pursuing bold ideas to satisfy a creative or competitive id. Rather, they are seeking to extract rents from “operating businesses” by taking on roles in private equity or “product management” where their prime value add will be the creation of multi-tab Excel models and beautifully formatted PowerPoint presentations.
Ironically, these theories were invented to avoid bubbles, to replace the volatility of markets with the predictability of an academic discipline. But corporate America’s embrace of empirically disproven theories speaks instead to a hollowness at the core—the prioritization of planning over purpose, of professional advancement over craftsmanship, and rent-seeking over entrepreneurship.
Harvard Business School and the Stanford Graduate School of Business are the Mecca and Medina of this new secular faith. The curriculum is divided between courses that flatter the vanity of the students (classes on leadership, life design, and interpersonal dynamics) and courses that promote empirically invalidated theories (everything with the word finance in the course title). Students can learn groupthink and central planning tools in one fell swoop.
Indeed, the problem of the business schools is so clear that a few clever wags on Wall Street now use the “Harvard MBA Indicator” as a tool to detect bubbles. If more than 30 percent of Harvard MBAs take jobs in finance, it is an effective sell signal for stocks.
Our capitalist economy offers every possible incentive for accuracy and the achievement of excellence. Why do people keep using bad theories that have failed in practice and have been academically invalidated? Not because these theories and methods produce better outcomes for society, but because they create better outcomes for the middle manager, creating a set of impenetrably complex models that mystify the common man and thus assure the value of the expert, of the MBA, of the two-years-at-an-investment-bank savant.
Active investment management does not work for investors because it was not designed to benefit them: it was designed to benefit the managers. The last decade of sclerotic economic growth has shown that, in many cases, corporate management teams are not working to benefit the shareholders, the employees, or anyone other than themselves.



Developing New Business Models

Most executives assume that creating a sustainable business model entails simply rethinking the customer value proposition and figuring out how to deliver a new one. However, successful models include novel ways of capturing revenues and delivering services in tandem with other companies. In 2008 FedEx came up with a novel business model by integrating the Kinko’s chain of print shops that it had acquired in 2004 with its document-delivery business. Instead of shipping copies of a document from, say, Seattle to New York, FedEx now asks customers if they would like to electronically transfer the master copy to one of its offices in New York. It prints and binds the document at an outlet there and can deliver copies anywhere in the city the next morning. The customer gets more time to prepare the material, gains access to better-quality printing, and can choose from a wide range of document formats that FedEx provides. The document travels most of the way electronically and only the last few miles in a truck. FedEx’s costs shrink and its services become extremely eco-friendly.
Some companies have developed new models just by asking at different times what their business should be. That’s what Waste Management, the $14 billion market leader in garbage disposal, did. Two years ago it estimated that some $9 billion worth of reusable materials might be found in the waste it carried to landfills each year. At about the same time, its customers, too, began to realize that they were throwing away money. Waste Management set up a unit, Green Squad, to generate value from waste. For instance, Green Squad has partnered with Sony in the United States to collect electronic waste that used to end up in landfills. Instead of being just a waste-trucking company, Waste Management is showing customers both how to recover value from waste and how to reduce waste.

New technologies provide start-ups with the ability to challenge conventional wisdom. Calera, a California start-up, has developed technology to extract carbon dioxide from industrial emissions and bubble it through seawater to manufacture cement. The process mimics that used by coral, which builds shells and reefs from the calcium and magnesium in seawater. If successful, Calera’s technology will solve two problems: Removing emissions from power plants and other polluting enterprises, and minimizing emissions during cement production. The company’s first cement plant is located in the Monterey Bay area, near the Moss Landing power plant, which emits 3.5 million tons of carbon dioxide annually. The key question is whether Calera’s cement will be strong enough when produced in large quantities to rival conventional Portland cement. The company is toying with a radical business model: It will give away cement to customers while charging polluters a fee for removing their emissions. Calera’s future is hard to predict, but its technology may well upend an established industry and create a cleaner world.
Developing a new business model requires exploring alternatives to current ways of doing business as well as understanding how companies can meet customers’ needs differently. Executives must learn to question existing models and to act entrepreneurially to develop new delivery mechanisms. As companies become more adept at this, the experience will lead them to the final stage of sustainable innovation, where the impact of a new product or process extends beyond a single market.

Creating Next-Practice Platforms

Next practices change existing paradigms. To develop innovations that lead to next practices, executives must question the implicit assumptions behind current practices. This is exactly what led to today’s industrial and services economy. Somebody once asked: Can we create a carriage that moves without horses pulling it? Can we fly like birds? Can we dive like whales? By questioning the status quo, people and companies have changed it. In like vein, we must ask questions about scarce resources: Can we develop waterless detergents? Can we breed rice that grows without water? Can biodegradable packaging help seed the earth with plants and trees?
Sustainability can lead to interesting next-practice platforms. One is emerging at the intersection of the internet and energy management. Called the smart grid, it uses digital technology to manage power generation, transmission, and distribution from all types of sources along with consumer demand. The smart grid will lead to lower costs as well as the more efficient use of energy. The concept has been around for years, but the huge investments going into it today will soon make it a reality. The grid will allow companies to optimize the energy use of computers, network devices, machinery, telephones, and building equipment, through meters, sensors, and applications. It will also enable the development of cross-industry platforms to manage the energy needs of cities, companies, buildings, and households. Technology vendors such as Cisco, HP, Dell, and IBM are already investing to develop these platforms, as are utilities like Duke Energy, SoCal Edison, and Florida Power & Light.
Two enterprise-wide initiatives help companies become sustainable. One: When a company’s top management team decides to focus on the problem, change happens quickly. For instance, in 2005 General Electric’s CEO, Jeff Immelt, declared that the company would focus on tackling environmental issues. Since then every GE business has tried to move up the sustainability ladder, which has helped the conglomerate take the lead in several industries. Two: Recruiting and retaining the right kind of people is important. Recent research suggests that three-fourths of workforce entrants in the United States regard social responsibility and environmental commitment as important criteria in selecting employers. People who are happy about their employers’ positions on those issues also enjoy working for them. Thus companies that try to become sustainable may well find it easier to hire and retain talent.
Leadership and talent are critical for developing a low-carbon economy. The current economic system has placed enormous pressure on the planet while catering to the needs of only about a quarter of the people on it, but over the next decade twice that number will become consumers and producers. Traditional approaches to business will collapse, and companies will have to develop innovative solutions.
That will happen only when executives recognize a simple truth: Sustainability = Innovation.


Challenges facing banks and financial institutions
  1. Not making enough money. Despite all of the headlines about banking profitability, banks and financial institutions still are not earning the return on investment, or return on equity, that shareholders require.
  2. Consumer expectations. These days it’s all about the customer experience, and many banks are feeling pressure because they are not delivering the level of service that consumers are demanding, especially with regard to technology.
  3. Increasing competition from financial technology companies. Financial technology (FinTech) companies are usually start-ups that use software to provide financial services. The increasing popularity of FinTech companies is disrupting the way traditional banking is done. This creates a big challenge for traditional banks, which cannot adjust quickly to the changes – not just in technology, but also in operations, culture, and other facets of the industry.
  4. Regulatory pressure. Regulatory requirements continue to increase, and banks need to spend a large part of their discretionary budget on being compliant and on building systems and processes to keep up with the escalating requirements.
These challenges continue to escalate, so traditional banks need to constantly evaluate and improve their operations in order to keep up with the fast pace of change in the banking and financial industry today. 


What Will You Do In The Digital Economy?

By now, most executives are keenly aware that the digital economy can be either an opportunity or a threat. The question is not whether they should engage their business in it. Rather, it’s how to unleash the power of digital technology while maintaining a healthy business, leveraging existing IT investments, and innovating without disrupting themselves.
Yet most of those executives are shying away from such a challenge. According to a recent study by MIT Sloan and Capgemini, only 15% of CEOs are executing a digital strategy, even though 90% agree that the digital economy will impact their industry. While these businesses ignore this reality, early adopters of digital transformation are achieving 9% higher revenue creation, 26% greater impact on profitability, and 12% more market valuation.
Why aren’t more leaders willing to transform their business and seize the opportunity of our hyperconnected world? The answer is as simple as human nature. Innately, humans are uncomfortable with the notion of change. We even find comfort in stability and predictability. Unfortunately, the digital economy is none of these – it’s fast and always evolving.

Digital transformation is no longer an option – it’s the imperative

At this moment, we are witnessing an explosion of connections, data, and innovations. And even though this hyperconnectivity has changed the game, customers are radically changing the rules – demanding simple, seamless, and personalized experiences at every touch point.
Billions of people are using social and digital communities to provide services, share insights, and engage in commerce. All the while, new channels for engaging with customers are created, and new ways for making better use of resources are emerging. It is these communities that allow companies to not only give customers what they want, but also align efforts across the business network to maximize value potential.
To seize the opportunities ahead, businesses must go beyond sensors, Big Data, analytics, and social media. More important, they need to reinvent themselves in a manner that is compatible with an increasingly digital world and its inhabitants (a.k.a. your consumers).
Here are a few companies that understand the importance of digital transformation – and are reaping the rewards:
  1. Under Armour: No longer is this widely popular athletic brand just selling shoes and apparel. It is connecting 38 million people on a digital platform. By focusing on this services side of the business, Under Armour is poised to become a lifestyle advisor and health consultant, using its product side as the enabler.
  2. Port of Hamburg: Europe’s second-largest port is keeping carrier trucks and ships productive around the clock. By fusing facility, weather, and traffic conditions with vehicle availability and shipment schedules, the Port increased container handling capacity by 178% without expanding its physical space.
  3. Haier Asia: This top-ranking multinational consumer electronics and home appliances company decided to disrupt itself before someone else did. The company used a two-pronged approach to digital transformation to create a service-based model, seizing the potential of changing consumer behaviors and accelerating product development.
  4. Uber: This startup darling is more than just a taxi service. It is transforming how urban logistics operates through a technology trifecta: Big Data, cloud, and mobile.
  5. American Society of Clinical Oncologists (ASCO): Even nonprofits can benefit from digital transformation. ASCO is transforming care for cancer patients worldwide by consolidating patient information with its CancerLinQ. By unlocking knowledge and value from the 97% of cancer patients who are not involved in clinical trials, healthcare providers can drive better, more data-driven decision making and outcomes.

It’s time to take action 

During the SAP Executive Technology Summit at SAP TechEd on October 19–20, an elite group of CIOs, CTOs, and corporate executives will gather to discuss the challenges of digital transformation and how they can solve them. With the freedom of open, candid, and interactive discussions led by SAP Board Members and senior technology leadership, delegates will exchange ideas on how to get on the right path while leveraging their existing technology infrastructure.

What Is Digital Transformation?


Achieving quantum leaps through disruption and using data in new contexts, in ways designed for more than just Generation Y — indeed, the digital transformation affects us all. It’s time for a detailed look at its key aspects.

Data finding its way into new settings

Archiving all of a company’s internal information until the end of time is generally a good idea, as it gives the boss the security that nothing will be lost. Meanwhile, enabling him or her to create bar graphs and pie charts based on sales trends – preferably in real time, of course – is even better.
But the best scenario of all is when the boss can incorporate data from external sources. All of a sudden, information on factors as seemingly mundane as the weather starts helping to improve interpretations of fluctuations in sales and to make precise modifications to the company’s offerings. When the gusts of autumn begin to blow, for example, energy providers scale back solar production and crank up their windmills. Here, external data provides a foundation for processes and decisions that were previously unattainable.
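A minimal sketch of that idea in Python (the file names and all column names are hypothetical, invented for illustration): join an external weather feed to the internal sales ledger so that fluctuations in sales can be read against the weather.

```python
import pandas as pd

# Internal sales records and an external weather feed, both keyed by date.
# Both file names and all column names are hypothetical.
sales = pd.read_csv("daily_sales.csv", parse_dates=["date"])      # date, revenue
weather = pd.read_csv("weather_feed.csv", parse_dates=["date"])   # date, wind_mps, sunshine_h

merged = sales.merge(weather, on="date", how="left")

# A first look: how does revenue co-move with wind and sunshine?
print(merged[["revenue", "wind_mps", "sunshine_h"]].corr())
```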

Quantum leaps possible through disruption

While these advancements involve changes in existing workflows, there are also much more radical approaches that eschew conventional structures entirely.
“The aggressive use of data is transforming business models, facilitating new products and services, creating new processes, generating greater utility, and ushering in a new culture of management,” states Professor Walter Brenner of the University of St. Gallen in Switzerland, regarding the effects of digitalization.
Harnessing these benefits requires the application of innovative information and communication technology, especially the kind termed “disruptive.” A complete departure from existing structures may not necessarily be the actual goal, but it can occur as a consequence of this process.
Having had to contend with “only” one new technology at a time in the past, be it PCs, SAP software, SQL databases, or the Internet itself, companies are now facing an array of concurrent topics, such as the Internet of Things, social media, third-generation e-business, and tablets and smartphones. Professor Brenner thus believes that every good — and perhaps disruptive — idea can result in a “quantum leap in terms of data.”

Products and services shaped by customers

It has already been nearly seven years since the release of an app that enables customers to order and pay for taxis. Initially introduced in Berlin, Germany, mytaxi makes it possible to avoid waiting on hold for the next phone representative and pay by credit card while giving drivers greater independence from taxi dispatch centers. In addition, analyses of user data can lead to the creation of new services, such as for people who consistently order taxis at around the same time of day.
“Successful models focus on providing utility to the customer,” Professor Brenner explains. “In the beginning, at least, everything else is secondary.”
In this regard, the private taxi agency Uber is a fair bit more radical. It bypasses the entire taxi industry and hires private individuals interested in making themselves and their vehicles available for rides on the Uber platform. Similarly, Airbnb runs a platform travelers can use to book private accommodations instead of hotel rooms.
Long-established companies are also undergoing profound changes. The German publishing house Axel Springer SE, for instance, has acquired a number of startups, launched an online dating platform, and released an app with which users can collect points at retail. Chairman and CEO Matthias Döpfner is also, of course, interested in getting the company’s newspapers and other periodicals back into the black with payment models, but these endeavors are somewhat at odds with the traditional notion of publishing houses being involved solely in publishing.

The impact of digitalization transcends Generation Y

Digitalization is effecting changes in nearly every industry. Retailers will likely have no choice but to integrate their sales channels into an omnichannel approach. Seeking to make their data services as attractive as possible, BMW, Mercedes, and Audi have joined forces to purchase the digital map service HERE. Mechanical engineering companies are outfitting their equipment with sensors to reduce downtime and achieve further product improvements.
“The specific potential and risks at hand determine how and by what means each individual company approaches the subject of digitalization,” Professor Brenner reveals. The resulting services will ultimately benefit every customer – not just those belonging to Generation Y, who have a certain basic affinity for digital methods.
“Think of cars that notify the service center when their brakes or drive belts need to be replaced, offer parking assistance, or even handle parking for you,” Brenner offers. “This can be a big help to elderly people in particular.”

Chief digital officers: team members, not miracle workers

Making the transition to the digital future is something that involves not only a CEO or a head of marketing or IT, but the entire company. Though these individuals do play an important role as proponents of digital models, it takes more than just a chief digital officer.
For Professor Brenner, appointing a single person to the board of a DAX company to oversee digitalization is basically absurd. “Unless you’re talking about Da Vinci or Leibniz born again, nobody could handle such a task,” he states.
In Brenner’s view, this is a topic for each and every department, and responsibilities should be assigned much like on a soccer field: “You’ve got a coach and the players – and the fans, as well, who are more or less what it’s all about.”
Here, the CIO neither competes with the CDO nor assumes an elevated position in the process of digital transformation. Implementing new databases like SAP HANA or Hadoop, and leveraging sensor data in technically and commercially viable ways: these are the tasks CIOs will face going forward.


Human Skills for the Digital Future 



Technology Evolves.
So Must We.


Uniquely Human Abilities
AI is excellent at automating routine knowledge work and generating new insights from existing data — but humans know what they don’t know.
We’re driven to explore, try new and risky things, and make a difference.



We deduce the existence of information we don’t yet know about.



We imagine radical new business models, products, and opportunities.



We have creativity, imagination, humor, ethics, persistence, and critical thinking.

There’s Nothing Soft About “Soft Skills”
To stay ahead of AI in an increasingly automated world, we need to start cultivating our most human abilities on a societal level. There’s nothing soft about these skills, and we can’t afford to leave them to chance.
We must revamp how and what we teach to nurture the critical skills of passion, curiosity, imagination, creativity, critical thinking, and persistence. In the era of AI, no one will be able to thrive without these abilities, and most people will need help acquiring and improving them.
Anything artificial intelligence does has to fit into a human-centered value system that takes our unique abilities into account. While we help AI get more powerful, we need to get better at being human.



Electronic Communication Systems: Information Systems Security and Control

      
Introduction
Management Information Systems (MIS) is a challenging and constantly changing field of study. It involves the innovative application of computer technology and analytical skills to know and understand the needs of customers, effectively manage operations and supply chain issues, create new efficiencies and competitive advantages, and realize the growing promise of e-commerce.
Management information systems increasingly change people’s lives, including relationships, communications, transactions, data collection and decision-making. Changes in IT lead to innovation, new business models and services. Organizational leaders need to consider the impact of change inside their organizations. Business must constantly examine its performance, strategy, processes and systems in order to monitor the changes to be made.
The concept of change management and how people deal with it has gained much attention in the fields of human and organizational behaviour, psychology, business administration, operations management, and information systems. Technology and business improvements are needed in modern society, and finding effective ways of managing the process of change is the key to success in a highly competitive and global business environment.
Management Information Systems (MIS) enable easy access to corporate data such as student, staff, research, and finance records. The software allows for accessing data via a point-and-click structure, letting users drill down to the level of detail that interests them. Users can limit what data is displayed by selecting various slices of the data to produce subsets that meet their particular needs. It supports decision-making processes and helps ensure that resource allocation and planning are founded on accurate and meaningful information, which at present is drawn entirely from the central University databases. Management information systems are vital organizational resources and constitute an integral part of managerial decision making. Therefore, it is important to understand how IS can be better used to assist managers and organizations to improve efficiency, differentiate markets and services, enhance performance, and improve productivity while protecting organizational assets.
We are in a world of advanced information technology where things move at a fast pace. Information is becoming cheaper and faster to obtain, and the facilities for exchanging it among users all across the world have become far simpler due to the evolution of the information superhighway. The internet provides fast and inexpensive communication channels that range from messages posted on bulletin boards to complex exchanges among many organisations. It also includes information transfer (among computers) and information processing. E-mail, chat groups, and newsgroups are examples of major communication media.
General Analysis
The development and management of information technology tools assists executives and the general workforce in performing any tasks related to the processing of information. MIS systems are especially useful in the collation of business data and the production of reports to be used as tools for decision making.

With computers being as ubiquitous as they are today, there's hardly any large business that does not rely extensively on its IT systems.

However, there are several specific fields in which MIS has become invaluable.

* Strategy Support

While computers cannot create business strategies by themselves, they can assist management in understanding the effects of their strategies, and help enable effective decision-making.

MIS systems can be used to transform data into information useful for decision making. Computers can provide financial statements and performance reports to assist in the planning, monitoring and implementation of strategy.

MIS systems provide a valuable function in that they can collate into coherent reports unmanageable volumes of data that would otherwise be broadly useless to decision makers. By studying these reports decision-makers can identify patterns and trends that would have remained unseen if the raw data were consulted manually.

MIS systems can also use these raw data to run simulations – hypothetical scenarios that answer a range of ‘what if’ questions regarding alterations in strategy. For instance, MIS systems can provide predictions about the effect on sales that an alteration in price would have on a product. These Decision Support Systems (DSS) enable more informed decision making within an enterprise than would be possible without MIS systems.
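A minimal sketch of such a what-if simulation (all numbers are invented for illustration, and the constant-elasticity demand model is an assumption, not a description of any particular DSS): given a baseline price and volume and an assumed price elasticity of demand, estimate how a price change would move sales.

```python
def price_what_if(base_price, base_units, elasticity, new_price):
    """Constant-elasticity demand: units sold scale with the price
    ratio raised to the elasticity (negative for normal goods)."""
    new_units = base_units * (new_price / base_price) ** elasticity
    return new_units, new_units * new_price

# Hypothetical product: $50 price, 10,000 units/month, elasticity -1.5.
units, revenue = price_what_if(50.0, 10_000, -1.5, new_price=45.0)
print(f"{units:,.0f} units/month, ${revenue:,.0f} revenue")  # ~11,712 units, ~$527,052
```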

* Data Processing

Not only do MIS systems allow for the collation of vast amounts of business data, but they also provide a valuable time saving benefit to the workforce. Where in the past business information had to be manually processed for filing and analysis it can now be entered quickly and easily onto a computer by a data processor, allowing for faster decision making and quicker reflexes for the enterprise as a whole.

Management by Objectives

While MIS systems are extremely useful in generating statistical reports and data analysis they can also be of use as a Management by Objectives (MBO) tool.

MBO is a management process by which managers and subordinates agree upon a series of objectives for the subordinate to attempt to achieve within a set time frame. Objectives are set using the SMART criteria: that is, objectives should be Specific, Measurable, Agreed, Realistic and Time-Specific.

The aim of these objectives is to provide a set of key performance indicators by which an enterprise can judge the performance of an employee or project. The success of any MBO objective depends upon the continuous tracking of progress.

In tracking this performance it can be extremely useful to make use of an MIS system. Since all SMART objectives are by definition measurable they can be tracked through the generation of management reports to be analysed by decision-makers.

Benefits of MIS

The field of MIS can deliver a great many benefits to enterprises in every industry. Expert organisations such as the Institute of MIS along with peer-reviewed journals such as MIS Quarterly continue to find and report new ways to use MIS to achieve business objectives.

Core Competencies

Every market-leading enterprise will have at least one core competency – that is, a function they perform better than their competition. By building an exceptional management information system into the enterprise it is possible to push out ahead of the competition. MIS systems provide the tools necessary to gain a better understanding of the market as well as a better understanding of the enterprise itself.

Enhance Supply Chain Management

Improved reporting of business processes leads inevitably to a more streamlined production process. With better information on the production process comes the ability to improve the management of the supply chain, including everything from the sourcing of materials to the manufacturing and distribution of the finished product.

Quick Reflexes

As a corollary to improved supply chain management comes an improved ability to react to changes in the market. Better MIS systems enable an enterprise to react more quickly to their environment, enabling them to push out ahead of the competition and produce a better service and a larger piece of the pie.
Business Data Communication and Networks
Introduction
In recent years, the world of communications has undergone enormous changes. In fact, the term paradigm shift has become ordinary in the information systems field. However, it is definitely an appropriate descriptor of the communications industry. The primary focus of computer technology in the past was to provide processing power for increasingly hungry but traditional applications, such as word processing, spreadsheet, and database applications. While computing power for application processing is still important, today's computer buyers are paying at least as much if not more attention to the computer's ability to connect to networks. In fact, some computer systems (for example, network PCs and Web TVs) have been developed primarily to connect to networks. These computers rely on other computer systems connected to a network to do most of the processing. This change in emphasis is affecting how computer systems impact individuals, organizations, and society by placing more information, even more computing power, at everyone's fingertips.
Transmission of voice, data, text, sound, and images pervades computer information systems regardless of the size of a manager's computer resources. Consider the diversity of organizational tasks that now depend on some form of communications system.

The laws governing communications also have been changing rapidly, opening up opportunities for competition between industry giants who had enjoyed monopolies in their areas or were at least restricted from entering other communications areas. The most recent change is the Telecommunications Act of 1996. The basic purpose of this act is to permit any business to compete in any communications market. The law blurs traditional demarcations in industry "turf." For example, cable TV companies used to be confined to offering TV entertainment. These same companies are now considering offering voice communications over their cable systems and have already begun to enter the arena of data communications by providing Internet access to their subscribers. At the same time, more and more video and voice conversations are being transmitted over the Internet, and telephone companies have been given the right to provide cable service to their customers. Entertainment firms have begun to purchase or make alliances with telephone, cable, and satellite broadcasting companies. Major TV networks have created alliances with major software firms, and local telephone companies have entered the long-distance telephone market. Some PBS stations have begun to embed data in their TV broadcasts, allowing PCs with a special card installed to receive the data. Even power companies are considering entering the communications business because of the important rights of way to our homes and businesses that they already possess.
Having seen the basics and the components of a network, we can now look at the various kinds of networks available in the corporate world and their benefits.
LAN
LAN stands for Local Area Network. These networks can consist of anywhere from two to thousands of computers. Even a simple network of one computer connected to one printer can be considered a LAN. Normally, a LAN is a computer network that spans a relatively small area. Most LANs are confined to a single building or group of buildings. However, one LAN can be connected to other LANs over any distance via telephone lines and radio waves.
Most LANs connect workstations and personal computers. Each node (individual computer) in a LAN has its own CPU with which it executes programs, but it also is able to access data and devices anywhere on the LAN. This means that many users can share expensive devices, such as laser printers, as well as data. Users can also use the LAN to communicate with each other, by sending e-mail or engaging in chat sessions.
LANs are capable of transmitting data at very fast rates, much faster than data can be transmitted over a telephone line; but the distances are limited, and there is also a limit on the number of computers that can be attached to a single LAN.
Peer-to-Peer - Sometimes called P2P, these networks are the simplest and least expensive networks to set up. P2P networks are simple in the sense that the computers are connected directly to each other and share the same level of access on the network, hence the name. Computer 1 will connect directly to Computer 2 and will share all files with the appropriate security or sharing rights. If many computers are connected, a hub may be used to connect all these computers and/or devices. The diagram below shows a simple peer-to-peer network:
A peer-to-peer network is sometimes the perfect (and cheap) solution for connecting the computers at a small nonprofit. However, peer-to-peer networking has its limitations, and your organization should tread with caution to avoid headaches (security issues, hardware inadequacies, backup problems, etc.) down the road.
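To make the symmetry of the peer-to-peer pattern concrete, here is a minimal sketch in Python: both machines run the same code, each binding its own UDP port and talking directly to the other, with no privileged server in between. The addresses and ports are hypothetical placeholders, not anything prescribed above.

    import socket

    # Hypothetical LAN addresses: run the same code on each peer,
    # swapping MY_PORT and PEER around on the second machine.
    MY_PORT = 5000
    PEER = ("192.168.1.12", 5001)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", MY_PORT))                    # listen for the other peer...
    sock.sendto(b"hello from this peer", PEER)  # ...and talk to it directly
    data, addr = sock.recvfrom(1024)            # blocks until the other peer sends
    print(f"received {data!r} from {addr}")

Neither machine is privileged: each peer both serves and consumes, which is exactly what distinguishes this design from the client/server approach described next.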
Client/Server - Probably the most common LAN type used by companies today. These networks are called "client/server" because they consist of the server (which stores the files or runs applications) and the client machines, the computers used by workers. Using a client/server setup can be helpful in many ways. It can free up disk space by providing a central location for all the files to be stored. It also ensures the most recent copy of that file is available to all. A server can also act as a mail server (which collects and sends all the e-mail) or a print server (which takes all the print jobs and sends them to the printer, thus freeing computing power on the client machine to continue working).
Establishing the right kind of network for your organization is important to make the most of your time and money. While a peer-to-peer network is often a good choice for small networks, in an environment with more than 10-15 computers, a peer-to-peer network begins to become more trouble than it is worth: your computers start to slow down, you can never find the file you are looking for, and security is non-existent. If this is happening in your organization, it is probably time to switch to a client-server network by bringing in a dedicated server to handle the load. The server is called "dedicated" because it is optimized to serve requests from the "client" computers quickly. The diagram below shows a simple client-server network:
What is a server?
A server is simply a computer that is running software that enables it to serve specific requests from other computers, called "clients." For example, you can set up a file server that becomes a central storage place for your network, a print server that takes in print jobs and ships them off to a printer, as well as a multitude of other servers and server functions. A server provides many benefits including:
• Optimization: server hardware is designed to serve requests from clients quickly
• Centralization: files are in one location for easy administration
• Security: multiple levels of permissions can prevent users from doing damage to files
• Redundancy and Back-up: data can be stored in redundant ways making for quick restore in case of problems
The client-server model of networking is the way to go for larger organizations. Once you have a client-server network set up, it should provide you with more flexibility than a peer-to-peer network as your needs change. For example, as network traffic increases, you can add another server to handle the additional load. You can also consider spreading out tasks to various servers, ensuring that they are performed in the most efficient manner possible. Most importantly, a client-server network is much easier to secure and back up, greatly improving the reliability and confidentiality of your data.
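As a rough illustration of this division of labor, the sketch below uses Python's standard socketserver module to run a tiny dedicated server that does nothing but answer client requests. The port number is an arbitrary placeholder, and a real file, mail, or print server would of course do far more than echo a line.

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        # Every client request is answered centrally by the server machine.
        def handle(self):
            line = self.rfile.readline().strip()
            self.wfile.write(b"server received: " + line + b"\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("", 9000), EchoHandler) as server:
            server.serve_forever()   # a "dedicated" server: it only serves clients

A client machine would connect with socket.create_connection(("server-name", 9000)), send a line, and read the reply; file requests, print jobs, and mail follow the same request/response shape.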
Wireless Networking
Wireless networking products have become more popular in the last few years due to an increase in competition among manufacturers and the emergence of a more dominant wireless technology standard. This section looks at the benefits and drawbacks of wireless networking and provides further resources for research into wireless products. Wireless networking refers to hardware and software combinations that enable two or more appliances to share data with each other without direct cable connections. Thus, in its widest sense, wireless networking includes cell and satellite phones, pagers, two-way radios, wireless LANs and modems, and Global Positioning Systems (GPS).

Wireless LANs


Wireless LANs enable client computers and the server to communicate with one another without direct cable connections. Generally, a wireless LAN is connected to an existing wired LAN, although it can exist without one (in this case, users will only be able to communicate with other users on the same subnet).
Necessary components include an access point, client LAN adapters, and the wired LAN. The access point is a device that translates between the wired LAN and the wireless LAN. The client LAN adapters are PC Card, PCI, or ISA boards equipped with radio transceivers that plug into laptop or desktop computers and communicate with the access point. Other components of a wireless LAN can include extension points and directional antennas. Extension points are devices similar to the access point, but not connected to the wired LAN; they extend the range of the wireless network by relaying signals from client computers to the access point. Directional antennas connect wireless networks located at a greater distance from one another. Each network has an antenna aimed at the other (known as a "line of sight" connection).
How a Wireless LAN Works
In a typical wireless LAN configuration, the access point connects to the wired network from a fixed location using standard cabling. The access point receives and transmits data between the wireless LAN and the wired network infrastructure. A single access point can support a small group of users and can function within a range of less than one hundred to several hundred feet. End users access the wireless LAN through the wireless-LAN adapters installed in their computers.
Benefits of Wireless LANs
Cost: Wireless LANs can cost less to implement than wired LANs, especially in situations where implementing a wired LAN requires extensive labor and materials to install the wiring and drops. For environments that are difficult to wire (such as schools or temporary spaces) a wireless network can be more cost-effective in the long run than a wired one.
Simple/flexible to Install: Wireless LANs eliminate the time needed with wired LANs for laying and pulling wires, and can reach places that cannot be reached by wires.
Portability: Wireless LAN systems can move to new physical locations much more easily than wired LANs, reducing total cost of ownership for organizations that are on the move.
Mobility: Wireless LAN systems can provide LAN users with access to network information anywhere in their organization.
Scalability: Wireless LAN systems can be configured for small offices and large, with peer-to-peer systems or large established LANs, specific to the localized need of a workgroup or across the whole enterprise. Wireless LAN systems grow easily with the need by adding more access points, client LAN adaptors and extension points. Wireless can be a good solution if you need to connect several buildings without installing a wired connection. Wireless LAN bridges can extend LANs that are typically one to five miles apart. These wireless bridges span multiple-building LANs without incurring the monthly costs of a T1 or higher speed lines.
Drawbacks of Wireless LANs
Cost: In environments with installed wiring or less demanding wiring needs, the up front costs of adopting a wireless LAN system can be more expensive than with wired LANs.
Interoperability: There are several competing technologies used by wireless LAN vendors to communicate data between hardware, with no ability for communication directly between systems using these different standards.
Interference: Most of the wireless devices today operate on the 2.4-GHz radio band, which is also used by cordless phones and most microwave ovens. This creates the potential for interference when the network is used near other devices sharing the same frequency band.
Speed: The most commonly used wireless LAN products are rated for a maximum of 11 Mbps throughput, and in practice deliver speeds about 80% less than this; some wireless LAN products (HomeRF systems, for example) are rated for much lower speeds still. This is still quite speedy for most network needs and for broadband Internet sharing, but larger offices with high network traffic and demands for speed should take it into consideration.
Wide Area Networks (WANs)
Wide Area Networks or WANs are very large networks of computers. These networks span large geographical areas, generally covering at least a couple of miles and sometimes connecting computers thousands of miles apart. A WAN can also be a collection of LANs, bringing together many smaller networks into one large network. A WAN can constitute a very large corporate or government network, spanning the country or even the world. In fact, the Internet is the largest and most common WAN in existence today.
Normally, a WAN is a computer network that spans a relatively large geographical area. Typically, a WAN consists of two or more local-area networks (LANs). Computers connected to a wide-area network are often connected through public networks, such as the telephone system. They can also be connected through leased lines or satellites.
Controller Area Networks (CANs)
Abbreviated CAN, a serial bus network of microcontrollers that connects devices, sensors, and actuators in a system or sub-system for real-time control applications. There is no addressing scheme used in controller area networks in the sense of conventional addressing in networks such as Ethernet. Rather, messages are broadcast to all the nodes in the network using an identifier unique to the network. Based on the identifier, the individual nodes decide whether or not to process the message and also determine the priority of the message in terms of competition for bus access. This method allows for uninterrupted transmission when a collision is detected, unlike Ethernet, which stops transmission upon collision detection.
Controller area networks were first developed for use in automobiles. Equipped with an array of sensors, the network is able to monitor the systems that the automobile depends on to run properly and safely. Beyond automobiles, controller area networks can be used as an embedded communication system for microcontrollers as well as an open communication system for intelligent devices.
The controller area network, first developed by Robert Bosch in 1986, is documented in ISO 11898 (for applications up to 1 Mbps) and ISO 11519 (for applications up to 125 Kbps).
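To show the identifier-based broadcast model in code, here is a minimal sketch assuming the third-party python-can package and a Linux virtual CAN interface named vcan0 (both of these are assumptions for the example, not part of the standard described above).

    import can  # assumes the python-can package is installed

    # "vcan0" is a hypothetical Linux virtual CAN interface set up beforehand.
    bus = can.interface.Bus(channel="vcan0", interface="socketcan")

    # No destination address: the 11-bit identifier both labels the message and
    # sets its priority (a lower identifier wins bus arbitration).
    bus.send(can.Message(arbitration_id=0x100, data=[0x11, 0x22], is_extended_id=False))

    # A receiving node decides locally which identifiers it will process:
    bus.set_filters([{"can_id": 0x100, "can_mask": 0x7FF}])
    print(bus.recv(timeout=1.0))  # another node on the bus would see the broadcast
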
Campus Area Networks (CANs)
An interconnection of local-area networks within a limited geographical space, such as a school campus or a military base.
Metropolitan Area Networks (MANs)
A data network designed for a town or city. In terms of geographic breadth, MANs are larger than local-area networks (LANs) but smaller than wide-area networks (WANs). MANs are usually characterized by very high-speed connections using fiber-optic cable or other digital media.
Virtual Private Networks (VPNs)
A virtual private network (VPN) is a private data network that makes use of the public telecommunication infrastructure, maintaining privacy through the use of a tunneling protocol and security procedures. A virtual private network can be contrasted with a system of owned or leased lines that can only be used by one company. The idea of the VPN is to give the company the same capabilities at much lower cost by using the shared public infrastructure rather than a private one.
A VPN connects computers located at various places throughout a city, a state, or even globally. It provides a secure network connection for distant computers and does not require laying cable to supply the connection. You can set up a VPN yourself (Windows 2000 Server has settings to establish a VPN) or you can purchase one as a service from another company.
Home Area Networks (HANs)
A HAN is a network contained within a user's home that connects a person's digital devices, from multiple computers and their peripheral devices to telephones, VCRs, televisions, video games, home security systems, "smart" appliances, fax machines and other digital devices that are wired into the network.
An Introduction to Wireless Networks for the Small/Medium Enterprise (SME)
Wireless networking (WiFi) is not a new technology, but it is only recently that it has become mainstream. What are the benefits of wireless networks, and should you be considering using them?
The advent of portable computing devices is one of the main drivers for the adoption of wireless networking. Today, around 50% of new laptops come wireless-enabled out of the box. All of Apple's latest line of laptops comes with both wireless and Bluetooth built in. Many Microsoft Windows laptops are similarly wireless-enabled.
A powerful alliance of vendors joined together in 1999 to form the WiFi Alliance. You can be assured that any device approved by the WiFi Alliance will interoperate happily with any other approved device. The term WiFi has become corrupted in common usage to mean wireless networks in general, not just devices approved by the WiFi alliance.
Why adopt WiFi?
Today's workforce, equipped with PDAs, laptops and other mobile devices, demands access to your network from wherever they are, without the hassle of a fixed network. WiFi allows your business to deploy a network more quickly, at lower cost, and with greater flexibility than a wired system.
Productivity increases too, since workers can stay connected longer, and are able to collaborate with their co-workers as and where needed.
WiFi networks are more fluid than wired networks. A network is no longer a fixed thing: networks can be created and torn down in an afternoon instead of the days or weeks required to create a structured cable network.
Architecture
Wireless cards can operate in two modes, Infrastructure and Ad-hoc.
Most business systems use wireless in Infrastructure mode. This means that devices communicate with an access point. Typically the access point also has a connection to the company wired network, allowing users access to servers and files as if they were physically attached to the LAN.
Ad-hoc connections are direct connections between wireless cards. This type of connection is more common amongst home users, but if used by business users could have serious management and security implications.
Management
You can easily connect to a WiFi network anywhere within range of an access point. This is a boon for your workers, but unfortunately, it also brings with it a few headaches for the IT department.
Security
Security is the bane of everybody who puts together a wireless network. Access points using factory default settings are not secure at all. So, if security is such a concern, does that mean you shouldn't deploy WiFi? No, it doesn't. But it is something that you should bear in mind at the planning stage.
When talking about security there is no such thing as having a completely secure system. Everything is insecure to some degree or other. The degree of security you require is dictated by the sensitivity of the information you possess.
If you require very high levels of security then you cannot rely on the built in security measures of a WiFi network alone. On the other hand, most small to medium sized companies do not require very high levels of security.

Integrating Enterprise Information on a Global Scale
In today's challenging business environment, companies are encountering levels of growth and change that can quickly make their business information systems obsolete. These enterprises are meeting this challenge by implementing real-time transaction processing systems to reduce cycle time, cut operation costs, and improve responsiveness to corporate users, customers, and vendors alike.
This section highlights the key factors that an organization must address when building a globally integrated business system. It also describes how 3Com Corporation implemented its own state-of-the-art enterprise resource planning (ERP) system using SAP R/3 business application solutions, the Informix OnLine Dynamic Server relational database, and 3Com's own networking systems and products. In addition, this section touches on the advantages that the partnership between 3Com and Informix offers networking customers.
Thriving in a Volatile Business Climate
Many industries today are characterized by fierce competition, intense time-to-market pressure, and consolidation through mergers and acquisitions. Companies depend on their transaction processing systems to maintain a competitive edge, and to sustain business models that must constantly adapt to changing market conditions.
An effective business system is a marriage between information system (IS) and business process. When laying out a strategic plan and defining the underlying architecture for a new corporate transaction processing system, most organizations seek to achieve these primary objectives:
• Make business operations more responsive to customers and the needs of the enterprise by integrating logistical data into one global system
• Implement real-time transaction processing to provide online information access anytime, anywhere
• Develop processes that reduce cycle time for order fulfillment and minimize inventory and distribution costs
• Design a future-proof solution that can scale easily to accommodate growth
• Leverage technology to reduce long-term IS costs
Information technology has progressed dramatically in the last few years. It is now possible to integrate diverse functions more fully using software that offers better price/performance as well as plug-and-play modularity. The latest technology also makes it possible to combine data in a scalable, high-performance relational database, and to transmit the information globally over a reliable high-speed network.
A business information system may be divided into three major elements: the ERP applications, the database, and the network configuration. Each of these elements is equally important, and all of them must mesh smoothly to ensure a successful implementation.
Success Factors
Here are some key factors that can help ensure successful implementation of a large-scale business system:
1. Keep the network as flat as possible for simplicity and efficiency. Utilize switching for high performance where you can, and use routing where you must at the network's edges and where security is a key issue.
2. Resist the tendency to overdesign; you cannot cost-effectively design a completely fail-proof network. Rather, design the network so that a failure in one area will not impact business processes across the entire enterprise.
3. Put all application and database servers on their own Domain Name Service. This will avoid single points of failure. Again, keep the applications environment as flat as possible.
4. Involve the network management organization as a peer member on the business system implementation team from the beginning. Application management and bandwidth management are both important.
5. Commit to extensive training for users and managers. In-depth training early in the process will minimize the negative impact on productivity of introducing a new system.
6. Engage a consultant to ensure successful implementation. 3Com benefited greatly from Price Waterhouse's experience in R/3 implementations.
7. Conduct stress testing up front. Be prepared for constant refinement of baselines.
How does Management use Information?
The Management Process
Determining how management uses information may be done best in the context of the management process, which is a cycle consisting of the following stages:
Planning
setting goals and objectives and determining specific courses of action by which they may be attained
Organizing
identifying resources required to carry out a plan
Staffing
acquiring resources (e.g., recruiting personnel, purchasing material, raising capital)
Coordinating
executing the plan (e.g., giving instructions, allocating resources)
Controlling
ensuring that actual performance meets planned performance and making changes to plans and/or resource allocations to correct discrepancies
 
 
Information is necessary in all of these stages.
Planning
When setting goals and objectives, analysis of historical data can help to predict future trends. Best/worst/average case analyses can aid in the making of decisions where a choice must be made between alternative courses of action. There are also several modelling techniques, such as trend analysis and linear programming, which may be used to optimize production schedules and facility locations and to solve other similar problems - if enough information is available to determine and quantify cause-and-effect relationships between factors and results.
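As a hedged illustration of the linear programming technique just mentioned, the sketch below uses scipy.optimize.linprog (assuming SciPy is available) to pick a production mix; all of the figures - unit profits, hours consumed, capacities - are invented for the example.

    from scipy.optimize import linprog

    # Maximize profit 40*x1 + 30*x2 for two hypothetical products, subject to
    # machine-hour and labour-hour capacity; linprog minimizes, so negate profits.
    c = [-40, -30]
    A_ub = [[2, 1],        # machine hours consumed per unit of each product
            [1, 2]]        # labour hours consumed per unit of each product
    b_ub = [100, 80]       # hours available this week
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print("units to produce:", res.x, "profit:", -res.fun)
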
Organizing
Identifying resource requirements for a completely unfamiliar plan may be little more than educated guesswork. However, in many cases, several parts of a plan are similar to, if not identical to, parts of past plans. Searching historical data may identify solutions to past problems which are applicable to current problems. Also, certain financial models can be used to identify sources of capital.
Staffing
Again, historical records can be useful in acquiring resources. Information on qualified personnel already within the organization or reliable suppliers can substantially ease the task of locating them. There is a drawback, though, to reliance on this method of staffing in that it may preempt the location of new resources, such as the development of new business contacts or the hiring of talented new employees. To find external information, research is required.
Coordinating
A large part of coordination is communication. Information and instructions flow downward through the organization, communicating the nature of tasks to be performed, the materials to be used, and the destination of goods or the recipients of services. The analytical methods used in planning may also be applied to coordinating, but on a shorter-term basis. The relationship between planning and coordinating can be represented this way: as planning becomes shorter and shorter in term, it approaches immediacy and eventually turns into execution.
Controlling
The need for information at this stage is obvious. In order to compare actual performance to planned performance, information on actual performance is required. This information may include records of resource consumption, results of quality inspections, the disposition of output goods and services, and status reports on the accomplishment of tasks. These can then be compared against the standards established during planning. If discrepancies in performance are found, data analysis can pinpoint the source of the discrepancies so that corrective measures can be taken.
Certainly, not all activities are effectively supported by historic and current internal information alone. For example, staffing can require external information. Also, the planning activities of top executives may depend on a "feel" for the industry - an intuitive sense of the environment - which can only be gained from information that is more eclectic (i.e., not easily quantifiable) than is within the scope of an MIS to maintain. Other tasks are very well supported by computerized information systems. Controlling, for example, is enormously aided by such a system's ability to supply current information quickly and to update information quickly.
Process and Hierarchy
The degree to which many tasks can be supported by an MIS depends on the level of management. Between levels, there are differences not only in the type and scope of activities, but also in the amount of time spent on different stages of the management process. An approximate breakdown of how different levels of management spend their time (Figure 1) shows that the most time-consuming tasks of top and first-line management are planning and coordinating, respectively, and that middle management's time is dominated by planning and controlling. The information needs of different levels of management will be different.
Figure 1. Comparison of management time allocation (Kroeber 103)
First-line management is mostly concerned with operational decision-making, such as inventory control, equipment maintenance, and employee evaluation. First-line managers may not need MIS services at all if they are in regular contact with actual production. Any decisions they make which can be supported by computerized systems tend to be straightforward enough that they can be completely automated so that they require no human intervention.
Middle management is involved in tactical decision-making - deciding how best to carry out plans. Examples of tactical decisions include resource allocation, production scheduling, and forecasting. Middle managers need information on operations, in correspondence with their heavy role in controlling. They make more structured decisions which require basically the same types of information on a regular basis.

Top level managers are involved in strategic decision-making - directing the business to gain a competitive edge. They make decisions such as capital budgeting, product line, and mergers and acquisitions. Top managers will want to see "the big picture," information on larger influences both in- and outside of their particular businesses. They need summarized, meaningful information to help them visualize the business without losing the ability to examine details should the need arise. The decisions they make are intuitive and creative, and as a consequence their information needs are difficult to anticipate.
Scheduled Reporting
In the beginning, computer systems were not the powerhouses that they are today. They could not support the load of several simultaneous interactive users without serious performance degradation. Also, they were not nearly as user-friendly as today's computers with their slick graphical user interfaces and point-and-click mouse capabilities. The people who knew how to use computers had to be specialists. Managers, some of whom did not even know how to type, had to rely on those who had technical expertise to use the systems to deliver the information to them. This fact, and the influence of traditional methods of reporting developed before the advent of computerized MIS, are the probable reasons for the continuing predilection towards paper reports as the primary form of information delivery. Although some systems have evolved to the point where reports are distributed through e-mail rather than through slower interoffice mail, the principle remains the same: someone else, usually the MIS department, pre-packages the information and gives it to the user.
Reports are good for presenting predefined information in a structured and relatively attractive way. Since they are predefined, they can be automated. They provide the same information regularly; and if they are well-designed, they are a simple and familiar method of communicating information. This method of information delivery can be ideal for controlling and keeping track of continuing developments.
If the reports are not well-designed, they can be a nightmare. Databases can store an excruciating amount of information at an excruciating level of detail which no one particularly wants to see.
The main advantage of regularly scheduled reports is that they require relatively little effort for either side to produce and use after the initial design process. The main disadvantage is their lack of flexibility. Middle management may find this method of information delivery satisfactory for limited purposes. Some flexibility may be incorporated by allowing users to choose from a selection of reports and report formats, but it remains tedious to obtain additional information.
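A predefined report is easy to automate precisely because its layout never changes; only the data does. Here is a minimal sketch of the idea in Python (the report name and all the figures are invented):

    import datetime

    def weekly_sales_report(rows):
        # Same predefined layout on every run; schedule this to run each Monday.
        lines = [f"WEEKLY SALES REPORT - {datetime.date.today()}", "-" * 30]
        for region, amount in rows:
            lines.append(f"{region:<15}{amount:>10.2f}")
        lines.append("-" * 30)
        lines.append(f"{'TOTAL':<15}{sum(a for _, a in rows):>10.2f}")
        return "\n".join(lines)

    print(weekly_sales_report([("East", 1200.00), ("West", 950.50)]))
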
Ad Hoc Interactive Querying
Now that interactive computing by many simultaneous users is technologically feasible, it is possible to implement systems where users have direct online access to databases and information. Software tools and security systems have been designed to enable users to query and manipulate data and to create their own reports. With this type of system, information needs of controlling and coordinating can still be supported; and higher level planning and organizing, for which information needs are not easy to anticipate, can be better supported.
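By contrast, an ad hoc query is composed at the moment the question arises. The sketch below, using Python's built-in sqlite3 module and an invented orders table, shows the kind of one-off question a manager might pose directly against the data:

    import sqlite3

    conn = sqlite3.connect(":memory:")   # a throwaway database for the example
    conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("East", 1200.0), ("West", 950.0), ("East", 430.0)])

    # The ad hoc part: this question was not anticipated by any predefined report.
    query = "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY 2 DESC"
    for region, total in conn.execute(query):
        print(region, total)
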
Flexibility comes with a price: it increases the burden on the user to understand the data and the access tools. Some managers find that working with data helps them to understand the business better and increases their creativity and productivity because they can explore an idea quickly without submitting a request for information. Other managers find that the access tools are too complicated and cumbersome for occasional use, or that they waste too much time fighting the computer and doing what the MIS department should be doing. Some managers are willing to take courses or spend their own time to learn software; some are not. Much depends on the interface which is presented to the user.
Some very powerful and attractive software packages are commercially available. These packages can integrate several different types of databases, link information tables to create customized tables, sort, calculate, filter, format output, generate graphs, link to other applications, and build accessory applications. The range of options can overwhelm the casual user. Even if training is provided, adeptness at using the tool comes only from frequent and extensive use. More limited in-house software packages designed to work with existing systems are less flexible, but their simplicity is less intimidating.
Some possible solutions to the problem of trading flexibility for ease of use might be for the MIS department to customize the interfaces of the more powerful access tools or to train certain people in the more powerful access tools and attach them as a mini-MIS department to certain management teams or top executives. They would customize the interfaces for the managers and make changes according to their specifications. However, there would still be delays and misunderstandings of specifications. The advantages of using more powerful tools may not justify the cost of adapting the applications and making them compatible with existing systems.
Ad hoc querying allows users faster and more direct access to information and greater determination over the information they see. Users can search for information even if they cannot articulate exactly what they are looking for. However, the effectiveness of the system depends largely on the effectiveness of the users. The best that the MIS department can do is survey the users and try to supply them with tools that they will use effectively.
Data Analysis
Data analysis is defined as the examination of subsets of data in order to discern hidden information (Kroeber 110). Many planning and coordinating activities, especially those of middle management, are aided by mathematical models. Sometimes, analysis is as simple as subtotalling certain data fields or graphing data to make patterns more visible. Complex mathematical models may be slightly outside of a mandate of information storage and delivery, but analysis is an integral part of decision making and may be easily enhanced by computerized systems. Simple summarization functions can be built into access and report generation tools. Common models, calculations, and if-then case analyses can be stored as templates in common repositories.
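For instance, a best/worst/average-case model can be stored once as a template and re-run under different assumptions. Everything in the sketch below - the profit formula and all the figures - is illustrative only:

    def projected_profit(units, price, unit_cost, fixed_cost):
        # A reusable if-then model: profit under one set of assumptions.
        return units * (price - unit_cost) - fixed_cost

    scenarios = {
        "worst":   dict(units=800,  price=9.50,  unit_cost=6.00, fixed_cost=2000),
        "average": dict(units=1000, price=10.00, unit_cost=5.50, fixed_cost=2000),
        "best":    dict(units=1300, price=10.50, unit_cost=5.00, fixed_cost=2000),
    }
    for name, assumptions in scenarios.items():
        print(f"{name}: {projected_profit(**assumptions):.2f}")
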
As with database accessing software, software used for data analysis, such as spreadsheets, can range from very powerful to very basic. As well, the same problems of usability versus flexibility are associated. The MIS department can provide training, but it cannot force people to use systems they are not comfortable with. In this situation, though, it is more feasible to hire experts to use the software because the requirements are more specific and rigid. The experts can create and run analytical models and report results back to management. It is up to management to decide a course of action based on the results.
Realistically, it is impossible for an MIS to support all management activities: they are too diverse and unquantified. However, an MIS can concentrate on improving its support of activities that are well supported, thus freeing managers to perform those activities that cannot be facilitated by an MIS. It can be seen that the structured activities of middle management are most easily supported. Middle managers are likely to be the most frequent users of an MIS, so planning should focus on their needs.
The best method of information delivery depends on how much the users want to see balanced against how much they are willing to do. For some uses, scheduled reporting may be ideal. However, the advantages of ad hoc querying over restrictive reporting are too great to ignore. Eventually, an MIS department must try to expand its usefulness by increasing the range of data and informative options available to users. The flexibility of ad hoc querying is necessary to increase the functionality of an MIS.
Information Systems in the Enterprise
KINDS OF INFORMATION SYSTEMS 
  • Organizational Hierarchy
  • Organizational Levels
  • Information Systems
Four General Kinds of IS
  • Operational-level systems
Support operational managers by monitoring the elementary day-to-day activities and transactions of the organization, e.g. TPS.
  • Knowledge-level systems
Support knowledge and data workers in designing products, distributing information, and coping with paperwork in an organization.  e.g. KWS, OAS
  • Management-level systems
Support the monitoring, controlling, decision-making, and administrative activities of middle managers. e.g. MIS, DSS
  • Strategic-level systems
Support long-range planning activities of senior management.  e.g. ESS
A Framework for IS
  • Executive Support Systems (ESS)
  • Management Information Systems (MIS)
  • Decision Support Systems (DSS)
  • Knowledge Work Systems (KWS)
  • Office Automation Systems (OAS)
  • Transaction Processing Systems (TPS)
Transaction Processing Systems (TPS)
Computerized system that performs and records the daily routine transactions necessary to conduct the business; these systems serve the operational level of the organization
  • TYPE: Operational-level
  • INPUTS: transactions, events
  • PROCESSING: updating
  • OUTPUTS: detailed reports
A Symbolic Representation for a payroll TPS
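In code form, the flow this representation depicts - transactions in, routine master-file updating, detailed report lines out - might look like the following minimal sketch; the employee records and pay rates are invented:

    transactions = [                       # INPUTS: this week's time-card transactions
        {"employee": "E001", "hours": 40, "rate": 15.0},
        {"employee": "E002", "hours": 38, "rate": 17.5},
    ]

    master_file = {}                       # stands in for the payroll master file
    for t in transactions:                 # PROCESSING: routine updating
        gross = t["hours"] * t["rate"]
        master_file[t["employee"]] = gross
        # OUTPUTS: one detailed report line per transaction
        print(f"{t['employee']}: {t['hours']}h x {t['rate']}/h = {gross:.2f}")
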
 


Office Automation Systems (OAS)
Computer system, such as word processing, electronic mail system, and scheduling system, that is designed to increase the productivity of data workers in the office.
  •  TYPE: Knowledge-level
  •  INPUTS: documents, schedules
  •  PROCESSING: document management,      scheduling, communication
  •  OUTPUTS: documents; schedules
  •  USERS: clerical workers
EXAMPLE: document imaging system
Knowledge Work Systems (KWS)
Information system that aids knowledge workers in the creation and integration of new knowledge in the organization
  • TYPE: Knowledge-level
  •  INPUTS: design specifications
  •  PROCESSING: modelling
  •  OUTPUTS: designs, graphics
  •  USERS: technical staff; professionals
EXAMPLE: Engineering workstations
Decision Support Systems (DSS)
Information system at the management level of an organization that combines data and sophisticated analytical models or data analysis tools to support semi-structured and unstructured decision making
  • TYPE: Management-level
  •  INPUTS: low volume data
  •  PROCESSING: simulations, analysis
  •  OUTPUTS: decision analysis
  •  USERS: professionals, staff managers
  •  DECISION-MAKING: semi-structured
EXAMPLE: sales region analysis
Characteristics of Decision-Support Systems
DSS offer users flexibility, adaptability, and a quick response.
 DSS operate with little or no assistance from professional programmers.
 DSS provide support for decisions and problems whose solutions cannot be specified in advance.
DSS use sophisticated data analysis and modelling tools.
Characteristics of Management Information Systems
MIS support structured decisions at the operational and management control levels. However, they are also useful for planning purposes of senior management staff.
MIS are generally reporting and control oriented. They are designed to report on existing operations and therefore to help provide day-to-day control of operations.
 MIS rely on existing corporate data and data flows.
 MIS have little analytical capability.
 MIS generally aid in decision making using past and present data.
 MIS are relatively inflexible.
 MIS have an internal rather than an external orientation.

Executive Support Systems (ESS)
Information system at the strategic level of an organization that addresses unstructured decision making through advanced graphics and communications.
  • TYPE: Strategic-level
  •  INPUTS: aggregate data; internal and external
  •  PROCESSING: interactive
  •  OUTPUTS: projections
  •  USERS: senior managers
  •  DECISION-MAKING: highly unstructured
EXAMPLE: 5 year operating plan
 
Model of a Typical Executive Support System
 
 
 
Major Types of Information Systems

 
Classification of IS by Organizational Structure
Departmental Information Systems
Enterprise Information System
Inter-organizational Systems
Classification of IS by Functional Area
The accounting information system
The finance information system
The manufacturing (operations, production) information system
The marketing information system
The human resources information system
Sales & Marketing Systems
Systems that help the firm identify customers for the firm's products or services, develop products and services to meet customers' needs, promote the products and services, sell the products and services, and provide ongoing customer support.
 
Manufacturing and Production Systems

Systems that deal with the planning, development, and production of products and services and with controlling the flow of production
 
Finance and Accounting Systems
Systems that keep track of the firm’s financial assets and fund flows
 
Human Resources Systems
Systems that maintain employee records; track employee skills, job performance, and training; and support planning for employee compensation and career development.

 
Examples of Business Processes

 
Customer Relationship Management

Customer relationship management: a business and technology discipline to coordinate all of the business processes for dealing with customers.

 
Supply Chain Management

Supply chain management: integration of supplier, distributor, and customer logistics requirements into one cohesive process.

Supply chain
Network of facilities for procuring materials, transforming raw materials into finished products, and distributing finished products to customers.

 
HOW INFORMATION SYSTEMS CAN FACILITATE SUPPLY CHAIN MANAGEMENT
Information systems can help participants in the supply chain:
Decide when and what to produce, store, and move
Rapidly communicate orders
Track the status of orders
Check inventory availability and monitor inventory levels
Track shipments
Plan production based on actual customer demand
Rapidly communicate changes in product design
Provide product specifications
Share information about defect rates and returns
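As a small example of the inventory-monitoring point in the list above, here is a sketch (with invented SKUs and thresholds) of the kind of reorder check an information system can run continuously:

    REORDER_POINT = {"widget": 20, "gadget": 50}   # hypothetical reorder thresholds

    def items_to_reorder(on_hand):
        # Flag every item whose on-hand quantity has fallen below its reorder point.
        return [sku for sku, qty in on_hand.items() if qty < REORDER_POINT.get(sku, 0)]

    print(items_to_reorder({"widget": 12, "gadget": 75}))   # -> ['widget']
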
Enterprise Systems

Firm wide information systems that integrate key business processes so that information can flow freely between different parts of the firm

Traditional View of Systems


 
 
Enterprise Systems

 
Benefits and Challenges of Enterprise Systems
  • Benefits
Firm structure and organization: One Organization
Management: Firm wide Knowledge-based Management Processes
Technology: Unified Platform
Business: More Efficient Operations and Customer-driven Business Processes
  • Challenges
Daunting Implementation
High Up-front Costs and Future Benefits
Inflexibility
Challenges in Management of Information Systems
Although information systems are creating many exciting opportunities for both businesses and individuals, they are also a source of new problems, issues and challenges for managers. In this course, you will learn about both the challenges and opportunities information systems pose, and you will be able to use information technology to enrich your learning experience.
New Opportunities with Technology
Is this new technology worth the headaches and heartaches associated with all the problems that can and will arise? Yes. The opportunities for success are endless. The new technologies do offer solutions to age-old problems. Improvements are possible to the way you operate and do business.
The rest of the lessons in this book and this course will give you tools you can use to be successful with the current and future Management Information Systems.
The Strategic Business Challenge
Companies spend thousands of dollars on hardware and software, only to find that most of the technology actually goes unused. "How can that be?" you ask. Usually it is because they didn't pay attention to the full integration of the technology into the organization. Merely buying the technology without exploiting the new opportunities it offers for doing business smarter and better doesn't accomplish much. Think and rethink everything you do and figure out how you can do it better. Change is inevitable, and information must be managed just as you would any other resource.
Creating a digital firm and obtaining benefits is a long and difficult journey for most organizations. Despite heavy information technology investments, many organizations are not realizing significant business value from their business systems, nor have they become digitally enabled. The power of computer hardware and software has grown much more rapidly than the ability of organizations to apply and use this technology. To fully benefit from information technology, realize genuine productivity, and take advantage of digital firm capabilities, many organizations actually need to be redesigned. They will have to make fundamental changes in organizational behavior, develop new business models, and eliminate the inefficiencies of outmoded organizational structures. If organizations merely automate what they are doing today, they are largely missing the potential of information technology.
The Globalization Challenge
The world becomes smaller every day. Competition increases among countries as well as companies. A good Management Information System meets both domestic and foreign opportunities and challenges. The rapid growth in international trade and the emergence of a global economy call for information systems that can support both producing and selling goods in many different countries. In the past, each regional office of a multinational corporation focused on solving its own unique information problems. Given language, cultural and political differences among countries, this focus frequently resulted in chaos and the failure of central management controls. To develop integrated, multinational, information systems, businesses must develop global hardware, software and communication standards; create cross-cultural accounting and reporting structures; and design transnational business processes.
The Information Architecture Challenge
You have to decide what business you are in, what your core competencies are, and what the organization's goals are. Those decisions drive the technology, instead of the technology driving the rest of the company. Purchasing new hardware involves more than taking the machine out of the box and setting it on someone's desk. Remember the triangle of hardware, software, and people: take care of the people and they will take care of the rest! Information architecture describes how to incorporate technology into the mainstream processes in which the business is involved. How will the new information system support getting the product produced and shipped? How will advertising and marketing know when to launch ad campaigns? How will accounting know when to expect payment?
Many companies are saddled with expensive and unwieldy information technology platforms that cannot adapt to innovation and change. Their information systems are so complex and brittle that they act as constraints on business strategy and execution. Meeting new business and technology challenges may require redesigning the organization and building a new information architecture and information technology infrastructure.
The Information Systems Investment Challenge
Too often managers look at their technological investments in terms of the cost of new hardware or software. They overlook the costs associated with the non-technical side of technology. Is productivity up or down? What is the cost of lost sales opportunities and lost customer confidence from a poorly managed E-Business Web site? How do you determine if your Management Information System is worth it?
A major problem raised by the development of powerful, inexpensive computers involves not technology but management and organizations. It’s one thing to use information technology to design, produce, deliver and maintain new products. It’s another thing to make money doing it. How can organizations obtain a sizeable payoff from their investments in information systems? How can management make sure that the management information systems contribute to corporate value?
The Responsibility and Control Challenge
Remember, humans should drive the technology, not the other way around. Too often we find it easier to blame the computer for messing up than to realize it's only doing what a human being told it to do. Your goal should be to integrate the technology into the world of people. Humans do control the technology, and as a manager, you shouldn't lose sight of that.
How can we define information systems that people can control and understand? Although information systems have provided enormous benefits and efficiencies, they have also created new problems and challenges of which managers should be aware.
The following table describes some of these problems and challenges.
Positive and Negative Impacts of Information Systems

Benefit: Information systems can perform calculations or process paperwork much faster than people.
Negative impact: By automating activities that were previously performed by people, information systems may eliminate jobs.

Benefit: Information systems can help companies learn more about the purchase patterns and the preferences of their customers.
Negative impact: Information systems may allow organizations to collect personal details about people that violate their privacy.

Benefit: Information systems provide new efficiencies through services such as automated teller machines (ATMs), telephone systems, or computer-controlled airplanes and air terminals.
Negative impact: Information systems are used in so many aspects of everyday life that system outages can cause shutdowns of businesses or transportation services, paralyzing communities.

Benefit: Information systems have made possible new medical advances in surgery, radiology, and patient monitoring.
Negative impact: Heavy users of information systems may suffer repetitive stress injury, technostress, and other health problems.

Benefit: The internet distributes information instantly to millions of people across the world.
Negative impact: The internet can be used to distribute illegal copies of software, books, articles, and other intellectual property.
Management's focus must continually change to take advantage of new opportunities. Changes should take place throughout the organization. They require lots of attention and planning for smooth execution.
Extranets pack tough new challenges for MIS
Asking analysts and consultants to talk about the extranet phenomenon often leads to the same response: "What do you mean when you say extranet?" Confusion abounds when it comes to trying to nail down a solid definition for something that to some folks makes more sense as a concept than as a product or service.
In fact, at least one industry player believes the term to be meaningless. Semantics aside, many organizations are beginning to see real advantages to allowing selected suppliers, customers, and business partners access to part or all of their own networks via the Internet, according to observers. Within the next three to four years, "the primary vehicle for delivering electronic commerce in the b-to-b [business-to-business] world will in fact be over extranets, rather than private value-added networks or even the global, open Internet," says Alyse Terhune, a research director with the Gartner Group in Stamford, Conn. And while companies used to develop and implement their own extranet strategies, more and more are looking for outside help, according to analysts. Internet Service Providers (ISPs) are particularly eager to cash in on the growing market.
Outsourcing may be a cheap and easy answer for some firms, but Terhune isn't convinced it's a surefire strategy. "I think that there are lots of pieces involved in a successful extranet, and some of them are the core competency of ISPs and some of them aren't." A solid infrastructure is one thing, she says, but administrative logistics can be something else entirely, especially when different organizations with different ways of doing business get together.
"You have to deal with the business process that's being accommodated. That means providing things like applications that will facilitate, say, buying and selling. That's more than just a catalogue. In the business world, that's who within my requisitioning organization can buy from what suppliers, what they can buy and how much they can buy," she says.
Security can also be a big issue, but is more of an administrative problem than a technological one, says Terhune. "When you really look at the problem of securing information and determining which information has to be secured and to whom, it becomes more complicated." Technical solutions such as encryption, firewalls and data packets are "more or less standard features" these days, Reisman claims, but some organizations are still very Internet-shy.
Large corporations tend to be bigger targets for hackers and spies, he says, something small firms might want to consider when budgeting for security features. Future developments are particularly hard to quantify when it comes to extranets. It's difficult enough to come to a consensus on what they are now, let alone to guess how they might develop or grow in use over time.
Managers will also be faced with ongoing problems of security and control. Information systems are so essential to business, government and daily life that organisations must take special steps to ensure that they are accurate, reliable and secure. A firm invites disaster if it uses systems that don't work as intended, that don't deliver information in a form that people can interpret correctly and use, or that have control rooms where controls don't work or where instruments give false signals. Information systems must be designed so that they function as intended and so that humans can control the process.

Electronic Communication Systems

In today's world of advanced information technology, things are moving at a fast pace. The availability of information has become cheaper and faster, and the facilities for exchanging information among users all across the world have become simpler due to the evolution of the Information Superhighway. The internet provides fast and inexpensive communication channels that range from messages posted on bulletin boards to complex exchanges among many organisations. It also includes information transfer (among computers) and information processing. E-mail, chat groups, and newsgroups are examples of major communication media.

Electronic Conferencing
Professionals in all fields are looking to Internet technology to find communication methods that encourage greater collaboration and are an efficient way of dispersing helpful and relevant information in a cost-effective manner. One method that has become increasingly popular is conferencing, whether on "electronic bulletin boards", listservs, in chat rooms, or using web-based meeting protocols.
Conferencing allows a large group to exchange ideas by reading and posting messages which are delivered to a central point and broadcast to the conference participants by special software. This is done in either a synchronous or an asynchronous mode: in synchronous mode, all participants gather together at the same time; in asynchronous mode, participants can read and post messages at any time.
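In software terms, the central point can be as simple as an append-only list of time-stamped messages that every participant can read at their convenience. The sketch below is an illustrative toy, not any particular conferencing product:

    import datetime

    class Conference:
        # Messages go to a central point; participants read and post at any time.
        def __init__(self, topic):
            self.topic = topic
            self.messages = []

        def post(self, author, text):
            self.messages.append((datetime.datetime.now(), author, text))

        def read(self):
            for when, author, text in self.messages:
                print(f"[{when:%Y-%m-%d %H:%M}] {author}: {text}")

    conf = Conference("Wireless LANs")
    conf.post("moderator", "Welcome - please introduce yourselves.")
    conf.post("participant", "Hello! I'm mostly interested in security.")
    conf.read()
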
Conferencing has its advantages and disadvantages. By far, it is cheaper than using long-distance telephone or fax, and the software and hardware needed to run it (a personal computer and an Internet connection) are becoming so readily available that larger numbers of people can become participants. Conferencing keeps meeting costs down because the costs associated with face-to-face meetings, such as travel and accommodations, don't exist. And in most modes of Internet conferencing, subscribers can participate at a time that is convenient for them, thus helping with the old "time management" dilemma.
Some people find it difficult to commit the amount of time it takes to make conferencing successful, and others don't like it because of the lack of personal contact. Participation is linked to a person's previous experience with technology and the Internet, his or her likes and dislikes, and his or her preferred learning styles. Research suggests that auditory learners may feel distanced from discussions in asynchronous conferences and may prefer telephone or face-to-face meetings where they can be heard. Visual learners usually flourish in the on-line environment because they are used to processing large amounts of information in this manner.
Pre-planning: Decide on a series of topics and find guest facilitators/moderators who will lead the discussion on a given topic. Determine how long a conference will last--a day, a week, several weeks, a month etc. Spread the dates of the conferences out over a six or eight month period with a good break between conferences (three conferences in this time period would be good). Promote the conferences in advance. This allows people to begin thinking about the topics and the kinds of questions they would like to ask or information they would like to share.
Promotion: Begin sending messages introducing the moderator, giving information about the participants and commenting on different elements of the topic to conference subscribers two weeks before the conference to build anticipation.
Introductions: Ask the guest moderator to introduce him/herself a week or so before the actual start of the conference. During that time, ask the moderator to post the conference agenda and request potential participants to: a) introduce themselves, b) suggest what they'd like to learn from the conference, and c) identify one or two of their favourite resources related to the topic. These are ways to “break the ice” that can contribute to the quality of the conference and, unobtrusively, help the participants feel more comfortable as they get to know and feel at ease with each other and this, for some of them, new manner of meeting.
Time Factor: Consider stretching the conference out over an appropriate period of time and have the moderator post her/his "conference" material every third day or so, depending on the overall length of the conference, to give participants more time to come and go on the system. Adjust this as time goes on, as participants become more comfortable with this process in general and become more active in their participation.
Be Prepared: There will be lulls in the action: at the beginning, as people are waiting to see how this will develop and at various points during the conference when interest may seem to be waning. Consider preparing a few colleagues to encourage discussion by having them ready to respond to the moderator’s postings, ask questions or post their own ideas at times when the action needs to get started or participation is slow.
Evaluation: Have a routine "post mortem" after each conference where participants, by way of an evaluation, can suggest what they learned and what could have been done to make the conference more helpful, specifically for them.
Technical: Electronic conferences should be held in a venue separate from the one people use for normal discussions. This way, regular users are less likely to feel they are interfering with or interrupting the conference. Electronic conferencing can be enjoyable as well as efficient and convenient. Giving some thought to how the conference will progress before it even starts will help to make it a valuable and enjoyable experience for all.

Electronic Meeting Systems

Meetings are a way of life in every business. They can be a source of tremendous frustration, and they are costly: put six people in a weekly staff meeting, and you've eaten up $10,000 worth of time. Total quality management, business process re-engineering, team management, and other techniques of the 90's aren't helping---most of these techniques actually increase the number of meetings people attend. Because meetings are so expensive, so inefficient, and so dissatisfying, it's no surprise that lots of software developers are working on tools to improve them.

These tools span a wide range of meeting assistance and support tasks. At the low end, software schedulers keep track of people, appointments, and resources to coordinate meeting times and places. In the middle are tools which help groups by improving communications. This includes conferencing systems and bulletin board packages, which extend "meeting space" outside of the meeting room by letting participants discuss issues without having to sit together. Other mid-range tools are designed to assist communications during a meeting. Video and audio conferencing hardware can be integrated with personal computers to link people in diverse locations for a single meeting. Shared drawing and editing tools also help groups work on a single document or share a visual concept easily.

At the high end are systems with much loftier goals: the complete reinvention of the meeting process. Developers of these systems have devised ways of completely changing the way meetings are held, and they have numbers from customers showing dramatic improvements in productivity. But these benefits come at a cost---attendees must stop thinking of meetings as a waste of time and start thinking of them as an opportunity to make decisions and share information.

Schedulers to Keep You on Track

The low end of the meeting support market focuses mainly on scheduling meetings and managing calendars. Although there are many products available for standalone use or which support only a single platform, only a few vendors have taken an enterprise-wide approach to scheduling. Even so, an organization with truly disparate platforms will find it difficult to locate a vendor willing to support every popular platform for this relatively simple task.
In evaluating group scheduling systems, network managers must keep in mind the underlying politics of scheduling. These are generally more important to the success of a group scheduler than quality of user interface or performance. If the group scheduler cannot successfully emulate people’s behavior regarding their own personal calendar, then it will not be accepted into the workplace. Groupware of this type must fit into the organization; it is not reasonable to expect people to change the way they operate simply to accommodate an appointment-scheduling program.
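As a concrete illustration of the coordination task itself, the Python sketch below does the one thing every group scheduler must do: find an hour that is free on every attendee's calendar. The calendars here are invented; real products layer access rights, recurring appointments, and resource booking on top of this core.

    # Each calendar is the set of busy hours (24-hour clock) for one day.
    calendars = {
        "ana":   {9, 10, 14},
        "boris": {9, 13, 14},
        "chen":  {10, 11, 14},
    }

    WORKDAY = range(9, 17)  # candidate meeting hours, 09:00 through 16:00

    # An hour is a candidate if no attendee is busy then.
    free_hours = [h for h in WORKDAY if all(h not in busy for busy in calendars.values())]
    print("possible meeting hours:", free_hours)  # -> [12, 15, 16]

The politics enter exactly here: the program only helps if people actually keep these calendars up to date, which is why fitting the organization matters more than the algorithm.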

Spreading the Meeting Room Around

Traditional meetings are same-time, same-place activities: everyone has to be in the same room at the same time. Software and hardware that extend the meeting room across both time and space can substitute for some face-to-face meetings, empower people in remote locations, and improve face-to-face meetings by making everyone better prepared.
The oldest alternative to face-to-face meetings is computer conferencing systems (sometimes called bulletin boards, although purists make a distinction between the two). These conferencing systems grew out of multi-user systems and often support both microcomputer and dumb-terminal interfaces.
Most conferencing systems do little more than let people exchange information and follow a single message and its associated discussion. The largest multi-platform conferencing system of this type is the Usenet News system. With literally dozens of public-domain and commercial "news readers," and a good selection of minicomputer-based servers, a simple conferencing system can truly encompass all corporate computing platforms, from dumb terminals and microcomputer systems on up to window-system terminals. Other commercial products which support multiple platforms include Digital's (800/DIGITAL) DEC Notes, Lotus' (800/522-6752) Notes, and Pacer Software's (800/722-3702) PacerForum.
Making Meetings Better
Let me explain how to make meetings better with a product called GroupSystems. It isn't a single package; it's actually a suite of software tools (sixteen in the DOS version, fewer in the Windows version) which automate and enhance many of the processes that occur in meetings. A meeting using GroupSystems requires a personal computer for each participant and a "facilitator," someone to lead the meeting and choose which GroupSystems tool is most appropriate to the task at hand.
For example, suppose you want to have a meeting to help decide on a new name for your company. In the GroupSystems world, the meeting would look like this. First, participants in the meeting would brainstorm ideas using the Electronic Brainstorming tool. Each participant would enter as many ideas as they could think of during a defined time period, say 10 minutes. As ideas were typed in, GroupSystems would shuffle them around and send them to other participants. By seeing the ideas of others, presumably, you come up with your own possibilities.
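A toy sketch of that idea-shuffling step, with participants and ideas reduced to plain strings (the real tool also manages timing, anonymity, and display, which are omitted here):

    import random

    # Toy model of Electronic Brainstorming: each idea entered is passed on,
    # anonymously, to a randomly chosen *other* participant's screen.
    stations = ["station1", "station2", "station3", "station4"]

    def route_idea(idea, author):
        others = [s for s in stations if s != author]
        return random.choice(others)  # where the idea reappears for comment

    target = route_idea("How about 'Northwind Industries'?", author="station2")
    print("idea forwarded anonymously to", target)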
GroupSystems advocates claim two advantages over manual meetings. First, participants can type in more ideas more quickly than anyone could possibly write them down, because everyone is typing at the same time. Second, each idea is evaluated on its own merits rather than on who said it: because tools like Electronic Brainstorming are anonymous, people who are normally afraid to bring up opinions in a meeting can offer their best ideas without fear. In most professional meetings, this anonymity is rarely abused.
Changing the Face
GroupSystems does not simply augment existing meetings. When a company buys in, it is buying much more than a software package. To use the system properly, facilitators must be trained to maximize meeting productivity with these tools---because GroupSystems changes the way companies hold meetings. Bringing in GroupSystems is not a trivial investment. GroupSystems requires a PC in front of each user, running Windows or DOS, a LAN (any popular microcomputer LAN package will work) to link them together, and a facilitator's station to control the meeting tools. GroupSystems also uses a shared screen at the front of the room, which has to be large enough for everyone to see. With a base software and training cost of $25,000, building a meeting room for GroupSystems usually costs about $100,000.
Electronic Discussions
We've all heard so much hype about the potential of computer-mediated communication to revolutionize teaching that we've begun to dismiss it automatically. We need to recognize, though, that it's not all hype. Even once you've discounted the snake-oil salesmen selling off-the-shelf electronic course guides, style checkers and "interactive" computer-assisted learning programs, there are still some startling opportunities. There are many reasons to expect that computer-mediated written discussions -- to pick the one that I'm most interested in -- should afford unprecedented learning opportunities, combining the flexibility and interactive engagement of oral conversation with the power of written language to foster reflection and allow complex ideas to be accumulated, revised, extended and polished.
But there haven't been many demonstrations of this potential. Indeed, the most common consequence of setting up an "electronic discussion group" for a university class or a group of faculty at an institution, or a set of colleagues with common interests, is a flurry of initial greetings ("Hello, everyone, isn't it great to have this new way to communicate"), followed by an enduring silence. The flurry may last somewhat longer for students in class-oriented discussions -- especially if participation is made a course requirement -- but even in those cases, most often the quality of the participation quickly becomes perfunctory and unengaged . . . usually not long before the instructor quietly allows the requirement to lapse.
To think about why this happens, and what we might do to avoid it, it's important to be clear about what sorts of programs and situations we're talking about. For many people, "it's all e-mail," but that oversimplification masks some distinctions that are worth making.
There are a number of ways we can group such programs to help us think about the characteristics of the different kinds of thing we're talking about here. One is to distinguish between "synchronous" and "asynchronous" types. Synchronous programs work in "real time": you write, someone reads immediately, and the text is gone, usually scrolling up the screen to oblivion. These range from Internet or local "chat rooms" to highly developed sites where conversations take place in fairly richly detailed virtual environments. "Asynchronous" programs like list servers and bulletin boards, on the other hand, tend more toward the status of written correspondence -- or even publication. Messages persist (for instance, in your incoming mail) until they're read, can easily be saved after reading, and can be responded to at your convenience. Most programs used in classes other than computer-dedicated writing classes are of this latter kind.

Electronic messaging

Electronic messaging is more than just text messages passed between human writers and readers -- it offers great potential in our environment to automate a great deal of routine data passing as well. These notes explore the topic, illustrate some of that potential, and aim to help you make wise investments on which you can grow a flexible, robust capability. Don't let the growing pains and a seemingly endless supply of warts divert your attention too much.
The first broadening of e-mail beyond the accustomed interpersonal communications is command communications: it is the natural replacement for the traditional telegram-style record message. This is the market that the Defense Message System is targeting.
The second use is in automated systems where at least one end -- either the originator or the recipient -- is a machine rather than a human. Secure, multipurpose e-mail is an extremely good idea -- and one whose time came some years ago. Unfortunately, the impediments to a deployed, usable system have been considerable, and the worst ones largely unforeseen.
On the other hand, this litany of growing pains should be put into perspective: e-mail is a very powerful tool that has improved enormously in a fairly short period of time. E-mail should be the choice of first resort as the enveloping definition for almost any information system, including "tactical" ones. If it fits, look no farther.
Electronic Publishing
Traditional publishing involves four steps. First the authors produce their material for review by editors working on behalf of publishers. Material found suitable for a publisher is then sent to be typeset, to format the content into individual pages with a chosen style. These pages are then reproduced in multiple sets by putting ink on paper, and are bound into individual books and journal issues. Finally these are distributed to the audience by post to mailing lists of subscribers, book club members, libraries with standing orders and people who order books on line, or through retail outlets like bookstores and news agents.
Computers have played certain parts in this process for some time. Authors use word processors to build up, revise and print the source material, and more complex typesetting software is used to format pages, with humans putting in the required commands to specify type fonts and sizes; page sizes and partitions; insertion of figures, equations and footnotes; and the linking of parts from different sources, such as chapters by multiple authors or reproductions of existing material, into a complete volume.
Most of you would be familiar with MS Word, and some of you may have used LaTeX, which formats an ASCII text file with embedded typesetting commands into a DVI file specifying the content of each page; this is further processed by a DVI-to-print-file converter (producing PDF or PostScript, for example) which specifies in minute detail what kind of symbol to put where on each page. Desktop publishing software basically combines word processing with easy-to-use typesetting software, so that an author or editor can produce camera-ready pages on his own PC to be sent to a printer, while HTML is a desktop publishing language that specifies the content and format of single pages for output on a computer screen, with provision to add colour, sound, video, etc. The old equipment that was used to do the job -- typewriters, molten-lead typesetting machines, even paper typesetting machines (used to produce masters for offset printers) -- is hardly used these days.
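For readers who have never seen one, an "ASCII text file with embedded typesetting commands" of the kind LaTeX processes looks roughly like the fragment below; running latex on it yields a DVI file, which a converter then turns into PostScript or PDF. This is a minimal sketch, not a production document.

    \documentclass{article}          % overall page style
    \begin{document}
    \section{Electronic Publishing}  % command: a numbered heading
    Fonts, page breaks, footnotes and equations such as $E = mc^2$
    are laid out automatically from commands embedded in plain text.
    \end{document}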
While computers have also played some part in controlling printing machines and in helping with the mailing and selling of printed material, their impact there has been much less fundamental, since they have not changed the basic process of putting ink on paper and distributing the paper piles. This is a highly inefficient process since the paper contributes almost all the weight, but the information is only carried by the ink. It takes organization to record, store and move around all that heavy bulk. In fact, one reason authors have to go through publishers to publish is the latter control the physical distribution system: while authors can produce content and make copies on their own, they lack means to put the copies into the libraries and retail shops, or to use large mailing lists to send copies to readers.
The Internet has radically changed this: content is now specified as modulations of electromagnetic waves, conducted almost instantaneously across the world via wires, satellite links and optical fibres, instead of ink on paper moved around on lorries and aeroplanes. With numerous search engines crawling the web looking at every page and cataloging the content, it does not take long before your pages turn up in the search results of someone looking for related material.
In short, with a PC on the Internet, anyone can write, format and distribute his writings in the most direct way. However, while this solves one problem, it creates a new set of issues for authors and publishers: content represented in this way is easily reproduced, with no loss of graphical quality and none of the work of copying, collating and binding. The previous exclusivity of control is now lost. For commercial publishing, the question is how to get paid when someone reads something you own. For scholarly publishing, the issue is establishing who published a particular piece of work at what time.
Information Systems Security and Control
There is little doubt that business use of computers is increasing – to the point where e-commerce businesses require all of the components to be functioning 24 hours a day. In this environment managers need to know what threats they face and what technologies exist to protect the system. Three aspects affect all businesses but are particularly important in e-commerce: 1. interception of transmissions, 2. attacks on servers, and 3. monitoring systems to identify attacks.
Many potential threats exist to information systems and the data they hold. The complicating aspect is that the biggest threat comes from legitimate users and developers. Purely by accident, a user might enter incorrect data or delete important information. A designer might misunderstand an important function, and the system will produce erroneous results. An innocent programming mistake could result in incorrect or destroyed data. Minor changes to a frail system could result in a cascading failure of the entire system.
We can detect and prevent some of these problems through careful design, testing, training and backup provisions. However, modern information systems are extremely complex. We cannot guarantee they will work correctly all of the time. Plus, the world poses physical threats that cannot be avoided: hurricanes, earthquakes, floods and so on. Often, the best we can do is build contingency plans that enable the company to recover as fast as possible. The most important aspect of any disaster plan is to maintain adequate backup copies. With careful planning, organisation, and enough money, firms can provide virtually continuous information systems support.
System Vulnerability and Abuse
As our society and the world itself come to depend on computers and information systems more and more, systems must become more reliable. The systems must also be more secure when processing transactions and maintaining data. These two issues, which we address in this lesson, are the biggest issues facing those wanting to do business on, or expand their operations to, the Internet. The threats are real, but so are the solutions.
Threats to Computerised Information Systems
• Hardware failure
• Fire
• Software failure
• Electrical problem
• Personnel actions
• User errors
• Terminal access penetration
• Program changes
• Theft of data, services, equipment
• Telecommunications problems
System Quality Problems: Software and Data
It would be nice to have a perfect world, but we don't. Defects in software and data are real. You as an end user can't do much about the software, but you can do something about the data you input.
Bugs and Defects
The term bug, used to describe a defect in a software program, has been around since the 1940s and 1950s. Back then, computers were powered by vacuum tubes - hundreds and thousands of them. Grace Hopper, an early pioneer, was on a team troubleshooting a computer that had quit running. When the team opened the back of the machine to see what was wrong, they found a moth trapped in one of its relays, which had stopped it working. So the term "bug" came to describe problems with computers and software.
With millions of lines of code, it's impossible to have a completely error-free program. Most software manufacturers know their products contain bugs when they release them to the marketplace. They provide free updates and fixes on their Web sites. That's why it's a good idea not to buy the original version of a new software program but to wait until some of the major bugs have been found by others and fixed by the company.
Because bugs are so easy to create, most of them unintentionally, you can reduce their number by using the tools discussed in other chapters to design good programs from the beginning. Many bugs originate in poorly defined and designed programs and then keep infiltrating all parts of the program.
The Maintenance Nightmare
You simply can't build a system and then ignore it. It needs constant and continual attention. The fact is that roughly half of a company's technology staff time is devoted to maintenance.
When you're considering organizational changes, no matter how minor they may seem, you must consider what changes need to be made to the systems that support the business unit. Keep in mind that software is very complex nowadays. You just might have to search through thousands or millions of lines of code to find one small error that can cause major disruptions to the smooth functioning of the system.
In the SDLC lesson, we stress good system analysis and design. How well you did back then will play out in the maintenance of the system. If you did a good job, maintenance will be reduced. If you did a poor job analyzing and designing the system, maintenance will be a far more difficult task.
Data Quality Problems
Let's bring the problem of poor data quality closer to home. What if the person updating your college records fails to record your grade correctly for this course and gives you a D instead of a B or an A? What if your completion of this course isn't even recorded? Think of the time and difficulty you'll experience getting the data corrected.
Information Systems security is everyone's business. Use antivirus software on your computer and update it every 30-60 days. The "it won't happen to me" attitude is trouble. Many system quality problems can be solved by instituting measures to decrease the bugs and defects in software and data entry.
Creating a Control Environment
How do you help prevent some of the problems we've discussed? One of the best ways is to introduce controls into your Information System the same as you might in any other system: through methods, policies, and procedures.
Think about what a typical company does when it builds a new office building. From the beginning of the design phase until the building is occupied, the company decides how the physical security of the building and its occupants will be handled. It builds locks into the doors, maybe even designs a single entry control point. It builds a special wing for the executive offices that has extra-thick bulletproof glass. Fences around the perimeter of the building control the loading docks.
These are just a few examples to get you to think about the fact that the company designs the security into the building from the beginning. You should do the same thing with an Information System. It's no different from any system that requires preplanning and well-thought-out policies and procedures before the building begins.
Let's look at the two distinct types of controls: general controls, which focus on the design, security and use of computer programs and data files, and application controls, which are concerned with the actual application programs.
General Controls
General Controls in Information Systems consist of the systems software and manual procedures used to control the design, security, and use of the programs and the data files in the overall system. General controls would be the overall security system, which may consist of outside door locks, fencing around the building, and employee passes. General controls wouldn't be concerned with what happens in one particular area of the building.
• Implementation Controls: When you use implementation control methods, you audit the development procedures and processes to make sure they conform to the business's standards and policies. Were all the steps completed, or did you skip some of them? What input did users and management have in the design and implementation of the system? Were managers allowed to sign off on milestones during the development process, or were they left out of the loop altogether? There is a reason why you have to use good, sound development procedures.
• Software and Hardware Controls: How is your system software installed, maintained, and used? What security measures are in place to ensure only authorized users are allowed access to your system? Are you using the latest version of virus protection software? These concerns are part of the software controls you should develop.
Companies control all of their manufacturing equipment, office supplies, and production tools--or at least try to. They should apply the same hardware controls to computer equipment as they would any other piece of equipment. Sometimes they don't. Laptop computers are especially vulnerable to theft and abuse: Companies seem to be very lax about employees borrowing laptops and then never returning them.
• Computer Operations and Data Security Controls: Computer operations controls are the responsibility of the Information Technology Department staff and are concerned with the storage and processing of data. Often overlooked in this area is the need for protecting the system documentation that details how jobs are processed, how data are stored, and how the systems operate. Someone who steals this information could do serious damage.
Whether you're working with current data or archived data, you still need to protect them from unauthorized access or use. The movie "The Net" depicted a fictionalized version of data theft and manipulation. While it may have been an exaggeration, this could happen if a company doesn't do enough to protect its data.
• Data security controls should consist of passwords that allow only certain people access to the system or to certain areas of the system. While you may want to grant employees access to their payroll data or 401K data through an Intranet, you must make sure they can access only their information and not that of any other employee. You wouldn't want a co-worker to be able to access your paycheck information, would you?
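A minimal sketch of that payroll rule, with a made-up record store and employee IDs: the control is simply that a request succeeds only when the requesting user's ID matches the record's owner.

    # Hypothetical payroll store keyed by employee ID.
    payroll = {"e001": {"salary": 52000}, "e002": {"salary": 61000}}

    def read_payroll(requesting_user, employee_id):
        # Data security control: users may read only their own record.
        if requesting_user != employee_id:
            raise PermissionError("access denied: not your record")
        return payroll[employee_id]

    print(read_payroll("e001", "e001"))   # allowed
    # read_payroll("e001", "e002")        # would raise PermissionError

Real systems attach such checks to authenticated sessions rather than trusting a passed-in user name, but the control itself is the same comparison.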
Application Controls
We've talked about controls for the general use of an Information System. Application controls are specific controls within each computer application used in the system. Each activity in the system needs controls to ensure the integrity of the data input, how it's processed, and how it's stored and used.
• Input Controls: Are the data accurate and complete? We used an example earlier of a course grade being entered incorrectly. If your system had a method to check the data on the input documents against the actual data entered into the system, this kind of error could be caught and corrected at the time it was entered. Many companies are using source data automation to help eliminate input errors.
• Processing Controls: As the name describes, processing controls are used during the actual processing of the data. If Suzy says she entered 100 items into the system on Tuesday, your application program would have a method of checking and reporting the actual number of data entries for that day. Not that you think Suzy is lying; you just need to have a method of verifying and reconciling data entered against data processed.
• Output Controls: Is the information created from the data accurate, complete, and properly distributed? Output controls can verify who gets the output, and if they're authorized to use it. You can also use output controls to match the number of transactions input, the number of transactions processed, and the number of transactions output.
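All three kinds of application control reduce to one reconciliation idea, sketched below with invented counts: the numbers of transactions entered, processed, and output should agree, and any mismatch is flagged for review.

    def reconcile(entered, processed, output):
        # Processing/output control: the stage counts must agree.
        if entered == processed == output:
            return "counts reconcile"
        return f"discrepancy: entered={entered}, processed={processed}, output={output}"

    # Suzy reports 100 entries on Tuesday; the application logged 98 processed.
    print(reconcile(entered=100, processed=98, output=98))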
Security and the Internet
We can't stress enough the importance of security for Intranets, Extranets, and the Internet. Organizations must control access through firewalls, transaction logs, access security, and output controls. Software programs that track "footprints" of people accessing the system can be a good way to detect intruders in the system, what they did, what files they accessed, and how they entered your system initially.
The most important point is that you get the software, use it, and protect one of your most important organizational resources.
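A "footprint" tracker can be surprisingly simple in outline. The sketch below uses a made-up log of (user, file) pairs and an equally made-up authorization table; the scan reports any access outside a user's authorized set. Production tools add timestamps, source addresses, and analysis of how the intruder entered.

    # Invented access log: (user, file accessed).
    log = [("pat", "/payroll/pat"), ("pat", "/payroll/lee"), ("lee", "/payroll/lee")]

    # Invented authorization table: which files each user may touch.
    authorized = {"pat": {"/payroll/pat"}, "lee": {"/payroll/lee"}}

    for user, path in log:
        if path not in authorized.get(user, set()):
            print(f"ALERT: {user} accessed {path} without authorization")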
Developing a Control Structure: Costs and Benefits
You should be realistic about security and system controls. If you set up five layers of entry into your Web site, people probably won't access it that much. They'll either ignore it or find a way around your controls. You have to analyze the system and determine those areas that should receive more security and controls and those that probably can use less.
Returning to our building analogy, the Executive Wing, which houses the CEO and other key executives, will probably have more locks on the doors, more entry barriers, than the area the data workers occupy. You can't check absolutely every person who traverses the hallways each day, but you can have regular employees wear badges that readily identify them.
The Role of Auditing in the Control Process
Companies audit their financial data using outside firms to make sure there aren't any discrepancies in their accounting processes. Perhaps they audit their supply systems on a periodic basis to make sure everything is on the up-and-up. They should also audit their Information Systems. After all, information is as important a resource as any other in the organization. MIS audits verify that the system was developed according to specifications, that the input, processing, and output systems are operating according to requirements, and that the data are protected against theft, abuse, and misuse. In essence, an MIS audit checks all the controls we've discussed in this lesson.
We mentioned earlier that a bank teller wouldn't be the one to count the money in the till at the end of the workday. To ensure validity in an MIS audit, you would use someone totally disconnected from the system itself. Usually companies hire outside auditors to verify the integrity of the system, since they won't have any vested interest in hiding any flaws.
Ensuring System Quality                             
There's a reason why we explained all those methods and procedures and processes in previous chapters for building good, solid Information Systems. They ensure system quality so that the product produced by the system is as good as it can be.
Software Quality Assurance
Just as you must assure quality of other products and other work, you must assure the quality of your software.
Methodologies
It's easier to find the flaws in a system if you create all new systems and programs the same way every time. If you want to check the system, fix it, add to it, or audit it, you won't have to spend time figuring out how it was built in the first place. In this case, predictability leads to efficiency. Documentation -- which most people fail to develop -- makes it easier to determine how the system is built and how it operates.
Most companies and most people spend the majority of their time in the programming phase of system development. Not a good idea. Just accept the fact that the more time you spend analyzing and designing a system, the easier the programming and the better the system. You will save a lot of time and headaches and money. Honest, it really does work that way!
Software Metrics
Be objective when you're assessing the system by using software metrics to measure it. Emotions tend to cost money and use unnecessary resources. The text gives several good examples of metrics you can use to measure your system inputs, processes, and outputs; a small worked example follows the list below. For metrics to be successful, they must be:
• Carefully designed
• Formal
• Objective
• Focused on significant aspects of the system
• Used consistently
• Agreed to by users in advance
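As one concrete metric of the kind this list calls for, many teams track defect density, the number of reported defects per thousand lines of code (KLOC). The figures below are invented; only the formula matters.

    def defect_density(defects_reported, lines_of_code):
        # Defects per thousand lines of code: an objective, consistent measure.
        return defects_reported / (lines_of_code / 1000)

    print(defect_density(defects_reported=42, lines_of_code=120_000))  # -> 0.35

Tracked release over release, a rising density is an early, unemotional signal that quality is slipping.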
Testing
You can't ignore testing as a vital part of any system. Even though your system may appear to be working normally, you should still verify that it is working according to the specifications. Walkthroughs are an excellent way to review system specifications and make sure they are correct. Walkthroughs are usually conducted before programming begins, although they can be done periodically throughout all phases of system development.
Once a system has been coded, it is much harder and more expensive to change it. We're beginning to sound like a broken record, but it's important that you understand and remember that the more work you do before the programming phase begins, the less trouble you'll have later. You can't just start pounding the keyboard and hope everything works out okay.
Quality Tools
Just as you would manage any big project--a house, a highway, a skyscraper--you must manage the entire systems development project. You can do it much easier using project management software that allows you to keep track of the thousands of details, deadlines, tasks, and people involved in the project. This type of software also helps you keep everything in sync.
Data Quality Audits
We spoke earlier of MIS audits, which check the system and its general controls and application controls. Data quality audits verify the data themselves. Many of the principles we discussed in the MIS audit apply to this type of audit. A company should formally record the number and types of errors customers report. Using this record can help managers do data quality audits by giving them ideas of where they can start looking for problems or areas that need to be improved.
A few comments regarding the three items in the text:
• Survey end users for their perceptions of data quality: How do they see it? Looking at your data through a different set of eyes can reveal problems you weren't aware of.
• Survey entire data files: This can be expensive and time-consuming, but very fruitful.
• Survey samples from data files: Make sure the sample is big enough and random enough to uncover problems.
Conclusion
A management information system will only reap benefits if companies gain the insight to better align strategies and identify critical relationships and gaps along four key company dimensions – people, process, culture and infrastructure. An information system provides a framework for companies to evaluate themselves relative to these dimensions. By understanding and improving alignment along these critical dimensions, companies can maximize the value and impact of information as a strategic corporate asset and gain competitive advantage.

Control the creation and growth of records. Despite decades of using various non-paper storage media, the amount of paper in our offices continues to escalate. An effective records information system addresses both creation control (limiting the generation of records or copies not required to operate the business) and records retention (a system for destroying useless records or retiring inactive records), thus stabilizing the growth of records in all formats.

Reduce operating costs. Recordkeeping requires administrative dollars for filing equipment, space in offices, and staffing to maintain an organized filing system (or to search for lost records when there is no organized system). It costs considerably less per linear foot to store inactive records in a Data Records Centre than in the office, so there is an opportunity to effect cost savings in space and equipment, and to utilize staff more productively, just by implementing a records management program.

Improve efficiency and productivity. Time spent searching for missing or misfiled records is non-productive. A good records management program (e.g. a document system) can help any organization upgrade its recordkeeping systems so that information retrieval is enhanced, with corresponding improvements in office efficiency and productivity. A well-designed and well-operated filing system with an effective index can facilitate retrieval and deliver information to users as quickly as they need it. Moreover, a well-managed information system, treated as a corporate asset, enables organizations to objectively evaluate their use of information and accurately lay out a roadmap for improvements that optimize business returns.

Assimilate new records management technologies. A good records management program gives an organization the capability to assimilate new technologies and take advantage of their many benefits. Investments in new computer systems, whether financial, business or otherwise, don't solve filing problems unless current manual recordkeeping or bookkeeping systems are analyzed (and occasionally overhauled) before automation is applied.

Ensure regulatory compliance. In terms of recordkeeping requirements, China is a heavily regulated country. These laws can create major compliance problems for businesses and government agencies, since they can be difficult to locate, interpret and apply. The only way an organization can be reasonably sure that it is in full compliance with laws and regulations is by operating a good management information system which takes responsibility for regulatory compliance while working closely with the local authorities. Failure to comply with laws and regulations could result in severe fines, penalties or other legal consequences.

Minimize litigation risks. Business organizations implement management information systems and programs in order to reduce the risks associated with litigation and potential penalties; the same can be true in government agencies. For example, a consistently applied records management program can reduce the liabilities associated with document disposal by providing for systematic, routine disposal in the normal course of business.

Safeguard vital information. Every organization, public or private, needs a comprehensive program for protecting its vital records and information from catastrophe or disaster, because every organization is vulnerable to loss. Operated as part of a good management information system, vital records programs preserve the integrity and confidentiality of the most important records and safeguard vital information assets according to a plan to protect the records. This is especially the case for financial information, where ERP (Enterprise Resource Planning) systems are being deployed in large companies.

Support better management decision making. In today's business environment, the manager who has the relevant data first often wins, either by making the decision ahead of the competition or by making a better, more informed decision. A good management information system can help ensure that managers and executives have the information they need when they need it. By implementing an enterprise-wide file organization, including indexing and retrieval capability, managers can obtain and assemble pertinent information quickly for current decisions and future business planning. Likewise, a good ERP system that takes account of all the business's processes, both financial and operational, gives an organization more advantages than one operating a manual-based system.

Preserve the corporate memory. An organization's files, records and financial data contain its institutional memory, an irreplaceable asset that is often overlooked. Every business day, you create records which could become background data for future management decisions and planning.

Foster professionalism in running the business. A business office with files, documents and financial data askew, stacked on top of file cabinets and in boxes everywhere, creates a poor working environment. The perceptions of customers and the public, and the image and morale of the staff, though hard to quantify in cost-benefit terms, may be among the best reasons to establish a good management information system.
 


   XXX  .  V00000 Wide-Area Wireless Communication: Microwave, Satellite, 3G, 4G & WiMAX

You're planning on moving to a new house on the outskirts of the city, and you're a little worried about whether you will be able to get wireless Internet or mobile phone service at your new home. So, you go to a local electronics store to get some advice. Here are some of the options: 2G, 3G, WiMAX, satellite phone and satellite Internet. So what does all that mean? Time for a closer look at wide-area wireless communications.

Wireless Communication

A wireless communication network refers to any type of network that establishes connections without cables. Wireless communications use electromagnetic (EM) waves that travel through the air. There are three main categories of wireless communication, based on how far the signal travels.
In short-range wireless communication, the signal travels from a few centimeters to several meters. Examples include Bluetooth, infrared and ZigBee. In medium-range wireless communication, the signal travels up to 100 meters or so. WiFi is the best-known example. In wide-area wireless communication, the signal travels quite far, from several kilometers to several thousand kilometers. Examples of wide-area wireless communication systems are cellular communications, WiMAX and satellite communications. All of these use some form of microwave signals.

Microwaves

Microwaves are high-frequency signals in the 300 MHz to 300 GHz range. The signals can carry thousands of channels at the same time, making it a very versatile communication system. Microwaves are often used for point-to-point telecommunications, which means that the signal is focused into a narrow beam. You can typically recognize microwave-based communication systems by their use of a large antenna, often in a dish format. This is in contrast to radio signals, which are typically broadcast in all directions. Radio signals operate in the 3 Hz to 300 MHz range.
Microwave signals are used for both satellite and ground-based communications. Many TV and telephone communications in the world are transmitted over long distances using microwave signals. They use a collection of ground stations and communication satellites. Ground stations are typically placed roughly 50 kilometers apart so they can 'see' each other.
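That roughly 50-kilometer spacing follows from the curvature of the Earth. A standard approximation (stated here as background, not taken from the original text) for the distance to the horizon seen from an antenna h meters tall is

    d \approx 3.57\,\sqrt{h}\ \text{km}, \qquad h \text{ in metres}

so two towers 50 km apart, each needing to see 25 km to the midpoint, need antennas roughly (25 / 3.57)^2, or about 49 m, tall. Atmospheric refraction bends microwaves slightly around the curve, which relaxes this requirement a little.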
Microwaves are also used in - you guessed it - microwave ovens. These microwaves, however, are quite different from those used in communication systems. First, the microwaves have a much higher power level so they can heat your food. Second, the signal doesn't carry any information because you're not trying to tell your lasagna something.

Cellular Communications

Mobile phones have become one of the most widely used wireless communication devices. You probably use one every day and don't think too much about how it actually works - except when it doesn't work.
So, how do mobile phones work? A mobile phone or cell phone is very much like a two-way radio; you can wirelessly send and receive information. There are a few major components to the mobile phone system. These include cell towers, network processing centers and the actual phones themselves.
Communications within a cellular network are made possible by cell towers. Your mobile phone establishes a wireless connection using electromagnetic waves with the nearest cell tower. This is a two-way connection, meaning you can both send and receive information. The cell tower has a wired connection to the telephone network. This makes it possible for you to connect with any other telephone in the world.
So when you make a phone call from one cell phone to another, your phone connects to the cell tower, the cell tower connects to a network processing center, the network connects to another cell tower and that cell tower connects to the cell phone you are trying to reach. All this happens within a split second back and forth.
As you move about your day, your wireless connection will jump from one cell tower to the next depending on where you are. This makes it possible to maintain a connection even if you travel great distances. The transmission distance of cell towers is in the order of several kilometers, so cell towers are placed close enough together for their signals to overlap. The area covered by a single cell tower is referred to as a 'cell' or 'site.' A large metropolitan area may have hundreds of cells to cover the entire region.
If you are in a remote area far from the nearest cell tower, your phone will lose its connection to the network. These are the dreaded 'dead zones' without service. You may also lose service if you are in a location where signals have difficulty penetrating. This includes metal and concrete - and this is why you often lose reception in an elevator or in an underground parking garage.
Anywhere else your phone doesn't work? You got it - in an airplane at 30,000 feet. The cellular network uses ground-based towers, so as soon as you get in the air, you're going to lose the signal.

1G, 2G, 3G and 4G

Mobile telephone systems have evolved through the various generations of technology. The first generation, or 1G technology, dates back to the 1980s and was based strictly on analog signals. The second generation, or 2G technology, was developed in the 1990s and used a completely digital system. A number of different 2G network technologies have been developed, including GSM (mostly used in Europe and Asia) and CDMA (mostly used in North America).
Third generation, or 3G technology, was developed around the year 2000 and is able to carry a lot more data. In fact, 3G does not refer to a specific technology but to any network technology that provides a data transfer rate of at least 200 kilobits per second. A number of different 3G technologies exist, including upgraded versions of 2G technologies as well as some new ones.
Each generation of technology uses different communication protocols. This includes details on which specific frequencies are used, how many channels are involved and how information is converted between digital and analog. Different protocols mean that with each generation all the hardware needs to be upgraded, including the cell towers, network processing centers and the phones themselves. Different technologies within a single generation are also not always compatible, which means you cannot assume your 3G phone will work in a different country.
As of 2013, fourth generation, or 4G technology, is under development, including HSPA+, LTE and WiMAX. 4G promises to deliver higher data transfer rates, approximately 10 times faster compared to 3G. These speeds are competitive with broadband Internet services available in homes and offices.
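To put those rates in perspective, here is the arithmetic for a 5-megabyte photo at the minimum 3G rate against a ten-times-faster 4G rate; the file size is chosen purely for illustration.

    FILE_BITS = 5 * 8 * 10**6   # a 5 MB photo expressed in bits
    RATE_3G = 200_000           # the 3G floor: 200 kilobits per second
    RATE_4G = 10 * RATE_3G      # "approximately 10 times faster"

    print(f"3G: {FILE_BITS / RATE_3G:.0f} s, 4G: {FILE_BITS / RATE_4G:.0f} s")
    # -> 3G: 200 s, 4G: 20 s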
While 4G is still under development, there has been a tremendous growth in data transfer over cellular networks. If you are like most smartphone users, you check email, upload photos and download videos. With around six billion mobile phones being used as of 2013, this means a whole lot of network traffic.

WiMAX

One of the 4G technologies is WiMAX, or Worldwide Interoperability for Microwave Access. WiMAX is a standard for metropolitan area networks. It is similar to WiFi but works over greater distances and at higher transmission speeds. This means you need fewer base stations to cover the same area relative to WiFi.
WiMAX is not only an example of a 4G cellular network technology; it can also provide direct Internet access to metropolitan areas. It can be used to reach areas that are currently not served by phone and cable companies.
WiMAX consists of two components: towers and receivers. WiMAX towers are similar to cell towers, and the signals can cover a large area. WiMAX receivers can be installed into mobile computing devices, similar to a WiFi card or at fixed locations in homes and offices. 



             XXX  .  V00000  The digital future of work: What skills will be needed? 

Robots have long carried out routine physical activities, but increasingly machines can also take on more sophisticated tasks. Experts provide advice on the skills people will need going forward.
For an 18-year-old today, figuring out what kind of education and skills to acquire is an increasingly difficult undertaking. Machines are already conducting data mining for lawyers and writing basic press releases and news stories. In coming years and decades, the technology is sure to develop and encompass ever more human work activities.
Yet machines cannot do everything. To be as productive as it could be, this new automation age will also require a range of human skills in the workplace, from technological expertise to essential social and emotional capabilities.
In this video, one in a four-part series, experts from academia and industry join McKinsey partners to discuss the skills likely to be in demand and how young people today can prepare for a world in which people will interact ever more closely with machines. The interviews were filmed in April at the Digital Future of Work Summit in New York, which was hosted by the McKinsey Global Institute (MGI) and New York University’s Stern School of Business.
Interviewees include NYU provost Katherine Fleming and professors Arun Sundararajan and Vasant Dhar; Tom Siebel, founder, chairman, and CEO of C3 IoT; Anne-Marie Slaughter, president and CEO of New America; Jeff Wald, cofounder and president of WorkMarket; Allen Blue, cofounder of LinkedIn; Mike Rosenbaum, CEO of Arena; along with MGI chairman and director James Manyika and MGI partners Michael Chui and Susan Lund.
 

Interview transcript

X : For young people today, what’s clear is that they’re going to need to continue to learn throughout their lifetime. The idea that you get an education when you’re young and then you stop and you go and work for 40 or 50 years with that educational training and that’s it—that’s over. All of us are going to have to continue to adapt, get new skills, and possibly go back for different types of training and credentials. What’s very clear is that what our kids need to do is learn how to learn and become very flexible and adaptable.
Y : The future of work that a college graduate is looking at today is so different from the future of work that I looked at when I was a college graduate. There’s far less structure, there’s far less predictability. You don’t know that you can invest in a particular set of capabilities today and that will be valuable in 20 years. We used to be able to say, “This is the career I’m going to choose.” That’s a difficult bet to make today with so much change.
Z : More generally what I tell students is that it would help if you had the skills that are required to deal with information because those are the core skills that are necessary these days to help you learn new things. This ability to learn things on your own to some extent will be driven by the core skills you have and how you can handle and process information.
P : The most important message is you need to prepare for yourself. If people are sitting back waiting, candidly, to be taken care of by a welfare state, I don’t think that’s a very good answer.
R : We found that, for example, in something like 60 percent of all occupations an average 30 percent of their work activities are automatable. What does that mean? We’re going to see more people working alongside machines, whether you call that artificial augmentation or augmented intelligence, but we’re going to see a lot more of that. That’s quite important because it raises our whole sense of imperatives. It means that more skill is going to be required to make the most of what the machines can do for the humans. 
 
Workforce transitions in a time of automation

They’re going to need skills that they can only get by doing things. So every time they’re given the opportunity to do something, they should say yes to it, even if it doesn’t strike them initially as being exactly what they want to be doing.

Automation is happening, and it will bring substantial benefits to businesses and economies worldwide, but it won’t arrive overnight. A new McKinsey Global Institute report finds realizing automation’s full potential requires people and technology to work hand in hand. 
Automation, digital platforms, and other innovations are changing the fundamental nature of work. Understanding these shifts can help policy makers, business leaders, and workers move forward. 
 Online talent platforms are increasingly connecting people to the right work opportunities. By 2025 they could add $2.7 trillion to global GDP, and begin to ameliorate many of the persistent problems in the world’s labor markets.

       


                         XXX  .  V000000 ELECTRONIC DATA INTERCHANGE     


Electronic Data Interchange (EDI)

Electronic data interchange (EDI) is the use of computer and telecommunication technology to move data between or within organizations in a structured, computer retrievable data format that permits information to be transferred from a computer program in one location to a computer program in another location, without manual intervention. An example is the transmission of an electronic invoice from a supplier's invoicing software to a customer's accounts receivable software. This definition includes the direct transmission of data between locations, transmission using an intermediary such as a communication network, and the exchange of digital storage devices such as magnetic tapes, diskettes, and CD-ROMs.
EDI is one of the most important subsets of electronic commerce —the use of computer and telecommunication technology to facilitate the information exchange between two parties in a commercial transaction. The intent of all electronic commerce is to automate business processes. Some transactions can be completely paperless and move data from one computer application to another computer application. By strict definition EDI falls under this type of electronic commerce. Other electronic commerce transactions are also paperless but involve manual intervention. Examples are Internet transactions requiring one party to enter data manually. Electronic mail is another example of paperless but manual electronic commerce. Sometimes firms claim to be doing EDI when they are really performing a manual-to-computer transaction such as electronic order entry.
Another form of electronic commerce is based on physical media interacting with computers and telecommunications processes. Examples of this third type are facsimile transmission (paper plus telecommunications) and processes that involve information captured by bar coding, optical character recognition, and radio frequency tagging.
Exhibit 1 shows how EDI contrasts with facsimile transmission (fax) and electronic mail (e-mail). Fax is the transfer of totally unstructured data. With fax, a digitized image of a paper document is transmitted. While mail time delays are avoided, the receiver of a facsimile transmission would not be able to enter the image directly into a computer program without rekeying. E-mail also moves data electronically but is designed for person-to-person applications. It uses a free format rather than a structured format. A party receiving an e-mail purchase order, for example, would not likely be able to automatically read it into an order entry program, and would most likely have to rekey the information.
Exhibit 1: Contrasting EDI with Fax and E-Mail

EDI has two important subsets, illustrated diagrammatically in Exhibit 2. Electronic Funds Transfer (EFT) is EDI between financial institutions. The result of an EFT transaction is the transfer of monetary value from one account to another. Examples of EFT systems in the United States are FedWire and Automated Clearing House (ACH) payments. FedWire is the same-day, real-time, electronic transfer of funds between two financial institutions using the communication network of the Federal Reserve. ACH transfers are batch-processed electronic transfers that settle in one or two business days. An example of an ACH transfer is the direct deposit of payroll offered by many firms to their employees. Using the ACH network, the originating bank sends electronic payment instructions to each receiving bank. Another application of the ACH system is direct debiting, which many consumers use to make mortgage, utility, and insurance payments.
Exhibit 2: Relationship between EDI, EFT and FEDI

Financial EDI (FEDI) is EDI between banks and their customers or between banks when there is not a value transfer. For example, a firm may receive electronic reports from its bank listing all checks received the previous day. A bank may also send its monthly statement to a firm using FEDI. Some firms send FEDI payment orders to their banks to initiate supplier payments.

BENEFITS OF EDI


EDI was developed to solve the problems inherent in paper-based transaction processing and in other forms of electronic communication. In solving these problems, EDI is a tool that enables organizations to reengineer information flows and business processes. Problems with the paper-based transaction system are:
  • Time delays. Delays are caused primarily by two factors. Paper documents may take days to transport from one location to another. In addition, manual processing delays are caused by the need to key, file, retrieve, and compare data.
  • Labor costs. In non-EDI systems, manual processing is required for data keying, document storing and retrieving, sorting, matching, reconciling, envelope stuffing, stamping, signing, etc. While automated equipment can help with some of these processes, most managers will agree that labor costs for document processing represent a significant proportion of their overhead. In general, labor-based processes are much more expensive than non-labor-intensive operations involving computers and telecommunications.
  • Errors. Because information is keyed multiple times and documents are transported, stored, and retrieved by people, non-EDI systems tend to be error prone.
  • Uncertainty. Uncertainty exists in two areas. First, paper transportation and other manual processing delays mean that the time of receipt is uncertain. Once a transaction is sent, the sender does not know when it will be received or when it will be processed. Second, the sender does not even know whether the transaction has been received at all, or whether the receiver agrees with what was sent.
  • High Inventories. Because of time delays and uncertainties in non-EDI processing, inventories are often higher than necessary. Lead times with paper processing are long. In a manufacturing firm, it may be virtually impossible to achieve a just-in-time inventory system with the time delays inherent in non-EDI processing systems.
  • Information Access. In a non-EDI environment, access to detailed transaction data is possible only with great effort and time delay; EDI permits user access to a vast amount of it. Because EDI data is already in computer-retrievable form, it is subject to automated processing and analysis. Such information helps one retailer, for example, monitor sales of toys by model, color, and customer zip code. This enables the retailer to respond very quickly to changes in consumer taste.

INFRASTRUCTURE FOR EDI


To make EDI happen, four elements of infrastructure must exist: (1) format standards are required to facilitate automated processing by all users; (2) translation software is required to translate from a user's proprietary format for internal data storage into the generic external format and back again; (3) value-added networks are very helpful in solving the technical problems of sending information between computers; and (4) inexpensive microcomputers are required to bring all potential users—even small ones—into the market. It has only been in the past several years that all of these ingredients have fallen into place.

FORMAT STANDARDS.

To permit the efficient use of computers, information must be highly organized into a consistent data format. A format defines how information in a message is organized: what data goes where, what data is mandatory, what is optional, how many characters are permitted for each data field, how data fields are ordered, and what codes or abbreviations are permitted.
Early EDI efforts in the 1960s used proprietary formats developed by one firm for exclusive use by its trading partners. This worked well until a firm wanted to exchange EDI documents with other firms who wanted to use their own formats. Since the different formats were not compatible, data exchange was difficult if not impossible.
To facilitate the widespread use of EDI, standard formats were developed so that an electronic message sent by one party could be understood by any receiver that subscribes to that format standard. In the United States the Transportation Data Coordinating Committee began in 1968 to design format standards for transportation documents. The first document was approved in 1975. This group pioneered the ideas that are used by all standards organizations today. North American standards are currently developed and maintained by a volunteer organization called ANSI (American National Standards Institute) X12 Accredited Standards Committee (or simply ANSI X12). The format for a document defined by ANSI X12 is broad enough to satisfy the needs of many different industries. Electronic documents (called transaction sets by ANSI X12) are typically of variable length and most of the information is optional. When a firm sends a standard EDI purchase order to another firm, it is possible for the receiving firm to pass the purchase order data through an EDI translation program directly to a business application, without manual intervention.
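To make the segment idea concrete, here is a minimal Python sketch of how a receiving system might split a highly simplified X12-style purchase order into its segments and elements. The sample message, the asterisk element separator, and the tilde segment terminator are illustrative conventions only; real interchanges declare their delimiters in the interchange envelope, and a real 850 transaction set carries many more segments than shown here.

    # A minimal sketch of reading a simplified ANSI X12-style purchase
    # order (850). Assumes "~" terminates segments and "*" separates
    # elements; the field values below are invented for illustration.

    RAW = (
        "ST*850*0001~"
        "BEG*00*NE*PO12345**20230101~"
        "PO1*1*100*EA*9.95**UP*012345678905~"
        "SE*4*0001~"
    )

    def parse_segments(raw):
        """Split a raw X12-style message into (segment_id, elements) pairs."""
        for segment in raw.split("~"):
            if segment:
                elements = segment.split("*")
                yield elements[0], elements[1:]

    for seg_id, elements in parse_segments(RAW):
        print(seg_id, elements)

Because every subscriber to the standard agrees on this structure, the receiving program knows without human help that the BEG segment carries the purchase order number and that each PO1 segment is a line item.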

INDUSTRY CONVENTIONS.

To satisfy users from many different organizations with vastly different needs, the ANSI X12 standards must remain very generic. Some industries have developed their own subsets of the more generic EDI formats. These formats may be considered to be essentially customized formats of the more generic standard EDI formats. For example, the ANSI X12 standard defines two-digit codes for more than 400 units of measure. The automotive industry does not need nearly that many, so their industry convention documentation defines only a handful of units of measure. This makes EDI less confusing for implementers.

TYPES OF STANDARDIZED DOCUMENTS.

There are currently generic standards for more than 300 types of transactions, including: purchase order, invoice, functional acknowledgment, purchase order acknowledgment, payment order, request for quote, insurance claim, inventory data, grade transcript, student loan data, freight invoice, bill of lading, lockbox receipt, load tender, library loan request, promotion announcement, advanced ship notice, material release, telephone bill, price/sales catalog, and claim tracer.

EDIFACT STANDARDS.

Under the auspices of the United Nations, a format standard has been developed to reach a worldwide audience. These standards are called EDI for Administration, Commerce, and Transport (EDIFACT). They are similar in many respects to ANSI X12 standards but are accepted by a larger number of countries. In the future, all new ANSI X12 EDI documents will be developed using the EDIFACT format.

TRANSLATION SOFTWARE.

Translation software makes EDI work by translating data from the sending firm's internal format into a generic EDI format. Translation software also receives a sender's EDI message and translates it from the generic standard into the receiver's internal format. There are currently translation software packages for almost all types of computers and operating systems.
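As a rough illustration of what translation software does, the following Python sketch converts a hypothetical internal order record into generic X12-style segments and back again. The internal field names and the segment layout are assumptions made for illustration, not any particular vendor's format.

    # A simplified sketch of EDI translation: outbound, an internal
    # record becomes generic segments; inbound, the segments become a
    # record the receiver's applications can use. Field names are
    # hypothetical.

    def to_edi(order):
        return (
            f"BEG*00*NE*{order['po_number']}~"
            + "".join(
                f"PO1*{i}*{line['qty']}*EA*{line['price']}~"
                for i, line in enumerate(order["lines"], start=1)
            )
        )

    def from_edi(raw):
        order = {"lines": []}
        for seg in filter(None, raw.split("~")):
            parts = seg.split("*")
            if parts[0] == "BEG":
                order["po_number"] = parts[3]
            elif parts[0] == "PO1":
                order["lines"].append(
                    {"qty": int(parts[2]), "price": float(parts[4])})
        return order

    order = {"po_number": "PO12345",
             "lines": [{"qty": 100, "price": 9.95}]}
    assert from_edi(to_edi(order)) == order   # round trip preserves data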

VALUE-ADDED NETWORKS (VANS).

When firms first began using EDI, most communications of EDI documents were directly between trading partners. Unfortunately, direct computer-to-computer communication requires that both firms (1) use similar communication protocols, (2) have the same transmission speed, (3) have phone lines available at the same time, and (4) have compatible computer hardware. If these conditions are not met, then communication becomes difficult if not impossible. A value-added network (VAN) can solve these problems by providing an electronic mailbox service. By using a VAN, an EDI sender need only learn to send and receive messages to/from one party: the VAN. Since a VAN provides a very flexible computer interface, it can talk to virtually any type of computer. This means that to do EDI with hundreds of trading partners, an organization has to talk to only one party.
VANs also provide a secure interface between trading partners. Since trading partners send EDI messages only through the VAN, there is no fear that a trading partner may dip into sensitive information stored on the computer system, nor that a trading partner may send a computer virus to the other partners.
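The mailbox idea can be sketched conceptually: each trading partner hands messages to the VAN and polls a single mailbox for whatever has arrived, regardless of who sent it or what equipment they use. The Python sketch below is a toy model of that service, not a real VAN interface.

    # A conceptual sketch of a VAN's electronic mailbox service: all
    # partners talk only to the VAN, which holds messages until the
    # addressee connects. Partner names are invented.

    from collections import defaultdict, deque

    class ValueAddedNetwork:
        def __init__(self):
            self._mailboxes = defaultdict(deque)

        def send(self, sender, receiver, message):
            # The VAN stores the message until the receiver connects.
            self._mailboxes[receiver].append((sender, message))

        def receive(self, receiver):
            # The receiver polls one mailbox instead of many senders.
            box = self._mailboxes[receiver]
            return [box.popleft() for _ in range(len(box))]

    van = ValueAddedNetwork()
    van.send("supplier_a", "retailer", "purchase order acknowledgment")
    van.send("supplier_b", "retailer", "advance ship notice")
    print(van.receive("retailer"))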
One of the most important recent developments in EDI is the use of the Internet as the delivery network for EDI transactions. Since the Internet is so widely available, even most smaller firms have access to and familiarity with the use of browsers. Some service providers are using browser technology to permit trading partners to enter data into forms on web pages. The data can then be translated into EDI format and transmitted to the receiving party. While not strictly computer-to-computer, this process allows a receiver to make greater use of EDI on their side of the transaction.

INEXPENSIVE COMPUTERS.

The fourth building block of EDI is inexpensive computers that permit even small firms to implement EDI. Since microcomputers are now so prevalent, it is possible for firms of all sizes to deal with each other using EDI.

EXAMPLES OF EDI


The Bergen Brunswig Drug Company, a wholesale pharmaceutical distributor in Orange, California, is one of the most successful companies in using EDI for many of its business processes. To generate an order to Bergen Brunswig, a customer (pharmacist) uses a handheld bar code scanner to capture the UPC (Uniform Product Code) number on a shelf label for a product to be ordered. The pharmacist enters the quantity desired into a keypad on the scanner and moves on to the next item. All items in the pharmacy can be scanned in only a few minutes. A microcomputer next reads the information contained in the scanner and an electronic order is prepared for the pharmacist's review. The order is sent via EDI to a Bergen Brunswig distribution center where the order is analyzed and resequenced to match the product location in the distribution center. Within five hours, the order is delivered to the pharmacy. Bergen Brunswig has been able to eliminate all order takers, reduce errors to near zero, fulfill orders faster, reduce overhead costs, and build customer loyalty. The company also uses EDI for sending purchase orders to pharmaceutical manufacturers, receiving invoices, and handling charge backs.
The JCPenney Company has an operating center in Salt Lake City, Utah, which uses EDI to receive almost all of its invoices from suppliers. The result has been substantial savings in terms of personnel costs (four processing centers were combined into one and several hundred people who did manual processing were no longer needed), a reduction in errors, faster matching of invoice and purchase order, more timely payments, and a reduction in paper storage requirements.
U.S. auto manufacturers are also extensive users of EDI. Chrysler, as illustrated in Exhibit 3, has applied EDI to reengineer its manufacturing processes. Once a contract has been negotiated with a parts supplier, Chrysler sends the supplier weekly electronic material releases specifying the intended use of parts over an eight-week horizon. Several days before the parts are needed, Chrysler sends an EDI delivery order to the supplier detailing precisely how many parts are needed for delivery on a certain date, where the parts are to be delivered, what bar coding is to be put on the containers, and when delivery is expected. Some suppliers are told how to sequence the parts on the truck for most effective unloading. After the supplier loads the parts, an EDI advanced shipping notice is sent to Chrysler verifying that the delivery is on the way. Chrysler scans the parts in as they arrive and may send an electronic discrepancy report if there are problems. Payments are often made electronically on a settlement date specified in the contract. No invoice is used in this reengineered process. In this environment Chrysler needs very little inventory and, in fact, has been able to shave approximately $1 billion from its parts inventory.
Exhibit 3 EDI in the Automotive Industry

Use of EDI is spreading to many different types of organizations. The insurance industry is beginning to use EDI for health care claims, procedure authorization, and payments. Universities are using EDI for sending grade transcripts, interlibrary material requests, and student loan information. Retailers are sending EDI inventory data to suppliers and charging them with oversight of inventory levels and shipment initiation. The federal government and most states are now using EDI to collect tax filing information.

STATUS OF EDI


EDI appears to be entering into a rapid growth phase. According to an extensive market survey completed by the EDI Group, a division of Thomson EC Resources, well over 60 percent of U.S. firms with more than 5,000 employees are using EDI. Over the past several years there has been an enormous increase in the infrastructure supporting EDI, including seminars and conferences, books, periodicals, consultants, and software companies. From its current growth patterns, EDI is poised to become more and more important as a data communication tool that enables organizations to more efficiently design internal processes and external interactions with trading partners.


                                                       ELECTRONIC MAIL 

From its roots as an obscure mode of communication among computer hobbyists, academics, and military personnel, e-mail use has burgeoned to a medium of mass communication. According to published estimates from International Data Corp., a technology market research firm, as of 1998 there were 82 million personal and business e-mail accounts in the United States. For comparison, that figure was equal to half the number of telephones in use, an impressive proportion given that e-mail has been in existence only a quarter as long as the telephone.
E-mail's late 1960s inception came about from the work of Ray Tomlinson, a computer scientist working for a defense contractor. Tomlinson's work was for the U.S. Defense Department's Arpanet, the project that later spawned the Internet.

E-MAIL TECHNOLOGY


E-mail technology for the Internet (as opposed to closed private systems) follows a number of universal standards as codified by the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO). These standards help ensure that e-mail, like Internet communications in general, is not bounded by platform or geography.
An e-mail message is essentially one or more files being copied between computers on a network such as the Internet. This transfer of files is automated and managed by a variety of computer programs working in concert. A simple e-mail may be a single text file; if special formatting, graphics, or attachments are used in the message, multiple binary files or encoded Hypertext Markup Language (HTML) files may be transmitted in a single e-mail message.
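As a hedged illustration of how several parts travel in one message, the following sketch uses Python's standard email library to build a message with a plain-text body, an HTML alternative, and a binary attachment. The addresses and attachment data are placeholders.

    # Building a multipart message: one e-mail that carries a text
    # body, an HTML rendering, and an attached file. Addresses and the
    # attachment bytes are placeholders.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "receiver@example.com"
    msg["Subject"] = "Quarterly report"
    msg.set_content("Plain-text body for clients without HTML support.")
    msg.add_alternative("<h1>Quarterly report</h1><p>See attachment.</p>",
                        subtype="html")
    msg.add_attachment(b"...binary report data...",
                       maintype="application", subtype="octet-stream",
                       filename="report.pdf")
    print(msg.as_string()[:200])  # on the wire, it is ultimately just text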
A typical e-mail system consists of at least three software components with supporting hardware:
  • the user's e-mail application
  • a message transfer agent or engine
  • a message store
The e-mail application is, of course, where all e-mail originates or terminates. Full-featured e-mail applications often reside on the user's local computer; however, it is also common to use server-based e-mail programs in which the user logs into a computer through a terminal program or a web browser and utilizes e-mail capabilities that reside entirely on the remote computer. The latter case is typical of many of the free e-mail services offered on the web. Lotus Development Corp.'s Lotus Notes has been the market-leading e-mail application for corporate users, with a user base of 25 million in 1998. That year, it was followed by a fast-growing challenger, Microsoft Corporation's Exchange package, which garnered 18 million licensed users.
The message transfer agent (MTA), also called a mail engine, is behind-the-scenes software that runs on a mail server, the computer(s) dedicated to sorting and routing e-mail in a network. The MTA determines how incoming and outgoing messages should be routed. Thus, if a message is sent from one internal corporate user to another, the MTA normally routes the e-mail within the corporate system without sending it through the Internet. If, however, a message is intended for a user outside the organization, the mail server transmits the file to an external gateway or a public Internet backbone router that will, in turn, deliver the message to the recipient's system. This is a simplification, as a message may actually be passed through several intermediate computers on the Internet before reaching its destination.
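A minimal sketch of handing a message to a mail server over SMTP, using Python's standard smtplib, is shown below. The server name, port, and credentials are placeholders; a real client would use its own organization's mail server, which then performs the routing described above.

    # Submitting a message to a mail transfer agent over SMTP. The
    # server "mail.example.com" and the credentials are placeholders.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@partner.example"
    msg["Subject"] = "Hello"
    msg.set_content("The MTA decides how to route this message.")

    with smtplib.SMTP("mail.example.com", 587) as smtp:
        smtp.starttls()                      # encrypt the session
        smtp.login("alice", "app-password")  # placeholder credentials
        smtp.send_message(msg)               # the MTA takes it from here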
The message store is the software and hardware that handles incoming mail once the MTA has determined that the mail belongs to a specific user on the system. The store may be configured to work in different ways but, in essence, it is user-specific directory space on a network server in which unread incoming messages are stored for future retrieval by the e-mail application. This is where, for instance, e-mail sent overnight sits until the recipient turns on his computer the next day and opens his e-mail program. The software configuring the message store may automatically delete copies of the messages once they are downloaded to the user's computer, or it may archive them. Message stores are also capable of automatically categorizing mail and performing other mail-management tasks.
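Retrieval from a message store can likewise be sketched with Python's standard imaplib, which speaks IMAP, a protocol many message stores expose to e-mail applications. The server name and credentials are placeholders.

    # An e-mail client fetching unread mail from a message store over
    # IMAP. "imap.example.com" and the login details are placeholders.

    import imaplib

    with imaplib.IMAP4_SSL("imap.example.com") as imap:
        imap.login("bob", "app-password")
        imap.select("INBOX")
        status, data = imap.search(None, "UNSEEN")   # unread messages only
        for num in data[0].split():
            status, parts = imap.fetch(num, "(RFC822)")
            print("fetched message", num.decode())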

E-MAIL SECURITY AND PRIVACY


In spite of all its conveniences, e-mail is still a notoriously insecure method of communication, particularly in corporate environments. Technologically, most of the security encryption tools and other privacy safeguards available on the mass market are easily broken by experienced hackers. It is even simpler to forge an e-mail from someone using their address. Furthermore, what many employees don't realize is that their employers can—and do—legally monitor e-mail on the corporate system. A series of court cases have arisen from such activities, and the courts have upheld companies' rights to control information transmitted on their computer systems. Common information systems practices such as scheduled backups of network data can make the process of e-mail monitoring even easier for employers. Employer e-mail surveillance has already taken on seemingly excessive zeal: one company allegedly reported an employee to the police for sending a fellow employee an e-mail describing how he was forced to put a pet to sleep; the employer misunderstood the e-mail as a death threat against a coworker.
In response to such circumstances, some observers have concluded that corporate e-mail users should assume there is no privacy in their e-mail communications. A representative from a well-known Internet applications company likened it to sending a postcard in conventional mail—the entire message is visible to anyone who chooses to look.

CORPORATE E-MAIL POLICIES


Because of the contentious legal and ethical issues surrounding e-mail surveillance and misuse, as well as problems with excessive use of e-mail, copyright violation, and the ease of sending offensive messages, many corporations have instituted formal e-mail policies that inform employees of their rights and obligations. Some aspects of a strong e-mail policy include
  • informing employees that e-mail is considered company property and can be legally monitored
  • explaining the company's e-mail monitoring practices and philosophy
  • encouraging employees to be cautious about what they say in e-mail and how they use it
  • identifying clear examples of inappropriate use
  • requiring each employee to sign the e-mail policy statement
Articulating and enforcing such policies has helped companies prevail in lawsuits—or avoid them altogether. Companies could be liable, for example, if they consistently allow harassing messages to circulate between employees. Still, in 1998 a survey of businesses indicated that only a minority had any sort of e-mail policy, and even fewer had a strong policy that employees were required to sign.
Several forms of e-mail monitoring are practiced, ranging from routine, comprehensive surveillance to occasional inquiries when suspicions arise. Some security experts recommend the latter approach, which is much less resource-intensive. It is also possible to automate surveillance electronically so that, for example, all e-mail header and size information is logged by a computer. A consistent flow of very large e-mail messages may suggest inappropriate usage, depending on the employee's job. With automated tracking, corporate officials can focus on screening messages that are most likely to violate policy, e.g., identifying e-mail that contains discriminatory language. Monitoring can also be harnessed to protect the corporate system from external abuses such as large broadcasts of unsolicited e-mail, or "spam."
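Automated tracking of header and size information, as described above, might look something like the following Python sketch. The log file name, its column layout, and the 10 MB threshold are hypothetical policy choices, not any real product's behavior.

    # Flagging unusually large messages from a header/size log so that
    # only likely policy violations reach a human reviewer. The log
    # format (columns: sender, recipient, bytes) is hypothetical.

    import csv

    SIZE_LIMIT = 10 * 1024 * 1024  # flag messages over 10 MB (policy choice)

    def flag_large_messages(log_path):
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                if int(row["bytes"]) > SIZE_LIMIT:
                    yield row["sender"], row["recipient"], int(row["bytes"])

    for sender, recipient, size in flag_large_messages("mail_log.csv"):
        print(f"review: {sender} -> {recipient} ({size} bytes)")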

THE BUSINESS OF E-MAIL


In 1998 there were an estimated 40 million business e-mail users. For companies, e-mail represents simultaneously a source of costs to the business and a tool for cost-savings or even new revenues. Obvious costs associated with e-mail include buying the hardware and software, maintaining the system, and the staff hours spent using e-mail.
If the time spent reading and sending e-mail is less than the amount the employee would have spent otherwise (or accomplishes more per unit of time), then e-mail achieves a cost advantage for the business. It is not clear how often this is the case, however. A number of studies have reported that e-mail use hasn't so much supplanted other modes of business communication, but added to them. A 1998 Gallup poll suggested that people receive an average of 190 business communications of all kinds, including e-mail, each day, an increase over previous levels. Such findings have led some to believe that if left uncontrolled, e-mail could be sapping productivity.
Meanwhile, marketing organizations are increasingly using e-mail to reach customers, through both solicited and unsolicited messages. The market research firm Forrester Research claimed that, in 1998, broadcast e-mail service was a fledgling $8 million industry, but by 2002 it was expected to blossom into a $250 million trade. Moreover, for companies selling products and services via solicited e-mail, the medium was forecast to generate $952 million in revenue by 2002. Although laws passed in state legislatures could limit the uses of unsolicited e-mail, the arena clearly represents untapped revenue for some companies.



   EMBEZZLEMENT  

Embezzlement is a form of fraud that involves misappropriation of money or property by someone who has been entrusted with it by virtue of their position or employment. Larceny and theft, on the other hand, involve taking, by trespass or force, money or property that belongs to someone else.
Depending on the amount of money or value of property involved, embezzlement is a misdemeanor or felony and a statutory crime punishable by fine and/or imprisonment. According to one profile, however, the typical embezzler is not usually prosecuted, usually does not receive a jail sentence upon conviction, and usually does not repay the victim or pay court costs.
An embezzler may be a public official or someone employed in a fiduciary capacity. Such a person is entrusted with funds or property by virtue of his or her position. When such a person uses funds or property belonging to another person or business for his or her own account, he or she is guilty of embezzlement. While virtually any business can be the victim of embezzlement, the crime most often occurs in financial institutions, healthcare companies, and a variety of small businesses.
Examples of embezzlement include bookkeepers stealing from their employers' accounts by making false journal entries, altering documents, and manipulating expense records. Embezzlement may take the form of making payments to "dummy" suppliers and vendors. Employees of financial institutions may attempt to embezzle by diverting funds from legitimate accounts into "dummy" accounts.
The criminal act of embezzlement usually involves three distinct phases. At each stage the embezzler leaves indications that a crime has occurred; unlike many other crimes, however, there often is little hard evidence to indicate that a crime has been committed. The first phase of embezzlement is the criminal act itself, taking money or property manually, by computer, or by telephone. Once the crime has been committed, the embezzler attempts to conceal it. Making a phony payment, falsifying a document, or making misleading journal entries are some of the ways that embezzlers try to hide their crime. Finally, the third phase of criminal activity involves converting the stolen assets into cash and spending it.
At each step the embezzler is subject to detection. The act itself must be witnessed in order for the embezzler to be caught at the first stage. Concealment of the act results in altered records or miscounted cash that can be detected by auditors. Conversion usually results in lifestyle changes that can be noted by fellow employees.
In order to protect themselves from embezzlement, companies need to develop a program to recognize signs of employee fraud. One aspect of such a program is recognizing accounting irregularities. Embezzlers usually alter, forge, or destroy checks, sales invoices, purchase orders, and similar documents. Examination of source documents can often lead to the detection of embezzlement.
Internal controls are another important aspect of a fraud prevention program. A company's internal control system should be examined for weaknesses. Internal controls are designed to protect a company's assets. If they are weak, then the assets are not safe. Typical controls that help prevent embezzlement include segregating duties, regular or programmed transfers of employees from department to department, and mandatory vacations. In the banking industry, the Office of the Comptroller of the Currency requires that all bank employees in the United States take at least seven consecutive days of vacation per year. Many cases of fraud have been discovered while employees were on vacation and unable to cover their tracks.
Other recognizable signs of embezzlement include anything out of the ordinary, such as unexplained inventory shortages, unmet specifications, excess scrap material, larger than usual purchases, significantly higher or lower account balances, excessive cash shortages or surpluses, and unreasonable expenses or reimbursements. Finally, embezzlers often reveal themselves through noticeable lifestyle and behavioral changes. Embezzlers are often first-time criminals whose feelings of guilt cause them to act erratically and unpredictably. They usually spend the money they have embezzled rather than saving it. A successful fraud prevention program should include incentives for employees to report unusual behavioral and lifestyle changes in fellow employees.

EMERGING MARKETS 

The term "emerging markets," while commonly used, is difficult to define. From the perspective of the United States, an emerging market would be one to which a previously untapped potential for U.S. exports or investment might be anticipated.
By the early 1990s, investors had begun to take emerging-market funds very seriously. By 1993, emerging-market funds returned average gains of 72.13 percent. Such high rates of return attracted the attention of the U.S. Department of Commerce which in 1994 identified ten nations as "Big Emerging Markets," or those that were predicted to have promising prospects for substantial incremental gains in exports from the United States. Included among these were: the Chinese Economic Area, Indonesia, India, South Korea, Mexico, Argentina, Brazil, South Africa, and Turkey. Yet before 1994 had passed, the Mexican peso crisis shook investor confidence in not only Mexico but other officially identified emerging markets such as Argentina and Brazil, and by extension all of Latin America.
Despite the collapse of Latin American markets, investor confidence in emerging markets continued to climb, topping $20 billion in total net assets in diversified emerging-market mutual funds by 1997, notably in East Asia.
Then in July 1997, the emerging markets collapsed. Made cautious by concerns regarding the return of Hong Kong to the People's Republic of China, many investors temporarily halted investments in Southeast Asia. Ironically, the transition of Hong Kong to China went smoothly, but the subsequent downturn in investments to the region caused a real estate crisis in Thailand, which devalued its currency so severely that the effects were felt in an equally overextended Malaysia and Indonesia. Unlike the essentially stable political systems of Thailand and Malaysia, however, the economic crisis in Indonesia uncovered racist tensions, political instability, and corruption so severe that it led to widespread anti-Chinese riots and eventual political collapse. Soon after, the economic crisis spread to other East Asian nations that had strong ties to Indonesia, Thailand, and Malaysia. Singapore was especially affected and faced its most serious downturn since its independence. South Korea underwent its own economic upheavals, including the collapse of major companies and its own change of government. By 1997 the effects of the East Asian economic crisis had destabilized other emerging markets such as Russia and began to affect the overall economic health of even such major developed economic powers as Japan.

IDENTIFYING EMERGING MARKETS


Despite this, emerging markets still remain attractive if they offer such favorable prospects as stable governments, privatization of key sectors, favorable changes in foreign investment opportunities, and rapid economic growth. With some reservations, of the original ten "Big Emerging Markets" identified by the Commerce Department in 1994, only India, Turkey, and South Africa continue to meet those criteria, as does, to a lesser extent, the Chinese Economic Area.
Such East European markets as the Czech Republic, Slovenia, and Hungary also still offer strong growth opportunities as yet unaffected by the Latin American or East Asian economic crises.
Political changes remain among the main reasons certain nations come to the fore as emerging markets. For example, the lifting of various U.S. trade embargoes against South Africa has created a marked potential for U.S. exports following the elimination of that nation's apartheid policies. South Africa has for years been Africa's strongest economy, but the overwhelming response to the political situation in that nation stunted U.S. exports until fairly recently.
Identification as an emerging market could also reflect a change toward U.S. trade within the policy of another established market. This liberalization of trade previously restrictive to U.S. exports could thus inspire new and rapid demand for formerly unavailable U.S. products. The creation of economic zones within the People's Republic of China, for example, made it considerably easier than in past years for U.S. exports to enter China. This, coupled with the liberalization of central control in the designated free economic zones, has led to rapid development. Indeed, Guangdong province (next to Hong Kong) would be the world's fastest growing economy were it treated as an independent state.
Similarly, the post-communist governments of Eastern European nations as well as the newly independent states represent the opening of markets previously walled off from most U.S. exports. While many of these nations remain unstable or economically underdeveloped, the stability of nations such as Poland (the first of the Warsaw Pact nations to throw off a communist or Soviet-influenced government) or the strong manufacturing base of Hungary or the Czech Republic make them potentially promising as emerging markets.
Political factors from abroad have helped some nations emerge as U.S. export and investment destinations. Turkey's active role as an alternative Moslem economic system to such fundamentalist Islamic models as Iran has helped it emerge as a major regional player in the Islamic states of the former Soviet Union, such as Kazakhstan and Turkmenistan. Additionally, Turkey's trade ties with Europe and its repeated attempts to join the European Union (rebuffed for several political reasons unrelated to its economy) have further added to its aura of stability as an emerging market.
The lifting of restrictions in trade simultaneously from both the United States and from another nation could also earmark that nation as an emerging market for U.S. goods. The North American Free Trade Agreement (NAFTA), for example, represented a lifting of trade restrictions in Mexico and the United States taking place in both countries at the same time. This free trade allowed Mexico, already an important destination for U.S. exports, to "emerge" as an even more important export destination. Indeed, it is for this reason that Mexico is still viewed as an emerging market despite political difficulties and the economic instability that followed the 1994 peso crisis.
An emerging market could also reflect the potential for the rapid economic development of another nation with no change in existing trade restrictions directed against the United States or directed against that nation by the United States. For instance, the massive economic reform program begun in July 1991 in India, and still ongoing, has helped to transform its previously stagnant economy. India's moves to deregulate lending rates and bank reserve requirements, its plans to privatize such public services as the telephones, its revamping of foreign investment regulations, and its reduction of many tariffs, have created strong economic growth. With this comes a related U.S. export potential of goods and services directed toward India's steady economic strength.

EFFECTS OF MERCOSUR


The free trade inroads made following the creation in 1991 of the South American customs union, Mercosur, have strengthened the economies of its four signatory nations: Argentina, Brazil, Uruguay, and Paraguay. Mercosur, which formally took effect on January 1, 1995, has added a source of economic strength independent of trade outside of South America. It is at least in part the potential strength of Mercosur that has led to continued interest in the four nations. Thus, both Paraguay and Uruguay continue to prosper directly from their participation in the Mercosur trade liberalization. For example, Paraguay, a major producer of cotton, traditionally exported 90 percent of its cotton to Brazil only as raw material. Following the formation of Mercosur, it has been able to export manufactured cotton products such as thread and cloth to Brazil without paying duties. The result has been a boom in textile manufacturing in Paraguay.
Similarly, the position of Uruguay directly in the path between Buenos Aires and São Paulo has strengthened the nation's role as a transportation center due to duty-free trucking. The result is the current proposal to build the 47-kilometer bridge over the Rio de la Plata connecting the Uruguayan border city with the Argentine capital on the opposite bank. The bridge, when completed, would be the longest of its type in the world—and would reinforce Uruguay's position as a commercial crossroads. Additionally, the bridge, unthinkable before Mercosur, would substantially cut the transportation time between Mercosur's two greatest industrial centers.
The stability linked to Mercosur, coupled with a decade of economic austerity measures and with fewer protectionist trade policies, has also resulted in increased U.S. export opportunities. In Brazil, the calming of political turmoil, the implementation in November 1998 of marked economic austerity measures, and debt-equity swaps have enhanced Brazil as an emerging market despite its potential economic risks. Finally, Argentina's currency reform—tying the newly coined peso to the U.S. dollar—as well as its elimination of exchange controls and import quotas, has added considerably to its reputation as Mercosur's most attractive emerging market for U.S. investment, and enabled it to recover fairly rapidly from the Latin American economic crisis that had spread from Mexico in 1994.

EVERCHANGING STATUS


Probably the clearest lesson of both the Mexican peso crisis and the East Asian economic crisis is that no list of emerging markets is ever stable. Once an emerging market remains consistently strong, it is no longer "emerging"; rather, it is an "established" market. Earlier emerging markets such as Greece, Spain, and Portugal have long since become established. Other markets have come to seem too established to include in a list of emerging markets although they may have grown to the level of established only recently (for example, Singapore).
Some potentially important markets may not be fully justified in being labeled as emerging markets—yet. These future or nascent emerging markets are myriad. One could argue for including Morocco, for example, since at an 11 percent growth rate, it had among the world's fastest growing gross national products in the late 1990s. Yet Morocco has little infrastructure for capitalizing on its rapid growth. One could argue for countries such as Colombia or Venezuela, but their internal political unrest arguably makes them somewhat suspect despite their economic strengths. Thus, any list of emerging markets should not be viewed as definitive but rather as continually evolving.


                               XXX  .  V000000  FUNDAMENTALS OF E-COMMERCE 

1.0 INTRODUCTION

The cutting edge for business today is Electronic Commerce (E-commerce). Most people think E-commerce means online shopping. But Web shopping is only a small part of the E-commerce picture. The term also refers to online stock and bond transactions and to buying and downloading software without ever going to a store. In addition, E-commerce includes business-to-business connections that make purchasing easier for big corporations. While there is no one correct definition of E-commerce, it is generally described as a method of buying and selling products and services electronically. The main vehicles of E-commerce remain the Internet and the World Wide Web, but email, fax, and telephone orders are also prevalent.
1.1 OBJECTIVES
 
 

After going through this unit, you will be able to
    • define what E-commerce is
    • discuss the applications of E-commerce
    • discuss the types of E-commerce
    • describe the life cycle of implementation of E-commerce
    • differentiate between E-commerce and other forms of commerce
    • list the modes of payments involved in E-commerce
1.2 WHAT IS E-COMMERCE?
 
 

Electronic commerce is the application of communication and information sharing technologies among trading partners to the pursuit of business objectives. E-commerce can be defined as a modern business methodology that addresses the needs of organizations, merchants, and consumers to cut costs while improving the quality of goods and services and increasing the speed of service delivery. E-commerce is associated with the buying and selling of information, products, and services via computer networks; a key element of e-commerce is information processing. The effects of e-commerce are already appearing in all areas of business, from customer service to new product design. It facilitates new types of information-based business processes for reaching and interacting with customers, such as online advertising and marketing, online order taking, and online customer service. It can also reduce costs in managing orders and interacting with a wide range of suppliers and trading partners, areas that typically add significant overhead to the cost of products and services. E-commerce also enables the formation of new types of information-based products such as interactive games, electronic books, and information-on-demand that can be very profitable for content providers and useful for consumers. Virtual enterprises are business arrangements in which trading partners separated by geography and expertise are able to engage in complex joint business activities, as if they were a single enterprise. One example would be true supply chain integration, where planning and forecast data are transmitted quickly and accurately throughout a multi-tier supply chain. Another example would be non-competing suppliers with a common customer using E-commerce to allow that customer to do "one stop shopping," with the assurance that a single phone call will bring the right materials to the right location at the right time.
1.3 INFORMATION SUPERHIGHWAY (I-Way)
 
 

Any successful E-commerce application will require the I-Way infrastructure in the same way that regular commerce needs the interstate highway network to carry goods from point to point. A myriad of computers, communications networks, and communication software forms the nascent Information Superhighway (I-Way). The I-Way is not a U.S. phenomenon but a global one, as reflected by its various labels worldwide. For instance, it is called the National Information Infrastructure (NII) in the United States, Data-Dori in Japan, and Jaring ("net" in Malay) in Malaysia. The I-Way and yet-to-be-developed technologies will be key elements in the business transformation. And while earlier technologies resulted in small gains in productivity and efficiency, integrating them into the I-Way will fundamentally change the way business is done. These new ideas demand radical changes in the design of the entire business process. The I-Way is not one monolithic data highway designed according to long-standing, well-defined rules and regulations based on well-known needs; it will be a mesh of interconnected data highways of many forms: telephone wires, cable TV wires, and radio-based wireless (cellular and satellite). The I-Way is quickly acquiring new on-ramps and even small highway systems.
1.4 CONSUMER ORIENTED E-COMMERCE APPLICATIONS
 
 

The wide range of applications for the consumer marketplace can be broadly classified into
  • Entertainment: Movies on demand, Video cataloging, interactive ads, multi-user games, on-line discussions
  • Financial services and information: Home banking, financial services, financial news
  • Essential services: Home shopping, electronic catalogs, telemedicine, remote diagnostics
  • Education and training: Interactive education, video conferencing, on-line databases
1.5 BUILDING BLOCKS IN THE INFRASTRUCTURE OF E-COMMERCE APPLICATIONS
None of these applications would be possible without the building blocks of the infrastructure, which are as follows:
  • Common business services, for facilitating the buying and selling process
  • Messaging and information distribution, as a means of sending and retrieving information
  • Multimedia content and network publishing, for creating a product and a means to communicate about it.
  • The I-Way is the very foundation for providing the highway system along which all E-commerce must travel.
1.6 PILLARS SUPPORTING THE E-COMMERCE APPLICATIONS
There are two pillars supporting all E-commerce applications and infrastructure. They are:
Public policy – To govern such issues as Universal access, privacy and information pricing
Technical standards – To dictate the nature of information publishing, user interfaces, and transport in the interest of compatibility across the entire network.
1.7 BENEFITS OF E-COMMERCE
 
 

Electronic Commerce can offer both short-term and long-term benefits to companies. Not only can it open new markets, enabling you to reach new customers, but it can also make it easier and faster for you to do business with your existing customer base. Moving business practices, such as ordering, invoicing, and customer support, to network-based systems can also reduce the paperwork involved in business-to-business transactions. When more of the information is digital, you can better focus on meeting your customers' needs. Tracking customer satisfaction, requesting more customer feedback, and presenting custom solutions for clients are just some of the opportunities that can stem from E-commerce.

 

1.8 MULTIMEDIA CONTENT FOR E-COMMERCE APPLICATIONS
Multimedia content can be considered both fuel and traffic for E-commerce applications. The technical definition of multimedia is the use of digital data in more than one format, such as the combination of text, audio, video, and graphics in a computer file/document. Its purpose is to combine the interactivity of a user-friendly interface with multiple forms of content. The success of E-commerce applications also depends on the variety and innovativeness of multimedia content and packaging. E-commerce requires robust servers to store and distribute large amounts of digital content to consumers. These multimedia storage servers are large information warehouses capable of handling various content, and they must handle large-scale distribution while guaranteeing security and complete reliability.
1.9 CLIENT-SERVER ARCHITECTURE IN E-COMMERCE
 
 

All E-commerce applications follow the client-server model. Clients are the devices plus software that request information from servers. Servers are the computers that serve information upon request from clients. Client devices handle the user interface. The server manages application tasks, handles storage and security, and provides scalability (the ability to add more clients as needed for serving more customers). The client-server architecture links PCs to a storage (or database) server, where most of the computing is done on the client.
The client-server model allows the client to interact with the server through a request-reply sequence governed by a paradigm known as message passing. Commercial users have only recently begun downsizing their applications to run on client-server networks, a trend that E-commerce is expected to accelerate.
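The request-reply paradigm can be shown in a few lines. The Python sketch below runs a toy server in a background thread and has a client send one request and read one reply; the port number and message contents are arbitrary illustrations.

    # A minimal request-reply exchange using Python's standard library:
    # the client sends one request message, the server sends one reply.

    import socket
    import socketserver
    import threading

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            request = self.rfile.readline().strip()    # read one request
            self.wfile.write(b"reply to: " + request)  # send one reply

    server = socketserver.TCPServer(("localhost", 9090), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    with socket.create_connection(("localhost", 9090)) as client:
        client.sendall(b"price query for item 42\n")
        print(client.recv(1024).decode())

    server.shutdown()
    server.server_close()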
1.10 TYPES OF E-COMMERCE
 
 

The following three strategies are the focal points for E-commerce:
1.10.1 Business-to-business E-commerce: The Internet can connect all businesses to each other, regardless of their location or position in the supply chain. This ability presents a huge threat to traditional intermediaries like wholesalers and brokers. Internet connections facilitate businesses’ ability to bargain directly with a range of suppliers -- thereby eliminating the need for such intermediaries.
1.10.2 Business-to-consumer E-commerce:
One-way Marketing: Corporate web sites are still prominent distribution mechanisms for corporate brochures, following the push, one-way marketing strategy.
Purchasing over the Web: Availability of secure web transactions is enabling companies to allow consumers to purchase products directly over the web. Electronic catalogs and virtual malls are becoming commonplace.
Relationship Marketing: The most prominent of these new paradigms is that of relationship marketing. Because consumer actions can be tracked on the web, companies are experimenting with this commerce methodology as a tool for market research and relationship marketing:
    • Consumer survey forms on the web
    • Using web tracking and other technology to make inferences about consumer buying profiles.
    • Customizing products and services
    • Achieving customer satisfaction and building long-term relationships
1.10.3 Intra-company E-commerce: Companies are embracing intranets at a phenomenal growth rate because they achieve the following benefits:
Reducing cost - lowers print-intensive production processes, such as employee handbooks, phone books, and policies and procedures
Enhancing communications - effective communication and training of employees using web browsers builds a sense of belonging and community.
Distributing software - upgrades and new software can be directly distributed over the web to employees.
Sharing intellectual property - provides a platform for sharing expertise and ideas as well as creating and updating content - "Knowledge webs". This is common in organizations that value their intellectual capital as their competitive advantage.
Testing products - allows experimentation for applications that will be provided to customers on the external web.
1.11 TECHNOLOGIES OF E-COMMERCE
 
 

While many technologies can fit within the definition of "Electronic commerce," the most important are:
  • Electronic data interchange (EDI)
  • Bar codes
  • Electronic mail
  • Internet
  • World Wide Web
  • Product data exchange
  • Electronic forms
Electronic Data Interchange (EDI)
 

EDI is the computer-to-computer exchange of structured business information in a standard electronic format. Information stored on one computer is translated by software programs into standard EDI format for transmission to one or more trading partners. The trading partners’ computers, in turn, translate the information using software programs into a form they can understand.
Bar Codes
 

Bar codes are used for automatic product identification by a computer. They are a rectangular pattern of lines of varying widths and spaces. Specific characters (e.g. numbers 0-9) are assigned unique patterns, thus creating a "font" which computers can recognize based on light reflected from a laser.
The most obvious example of bar codes is on consumer products such as packaged foods. These codes allow the products to be scanned at the checkout counter. As the product is identified, the price is entered in the cash register, while internal systems such as inventory and accounting are automatically updated.
The special value of a bar code is that objects can be identified at any point where a stationary or handheld laser scanner could be employed. Thus the technology carries tremendous potential to improve any process requiring tight control of material flow. Good examples would be shipping, inventory management, and workflow in discrete parts manufacturing.
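Part of what makes bar codes reliable is a check digit that lets the scanner detect misreads. As a worked example, the standard UPC-A check digit is computed by tripling the sum of the digits in the odd positions, adding the digits in the even positions, and choosing the final digit that brings the total to a multiple of ten, as the Python sketch below shows.

    # Computing the UPC-A check digit from the first 11 digits:
    # 3 * (sum of odd-position digits) + (sum of even-position digits),
    # then the check digit brings the total to a multiple of 10.

    def upc_check_digit(first_11_digits):
        digits = [int(d) for d in first_11_digits]
        total = 3 * sum(digits[0::2]) + sum(digits[1::2])
        return (10 - total % 10) % 10

    assert upc_check_digit("01234567890") == 5   # full code: 012345678905
    print(upc_check_digit("01234567890"))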
Electronic Mail
 

Messages composed by an individual and sent in digital form to other recipients via the Internet.
Internet
 

The Internet is a decentralized global network of millions of diverse computers and computer networks. These networks can all "talk" to each other because they have agreed to use a common communications protocol called TCP/IP. The Internet is a tool for communications between people and businesses. The network is growing very fast, and as more and more people gain access to the Internet, it becomes more and more useful.
World Wide Web
 

The World Wide Web is a collection of documents written and encoded with the Hypertext Markup Language (HTML). With the aid of a relatively small piece of software (called a "browser"), a user can ask for these documents and display them on the user’s local computer, although the document can be on a computer on a totally different network elsewhere in the world. HTML documents (or "pages," as they are called) can contain many different kinds of information such as text, pictures, video, sound, and pointers, which take users immediately to other web pages. Because Web pages are continually available through the Internet, these pointers may call up pages from anywhere in the world. It is this ability to jump from site to site that gave rise to the term "World Wide Web." Browsing the Web (or "surfing the Net") can be a fascinating activity, especially to people new to the Internet. The World Wide Web is by far the most heavily used application on the Internet.
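At bottom, a browser simply asks a server for an HTML document and receives it as text to render. The Python sketch below performs the same request without a browser; the URL is a placeholder.

    # Requesting an HTML page the way a browser does: ask the server
    # for the document and receive it as text. The URL is a placeholder.

    from urllib.request import urlopen

    with urlopen("http://example.com/") as response:
        html = response.read().decode("utf-8", errors="replace")

    print(html[:120])   # the raw HTML a browser would render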
Product Data Exchange
 

Product data refers to any data that is needed to describe a product. Sometimes that data is in graphical form, as in the case of pictures, drawings and CAD files. In other cases the data may be character based (numbers and letters), as in the case of specifications, bills of material, manufacturing instructions, engineering change notices and test results.
Product data exchange differs from other types of business communications in two important ways. First, because graphics are involved users must contend with large computer files and with problems of compatibility between software applications. (The difficulty of exchanging CAD files from one system to another is legendary.) Second, version control very quickly gets very complicated. Product designs, even late in the development cycle, are subject to a great deal of change, and because manufacturing processes are involved, even small product changes can have major consequences for getting a product into production.
Electronic Forms
 

Electronic forms is a technology that combines the familiarity of paper forms with the power of storing information in digital form. Imagine an ordinary paper form: a piece of paper with lines, boxes, check-off lists, and places for signatures. To the user, an electronic form is simply a digital analogue of such a paper form: an image which looks like a form but which appears on a computer screen and is filled out via mouse and keyboard. Behind the screen, however, lie numerous functions that paper and pencil cannot provide. Those extra functions come about because the data from electronic forms are captured in digital form, thus allowing storage in databases, automatic information routing, and integration into other applications.
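A toy model of this idea in Python: because the form's fields are captured as digital data, validation and routing rules can act on them directly. The field names, the approval threshold, and the routing labels below are hypothetical.

    # An electronic form as digital data: the captured fields can drive
    # storage and workflow routing automatically. Fields and the
    # routing rule are invented for illustration.

    from dataclasses import dataclass, asdict

    @dataclass
    class PurchaseRequestForm:
        requester: str
        department: str
        amount: float
        description: str

        def route(self):
            # Unlike paper, the captured data can drive workflow directly.
            return "manager_approval" if self.amount > 500 else "auto_approve"

    form = PurchaseRequestForm("J. Smith", "Maintenance", 842.50, "Pump seals")
    print(form.route())     # -> manager_approval
    print(asdict(form))     # ready for storage in a database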
1.12 DETERMINING TECHNOLOGICAL FEASIBILITY
 
 

As business needs are determined, it is necessary to establish the technological feasibility of various E-commerce plans that could meet those needs. The starting point should be a clear sense of what functions each E-commerce technology can provide to improve business functioning. These are summarized in the table below:
Most Powerful Functions of Each E-commerce Technology (each technology is followed by its key business value)
EDI
  1. Integration of incoming and outgoing structured data into other applications (e.g., use of customer orders to schedule production)
  2. Lowers cost when transaction volume is high
  3. Eases communication with many different trading partners (customers, suppliers, vendors)
Bar Code
  1. Locate and identify material
  2. Integrate location and identification information with other applications and data bases (e.g., bar codes inserted at loading dock can be integrated into an advance ship notice EDI transaction).
Electronic mail
  1. Free-text queries to individuals or groups
  2. Share information via simple messages
  3. Share complex information (via attachments)
  4. Collaboration across distance (by making it easier to communicate and share information)
World Wide Web
  1. Present information about company
  2. Search for information from a large number of sources
  3. Electronic commerce -- buy/sell products and services
  4. Collaboration, information sharing among selected users within or without a company
Product Data Exchange
  1. Accurate product details transmitted to trading partners
  2. Oversight of trading partners' design work
  3. Collaborative engineering across distance
Electronic Forms
  1. Managing processes when human oversight, approvals, or information input needs to be combined with standard elements of information (e.g., catalogue data)
  2. Tracking progress in a process where many people are involved doing different activities
  3. Integrating human input data with automated data bases or applications
  4. Electronic commerce (through integration with the WWW and internal systems)


1.13 ELECTRONIC COMMERCE VERSUS OTHER FORMS OF COMMERCE
 
 

The methods of doing business differ from traditional commerce to the extent to which electronic commerce combines information technology, telecommunications technology, and business processes to make it practical to do business in ways that could not otherwise be done. To illustrate, let’s draw on some examples. In each of these cases technology and business process must work together if EC is to be successful.
Example of EC: Information access (manufacturer provides suppliers with access to a database over electronic commerce networks)
Technology:
  Customer
  1. Database with reliable information.
  2. Security firewall to control outsiders' access.
  Supplier
  1. Computer with network access capability.
Business Process:
  1. Customer commits that data are current.
  2. Customer commits to inform supplier that a change has been made.
  3. Supplier agrees to use database as source of ECN information.

Example of EC: Interpersonal communication services (joint customer-supplier design)
Technology:
  1. Computer Aided Design systems which can understand each other's files.
  2. Version control applications.
Business Process:
  1. Agreement that joint design will take place.
  2. Adoption of compatible design methods.
  3. Training of groups in collaborative design.

Example of EC: Shopping services (using the Web to shop for commodities)
Technology:
  Seller
  1. Web site capable of allowing on-line shopping.
  2. Web site capable of secure transmission.
  Buyer
  1. Web browsing capability.
Business Process:
  Seller
  1. Ability to keep site current in an environment of rapidly changing availability and price.
  Buyer
  1. Purchasing system that can commit to a purchase without paper.

Example of EC: Virtual enterprise (integrated supply chain)
Technology:
  1. EDI
  2. MRP
  3. Email (for exceptions)
Business Process:
  1. Process reengineering of order entry and purchasing systems to allow integration of MRP and EDI.
  2. Staff assigned to resolving exceptions.
1.14 IMPLEMENTATION OF E-COMMERCE: A LIFE CYCLE APPROACH
 
 

Proper implementation requires deliberate attention to the seven stages of the technology life cycle:
  1. Awareness Training: Provides an understanding of what the technology is, a general sense of what it can do for a business, and how to begin implementation.
  2. Business Analysis: It is easy to jump immediately from "awareness" to the details of "requirements analysis", but doing so is a mistake. To assure maximum value from EC, there must be a thorough understanding of how the new technology can help the business.
  3. Requirements Analysis: Yields an understanding of what kind of EC functionality is needed to meet business requirements. For example, if the business need is to keep customers informed of changing product availability and prices, the requirement might be a Web-based catalogue.
  4. Design: Sets out specifics, e.g. Who are my potential vendors? By when do I need different parts of the system up and running? What will the system cost?
  5. Implementation: The system becomes real. New technology comes in the door. Training is conducted. New business process begins to function. And so on.
  6. Integration and Validation: Make sure the system performs as per its specifications.
  7. Maintenance: Keeps the system running, deals with unforeseen circumstances, and plans for improvement.
The main reason to employ these stages is that failure to do so can result in wasted time, wasted money, and sub-optimal systems. While it is important to ensure that all stages are invoked, the effort expended on each may vary greatly with circumstance. For example, a company contemplating a Web-based catalogue may have a critical mass of workers who have used the Web and who appreciate what it can do. In this case little awareness training is needed; it may still be important to make sure that the people involved have a specific appreciation of what Web catalogues can do, but the situation does not require that great resources be invested in the Awareness stage. As a second example, a company may implement e-mail, a technology that draws on well-proven, off-the-shelf software and requires no complex system integration. While integration and validation must certainly be carried out, the resources invested in this life-cycle stage should be relatively small.
1.15 ELECTRONIC SHOPPING CART
 
 

An electronic shopping cart works the same way a shopping cart does in the physical world. As you browse through an online store, you can place products in your virtual shopping cart, which keeps track of the products you have placed in it. When you're ready to leave the store, you click a "check out" link that shows you what you've placed in your virtual shopping cart. You can usually remove items that you're no longer interested in purchasing and then enter your shipping and payment information to process your order.
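To make the mechanics concrete, here is a minimal sketch in Python of the cart behaviour just described; the class and method names are illustrative, not taken from any particular store's software.

class ShoppingCart:
    """Keeps track of the products a shopper has placed in the cart."""

    def __init__(self):
        self.items = {}  # product name -> quantity

    def add(self, product, quantity=1):
        self.items[product] = self.items.get(product, 0) + quantity

    def remove(self, product):
        # Shoppers can drop items they are no longer interested in.
        self.items.pop(product, None)

    def checkout(self, prices):
        # "Check out" shows what is in the cart and totals it before
        # shipping and payment information is collected.
        total = sum(prices[p] * q for p, q in self.items.items())
        return self.items, total

cart = ShoppingCart()
cart.add("paperback")
cart.add("pen", 3)
cart.remove("pen")
print(cart.checkout({"paperback": 12.50, "pen": 1.20}))  # ({'paperback': 1}, 12.5)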
1.16 IS E-COMMERCE SAFE?
 
 

No e-commerce system can guarantee 100-percent protection for your credit card, but you're less likely to get your pocket picked online than in a real store. Although Internet security breaches have received a lot of press attention, most vendors and analysts argue that transactions are actually less dangerous in cyberspace than in the physical world. For merchants, E-commerce is actually safer than opening a store that could be looted, burned, or flooded; the difficulty lies in convincing customers that it is safe for them as well. Consumers don't fully believe it yet, but experts say E-commerce transactions are safer than ordinary credit card purchases. Since the earliest versions of Netscape Navigator and Microsoft Internet Explorer, transactions can be encrypted using Secure Sockets Layer (SSL), a protocol that creates a secure connection to the server and protects the information as it travels over the Internet. SSL uses public-key encryption, one of the strongest encryption methods in common use. One way to tell that a Web site is secured by SSL is that its URL begins with https instead of http.
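As a concrete illustration, the short Python sketch below opens a secure connection using the standard library's ssl module (modern TLS is the direct successor of the SSL protocol described above) and reports what was negotiated; the host name is only an example.

import socket
import ssl

def check_secure_connection(host, port=443):
    # Wrap an ordinary socket in TLS, verifying the server's certificate
    # exactly as a browser does before treating an https URL as secure.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Protocol:", tls.version())   # e.g. TLSv1.3
            print("Cipher:", tls.cipher()[0])

check_secure_connection("example.com")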
Browser makers and credit card companies are promoting an additional security standard called Secure Electronic Transactions (SET). SET encrypts the credit card numbers that sit on vendors' servers so that only banks and credit card companies can read them.
1.17 SYSTEMS OF PAYMENTS IN E-COMMERCE
 
 

E-commerce is rife with buzzwords and catchphrases. Here are some of the current terms people like to throw around:
1.17.1 Credit card-based: If consumers want to purchase a product or service, they simply send their credit card details to the service provider involved, and the credit card organization handles the payment like any other, as sketched below.
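In code, this flow is little more than an https request carrying the card details. The sketch below uses the third-party requests package; the gateway URL and field names are hypothetical stand-ins for whatever a real provider defines.

import requests

def charge_card(gateway_url, payment):
    # Send the card details to the service provider over https only;
    # the provider's credit card organization settles the payment.
    return requests.post(gateway_url, json=payment, timeout=10)

payment = {
    "card_number": "4111 1111 1111 1111",  # a standard test number, not a real card
    "expiry": "12/27",
    "amount_cents": 1999,
    "currency": "USD",
}
# charge_card("https://gateway.example/charge", payment)  # hypothetical endpoint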
1.17.2 Smart cards: These are credit and debit cards and other card products enhanced with microprocessors capable of holding more information than the traditional magnetic stripe. The chip can store significantly greater amounts of data, estimated to be 80 times more than a magnetic stripe. Smart cards are basically of two types:
Relationship-based smart credit cards: This is an enhancement of existing card services and/or the addition of new services that a financial institution delivers to its customers via a chip-based card or other device. These new services may include access to multiple financial accounts, value-added marketing programs, or other information cardholders may want to store on their card.
Electronic Purses: These are wallet-sized smart cards embedded with programmable microchips that store sums of money for people to use instead of cash for everything from buying food to paying subway fares.
1.17.3 Digital or electronic cash: Also called e-cash, these terms refer to any of several schemes that allow a person to pay for goods or services by transmitting a number from one computer to another. The numbers, just like those on a dollar bill, are issued by a bank and represent specified sums of real money. One of the key features of digital cash is that it's anonymous and reusable, just like real cash. This is a key difference between e-cash and credit card transactions over the Internet.
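The sketch below models this idea in Python under stated assumptions: the "digital stamp" is an HMAC that only the issuing bank can create or check, whereas real e-cash schemes use blind public-key signatures precisely so that spending stays anonymous.

import hmac
import hashlib
import secrets

BANK_KEY = secrets.token_bytes(32)  # held only by the issuing bank (toy model)

def issue_token(amount_cents):
    # A token is a serial number plus an amount; the bank's stamp binds them.
    serial = secrets.token_hex(16)
    payload = f"{serial}:{amount_cents}".encode()
    stamp = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return serial, amount_cents, stamp

def bank_verifies(serial, amount_cents, stamp):
    payload = f"{serial}:{amount_cents}".encode()
    expected = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(stamp, expected)

token = issue_token(500)        # a token worth $5.00
print(bank_verifies(*token))    # True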
1.17.4 Electronic checks: Currently being tested by CyberCash, electronic checking systems such as PayNow take money from users' checking accounts to pay utility and phone bills.
1.17.5 Electronic wallet: This is a payment scheme, such as CyberCash's Internet Wallet, that stores your credit card numbers on your hard drive in encrypted form. You can then make purchases at Web sites that support that particular electronic wallet. When you visit a participating online store, you click a Pay button to initiate a credit card payment via a secure transaction enabled by the electronic wallet company's server. The major browser vendors have struck deals to include electronic wallet technology in their products.
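A minimal sketch of the local-encryption idea follows, assuming the third-party cryptography package; the file name and card string are illustrative only.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice derived from a user passphrase
wallet = Fernet(key)

# Card details never sit on the hard drive in the clear.
ciphertext = wallet.encrypt(b"4111 1111 1111 1111;12/27")
with open("wallet.dat", "wb") as f:
    f.write(ciphertext)

# At checkout the wallet decrypts locally, and the details travel only
# over the secure connection set up with the wallet company's server.
with open("wallet.dat", "rb") as f:
    card = wallet.decrypt(f.read())
print(card.decode())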

1.17.6 Micropayments: Transactions in amounts between 25 cents and $10, typically made in order to download or access graphics, games, and information, are known as micropayments. Pay-as-you-go micropayments were supposed to revolutionize the world of E-commerce.
1.18 SUMMARY
 
 

E-commerce is a new way of conducting, managing and executing business transactions using computer and telecommunications networks. As awareness of the Internet increases throughout the commercial world and the general public, competitiveness will force lower entry barriers, continued rapid innovation, and expansion of markets. The real key to making electronic commerce over the Internet a normal, everyday business activity is the convergence of the telecommunications, content/media, and software industries. E-commerce is expected to improve the productivity and competitiveness of participating businesses by providing unprecedented access to an on-line global marketplace with millions of customers and thousands of products and services.
1.19 GLOSSARY
 
 

CGI script: Common Gateway Interface is a scripting system designed to work with HTTP Web servers. The scripts, usually written in the Perl programming language, are often used to exchange data between a Web server and databases.
Digital Cash: An electronic replacement of cash.
Joint Electronic Payments Initiative (JEPI): This initiative, led by the World Wide Web Consortium and CommerceNet, is an attempt to standardize payment negotiations. On the buyer's side (the client side), JEPI serves as an interface that enables a Web browser and wallet to use a variety of payment protocols. On the merchant's side (the server side), JEPI acts between the network and transport layers to pass incoming transactions off to the proper transport and payment protocols.
Microcash: Small denomination digital tokens.
Microtransactions: Low-cost, real-time transactions using microcash.
Smart cards: A credit card-sized plastic card with a special type of integrated circuit embedded in it. The integrated circuit holds information in electronic form and controls who uses this information and how.
Tokens: Strings of digits representing a certain amount of currency. The issuing bank validates each token with a digital stamp.
Value added networks: Networks that are maintained privately and dedicated to EDI between business partners.
 



= MA THEREFORE TRANSMISION AND DISTRIBUTION ON MODERN FINANCE MATIC=

          
 











