Friday, 24 November 2017

International financial data acquisition techniques in intermediate electronics processes



Information processing , the acquisition, recording, organization, retrieval, display, and dissemination of information. In recent years, the term has often been applied to computer-based operations specifically.
In popular usage, the term information refers to facts and opinions provided and received during the course of daily life: one obtains information directly from other living beings, from mass media, from electronic data banks, and from all sorts of observable phenomena in the surrounding environment. A person using such facts and opinions generates more information, some of which is communicated to others during discourse, by instructions, in letters and documents, and through other media.
     
                            Big data concepts, methods, and analytics

Size is the first, and at times, the only dimension that leaps out at the mention of big data. This paper attempts to offer a broader definition of big data that captures its other unique and defining characteristics. The rapid evolution and adoption of big data by industry has leapfrogged the discourse to popular outlets, forcing the academic press to catch up. Academic journals in numerous disciplines, which will benefit from a relevant discussion of big data, have yet to cover the topic. This paper presents a consolidated description of big data by integrating definitions from practitioners and academics. The paper's primary focus is on the analytic methods used for big data. A particular distinguishing feature of this paper is its focus on analytics related to unstructured data, which constitute 95% of big data. This paper highlights the need to develop appropriate and efficient analytical methods to leverage massive volumes of heterogeneous data in unstructured text, audio, and video formats. This paper also reinforces the need to devise new tools for predictive analytics for structured big data. The statistical methods in practice were devised to infer from sample data. The heterogeneity, noise, and massive size of structured big data call for developing computationally efficient algorithms that may avoid big data pitfalls, such as spurious correlation.

1. Introduction
This paper documents the basic concepts relating to big data. It attempts to consolidate the hitherto fragmented discourse on what constitutes big data, what metrics define the size and other characteristics of big data, and what tools and technologies exist to harness the potential of big data.
From corporate leaders to municipal planners and academics, big data are the subject of attention, and to some extent, fear. The sudden rise of big data has left many unprepared. In the past, new technological developments first appeared in technical and academic publications. The knowledge and synthesis later seeped into other avenues of knowledge mobilization, including books. The fast evolution of big data technologies and the ready acceptance of the concept by public and private sectors left little time for the discourse to develop and mature in the academic domain. Authors and practitioners leapfrogged to books and other electronic media for immediate and wide circulation of their work on big data. Thus, one finds several books on big data, including Big Data for Dummies, but not enough fundamental discourse in academic publications.
The leapfrogging of the discourse on big data to more popular outlets implies that a coherent understanding of the concept and its nomenclature is yet to develop. For instance, there is little consensus around the fundamental question of how big the data has to be to qualify as ‘big data’. Thus, there exists the need to document in the academic press the evolution of big data concepts and technologies.
A key contribution of this paper is to bring forth the oft-neglected dimensions of big data. The popular discourse on big data, which is dominated and influenced by the marketing efforts of large software and hardware developers, focuses on predictive analytics and structured data. It ignores the largest component of big data, which is unstructured and is available as audio, images, video, and unstructured text. It is estimated that the analytics-ready structured data forms only a small subset of big data. The unstructured data, especially data in video format, is the largest component of big data that is only partially archived.
This paper is organized as follows. We begin the paper by defining big data. We highlight the fact that size is only one of several dimensions of big data. Other characteristics, such as the frequency with which data are generated, are equally important in defining big data. We then expand the discussion on various types of big data, namely text, audio, video, and social media. We apply the analytics lens to the discussion on big data. Hence, when we discuss data in video format, we focus on methods and tools to analyze data in video format.
Given that the discourse on big data is contextualized in predictive analytics frameworks, we discuss how analytics have captured the imaginations of business and government leaders and describe the state of practice of a rapidly evolving industry. We also highlight the perils of big data, such as spurious correlation, which have hitherto escaped serious inquiry. The discussion has remained focused on correlation, ignoring the more nuanced and involved discussion on causation. We conclude by highlighting developments expected in big data analytics in the near future.

2. Defining big data

While ubiquitous today, ‘big data’ as a concept is nascent and has uncertain origins. Diebold (2012) argues that the term “big data … probably originated in lunch-table conversations at Silicon Graphics Inc. (SGI) in the mid-1990s, in which John Mashey figured prominently”. Despite the references to the mid-nineties, Fig. 1 shows that the term became widespread as recently as 2011. The current hype can be attributed to the promotional initiatives by IBM and other leading technology companies that invested in building the niche analytics market.
Fig. 1. Frequency distribution of documents containing the term “big data” in ProQuest Research Library.
Big data definitions have evolved rapidly, which has raised some confusion. This is evident from an online survey of 154 C-suite global executives conducted by Harris Interactive on behalf of SAP in April 2012 (“Small and midsize companies look to make big gains with big data,” 2012). Fig. 2 shows how executives differed in their understanding of big data, where some definitions focused on what it is, while others tried to answer what it does.
Fig. 2. Definitions of big data based on an online survey of 154 global executives in April 2012.
Clearly, size is the first characteristic that comes to mind when considering the question “what is big data?” However, other characteristics of big data have emerged recently. For instance, Laney (2001) suggested that Volume, Variety, and Velocity (or the Three V's) are the three dimensions of challenges in data management. The Three V's have emerged as a common framework to describe big data (Chen, Chiang, & Storey, 2012; Kwon, Lee, & Shin, 2014). For example, Gartner, Inc. defines big data in similar terms:
“Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.” (Gartner IT Glossary, n.d.)

Similarly, TechAmerica Foundation defines big data as follows:
“Big data is a term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture, storage, distribution, management, and analysis of the information.” (TechAmerica Foundation's Federal Big Data Commission, 2012)

We describe the Three V's below.
Volume refers to the magnitude of data. Big data sizes are reported in multiple terabytes and petabytes. A survey conducted by IBM in mid-2012 revealed that just over half of the 1144 respondents considered datasets over one terabyte to be big data (Schroeck, Shockley, Smart, Romero-Morales, & Tufano, 2012). One terabyte stores as much data as would fit on 1500 CDs or 220 DVDs, enough to store around 16 million Facebook photographs. Beaver, Kumar, Li, Sobel, and Vajgel (2010) report that Facebook processes up to one million photographs per second. One petabyte equals 1024 terabytes. Earlier estimates suggest that Facebook stored 260 billion photos using storage space of over 20 petabytes.
Definitions of big data volumes are relative and vary by factors such as time and the type of data. What may be deemed big data today may not meet the threshold in the future because storage capacities will increase, allowing even bigger data sets to be captured. In addition, the type of data, discussed under variety, defines what is meant by ‘big’. Two datasets of the same size may require different data management technologies based on their type, e.g., tabular versus video data. Thus, definitions of big data also depend upon the industry. These considerations therefore make it impractical to define a specific threshold for big data volumes.
Variety refers to the structural heterogeneity in a dataset. Technological advances allow firms to use various types of structured, semi-structured, and unstructured data. Structured data, which constitutes only 5% of all existing data (Cukier, 2010), refers to the tabular data found in spreadsheets or relational databases. Text, images, audio, and video are examples of unstructured data, which sometimes lack the structural organization required by machines for analysis. Spanning a continuum between fully structured and unstructured data, the format of semi-structured data does not conform to strict standards. Extensible Markup Language (XML), a textual language for exchanging data on the Web, is a typical example of semi-structured data. XML documents contain user-defined data tags which make them machine-readable.
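As a minimal illustration of how user-defined tags make semi-structured data machine-readable, the sketch below parses a small XML fragment with Python's standard library. The tag names and values are invented for illustration; they are not drawn from any real schema.

# Minimal sketch: parsing a semi-structured XML fragment with user-defined tags.
# The tag names (transactions, transaction, amount, currency, note) are invented.
import xml.etree.ElementTree as ET

document = """
<transactions>
  <transaction id="t-001">
    <amount>125.40</amount>
    <currency>USD</currency>
    <note>Quarterly subscription renewal</note>
  </transaction>
  <transaction id="t-002">
    <amount>89.99</amount>
    <currency>EUR</currency>
  </transaction>
</transactions>
"""

root = ET.fromstring(document)
for tx in root.findall("transaction"):
    # Optional elements (e.g., <note>) may be absent: semi-structured data does
    # not force every record into the same rigid schema.
    note = tx.findtext("note", default="")
    print(tx.get("id"), tx.findtext("amount"), tx.findtext("currency"), note)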
A high level of variety, a defining characteristic of big data, is not necessarily new. Organizations have been hoarding unstructured data from internal sources (e.g., sensor data) and external sources (e.g., social media). However, the emergence of new data management technologies and analytics, which enable organizations to leverage data in their business processes, is the innovative aspect. For instance, facial recognition technologies empower brick-and-mortar retailers to acquire intelligence about store traffic, the age or gender composition of their customers, and their in-store movement patterns. This invaluable information is leveraged in decisions about product promotions, placement, and staffing. Clickstream data provides a wealth of information about customer behavior and browsing patterns to online retailers; it reveals the timing and sequence of pages viewed by a customer. Using big data analytics, even small and medium-sized enterprises (SMEs) can mine massive volumes of semi-structured data to improve website designs and implement effective cross-selling and personalized product recommendation systems.
Velocity refers to the rate at which data are generated and the speed at which it should be analyzed and acted upon. The proliferation of digital devices such as smartphones and sensors has led to an unprecedented rate of data creation and is driving a growing need for real-time analytics and evidence-based planning. Even conventional retailers are generating high-frequency data. Wal-Mart, for instance, processes more than one million transactions per hour (Cukier, 2010). The data emanating from mobile devices and flowing through mobile apps produces torrents of information that can be used to generate real-time, personalized offers for everyday customers. This data provides sound information about customers, such as geospatial location, demographics, and past buying patterns, which can be analyzed in real time to create real customer value.
Given the soaring popularity of smartphones, retailers will soon have to deal with hundreds of thousands of streaming data sources that demand real-time analytics. Traditional data management systems are not capable of handling huge data feeds instantaneously. This is where big data technologies come into play. They enable firms to create real-time intelligence from high volumes of ‘perishable’ data.
In addition to the three V's, other dimensions of big data have also been mentioned. These include:
Veracity. IBM coined Veracity as the fourth V, which represents the unreliability inherent in some sources of data. For example, customer sentiments in social media are uncertain in nature, since they entail human judgment. Yet they contain valuable information. Thus the need to deal with imprecise and uncertain data is another facet of big data, which is addressed using tools and analytics developed for management and mining of uncertain data.
Variability (and complexity). SAS introduced Variability and Complexity as two additional dimensions of big data. Variability refers to the variation in the data flow rates. Often, big data velocity is not consistent and has periodic peaks and troughs. Complexity refers to the fact that big data are generated through a myriad of sources. This imposes a critical challenge: the need to connect, match, cleanse and transform data received from different sources.
Value. Oracle introduced Value as a defining attribute of big data. Based on Oracle's definition, big data are often characterized by relatively “low value density”. That is, the data received in the original form usually has a low value relative to its volume. However, a high value can be obtained by analyzing large volumes of such data.

The relativity of big data volumes discussed earlier applies to all dimensions. Thus, universal benchmarks do not exist for the volume, variety, and velocity that define big data. The defining limits depend upon the size, sector, and location of the firm, and these limits evolve over time. Also important is the fact that these dimensions are not independent of each other. As one dimension changes, the likelihood increases that another dimension will also change as a result. However, a ‘three-V tipping point’ exists for every firm beyond which traditional data management and analysis technologies become inadequate for deriving timely intelligence. The three-V tipping point is the threshold beyond which firms start dealing with big data. Firms should then trade off the future value expected from big data technologies against their implementation costs.

3. Big data analytics

Big data are worthless in a vacuum. Their potential value is unlocked only when leveraged to drive decision making. To enable such evidence-based decision making, organizations need efficient processes to turn high volumes of fast-moving and diverse data into meaningful insights. The overall process of extracting insights from big data can be broken down into five stages (Labrinidis & Jagadish, 2012), shown in Fig. 3. These five stages form two main sub-processes: data management and analytics. Data management involves processes and supporting technologies to acquire and store data and to prepare and retrieve it for analysis. Analytics, on the other hand, refers to techniques used to analyze and acquire intelligence from big data. Thus, big data analytics can be viewed as a sub-process in the overall process of ‘insight extraction’ from big data.
Fig. 3. Processes for extracting insights from big data.
In the following sections, we briefly review big data analytical techniques for structured and unstructured data. Given the breadth of the techniques, an exhaustive list of techniques is beyond the scope of a single paper. Thus, the following techniques represent a relevant subset of the tools available for big data analytics.

3.1. Text analytics

Text analytics (text mining) refers to techniques that extract information from textual data. Social network feeds, emails, blogs, online forums, survey responses, corporate documents, news, and call center logs are examples of textual data held by organizations. Text analytics involves statistical analysis, computational linguistics, and machine learning. Text analytics enables businesses to convert large volumes of human-generated text into meaningful summaries, which support evidence-based decision making. For instance, text analytics can be used to predict the stock market based on information extracted from financial news (Chung, 2014). We present a brief review of text analytics methods below.
Information extraction (IE) techniques extract structured data from unstructured text. For example, IE algorithms can extract structured information such as drug name, dosage, and frequency from medical prescriptions. Two sub-tasks in IE are Entity Recognition (ER) and Relation Extraction (RE) (Jiang, 2012). ER finds names in text and classifies them into predefined categories such as person, date, location, and organization. RE finds and extracts semantic relationships between entities (e.g., persons, organizations, drugs, genes, etc.) in the text. For example, given the sentence “Steve Jobs co-founded Apple Inc. in 1976”, an RE system can extract relations such as FounderOf [Steve Jobs, Apple Inc.] or FoundedIn [Apple Inc., 1976].
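The sketch below illustrates the two IE sub-tasks on the example sentence above. It assumes the spaCy library and its small English model (en_core_web_sm) are installed; the relation-extraction rule is a deliberately simple, hand-made pattern rather than a production RE system.

# Sketch of the two IE sub-tasks on the example sentence from the text.
# Assumes spaCy and its small English model (en_core_web_sm) are installed;
# the relation rule below is a hand-made toy pattern, not a real RE system.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Steve Jobs co-founded Apple Inc. in 1976.")

# Entity Recognition: find names and classify them into predefined categories.
for ent in doc.ents:
    print(ent.text, ent.label_)        # e.g., PERSON, ORG, DATE

# Relation Extraction (toy rule): a PERSON entity plus a founding verb plus
# an ORG entity is emitted as a FounderOf relation.
persons = [e.text for e in doc.ents if e.label_ == "PERSON"]
orgs = [e.text for e in doc.ents if e.label_ == "ORG"]
has_founding_verb = any("found" in tok.lemma_.lower() for tok in doc)
if persons and orgs and has_founding_verb:
    print("FounderOf", [persons[0], orgs[0]])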
Text summarization techniques automatically produce a succinct summary of a single or multiple documents. The resulting summary conveys the key information in the original text(s). Applications include scientific and news articles, advertisements, emails, and blogs. Broadly speaking, summarization follows two approaches: the extractive approach and the abstractive approach. In extractive summarization, a summary is created from the original text units (usually sentences). The resulting summary is a subset of the original document. Based on the extractive approach, formulating a summary involves determining the salient units of a text and stringing them together. The importance of the text units is evaluated by analyzing their location and frequency in the text. Extractive summarization techniques do not require an ‘understanding’ of the text. In contrast, abstractive summarization techniques involve extracting semantic information from the text. The summaries contain text units that are not necessarily present in the original text. In order to parse the original text and generate the summary, abstractive summarization incorporates advanced Natural Language Processing (NLP) techniques. As a result, abstractive systems tend to generate more coherent summaries than the extractive systems do (Hahn & Mani, 2000). However, extractive systems are easier to adopt, especially for big data.
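A frequency-based extractive summarizer can be sketched in a few lines: score each sentence by the frequency of its terms in the document and keep the top-ranked sentences in their original order. The stopword list and scoring heuristic below are simplified assumptions, not a reference implementation.

# Minimal extractive summarization sketch: score sentences by the frequency of
# their (non-stopword) terms and keep the top-k sentences as the summary.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "on"}

def extractive_summary(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        terms = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in terms) / (len(terms) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:k]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in ranked)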
Question answering (QA) techniques provide answers to questions posed in natural language. Apple's Siri and IBM's Watson are examples of commercial QA systems. These systems have been implemented in healthcare, finance, marketing, and education. Similar to abstractive summarization, QA systems rely on complex NLP techniques. QA techniques are further classified into three categories: the information retrieval (IR)-based approach, the knowledge-based approach, and the hybrid approach. IR-based QA systems often have three sub-components. First is the question processing, used to determine details, such as the question type, question focus, and the answer type, which are used to create a query. Second is document processing which is used to retrieve relevant pre-written passages from a set of existing documents using the query formulated in question processing. Third is answer processing, used to extract candidate answers from the output of the previous component, rank them, and return the highest-ranked candidate as the output of the QA system. Knowledge-based QA systems generate a semantic description of the question, which is then used to query structured resources. The Knowledge-based QA systems are particularly useful for restricted domains, such as tourism, medicine, and transportation, where large volumes of pre-written documents do not exist. Such domains lack data redundancy, which is required for IR-based QA systems. Apple's Siri is an example of a QA system that exploits the knowledge-based approach. In hybrid QA systems, like IBM's Watson, while the question is semantically analyzed, candidate answers are generated using the IR methods.
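The toy pipeline below sketches the three IR-based sub-components (question processing, document processing, answer processing) under heavy simplifications; the passages, the keyword heuristics, and the answer-type rules are invented for illustration only.

# Toy IR-based QA pipeline with the three sub-components described above.
# The passages and ranking heuristics are invented for illustration.
import re

PASSAGES = [
    "Apple Inc. was co-founded by Steve Jobs, Steve Wozniak and Ronald Wayne in 1976.",
    "The company is headquartered in Cupertino, California.",
]

def process_question(question):
    # Question processing: derive a bag-of-words query and a coarse answer type.
    answer_type = "DATE" if question.lower().startswith("when") else "ENTITY"
    query = set(re.findall(r"[a-z]+", question.lower())) - {"when", "was", "who", "the"}
    return query, answer_type

def retrieve(query):
    # Document processing: rank passages by term overlap with the query.
    return max(PASSAGES, key=lambda p: len(query & set(re.findall(r"[a-z]+", p.lower()))))

def extract_answer(passage, answer_type):
    # Answer processing: pick candidates that match the expected answer type.
    if answer_type == "DATE":
        candidates = re.findall(r"\b(1[89]\d{2}|20\d{2})\b", passage)
    else:
        candidates = re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", passage)
    return candidates[0] if candidates else None

query, answer_type = process_question("When was Apple founded?")
print(extract_answer(retrieve(query), answer_type))   # -> 1976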
Sentiment analysis (opinion mining) techniques analyze opinionated text, which contains people's opinions toward entities such as products, organizations, individuals, and events. Businesses are increasingly capturing data about their customers’ sentiments, which has led to the proliferation of sentiment analysis (Liu, 2012). Marketing, finance, and the political and social sciences are the major application areas of sentiment analysis. Sentiment analysis techniques are divided into three sub-groups, namely document-level, sentence-level, and aspect-based. Document-level techniques determine whether the whole document expresses a negative or a positive sentiment. The assumption is that the document contains sentiments about a single entity. While certain techniques categorize a document into two classes, negative and positive, others incorporate more sentiment classes (like Amazon's five-star system) (Feldman, 2013). Sentence-level techniques attempt to determine the polarity of a single sentiment about a known entity expressed in a single sentence. Sentence-level techniques must first distinguish subjective sentences from objective ones, and hence tend to be more complex than document-level techniques. Aspect-based techniques recognize all sentiments within a document and identify the aspects of the entity to which each sentiment refers. For instance, customer product reviews usually contain opinions about different aspects (or features) of a product. Using aspect-based techniques, the vendor can obtain valuable information about different features of the product that would be missed if the sentiment were only classified in terms of overall polarity.
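A document-level classifier can be approximated with a lexicon-based polarity score, as in the sketch below; the tiny hand-made lexicon is a stand-in for the much larger lexicons or trained models used in practice.

# Minimal document-level sentiment sketch using a tiny hand-made lexicon;
# real systems use much larger lexicons or trained classifiers.
import re

POSITIVE = {"great", "excellent", "love", "good", "fast"}
NEGATIVE = {"bad", "poor", "slow", "terrible", "broken"}

def document_polarity(review):
    tokens = re.findall(r"[a-z]+", review.lower())
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(document_polarity("Great battery life but a terrible, slow camera."))  # -> negative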

3.2. Audio analytics

Audio analytics analyze and extract information from unstructured audio data. When applied to human spoken language, audio analytics is also referred to as speech analytics. Since these techniques have mostly been applied to spoken audio, the terms audio analytics and speech analytics are often used interchangeably. Currently, customer call centers and healthcare are the primary application areas of audio analytics.
Call centers use audio analytics for efficient analysis of thousands or even millions of hours of recorded calls. These techniques help improve customer experience, evaluate agent performance, enhance sales turnover rates, monitor compliance with different policies (e.g., privacy and security policies), gain insight into customer behavior, and identify product or service issues, among many other tasks. Audio analytics systems can be designed to analyze a live call, formulate cross/up-selling recommendations based on the customer's past and present interactions, and provide feedback to agents in real time. In addition, automated call centers use the Interactive Voice Response (IVR) platforms to identify and handle frustrated callers.
In healthcare, audio analytics support diagnosis and treatment of certain medical conditions that affect the patient's communication patterns (e.g., depression, schizophrenia, and cancer) (Hirschberg, Hjalmarsson, & Elhadad, 2010). Also, audio analytics can help analyze an infant's cries, which contain information about the infant's health and emotional status (Patil, 2010). The vast amount of data recorded through speech-driven clinical documentation systems is another driver for the adoption of audio analytics in healthcare.
Speech analytics follows two common technological approaches: the transcript-based approach (widely known as large-vocabulary continuous speech recognition, LVCSR) and the phonetic-based approach. These are explained below.
LVCSR systems follow a two-phase process: indexing and searching. In the first phase, they attempt to transcribe the speech content of the audio. This is performed using automatic speech recognition (ASR) algorithms that match sounds to words. The words are identified based on a predefined dictionary. If the system fails to find the exact word in the dictionary, it returns the most similar one. The output of the system is a searchable index file that contains information about the sequence of the words spoken in the speech. In the second phase, standard text-based methods are used to find the search term in the index file.
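The sketch below illustrates the two LVCSR phases, assuming the transcript has already been produced by an ASR engine (not shown): the indexing phase builds a word-to-position index, and the searching phase is a plain text lookup against it.

# Sketch of the two LVCSR phases. The transcript is assumed to come from an
# ASR engine (not shown); here we only index it and run a text search.
from collections import defaultdict

transcript = ["i", "would", "like", "to", "cancel", "my", "subscription", "today"]

# Indexing phase: map each recognized word to the positions where it was spoken.
index = defaultdict(list)
for position, word in enumerate(transcript):
    index[word].append(position)

# Searching phase: standard text lookup against the index file.
print(index.get("cancel", []))   # -> [4]
print(index.get("refund", []))   # -> [] (term never recognized)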
Phonetic-based systems work with sounds or phonemes. Phonemes are the perceptually distinct units of sound in a specified language that distinguish one word from another (e.g., the phonemes /k/ and /b/ differentiate the meanings of “cat” and “bat”). Phonetic-based systems also consist of two phases: phonetic indexing and searching. In the first phase, the system translates the input speech into a sequence of phonemes. This is in contrast to LVCSR systems, where the speech is converted into a sequence of words. In the second phase, the system searches the output of the first phase for the phonetic representation of the search terms.
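A corresponding phonetic-search sketch is shown below; the phoneme dictionary and the 'recognized' phoneme stream are invented stand-ins for what an acoustic decoder would actually produce.

# Toy phonetic-search sketch: the phoneme dictionary and the recognized phoneme
# stream are invented; real systems derive both acoustically.
PHONEME_DICT = {
    "cat": ["k", "ae", "t"],
    "bat": ["b", "ae", "t"],
    "cancel": ["k", "ae", "n", "s", "ah", "l"],
}

# Phase 1 (phonetic indexing): the audio has been decoded into a phoneme stream.
recognized_stream = ["k", "ae", "n", "s", "ah", "l", "m", "ay"]

def phonetic_search(term, stream):
    # Phase 2: look for the phonetic representation of the search term.
    pattern = PHONEME_DICT[term]
    return any(stream[i:i + len(pattern)] == pattern for i in range(len(stream)))

print(phonetic_search("cancel", recognized_stream))   # -> True
print(phonetic_search("cat", recognized_stream))      # -> False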

3.3. Video analytics

Video analytics, also known as video content analysis (VCA), involves a variety of techniques to monitor, analyze, and extract meaningful information from video streams. Although video analytics is still in its infancy compared to other types of data mining (Panigrahi, Abraham, & Das, 2010), various techniques have already been developed for processing real-time as well as pre-recorded videos. The increasing prevalence of closed-circuit television (CCTV) cameras and the booming popularity of video-sharing websites are the two leading contributors to the growth of computerized video analysis. A key challenge, however, is the sheer size of video data. To put this into perspective, one second of a high-definition video, in terms of size, is equivalent to over 2000 pages of text (Manyika et al., 2011). Now consider that 100 hours of video are uploaded to YouTube every minute (YouTube Statistics, n.d.).
Big data technologies turn this challenge into opportunity. Obviating the need for cost-intensive and risk-prone manual processing, big data technologies can be leveraged to automatically sift through and draw intelligence from thousands of hours of video. As a result, the big data technology is the third factor that has contributed to the development of video analytics.
The primary application of video analytics in recent years has been in automated security and surveillance systems. In addition to their high cost, labor-based surveillance systems tend to be less effective than automatic systems (e.g., Hakeem et al., 2012 report that security personnel cannot remain focused on surveillance tasks for more than 20 minutes). Video analytics can efficiently and effectively perform surveillance functions such as detecting breaches of restricted zones, identifying objects removed or left unattended, detecting loitering in a specific area, recognizing suspicious activities, and detecting camera tampering, to name a few. Upon detection of a threat, the surveillance system may notify security personnel in real time or trigger an automatic action (e.g., sound alarm, lock doors, or turn on lights).
The data generated by CCTV cameras in retail outlets can be extracted for business intelligence. Marketing and operations management are the primary application areas. For instance, smart algorithms can collect demographic information about customers, such as age, gender, and ethnicity. Similarly, retailers can count the number of customers, measure the time they stay in the store, detect their movement patterns, measure their dwell time in different areas, and monitor queues in real time. Valuable insights can be obtained by correlating this information with customer demographics to drive decisions for product placement, price, assortment optimization, promotion design, cross-selling, layout optimization, and staffing.
Another potential application of video analytics in retail lies in the study of buying behavior of groups. Among family members who shop together, only one interacts with the store at the cash register, causing the traditional systems to miss data on buying patterns of other members. Video analytics can help retailers address this missed opportunity by providing information about the size of the group, the group's demographics, and the individual members’ buying behavior.
Automatic video indexing and retrieval constitutes another domain of video analytics applications. The widespread emergence of online and offline videos has highlighted the need to index multimedia content for easy search and retrieval. The indexing of a video can be performed based on different levels of information available in a video including the metadata, the soundtrack, the transcripts, and the visual content of the video. In the metadata-based approach, relational database management systems (RDBMS) are used for video search and retrieval. Audio analytics and text analytics techniques can be applied to index a video based on the associated soundtracks and transcripts, respectively. A comprehensive review of approaches and techniques for video indexing is presented in Hu, Xie, Li, Zeng, and Maybank (2011).
In terms of the system architecture, there exist two approaches to video analytics, namely server-based and edge-based:
Server-based architecture. In this configuration, the video captured through each camera is routed back to a centralized and dedicated server that performs the video analytics. Due to bandwidth limits, the video generated by the source is usually compressed by reducing the frame rates and/or the image resolution. The resulting loss of information can affect the accuracy of the analysis. However, the server-based approach provides economies of scale and facilitates easier maintenance.
Edge-based architecture. In this approach, analytics are applied at the ‘edge’ of the system. That is, the video analytics is performed locally and on the raw data captured by the camera. As a result, the entire content of the video stream is available for the analysis, enabling a more effective content analysis. Edge-based systems, however, are more costly to maintain and have a lower processing power compared to the server-based systems.

3.4. Social media analytics

Social media analytics refer to the analysis of structured and unstructured data from social media channels. Social media is a broad term encompassing a variety of online platforms that allow users to create and exchange content. Social media can be categorized into the following types: Social networks (e.g., Facebook and LinkedIn), blogs (e.g., Blogger and WordPress), microblogs (e.g., Twitter and Tumblr), social news (e.g., Digg and Reddit), social bookmarking (e.g., Delicious and StumbleUpon), media sharing (e.g., Instagram and YouTube), wikis (e.g., Wikipedia and Wikihow), question-and-answer sites (e.g., Yahoo! Answers and Ask.com) and review sites (e.g., Yelp, TripAdvisor) (Barbier & Liu, 2011; Gundecha & Liu, 2012). Also, many mobile apps, such as Find My Friend, provide a platform for social interactions and, hence, serve as social media channels.
Although research on social networks dates back to the early 1920s, social media analytics is a nascent field that emerged after the advent of Web 2.0 in the early 2000s. The key characteristic of modern social media analytics is its data-centric nature. Research on social media analytics spans several disciplines, including psychology, sociology, anthropology, computer science, mathematics, physics, and economics. Marketing has been the primary application of social media analytics in recent years. This can be attributed to the widespread and growing adoption of social media by consumers worldwide (He, Zha, & Li, 2013), to the extent that Forrester Research, Inc. projects social media to be the second-fastest growing marketing channel in the US between 2011 and 2016 (VanBoskirk, Overby, & Takvorian, 2011).
User-generated content (e.g., sentiments, images, videos, and bookmarks) and the relationships and interactions between the network entities (e.g., people, organizations, and products) are the two sources of information in social media. Based on this categorization, the social media analytics can be classified into two groups:
Content-based analytics. Content-based analytics focuses on the data posted by users on social media platforms, such as customer feedback, product reviews, images, and videos. Such content on social media is often voluminous, unstructured, noisy, and dynamic. Text, audio, and video analytics, as discussed earlier, can be applied to derive insight from such data. Also, big data technologies can be adopted to address the data processing challenges.
Structure-based analytics. Also referred to as social network analytics, this type of analytics is concerned with synthesizing the structural attributes of a social network and extracting intelligence from the relationships among the participating entities. The structure of a social network is modeled through a set of nodes and edges, representing participants and relationships, respectively. The model can be visualized as a graph composed of the nodes and the edges. We review two types of network graphs, namely social graphs and activity graphs (Heidemann, Klier, & Probst, 2012). In social graphs, an edge between a pair of nodes only signifies the existence of a link (e.g., friendship) between the corresponding entities. Such graphs can be mined to identify communities or determine hubs (i.e., users with a relatively large number of direct and indirect social links). In activity graphs, however, the edges represent actual interactions between pairs of nodes. The interactions involve exchanges of information (e.g., likes and comments). Activity graphs are preferable to social graphs, because an active relationship is more relevant to analysis than a mere connection.

Various techniques have recently emerged to extract information from the structure of social networks. We briefly discuss these below.
Community detection, also referred to as community discovery, extracts implicit communities within a network. For online social networks, a community refers to a sub-network of users who interact more extensively with each other than with the rest of the network. Often containing millions of nodes and edges, online social networks tend to be colossal in size. Community detection helps to summarize huge networks, which then facilitates uncovering existing behavioral patterns and predicting emergent properties of the network. In this regard, community detection is similar to clustering (Aggarwal, 2011), a data mining technique used to partition a data set into disjoint subsets based on the similarity of data points. Community detection has found several application areas, including marketing and the World Wide Web (Parthasarathy, Ruan, & Satuluri, 2011). For example, community detection enables firms to develop more effective product recommendation systems.
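As an illustration, the sketch below runs one common community-detection method (greedy modularity optimization, as implemented in the networkx library) on a toy graph; the graph and the choice of algorithm are assumptions for demonstration only.

# Community detection sketch on a tiny toy graph using networkx's greedy
# modularity algorithm (one of many available methods).
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # first dense cluster
                  ("d", "e"), ("e", "f"), ("d", "f"),   # second dense cluster
                  ("c", "d")])                          # weak bridge between them

communities = community.greedy_modularity_communities(G)
print([sorted(c) for c in communities])   # e.g., [['a', 'b', 'c'], ['d', 'e', 'f']]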
Social influence analysis refers to techniques that are concerned with modeling and evaluating the influence of actors and connections in a social network. Naturally, the behavior of an actor in a social network is affected by others. Thus, it is desirable to evaluate the participants’ influence, quantify the strength of connections, and uncover the patterns of influence diffusion in a network. Social influence analysis techniques can be leveraged in viral marketing to efficiently enhance brand awareness and adoption.
A salient aspect of social influence analysis is to quantify the importance of the network nodes. Various measures have been developed for this purpose, including degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality (for more details refer to Tang & Liu, 2010). Other measures evaluate the strength of connections represented by edges or model the spread of influence in social networks. The Linear Threshold Model (LTM) and Independent Cascade Model (ICM) are two well-known examples of such frameworks (Sun & Tang, 2011).
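The sketch below computes the four centrality measures named above with networkx on its bundled karate-club network; it is a demonstration of the measures themselves, not of any particular study's methodology.

# Sketch: quantifying node importance with the centrality measures listed above.
import networkx as nx

G = nx.karate_club_graph()   # classic small social network bundled with networkx

print(nx.degree_centrality(G)[0])        # share of nodes directly connected to node 0
print(nx.betweenness_centrality(G)[0])   # how often node 0 lies on shortest paths
print(nx.closeness_centrality(G)[0])     # inverse average distance to all other nodes
print(nx.eigenvector_centrality(G)[0])   # importance weighted by neighbours' importance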
Link prediction addresses the problem of predicting future linkages between the existing nodes in the underlying network. Typically, the structure of social networks is not static; it continuously grows through the creation of new nodes and edges. Therefore, a natural goal is to understand and predict the dynamics of the network. Link prediction techniques predict the occurrence of interaction, collaboration, or influence among entities of a network in a specific time interval. Link prediction techniques outperform pure chance by factors of 40–50, suggesting that the current structure of the network indeed contains latent information about future links (Liben-Nowell & Kleinberg, 2003).
In biology, link prediction techniques are used to discover links or associations in biological networks (e.g., protein–protein interaction networks), eliminating the need for expensive experiments (Hasan & Zaki, 2011). In security, link prediction helps to uncover potential collaborations in terrorist or criminal networks. In the context of online social media, the primary application of link prediction is in the development of recommendation systems, such as Facebook's “People You May Know”, YouTube's “Recommended for You”, and Netflix's and Amazon's recommender engines.
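A minimal link-prediction sketch using two neighbourhood-based heuristics available in networkx (the Jaccard coefficient and the Adamic–Adar index) is shown below; the candidate node pairs are chosen arbitrarily for illustration.

# Link prediction sketch: score candidate node pairs with neighbourhood-based
# heuristics (Jaccard coefficient and Adamic-Adar index) from networkx.
import networkx as nx

G = nx.karate_club_graph()
candidate_pairs = [(0, 9), (2, 25)]   # node pairs without a current edge

for u, v, score in nx.jaccard_coefficient(G, candidate_pairs):
    print(f"Jaccard({u},{v}) = {score:.3f}")
for u, v, score in nx.adamic_adar_index(G, candidate_pairs):
    print(f"Adamic-Adar({u},{v}) = {score:.3f}")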

3.5. Predictive analytics

Predictive analytics comprise a variety of techniques that predict future outcomes based on historical and current data. In practice, predictive analytics can be applied to almost all disciplines – from predicting the failure of jet engines based on the stream of data from several thousand sensors, to predicting customers’ next moves based on what they buy, when they buy, and even what they say on social media.
At its core, predictive analytics seek to uncover patterns and capture relationships in data. Predictive analytics techniques are subdivided into two groups. Some techniques, such as moving averages, attempt to discover the historical patterns in the outcome variable(s) and extrapolate them to the future. Others, such as linear regression, aim to capture the interdependencies between outcome variable(s) and explanatory variables, and exploit them to make predictions. Based on the underlying methodology, techniques can also be categorized into two groups: regression techniques (e.g., multinomial logit models) and machine learning techniques (e.g., neural networks). Another classification is based on the type of outcome variables: techniques such as linear regression address continuous outcome variables (e.g., sale price of houses), while others such as Random Forests are applied to discrete outcome variables (e.g., credit status).
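The sketch below pairs the two outcome-variable cases with representative techniques using scikit-learn (assumed installed): a linear regression for a continuous outcome and a Random Forest for a discrete one. The toy data is invented purely for illustration.

# Sketch: continuous vs. discrete outcome variables with scikit-learn.
# The toy data below is invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier

# Continuous outcome: house sale price predicted from size (m^2) and age (years).
X_reg = np.array([[120, 10], [85, 30], [150, 5], [60, 40]])
y_reg = np.array([300_000, 180_000, 390_000, 120_000])
price_model = LinearRegression().fit(X_reg, y_reg)
print(price_model.predict([[100, 20]]))

# Discrete outcome: credit status predicted from income and existing debt.
X_clf = np.array([[55_000, 5_000], [23_000, 15_000], [80_000, 2_000], [30_000, 20_000]])
y_clf = np.array(["good", "bad", "good", "bad"])
credit_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_clf, y_clf)
print(credit_model.predict([[45_000, 9_000]]))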
Predictive analytics techniques are primarily based on statistical methods. Several factors call for developing new statistical methods for big data. First, conventional statistical methods are rooted in statistical significance: a small sample is obtained from the population and the result is compared with chance to examine the significance of a particular relationship. The conclusion is then generalized to the entire population. In contrast, big data samples are massive and represent the majority of, if not the entire, population. As a result, the notion of statistical significance is not that relevant to big data. Second, in terms of computational efficiency, many conventional methods for small samples do not scale up to big data. The third factor corresponds to the distinctive features inherent in big data: heterogeneity, noise accumulation, spurious correlations, and incidental endogeneity. We describe these below.
Heterogeneity. Big data are often obtained from different sources and represent information from different sub-populations. As a result, big data are highly heterogeneous. The sub-population data in small samples are deemed outliers because of their insufficient frequency. However, the sheer size of big data sets creates the unique opportunity to model the heterogeneity arising from sub-population data, which would require sophisticated statistical techniques.
Noise accumulation. Estimating predictive models for big data often involves the simultaneous estimation of several parameters. The accumulated estimation error (or noise) for different parameters could dominate the magnitudes of variables that have true effects within the model. In other words, some variables with significant explanatory power might be overlooked as a result of noise accumulation.
Spurious correlation. For big data, spurious correlation refers to uncorrelated variables being falsely found to be correlated due to the massive size of the dataset. Fan and Lv (2008) show this phenomenon through a simulation example, where the correlation coefficient between independent random variables is shown to increase with the size of the dataset. As a result, some variables that are scientifically unrelated (due to their independence) can erroneously appear to be correlated as a result of high dimensionality; a brief simulation after this list reproduces the effect.
Incidental endogeneity. A common assumption in regression analysis is the exogeneity assumption: the explanatory variables, or predictors, are independent of the residual term. The validity of most statistical methods used in regression analysis depends on this assumption. In other words, the existence of incidental endogeneity (i.e., the dependence of the residual term on some of the predictors) undermines the validity of the statistical methods used for regression analysis. Although the exogeneity assumption is usually met in small samples, incidental endogeneity is commonly present in big data. It is worthwhile to mention that, in contrast to spurious correlation, incidental endogeneity refers to a genuine relationship between variables and the error term.
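The spurious-correlation phenomenon above can be reproduced with a short simulation in the spirit of the Fan and Lv (2008) illustration: draw a target variable and p mutually independent predictors, and observe that the largest sample correlation with the target grows with p even though every true correlation is zero. The sample size and dimensions below are arbitrary choices for demonstration.

# Simulation: among p mutually independent variables, the largest sample
# correlation with an unrelated target grows as p increases, even though the
# true correlation is always zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                  # sample size
target = rng.standard_normal(n)

for p in (10, 1_000, 100_000):
    X = rng.standard_normal((n, p))      # p independent candidate predictors
    # Pearson correlation of each column of X with the target.
    correlations = (X - X.mean(0)).T @ (target - target.mean())
    correlations /= (n * X.std(0) * target.std())
    print(p, np.abs(correlations).max())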

The irrelevance of statistical significance, the challenges of computational efficiency, and the unique characteristics of big data discussed above highlight the need to develop new statistical techniques to gain insights from predictive models.

4. Concluding remarks

The objective of this paper is to describe, review, and reflect on big data. The paper first defined what is meant by big data to consolidate the divergent discourse on big data. We presented various definitions of big data, highlighting the fact that size is only one dimension of big data. Other dimensions, such as velocity and variety are equally important. The paper's primary focus has been on analytics to gain valid and valuable insights from big data. We highlight the point that predictive analytics, which deals mostly with structured data, overshadows other forms of analytics applied to unstructured data, which constitutes 95% of big data. We reviewed analytics techniques for text, audio, video, and social media data, as well as predictive analytics. The paper makes the case for new statistical techniques for big data to address the peculiarities that differentiate big data from smaller data sets. Most statistical methods in practice have been devised for smaller data sets comprising samples.
Technological advances in storage and computation have enabled cost-effective capture of the informational value of big data in a timely manner. Consequently, one observes a proliferation in real-world adoption of analytics that were not economically feasible for large-scale applications prior to the big data era. For example, sentiment analysis (opinion mining) has been known since the early 2000s. However, big data technologies have enabled businesses to adopt sentiment analysis to glean useful insights from millions of opinions shared on social media. The processing of unstructured text, fueled by the massive influx of social media data, is generating business value using conventional (pre-big data) sentiment analysis techniques, which may not be ideally suited to leverage big data.
Although major innovations in analytical techniques for big data have not yet taken place, one anticipates the emergence of such novel analytics in the near future. For instance, real-time analytics will likely become a prolific field of research because of the growth in location-aware social media and mobile apps. Because big data are noisy, highly interrelated, and unreliable, one can also expect the development of statistical techniques better suited to mining big data while remaining sensitive to its unique characteristics. Going beyond samples, additional valuable insights could be obtained from the massive volumes of less ‘trustworthy’ data.



     

                                             Big Data Acquisition


Different data processing architectures for big data have been proposed to address the different characteristics of big data. Data acquisition has been understood as the process of gathering, filtering, and cleaning data before the data is put in a data warehouse or any other storage solution. The acquisition of big data is most commonly governed by four of the Vs: volume, velocity, variety, and value. Most data acquisition scenarios assume high-volume, high-velocity, high-variety, but low-value data, making it important to have adaptable and time-efficient gathering, filtering, and cleaning algorithms that ensure that only the high-value fragments of the data are actually processed by the data-warehouse analysis. The goals of this chapter are threefold: First, it aims to identify the current requirements for data acquisition by presenting open state-of-the-art frameworks and protocols for big data acquisition for companies. The second goal is to unveil the current approaches used for data acquisition in the different sectors. Finally, it discusses how the requirements of data acquisition are met by current approaches as well as possible future developments in the same area.

Introduction
Over the last years, the term big data was used by different major players to label data with different attributes. Moreover, different data processing architectures for big data have been proposed to address the different characteristics of big data. Overall, data acquisition has been understood as the process of gathering, filtering, and cleaning data before the data is put in a data warehouse or any other storage solution.
The position of big data acquisition within the overall big data value chain can be seen in Fig. 4.1. The acquisition of big data is most commonly governed by four of the Vs: volume, velocity, variety, and value. Most data acquisition scenarios assume high-volume, high-velocity, high-variety, but low-value data, making it important to have adaptable and time-efficient gathering, filtering, and cleaning algorithms that ensure that only the high-value fragments of the data are actually processed by the data-warehouse analysis. However, for some organizations, most data is of potentially high value as it can be important to recruit new customers. For such organizations, data analysis, classification, and packaging on very high data volumes play the most central role after the data acquisition.
Fig. 4.1. Data acquisition in the big data value chain.
The goals of this chapter are threefold: First, it aims to identify the present general requirements for data acquisition by presenting open state-of-the-art frameworks and protocols for big data acquisition for companies. Our second goal is then to unveil the current approaches used for data acquisition in the different sectors. Finally, we discuss how the requirements for data acquisition are met by current approaches, as well as possible future developments in the same area.

4.2 Key Insights for Big Data Acquisition

To get a better understanding of data acquisition, the chapter will first take a look at the different big data architectures of Oracle, Vivisimo, and IBM. This will integrate the process of acquisition within the big data processing pipeline.
The big data processing pipeline has been abstracted in numerous ways in previous works. Oracle (2012) relies on a three-step approach for data processing. In the first step, the content of different data sources is retrieved and stored within a scalable storage solution such as a NoSQL database or the Hadoop Distributed File System (HDFS). The stored data is subsequently processed by first being reorganized and stored in an SQL-capable big data analytics software and finally analysed by using big data analytics algorithms.
Velocity (Vivisimo 2012) relies on a different view of big data. Here, the approach is more search-oriented. The main component of the architecture is a connector layer, in which different data sources can be addressed. The content of these data sources is gathered in parallel, converted, and finally added to an index, which builds the basis for data analytics, business intelligence, and all other data-driven applications. Other big players such as IBM rely on architectures similar to Oracle's (IBM 2013).
Throughout the different architectures to big data processing, the core of data acquisition boils down to gathering data from distributed information sources with the aim of storing them in scalable, big data-capable data storage. To achieve this goal, three main components are required:
  1. Protocols that allow the gathering of information from distributed data sources of any type (unstructured, semi-structured, structured)
  2. Frameworks with which the data is collected from the distributed sources by using different protocols
  3. Technologies that allow the persistent storage of the data retrieved by the frameworks

4.3 Social and Economic Impact of Big Data Acquisition

Over the last years, the sheer amount of data that is produced in a steady manner has increased. Ninety percent of the data in the world today was produced over the last 2 years. The source and nature of this data is diverse. It ranges from data gathered by sensors to data depicting (online) transactions. An ever-increasing part is produced in social media and via mobile devices. The type of data (structured vs. unstructured) and semantics are also diverse. Yet, all this data must be aggregated to help answer business questions and form a broad picture of the market.
For businesses, this trend presents opportunities and challenges for both creating new business models and improving current operations, thereby generating market advantages. Tools and methods to deal with big data driven by the four Vs can be used for improved user-specific advertisement or for market research in general. For example, smart metering systems are being tested in the energy sector. Furthermore, in combination with new billing systems, these systems could also be beneficial in other sectors such as telecommunications and transport.
Big data has already influenced many businesses and has the potential to impact all business sectors. While there are several technical challenges, the impact on management and decision-making and even company culture will be no less great (McAfee and Brynjolfsson 2012).
Several barriers remain, however. In particular, privacy and security concerns need to be addressed by these systems and technologies. Many systems already generate and collect large amounts of data, but only a small fraction is used actively in business processes. In addition, many of these systems do not yet meet real-time requirements.

4.4 Big Data Acquisition: State of the Art

The bulk of big data acquisition is carried out within the message queuing paradigm, sometimes also called the streaming paradigm, publish/subscribe paradigm (Carzaniga et al. 2000), or event processing paradigm (Cugola and Margara 2012; Luckham 2002). Here, the basic assumption is that manifold volatile data sources generate information that needs to be captured, stored, and analysed by a big data processing platform. The new information generated by the data source is forwarded to the data storage by means of a data acquisition framework that implements a predefined protocol. This section describes the two core technologies for acquiring big data.

4.4.1 Protocols

Several of the organizations that rely internally on big data processing have devised enterprise-specific protocols of which most have not been publicly released and can thus not be described in this chapter. This section presents the commonly used open protocols for data acquisition.

4.4.1.1 AMQP

The Advanced Message Queuing Protocol (AMQP) was developed to meet the need for an open protocol that would satisfy the requirements of large companies with respect to data acquisition. To achieve this goal, 23 companies compiled a sequence of requirements for a data acquisition protocol. The resulting AMQP became an OASIS standard in October 2012. The rationale behind AMQP (Bank of America et al. 2011) was to provide a protocol with the following characteristics:
  • Ubiquity: This property of AMQP refers to its ability to be used across different industries within both current and future data acquisition architectures. AMQP’s ubiquity was achieved by making it easily extensible and simple to implement. The large number of frameworks that implement it, including SwiftMQ, Microsoft Windows Azure Service Bus, Apache Qpid, and Apache ActiveMQ, reflects how easy the protocol is to implement.
  • Safety: The safety property was implemented across two different dimensions. First, the protocol allows the integration of message encryption to ensure that even intercepted messages cannot be decoded easily. Thus, it can be used to transfer business-critical information. The protocol is robust against the injection of spam, making AMQP brokers difficult to attack. Second, AMQP ensures the durability of messages, meaning that it allows messages to be transferred even when the sender and receiver are not online at the same time.
  • Fidelity: This third characteristic is concerned with the integrity of the message. AMQP includes means to ensure that the sender can express the semantics of the message and thus allow the receiver to understand what it is receiving. The protocol implements reliable failure semantics that allow systems to detect errors from the creation of the message at the sender’s end before the storage of the information by the receiver.
  • Applicability: The intention behind this property is to ensure that AMQP clients and brokers can communicate by using several of the protocols of the Open Systems Interconnection (OSI) model layers such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and also Stream Control Transmission Protocol (SCTP). By these means, AMQP is applicable in many scenarios and industries where not all the protocols of the OSI model layers are required and used. Moreover, the protocol was designed to support different messaging patterns including direct messaging, request/reply, publish/subscribe, etc.
  • Interoperability: The protocol was designed to be independent of particular implementations and vendors. Thus, clients and brokers with fully independent implementations, architectures, and ownership can interact by means of AMQP. As stated above, several frameworks from different organizations now implement the protocol.
  • Manageability: One of the main concerns during the specification of the AMQP was to ensure that frameworks that implement it could scale easily. This was achieved by ensuring that AMQP is a fault-tolerant and lossless wire protocol through which information of all types (e.g. XML, audio, video) can be transferred.
To implement these requirements, AMQP relies on a type system and four different layers: a transport layer, a messaging layer, a transaction layer, and a security layer. The type system is based on primitive types from databases (integers, strings, symbols, etc.), described types as known from programming, and descriptor values that can be extended by the users of the protocol. In addition, AMQP allows the use of encoding to store symbols and values as well as the definition of compound types that consist of combinations of several primary types.
The transport layer defines how AMQP messages are to be processed. An AMQP network consists of nodes that are connected via links. Messages can originate from nodes (senders), be forwarded by nodes (relays), or be consumed by nodes (receivers). Messages are only allowed to travel across a link when the link abides by the criteria defined by the source of the message. The transport layer supports several types of route exchanges, including message fanout and topic exchange.
The messaging layer of AMQP describes the structure of valid messages. A bare message is a message as submitted by the sender to an AMQP network.
The transaction layer allows for the “coordinated outcome of otherwise independent transfers” (Bank of America et al. 2011, p. 95). The basic idea behind the transactional messaging approach followed by this layer is that the sender of the message acts as a controller while the receiver acts as a resource, and messages are transferred as specified by the controller. By these means, decentralized and scalable message processing can be achieved.
The final AMQP layer is the security layer, which enables the definition of means to encrypt the content of AMQP messages. The protocols for achieving this goal are supposed to be defined externally from AMQP itself. Protocols that can be used to this end include Transport Layer Security (TLS) and the Simple Authentication and Security Layer (SASL).
Due to its adoption across several industries and its high flexibility, it is likely that AMQP will become the standard approach for message processing in industries that cannot afford to implement their own dedicated protocols. With the upcoming data-as-a-service industry, it also promises to be the go-to solution for implementing services around data streams. One of the most commonly used AMQP brokers is RabbitMQ, whose popularity is largely due to the fact that it supports several messaging protocols and APIs, including JMS.
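To make the messaging model concrete, the following is a minimal sketch of publishing a single message to an AMQP broker, assuming a locally running RabbitMQ instance and its Java client library; the host, queue name, and payload are illustrative assumptions.

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AmqpPublishSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker host is an assumption
        Connection connection = factory.newConnection();
        try {
            Channel channel = connection.createChannel();
            // Durable queue, so messages survive a broker restart (the "durability" property above).
            channel.queueDeclare("sensor-readings", true, false, false, null);
            String payload = "{\"sensor\":\"s1\",\"value\":42}";
            // Empty exchange name means the default (direct) exchange routes by queue name.
            channel.basicPublish("", "sensor-readings", null, payload.getBytes(StandardCharsets.UTF_8));
        } finally {
            connection.close();
        }
    }
}

A consumer would declare the same queue and register a callback with basicConsume; because the broker stores the message, sender and receiver do not have to be online at the same time.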

4.4.1.2 Java Message Service

The Java Message Service (JMS) API was included in the Java 2 Enterprise Edition on 18 March 2002, after the Java Community Process ratified its final version 1.1 as a standard.
According to the 1.1 specification, JMS “provides a common way for Java programs to create, send, receive and read an enterprise messaging system’s messages”. Administrative tools allow one to bind destinations and connection factories into a Java Naming and Directory Interface (JNDI) namespace. A JMS client can then use resource injection to access the administered objects in the namespace and establish a logical connection to the same objects through the JMS provider.
The JNDI serves in this case as the moderator between different clients who want to exchange messages. Note that the term “client” is used here (as the specification does) to denote the sender as well as the receiver of a message, because JMS was originally designed to exchange messages peer-to-peer. Currently, JMS offers two messaging models: point-to-point and publisher-subscriber, where the latter is a one-to-many connection.
AMQP is compatible with JMS, which is the de facto standard for message passing in the Java world. While AMQP is defined at the format level (i.e. as a byte stream of octets), JMS is standardized at the API level and is therefore not easy to implement in other programming languages (as the “J” in “JMS” suggests). JMS also does not specify functionality for load balancing/fault tolerance, error/advisory notification, administration of services, security, the wire protocol, or a message type repository (database access).
A considerable advantage of AMQP, however, is the programming-language independence of its implementations, which avoids vendor lock-in and eases platform compatibility.
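As an illustration of the API-level standardization, the following is a minimal JMS 1.1 point-to-point sender sketch; the JNDI names used for the administered objects ("ConnectionFactory", "queue/acquisition") are assumptions that depend on how the administrator bound them.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSenderSketch {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext(); // JNDI provider configured via jndi.properties
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("queue/acquisition");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("acquired record #1");
            producer.send(message); // delivered via the JMS provider, not directly to the receiver
        } finally {
            connection.close();
        }
    }
}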

4.4.2 Software Tools

Many software tools for data acquisition are well known, and numerous use cases are documented on the web, so getting started with them is feasible. Nevertheless, using each tool correctly requires deep knowledge of its internal workings and implementation. Different data acquisition paradigms have emerged depending on the scope each tool focuses on. The architectural diagram in Fig. 4.2 shows an overall picture of the complete big data workflow, highlighting the data acquisition part.
Fig. 4.2 Big data workflow
In the remainder of this section, these tools and others relating to data acquisition are described in detail.

4.4.2.1 Storm

Storm is an open-source framework for robust distributed real-time computation on streams of data. It started off as an open-source project and now has a large and active community. Storm supports a wide range of programming languages and storage facilities (relational databases, NoSQL stores, etc.). One of the main advantages of Storm is that it can be utilized in many data gathering scenarios, including stream processing, distributed RPC for solving computationally intensive functions on the fly, and continuous computation applications (Gabriel 2012). Many companies and applications use Storm to power a wide variety of production systems processing data, including Groupon, The Weather Channel, fullcontact.com, and Twitter.
The logical network of Storm consists of three types of nodes: a master node called Nimbus, a set of intermediate Zookeeper nodes, and a set of Supervisor nodes.
  • The Nimbus: equivalent to Hadoop’s JobTracker; it uploads the computation for execution, distributes code across the cluster, and monitors computation.
  • The Zookeepers: handle the complete cluster coordination. This cluster organization layer is based upon the Apache ZooKeeper project.
  • The Supervisor Daemon: spawns worker nodes; it is comparable to Hadoop’s TaskTracker. This is where most of the work of application developers goes. The worker nodes communicate with the Nimbus via the Zookeepers to determine what to run on the machine, starting and stopping workers as needed.
A computation is called a topology in Storm. Once deployed, topologies run indefinitely. There are four concepts and abstraction layers within Storm:
  • Streams: unbounded sequence of tuples, which are named lists of values. Values can be arbitrary objects implementing a serialization interface.
  • Spouts: are sources of streams in a computation, e.g. readers for data sources such as the Twitter Streaming APIs.
  • Bolts: process any number of input streams and produce any number of output streams. This is where most of the application logic goes.
  • Topologies: are the top-level abstractions of Storm. Basically, a topology is a network of spouts and bolts connected by edges. Every edge is a bolt subscribing to the stream of a spout or another bolt.
Both spouts and bolts are stateless nodes and inherently parallel, executing as multiple tasks across the cluster. From a physical point of view, a worker is a Java Virtual Machine (JVM) process with a number of tasks running within it. Both spouts and bolts are distributed over a number of tasks and workers. Storm supports a number of stream grouping approaches, ranging from random grouping of tuples to tasks, to field grouping, where tuples with the same values of specific fields are grouped to the same tasks (Madsen 2012).
Storm uses a pull model; each bolt pulls events from its source. Tuples traverse the entire network within a specified time window or are considered failed. Therefore, in terms of recovery, the spouts are responsible for keeping tuples ready for replay.
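To make these abstractions concrete, the following is a minimal sketch of a topology wired with Storm's Java API (org.apache.storm); the spout, the bolt, and their names are illustrative assumptions rather than parts of any real deployment.

import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class AcquisitionTopologySketch {

    // Spout: the source of the stream (e.g. a reader polling an external API).
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }
        public void nextTuple() {
            collector.emit(new Values("sample sentence from a data source"));
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: application logic that consumes the spout's stream and emits words.
    public static class SplitBolt extends BaseBasicBolt {
        public void execute(Tuple input, BasicOutputCollector collector) {
            for (String word : input.getStringByField("sentence").split(" ")) {
                collector.emit(new Values(word));
            }
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout(), 1);
        // shuffleGrouping distributes the spout's tuples randomly across the bolt's tasks.
        builder.setBolt("splitter", new SplitBolt(), 2).shuffleGrouping("sentences");
        // A local cluster is used here only for testing the sketch.
        new LocalCluster().submitTopology("acquisition-demo", new Config(), builder.createTopology());
    }
}

In a real deployment the topology would be handed to Nimbus via StormSubmitter instead of a LocalCluster, and the parallelism hints passed to setSpout and setBolt control how many tasks Storm spawns across the workers.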

4.4.2.2 S4

S4 (Simple Scalable Streaming System) is a distributed, general-purpose platform for developing applications that process streams of data. Started in 2008 by Yahoo! Inc., it has been an Apache Incubator project since 2011. S4 is designed to work on commodity hardware, avoiding I/O bottlenecks by relying on an all-in-memory approach (Neumeyer 2011).
In general, keyed data events are routed to processing elements (PEs). PEs receive events and emit resulting events and/or publish results. The S4 engine was inspired by the MapReduce model and resembles the Actors model (encapsulation semantics and location transparency). Among other things, it provides a simple programming interface for processing data streams in a decentralized, symmetric, and pluggable architecture.
A stream in S4 is a sequence of elements (events) with both tuple-valued keys and attributes. A PE, the basic computational unit, is identified by the following four components: (1) its functionality, provided by the PE class and associated configuration, (2) the event types it consumes, (3) the keyed attribute in those events, and (4) the value of the keyed attribute of the events it consumes. A PE is instantiated by the platform for each value of the key attribute. Keyless PEs are a special class of PEs with no keyed attribute and value. These PEs consume all events of the corresponding type and are typically at the input layer of an S4 cluster. A large number of standard PEs is available for typical tasks such as aggregate and join. The logical hosts of PEs are the processing nodes (PNs). PNs listen to events, execute operations for incoming events, and dispatch events with the assistance of the communication layer.
S4 routes each event to PNs based on a hash function over all known values of the keyed attribute in the event. There is another special type of PE object: the PE prototype. It is identified by the first three components. These objects are configured upon initialization, and for any value of the keyed attribute a prototype can clone itself to create a fully qualified PE. This cloning event is triggered by the PN for each unique value of the keyed attribute. An S4 application is a graph composed of PE prototypes and streams that produce, consume, and transmit messages, whereas PE instances are clones of the corresponding prototypes, contain the state, and are associated with unique keys (Neumeyer et al. 2011).
As a consequence of this design, S4 guarantees that all events with a specific value of the keyed attribute arrive at the corresponding PN and, within it, are routed to the specific PE instance (Bradic 2011). The current state of a PE is inaccessible to other PEs. S4 is based upon a push model: events are routed to the next PE as fast as possible. Therefore, if a receiver buffer fills up, events may be dropped. Via lossy checkpointing S4 provides state recovery: in the case of a node crash, a new node takes over its task from the most recent snapshot. The communication layer is based upon the Apache ZooKeeper project. It manages the cluster and provides failover handling to stand-by nodes. PEs are built in Java using a fairly simple API and are assembled into the application using the Spring framework.
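The key-to-node routing described above can be illustrated with a generic hash-based dispatch routine; this is plain Java for illustration only, not the S4 API, and the node count and keys are assumptions.

import java.util.HashMap;
import java.util.Map;

public class KeyedRoutingSketch {

    // Route an event to one of N nodes by hashing the value of its keyed attribute.
    static int routeToNode(String keyedValue, int numberOfNodes) {
        // Math.floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(keyedValue.hashCode(), numberOfNodes);
    }

    public static void main(String[] args) {
        int nodes = 4;
        Map<Integer, Integer> eventsPerNode = new HashMap<>();
        String[] keys = {"user-1", "user-2", "user-3", "user-1"};
        for (String key : keys) {
            int node = routeToNode(key, nodes);
            eventsPerNode.merge(node, 1, Integer::sum);
            // Identical keys always map to the same node, so the same PE instance sees them.
            System.out.println(key + " -> PN " + node);
        }
        System.out.println("events per node: " + eventsPerNode);
    }
}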

4.4.2.3 Kafka

Kafka is a distributed publish-subscribe messaging system designed to support mainly persistent messaging with high throughput. Kafka aims to unify offline and online processing by providing a mechanism for a parallel load into Hadoop as well as the ability to partition real-time consumption over a cluster of machines. The use for activity stream processing makes Kafka comparable to Apache Flume, though the architecture and primitives are very different and make Kafka more comparable to a traditional messaging system.
Kafka was originally developed at LinkedIn for tracking the huge volume of activity events generated by the website. These activity events are critical for monitoring user engagement as well as improving relevancy in LinkedIn’s data-driven products.
Note that a single Kafka cluster handles all activity data from different sources. This provides a single pipeline of data for both online and offline consumers. This tier acts as a buffer between live activity and asynchronous processing. Kafka can also be used to replicate all data to a different data centre for offline consumption.
Kafka can be used to feed Hadoop for offline analytics, as well as a way to track internal operational metrics that feed graphs in real time. In this context, a very appropriate use for Kafka and its publish-subscribe mechanism would be processing related stream data, from tracking user actions on large-scale websites to relevance and ranking tasks.
In Kafka, each stream is called a “topic”. Topics are partitioned for scaling purposes. Producers of messages provide a key, which is used to determine the partition the message is sent to. Thus, all messages partitioned by the same key are guaranteed to be in the same topic partition. Each Kafka broker handles some partitions and receives and stores the messages sent by producers.
Kafka consumers read from a topic by getting messages from all partitions of the topic. If a consumer wants to read all messages with a specific key (e.g. a user ID in the case of website clicks), it only has to read messages from the partition the key maps to, not the complete topic. Furthermore, it is possible to reference any point in a broker’s log file using an offset. This offset determines where a consumer is in a specific topic/partition pair, and it is incremented once the consumer reads from that topic/partition pair.
Kafka provides an at-least-once messaging guarantee and highly available partitions. To store and cache messages, Kafka relies on the file system: all data is written immediately to a persistent log without necessarily being flushed to disk. In addition, the protocol is built upon a message set abstraction, which groups messages together and thereby minimizes network overhead and favours sequential disk operations. Both consumer and producer share the same message format.
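The key-based partitioning is visible directly in the producer API. Below is a minimal sketch using Kafka's Java producer client; the broker address, topic name, and key are illustrative assumptions. All records with the same key (here a user ID) land in the same partition and keep their order.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaClickProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker address is an assumption
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for replica acknowledgement, in line with at-least-once delivery
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // The key determines the partition, so all clicks of user-42 stay ordered in one partition.
            producer.send(new ProducerRecord<>("website-clicks", "user-42", "clicked /home"));
        }
    }
}

Reading the data back works symmetrically with a consumer that subscribes to the topic and polls the partitions assigned to it, tracking its position via the offsets described above.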

4.4.2.4 Flume

Flume is a service for efficiently collecting and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant, with tuneable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows online analytic applications. The system was designed with four key goals in mind: reliability, scalability, manageability, and extensibility.
The purpose of Flume is to provide a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The architecture of Flume NG is based on a few concepts that together help achieve this objective:
  • Event: a byte payload with optional string headers that represent the unit of data that Flume can transport from its point of origin to its final destination.
  • Flow: movement of events from the point of origin to their final destination is considered a data flow, or simply flow.
  • Client: an interface implementation that operates at the point of origin of events and delivers them to a Flume agent.
  • Agent: an independent process that hosts Flume components such as sources, channels, and sinks, and thus has the ability to receive, store, and forward events to their next-hop destination.
  • Source: an interface implementation that can consume events delivered to it via a specific mechanism.
  • Channel: a transient store for events, where events are delivered to the channel via sources operating within the agent. An event put in a channel stays in that channel until a sink removes it for further transport.
  • Sink: an interface implementation that can remove events from a channel and transmit them to the next agent in the flow, or to the event’s final destination.
These concepts help in simplifying the architecture, implementation, configuration, and deployment of Flume.
A flow in Flume NG starts from the client. The client transmits the event to its next-hop destination. This destination is an agent. More precisely, the destination is a source operating within the agent. The source receiving this event will then deliver it to one or more channels. The channels that receive the event are drained by one or more sinks operating within the same agent. If the sink is a regular sink, it will forward the event to its next-hop destination, which will be another agent. If instead it is a terminal sink, it will forward the event to its final destination. Channels allow for the decoupling of sources from sinks using the familiar producer-consumer model of data exchange. This allows sources and sinks to have different performance and runtime characteristics and yet be able to effectively use the physical resources available to the system.
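A client at the start of such a flow can use Flume's client SDK to hand events to an agent. The following is a minimal sketch assuming the org.apache.flume.api RPC client and an agent whose Avro source listens on an assumed host and port.

import java.nio.charset.StandardCharsets;
import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeClientSketch {
    public static void main(String[] args) throws Exception {
        // Host and port of the agent's source are assumptions.
        RpcClient client = RpcClientFactory.getDefaultInstance("agent-host", 41414);
        try {
            Event event = EventBuilder.withBody("one log line", StandardCharsets.UTF_8);
            // The agent's source puts the event into a channel, from which a sink
            // later removes it and forwards it towards its next hop or final store.
            client.append(event);
        } finally {
            client.close();
        }
    }
}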
The primary use case for Flume is as a logging system that gathers a set of log files on every machine in a cluster and aggregates them to a centralized persistent store such as the Hadoop Distributed File System (HDFS). Flume can also be used as an HTTP event manager that deals with different types of requests and drives each of them to a specific data store during a data acquisition process, such as a NoSQL database like HBase.
Therefore, Apache Flume is not a pure data acquisition system but acts in a complementary fashion by managing the different types of acquired data and routing them to specific data stores or repositories.

4.4.2.5 Hadoop

Apache Hadoop is an open-source project developing a framework for reliable, scalable, and distributed computing on big data using clusters of commodity hardware. It was derived from Google’s MapReduce and the Google File System (GFS) and is written in Java. It is used and supported by a large community and is deployed in both production and research environments by many organizations, most notably Facebook, a9.com, AOL, Baidu, IBM, Imageshack, and Yahoo. The Hadoop project consists of four modules:
  • Hadoop Common: for common utilities used throughout Hadoop.
  • Hadoop Distributed File System (HDFS): a highly available and efficient file system.
  • Hadoop YARN (Yet Another Resource Negotiator): a framework for job scheduling and cluster management.
  • Hadoop MapReduce: a system for the parallel processing of large amounts of data.
A Hadoop cluster is designed according to the master-slave principle. The master is the name node. It keeps track of the metadata about the file distribution. Large files are typically split into chunks of 128 MB. These chunks are copied three times and the replicas are distributed across the cluster of data nodes (slave nodes). In the case of a node failure its information is not lost; the name node is able to allocate the data again from the remaining replicas. To monitor the cluster, every slave node regularly sends a heartbeat to the name node. If a slave is not recognized over a specific period, it is considered dead. As the master node is a single point of failure, it is typically run on highly reliable hardware. As a precaution, a secondary name node can keep track of changes in the metadata; with its help it is possible to rebuild the functionality of the name node and thereby ensure the functionality of the cluster.
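From a client's perspective, storing acquired data in HDFS is a simple file write; the name node and data nodes handle chunking and replication transparently. A minimal sketch, assuming the org.apache.hadoop.fs API and an illustrative name node address and path:

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // name node address is an assumption
        conf.set("dfs.replication", "3");                 // three replicas, as described in the text
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/data/acquired/events.log"))) {
            // The client only writes bytes; block placement across data nodes is handled by HDFS.
            out.write("first acquired record\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}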
YARN is Hadoop’s cluster scheduler. It allocates a number of containers (which are essentially processes) in a cluster of machines and executes arbitrary commands on them. YARN consists of three main pieces: a ResourceManager, a NodeManager, and an ApplicationMaster. In a cluster, each machine runs a NodeManager, which is responsible for running processes on the local machine. The ResourceManager tells the NodeManagers what to run, and applications tell the ResourceManager when they need to run something on the cluster.
Data is processed according to the MapReduce paradigm. MapReduce is a framework for parallel-distributed computation. Like data storage, processing works in a master-slave fashion: computation tasks are called jobs and are distributed by the job tracker. Instead of moving the data to the calculation, Hadoop moves the calculation to the data. The job tracker functions as a master, distributing and administering jobs in the cluster. Task trackers carry out the actual work on jobs. Typically each cluster node runs a task tracker instance and a data node. The MapReduce framework eases the programming of highly distributed parallel programs: a programmer can focus on writing the simpler map() and reduce() functions dealing with the task at hand, while the MapReduce infrastructure takes care of running and managing the tasks in the cluster.
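The canonical illustration is word count. The following minimal sketch, assuming the org.apache.hadoop.mapreduce API, shows the map() and reduce() functions a programmer writes and the driver that submits the job; input and output paths are passed as command-line arguments.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountSketch {

    // map(): emits (word, 1) for every word in an input line.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // reduce(): sums the counts emitted for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count sketch");
        job.setJarByClass(WordCountSketch.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}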
In the orbit of the Hadoop project a number of related projects have emerged. The Apache Pig project, for instance, is built upon Hadoop and simplifies writing and maintaining Hadoop implementations. While Hadoop itself is very efficient for batch processing, the Apache HBase project aims to provide real-time access to big data.

4.5 Future Requirements and Emerging Trends for Big Data Acquisition

Big data acquisition tooling has to deal with high velocity, variety, and real-time data acquisition. Thus, tooling for data acquisition has to ensure a very high throughput. This means that data can come from multiple sources (social networks, sensors, web mining, logs, etc.) with different structures, or be unstructured (text, video, pictures, and media files), and arrive at a very high pace (tens or hundreds of thousands of events per second). Therefore, the main challenge in acquiring big data is to provide frameworks and tools that ensure the required throughput for the problem at hand without losing any data in the process.
In this context, emerging challenges for the acquisition of big data include the following:
  • Data acquisition is often started by tools that provide some kind of input data to the system, such as social networks and web mining algorithms, sensor data acquisition software, periodically injected logs, etc. Typically the data acquisition process starts with single or multiple end points where the data comes from. These end points can take different technical forms, such as log importers or Storm-based algorithms, or the data acquisition system may itself offer APIs to the external world for injecting data, for example RESTful services or other programmatic APIs. Hence, any technical solution that aims to acquire data from different sources should be able to deal with this wide range of different implementations.
  • To provide the mechanisms to connect the data acquisition with the data pre- and post-processing (analysis) and storage, both in the historical and real-time layers. In order to do so, the batch and real-time processing tools (e.g. Hadoop and Storm) should be reachable by the data acquisition tools. This is implemented in different ways. For instance, Apache Kafka uses a publish-subscribe mechanism to which both Hadoop and Storm can be subscribed, so that the messages received become available to them. Apache Flume, on the other hand, follows a different approach, storing the data in a NoSQL key-value store to ensure velocity and pushing the data to one or several receivers (e.g. Hadoop and Storm). There is a thin line between data acquisition, storage, and analysis in this process, as data acquisition typically ends by storing the raw data in an appropriate master dataset and connecting with the analytical pipeline (especially for real-time, but also batch processing).
  • To come up with a structured or semi-structured model suitable for data analysis, in order to effectively pre-process acquired data, especially unstructured data. The borders between data acquisition and analysis are blurred in the pre-processing stage. Some may argue that pre-processing is part of processing, and therefore of data analysis, while others believe that data acquisition does not end with the actual gathering, but also includes cleaning the data and providing a minimal set of coherence and metadata on top of it. Data cleaning usually takes several steps, such as boilerplate removal (e.g. removing HTML headers in web mining acquisition), language detection and named entity recognition (for textual resources), and providing extra metadata such as timestamps and provenance information (yet another overlap with data curation).
  • The acquisition of media (pictures, video) is a significant challenge, but it is an even bigger challenge to perform the analysis and storage of video and images.
  • Data variety requires processing the semantics in the data in order to correctly and effectively merge data from different sources during processing. Works on semantic event processing such as semantic approximations (Hasan and Curry 2014a), thematic event processing (Hasan and Curry 2014b), and thingsonomy tagging (Hasan and Curry 2015) are emerging approaches in this area.
  • In order to perform post- and pre-processing of acquired data, the current state of the art provides a set of open-source and commercial tools and frameworks. The main goal when defining a correct data acquisition strategy is therefore to understand the needs of the system in terms of data volume, variety, and velocity, and to make the right decision about which tool best ensures the acquisition and desired throughput.

4.6 Sector Case Studies for Big Data Acquisition

This section analyses the use of big data acquisition technology within a number of sectors.

4.6.1 Health Sector

Within the health sector big data technology aims to establish a holistic approach whereby clinical, financial, and administrative data as well as patient behavioural data, population data, medical device data, and any other related health data are combined and used for retrospective, real-time, and predictive analysis.
In order to establish a basis for the successful implementation of big data health applications, the challenge of data digitalization and acquisition (i.e. putting health data in a form suitable as input for analytic solutions) needs to be addressed.
As of today, large amounts of health data are stored in data silos and data exchange is only possible via scan, fax, or email. Due to inflexible interfaces and missing standards, the aggregation of health data relies on individualized solutions with high costs.
In hospitals patient data is stored in CIS (clinical information system) or EHR (electronic health record) systems. However, different clinical departments might use different systems, such as RIS (radiology information system), LIS (laboratory information system), or PACS (picture archiving and communication system) to store their data. There is no standard data model or EHR system. Existing mechanisms for data integration are either adaptations of standard data warehouse solutions from horizontal IT providers, like the Oracle Healthcare Data Model, Teradata’s Healthcare Logical Data Model, and the IBM Healthcare Provider Data Model, or new solutions like the i2b2 platform. While the first three are mainly used to generate benchmarks regarding the performance of the overall hospital organization, the i2b2 platform establishes a data warehouse that allows the integration of data from different clinical departments in order to support the task of identifying patient cohorts. In doing so, structured data such as diagnoses and lab values are mapped to standardized coding systems. However, unstructured data is not further labelled with semantic information. Besides its main functionality of patient cohort identification, the i2b2 hive offers several additional modules. In addition to specific modules for data import, export, and visualization tasks, modules to create and use additional semantics are available. For example, the natural language processing (NLP) tool offers a means to extract concepts out of specific terms and connect them with structured knowledge.
Today, data can be exchanged by using exchange formats such as HL7. However, due to non-technical reasons such as privacy, health data is commonly not shared across organizations (the phenomenon of organizational silos). Information about diagnoses, procedures, lab values, demographics, medication, providers, etc., is in general provided in a structured format, but not automatically collected in a standardized manner. For example, lab departments use their own coding systems for lab values without an explicit mapping to the LOINC (Logical Observation Identifiers Names and Codes) standard. Also, different clinical departments often use different, customized report templates without specifying the common semantics. Both scenarios lead to difficulties in data acquisition and subsequent integration.
Regarding unstructured data like texts and images, high-level meta-information is only partially collected in standardized form. In the imaging domain, the DICOM (Digital Imaging and Communications in Medicine) standard for specifying image metadata is available. However, a common (agreed) standard for describing meta-information of clinical reports or clinical studies is missing. To the best of our knowledge, no standard is available for representing the content of unstructured data like images, texts, or genomics data. Initial efforts to change this situation are initiatives such as the structured reporting initiative by RSNA and semantic annotations using standardized vocabularies. For example, the Medical Subject Headings (MeSH) is a controlled vocabulary thesaurus of the US National Library of Medicine used to capture topics of texts in the medical and biological domain. Several translations into other languages also exist.
Since each EHR vendor provides its own data model, there is no standard data model for the usage of coding systems to represent the content of clinical reports. In terms of the underlying means for data representation, existing EHR systems rely on a case-centric rather than a patient-centric representation of health data. This hinders longitudinal health data acquisition and integration.
Easy-to-use structured reporting tools are required that do not create extra work for clinicians, i.e. these systems need to be seamlessly integrated into the clinical workflow. In addition, available context information should be used to assist the clinicians. If structured reporting tools are implemented as easy-to-use tools, they can gain acceptance by clinicians such that most of the clinical documentation is carried out in a semi-structured form and the quality and quantity of semantic annotations increases.
From an organizational point of view, the storage, processing, access, and protection of big data has to be regulated on several different levels: institutional, regional, national, and international. There is a need to define who authorizes which processes, who changes processes, and who implements process changes. Therefore, a proper and consistent legal framework or guidelines (e.g. ISO/IEC 27000) for all four levels are required.
IHE (Integrating the Healthcare Enterprise) enables plug-and-play and secure access to health information whenever and wherever it is needed. It provides different specifications, tools, and services. IHE also promotes the use of well-established and internationally accepted standards (e.g. Digital Imaging and Communications in Medicine, Health Level 7). Pharmaceutical and R&D data that encompass clinical trials, clinical studies, population and disease data, etc. are typically owned by the pharmaceutical companies, research labs/academia, or the government. As of today, considerable manual effort is required to collect all the datasets for conducting clinical studies and related analysis.

4.6.2 Manufacturing, Retail, and Transport

Big data acquisition in the context of the retail, transportation, and manufacturing sectors is becoming increasingly important. As data processing costs decrease and storage capacities increase, data can now be continuously gathered. Manufacturing companies as well as retailers may monitor channels like Facebook, Twitter, or news for any mentions and analyse these data (e.g. customer sentiment analysis). Retailers on the web also collect large amounts of data by storing log files and combining that information with other data sources, such as sales data, in order to analyse and predict customer behaviour. In the field of manufacturing, all participating devices (e.g. sensors, RFID) are nowadays interconnected, such that vital information is constantly gathered in order to predict defective parts at an early stage.
All three sectors have in common that the data comes from very heterogeneous sources (e.g. log files, data from social media that needs to be extracted via proprietary APIs, data from sensors, etc.). Data comes in at a very high pace, requiring that the right technologies be chosen for extraction (e.g. MapReduce). Challenges may also include data integration. For example, product names used by customers on social media platforms need to be matched against IDs used for product pages on the web and then matched against internal IDs used in Enterprise Resource Planning (ERP) systems. Tools used for data acquisition in retail can be grouped by the two types of data typically collected in retail:
  • Sales data from accounting and controlling departments
  • Data from the marketing departments
The Dynamite Data Channel Monitor, recently bought by Market Track LLC, provides a solution to gather information about product prices on more than 1 billion “buy” pages at more than 4000 global retailers in real time, and thus makes it possible to study the impact of promotional investments, monitor prices, and track consumer sentiment on brands and products.
The increasing use of social media not only empowers consumers to easily compare services and products with respect to both price and quality, but also enables retailers to collect, manage, and analyse large volumes of high-velocity data, providing a great opportunity for the retail industry. To gain competitive advantages, real-time information is essential for accurate prediction and optimization models. From a data acquisition perspective, means for stream data computation are necessary that can deal with the challenges of the Vs of the data.
In order to bring benefits to the transportation sector (especially multimodal urban transportation), tools that support big data acquisition mainly have to achieve two tasks (DHL 2013; Davenport 2013). First, they have to handle large amounts of personalized data (e.g. location information) and deal with the associated privacy issues. Second, they have to integrate data from different service providers, including geographically distributed sensors (i.e. the Internet of Things (IoT)) and open data sources.
Different players benefit from big data in the transport sector. Governments and public institutions use an increasing amount of data for traffic control, route planning, and transport management. The private sector exploits increasing amounts of data for route planning and revenue management to gain competitive advantages, save time, and increase fuel efficiency. Individuals increasingly use data via websites, mobile device applications, and GPS information for route planning to increase efficiency and save travel time.
In the manufacturing sector, tools for data acquisition need to mainly process large amounts of sensor data. Those tools need to handle sensor data that may be incompatible with other sensor data and thus data integration challenges need to be tackled, especially when sensor data is passed through multiple companies in a value chain.
Another category of tools needs to address the issue of integrating data produced by sensors in a production environment with data from, e.g. ERP systems within enterprises. This is best achieved when tools produce and consume standardized metadata formats.

4.6.3 Government, Public, Non-profit

Integrating and analysing large amounts of data plays an increasingly important role in today’s society. Often, however, new discoveries and insights can only be attained by integrating information from dispersed sources. Despite recent advances in structured data publishing on the web (such as RDF in attributes (RDFa) and the schema.org initiative), the question arises of how larger datasets can be published in a manner that makes them easily discoverable and facilitates integration as well as analysis.
One approach for addressing this problem is data portals, which enable organizations to upload and describe datasets using comprehensive metadata schemes. Similar to digital libraries, networks of such data portals can support the description, archiving, and discovery of datasets on the web. Recently, the number of data catalogues made available on the web has grown rapidly. The data catalogue registry datacatalogs.org lists 314 data catalogues worldwide. Examples of the increasing popularity of data catalogues are Open Government Data portals, data portals of international organizations and NGOs, as well as scientific data portals. In the public and governmental sector, a few catalogues and data hubs, such as publicdata.eu, can be used to find metadata or at least locations (links) of interesting media files.
The public sector is centred around the activities of the citizens. Data acquisition in the public sector includes tax collection, crime statistics, water and air pollution data, weather reports, energy consumption, Internet business regulation (online gaming, online casinos, intellectual property protection), and others.
The open data initiatives of governments (data.gov and data.gov.uk for open public data, or govdata.de) are recent examples of the increasing importance of public and non-profit data, and similar initiatives exist in many countries. Most data collected by public institutions and governments of these countries is in principle available for reuse. The W3C guidance on opening up government data (Bennett and Harvey 2009) suggests that data should be published as soon as available in the original raw format, and then enhanced with semantics and metadata. However, in many cases governments struggle to publish certain data, because the data needs to be strictly non-personal, non-sensitive, and compliant with data privacy and protection regulations. Many different sectors and players can benefit from this public data.
The following presents several case studies for implementing big data technologies in different areas of the public sector.

4.6.3.1 Tax Collection Area

One key area for big data solutions is tax revenue recovery, worth millions of dollars per year. The challenge for such an application is to develop a fast, accurate identity resolution and matching capability for a budget-constrained, limited-staffed state tax department, in order to determine where to deploy scarce auditing resources and enhance tax collection efficiency. The main implementation highlights are:
  • Rapidly identify exact and close matches
  • Enable de-duplication from data entry errors
  • High throughput and scalability to handle growing data volumes
  • Quick and easy accommodation of file format changes and the addition of new data sources
One solution is based on software developed by the Pervasive Software company: the Pervasive DataRush engine, the Pervasive DataMatcher, and the Pervasive Data Integrator. Pervasive DataRush provides simple constructs to:
  • Create units of work (processes) that can each individually be made parallel.
  • Tie processes together in a dataflow graph (assemblies), but then enable the reuse of complex assemblies as simple operators in other applications.
  • Further tie operators into new, broader dataflow applications.
  • Run a compiler that can traverse all sub-assemblies while executing customizers to automatically define parallel execution strategies based on then-current resources and/or more complex heuristics (this will only improve over time).
This is achieved using techniques such as fuzzy matching, record linking, and the ability to match any combination of fields in a dataset. Other key techniques include data integration and Extract, Transform, Load (ETL) processes that save and store all design metadata in an open XML-based design repository for easy metadata interchange and reuse. This enables fast implementation and deployment and reduces the cost of the entire integration process.
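As an illustration of the fuzzy matching mentioned above, the classic Levenshtein edit distance is one common way to score how close two records are; the following plain Java routine is illustrative only and is not part of the Pervasive tooling.

public class FuzzyMatchSketch {

    // Levenshtein distance: the minimum number of single-character edits
    // (insertions, deletions, substitutions) needed to turn a into b.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        // Two slightly different spellings of the same name yield a small distance,
        // which a record-linking step would treat as a "close match".
        System.out.println(levenshtein("Jon Smith", "John Smyth"));
    }
}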

4.6.3.2 Energy Consumption

An article reports on the problems in the regulation of energy consumption. The main issue is that when energy is put on the distribution network, it must be used at that time. Energy providers are experimenting with storage devices to assist with this problem, but these are nascent and expensive. Therefore, the problem is tackled with smart metering devices.
When collecting data from smart metering devices, the first challenge is to store the large volume of data. For example, assuming that 1 million collection devices retrieve 5 kB of data per single collection, the potential data volume growth in a year can be up to 2920 TB.
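The stated annual volume implies roughly 1,600 collections per device per day (about one reading per minute); that collection frequency is an assumption introduced here only to reconcile the figures, since the text does not state it:

\[
10^{6}\ \text{devices} \times 5\ \text{kB} \times 1600\ \tfrac{\text{collections}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}}
\approx 2.9 \times 10^{15}\ \text{bytes} \approx 2920\ \text{TB per year}
\]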
The consequent challenges are to analyse this huge volume of data and to cross-reference it with customer information, network distribution and capacity information by segment, local weather information, and energy spot market cost data.
Harnessing this data will allow the utilities to better understand the cost structure and strategic options within their network, which could include:
  • Adding generation capacity versus purchasing energy off the spot market (e.g. renewables such as wind, solar, electric cars during off-peak hours)
  • Investing in energy storage devices within the network to offset peak usage and reduce spot purchases and costs
  • Providing incentives to individual consumers, or groups of consumers, to change energy consumption behaviours
One such approach from the Lavastorm company is a project that explores analytics problems with innovative companies such as FalbygdensEnergi AB (FEAB) and Sweco. To answer key questions, the Lavastorm Analytic Platform is utilized. The Lavastorm Analytics Engine is a self-service business analytics solution that empowers analysts to rapidly acquire, transform, analyse, and visualize data, and share key insights and trusted answers to business questions with non-technical managers and executives. The engine offers an integrated set of analytics capabilities that enables analysts to independently explore enterprise data from multiple data sources, create and share trusted analytic models, produce accurate forecasts, and uncover previously hidden insights in a single, highly visual and scalable environment.

4.6.4 Media and Entertainment

Media and entertainment is centred on the knowledge included in media files. With the significant growth of media files and associated metadata, due to the evolution of the Internet and the social web, data acquisition in this sector has become a substantial challenge.
According to a Quantum report, managing and sharing content can be a challenge, especially for media and entertainment industries. With the need to access video footage, audio files, high-resolution images, and other content, a reliable and effective data sharing solution is required.
Commonly used tools in the media and entertainment sector include:
  • Specialized file systems that are used as a high-performance alternative to NAS and network shares
  • Specialized archiving technologies that allow the creation of a digital archive that reduces costs and protects content
  • Specialized clients that enable both LAN-based applications and SAN-based applications to share a single content pool
  • Various specialized storage solutions (for high-performance file sharing, cost-effective near-line storage, offline data retention, for high-speed primary storage)
Digital on-demand services have radically changed the importance of schedules for both consumers and broadcasters. The largest media corporations have already invested heavily in the technical infrastructure to support the storage and streaming of content. For example, the number of legal music download and streaming sites, and Internet radio services, has increased rapidly in the last few years; consumers have an almost bewildering choice of options depending on their preferred music genres, subscription options, devices, and digital rights management (DRM) terms. Over 391 million tracks were sold in Europe in 2012, and 75 million tracks were played on online radio stations.
According to Eurostat, there has been a massive increase in household access to broadband in the years since 2006. Across the “EU27” (EU member states and six other countries in the European geographical area) broadband penetration was at around 30 % in 2006 but stood at 72 % in 2012. For households with high-speed broadband, media streaming is a very attractive way of consuming content. Equally, faster upload speeds mean that people can create their own videos for social media platforms.
There has been a huge shift away from mass, anonymized mainstream media towards on-demand, personalized experiences. Large-scale shared consumer experiences such as major sporting events, reality shows, and soap operas remain popular, but consumers now expect to be able to watch or listen to whatever they want, whenever they want.
Streaming services put control in the hands of users, who choose when to consume their favourite shows, web content, or music.
Media companies hold significant amounts of personal data, whether on customers, suppliers, content, or their own employees. Companies have responsibility not just for themselves as data controllers, but also their cloud service providers (data processors). Many large and small media organizations have already suffered catastrophic data breaches—two of the most high-profile casualties were Sony and LinkedIn. They incurred not only the costs of fixing their data breaches, but also fines from data protection bodies such as the Information Commissioner’s Office (ICO) in the UK.

4.6.5 Finance and Insurance

Integrating large amounts of data with business intelligence systems for analysis plays an important role in financial and insurance sectors. Some of the major areas for acquiring data in these sectors are exchange markets, investments, banking, customer profiles, and behaviour.
According to McKinsey Global Institute analysis, “Financial Services has the most to gain from big data”. In terms of ease of data capture and value potential, “financial players get the highest marks for value creation opportunities”. Banks can add value by improving a number of products, e.g. customizing UX, improving targeting, adapting business models, reducing portfolio losses and capital costs, increasing office efficiencies, and creating new value propositions. Some of the publicly available financial data are provided by international statistical agencies like Eurostat, the World Bank, the European Central Bank, the International Monetary Fund, the International Finance Corporation, and the Organisation for Economic Co-operation and Development. While these data sources are not as time-sensitive as exchange markets, they provide valuable complementary data.
Fraud detection is an important topic in finance. According to the Global Fraud Study 2014, a typical organization loses about 5 % of revenues each year to fraud, and the banking and financial services sector accounts for a large number of fraud cases. Approximately 30 % of fraud schemes were detected by tip-off and up to 10 % by accident, but only up to 1 % by IT controls (ACFE 2014). Better and improved fraud detection methods rely on real-time analysis of big data (Sensmeier 2013). For more accurate and less intrusive fraud detection, banks and financial service institutions are increasingly using algorithms that rely on real-time data about transactions. These technologies make use of large volumes of data generated at high velocity and from hybrid sources. Often, data from mobile sources and social data such as geographical information is used for prediction and detection (Krishnamurthy 2013). By using machine-learning algorithms, modern systems are able to detect fraud more reliably and faster (Sensmeier 2013). But there are limitations to such systems: because financial services operate in a regulated environment, the use of customer data is subject to privacy laws and regulations.

                                         Conclusions

Data acquisition is an important process that enables the subsequent tools of the data value chain (e.g. data analysis tools) to do their work properly. The review of the state of the art regarding data acquisition showed that there are plenty of tools and protocols, including open-source solutions, that support the process of data acquisition. Many of these tools have been developed and are operational within the production environments of major players such as Facebook or Amazon.
Nonetheless, there are many open challenges to successfully deploying effective big data solutions for data acquisition in the different sectors (see section “Future Requirements and Emerging Trends for Big Data Acquisition”). The main issue remains producing highly scalable, robust solutions for today and researching next-generation systems for the ever-increasing industrial requirements.

                                       XXX  .  V00  International finance 

International finance (also referred to as international monetary economics or international macroeconomics) is the branch of financial economics broadly concerned with monetary and macroeconomic interrelations between two or more countries. International finance examines the dynamics of the global financial system, international monetary systems, balance of payments, exchange rates, foreign direct investment, and how these topics relate to international trade.[1][2][3]
Sometimes referred to as multinational finance, international finance is additionally concerned with matters of international financial management. Investors and multinational corporations must assess and manage international risks such as political risk and foreign exchange risk, including transaction exposure, economic exposure, and translation exposure.
Some examples of key concepts within international finance are the Mundell–Fleming model, the optimum currency area theory, purchasing power parity, interest rate parity, and the international Fisher effect. Whereas the study of international trade makes use of mostly microeconomic concepts, international finance research investigates predominantly macroeconomic concepts.
The three major components setting international finance apart from its purely domestic counterpart are as follows:
  1. Foreign exchange and political risks.
  2. Market imperfections.
  3. Expanded opportunity sets.
These major dimensions of international finance largely stem from the fact that sovereign nations have the right and power to issue currencies, formulate their own economic policies, impose taxes, and regulate movement of people, goods, and capital across their borders.


                                                     International Finance  

What is 'International Finance'

International finance – sometimes known as international macroeconomics – is a section of financial economics that deals with the monetary interactions that occur between two or more countries. This section is concerned with topics that include foreign direct investment and currency exchange rates. International finance also involves issues pertaining to financial management, such as political and foreign exchange risk that comes with managing multinational corporations.

BREAKING DOWN 'International Finance'

International finance research deals with macroeconomics; that is, it is concerned with economies as a whole instead of individual markets. Financial institutions and companies that conduct international finance research include the World Bank, the International Finance Corporation (IFC), the International Monetary Fund (IMF) and the National Bureau of Economic Research (NBER). There is an international finance division at the U.S. Federal Reserve that conducts analysis of policies that are relevant to U.S. capital flow, external trade and development of markets in countries around the world.

Key Concepts

Concepts and theories that are key parts of international finance and its research include the Mundell-Fleming model, the International Fisher Effect, the optimum currency area theory, purchasing power parity and interest rate parity.

The Bretton Woods System

The Bretton Woods system, which was introduced in the late 1940s, after World War II, established a fixed exchange rate system, having been agreed upon at the Bretton Woods conference by the more than 40 countries that participated. The system was developed to give structure to international monetary exchanges and policies and to maintain stability in all international finance transactions and interactions.
The Bretton Woods conference acted as a catalyst for the formation of essential international institutions that play a foundational role in the global economy. These institutions – the IMF and the International Bank for Reconstruction and Development (which became known as the World Bank) – continue to play pivotal roles in the area of international finance.

Significance

International or foreign trading is arguably the most important factor in the prosperity and growth of economies that participate in the exchange. The growing popularity and rate of globalization has magnified the importance of international finance. Another aspect to consider, in terms of international finance, is that the United States has shifted from being the largest international creditor (lending money to foreign nations) to being the world's largest international debtor; the United States is taking money and funding from organizations and countries around the world. These aspects are key elements of international finance.



                                             XXX  .  V000  Global financial system  


The global financial system is the worldwide framework of legal agreements, institutions, and both formal and informal economic actors that together facilitate international flows of financial capital for purposes of investment and trade financing. Since emerging in the late 19th century during the first modern wave of economic globalization, its evolution is marked by the establishment of central banks, multilateral treaties, and intergovernmental organizations aimed at improving the transparency, regulation, and effectiveness of international markets. In the late 1800s, world migration and communication technology facilitated unprecedented growth in international trade and investment. At the onset of World War I, trade contracted as foreign exchange markets became paralyzed by money market illiquidity. Countries sought to defend against external shocks with protectionist policies and trade virtually halted by 1933, worsening the effects of the global Great Depression until a series of reciprocal trade agreements slowly reduced tariffs worldwide. Efforts to revamp the international monetary system after World War II improved exchange rate stability, fostering record growth in global finance.
A series of currency devaluations and oil crises in the 1970s led most countries to float their currencies. The world economy became increasingly financially integrated in the 1980s and 1990s due to capital account liberalization and financial deregulation. A series of financial crises in Europe, Asia, and Latin America followed with contagious effects due to greater exposure to volatile capital flows. The global financial crisis, which originated in the United States in 2007, quickly propagated among other nations and is recognized as the catalyst for the worldwide Great Recession. A market adjustment to Greece's noncompliance with its monetary union in 2009 ignited a sovereign debt crisis among European nations known as the Eurozone crisis.
A country's decision to operate an open economy and globalize its financial capital carries monetary implications captured by the balance of payments. It also renders exposure to risks in international finance, such as political deterioration, regulatory changes, foreign exchange controls, and legal uncertainties for property rights and investments. Both individuals and groups may participate in the global financial system. Consumers and international businesses undertake consumption, production, and investment. Governments and intergovernmental bodies act as purveyors of international trade, economic development, and crisis management. Regulatory bodies establish financial regulations and legal procedures, while independent bodies facilitate industry supervision. Research institutes and other associations analyze data, publish reports and policy briefs, and host public discourse on global financial affairs.
While the global financial system is edging toward greater stability, governments must deal with differing regional or national needs. Some nations are trying to orderly discontinue unconventional monetary policies installed to cultivate recovery, while others are expanding their scope and scale. Emerging market policymakers face a challenge of precision as they must carefully institute sustainable macroeconomic policies during extraordinary market sensitivity without provoking investors to retreat their capital to stronger markets. Nations' inability to align interests and achieve international consensus on matters such as banking regulation has perpetuated the risk of future global financial catastrophes.


A map showing the route of the first transatlantic cable laid to connect North America and Europe.
The SS Great Eastern, a steamship which laid the transatlantic cable beneath the ocean.
The world experienced substantial changes prior to 1914, which created an environment favorable to an increase in and development of international financial centers. Principal among such changes were unprecedented growth in capital flows and the resulting rapid financial center integration, as well as faster communication. Before 1870, London and Paris existed as the world's only prominent financial centers.[4]:1 Soon after, Berlin and New York grew to become major financial centers. An array of smaller financial centers became important as they found market niches, such as Amsterdam, Brussels, Zurich, and Geneva. London remained the leading international financial center in the four decades leading up to World War I.[2]:74–75[5]:12–15
The first modern wave of economic globalization began during the period of 1870–1914, marked by transportation expansion, record levels of migration, enhanced communications, trade expansion, and growth in capital transfers.[2]:75 During the mid-nineteenth century, the passport system in Europe dissolved as rail transport expanded rapidly. Most countries that issued passports did not require that they be carried, so people could travel freely without them.[6] The standardization of international passports would not arise until 1980 under the guidance of the United Nations' International Civil Aviation Organization.[7] From 1870 to 1915, 36 million Europeans migrated away from Europe. Approximately 25 million (or 70%) of these travelers migrated to the United States, while most of the rest reached Canada, Australia, Argentina, and Brazil. Europe itself experienced an influx of foreigners from 1860 to 1910, growing from 0.7% of the population to 1.8%. While the absence of meaningful passport requirements allowed for free travel, migration on such an enormous scale would have been prohibitively difficult if not for technological advances in transportation, particularly the expansion of railway travel and the dominance of steam-powered boats over traditional sailing ships. World railway mileage grew from 205,000 kilometers in 1870 to 925,000 kilometers in 1906, while steamboat cargo tonnage surpassed that of sailboats in the 1890s. Advancements such as the telephone and wireless telegraphy (the precursor to radio) revolutionized telecommunication by providing instantaneous communication. In 1866, the first transatlantic cable was laid beneath the ocean to connect London and New York, while Europe and Asia became connected through new landlines.[2]:75–76[8]:5
Economic globalization grew under free trade, starting in 1860 when the United Kingdom entered into a free trade agreement with France known as the Cobden–Chevalier Treaty. However, the golden age of this wave of globalization endured a return to protectionism between 1880 and 1914. In 1879, German Chancellor Otto von Bismarck introduced protective tariffs on agricultural and manufacturing goods, making Germany the first nation to institute new protective trade policies. In 1892, France introduced the Méline tariff, greatly raising customs duties on both agricultural and manufacturing goods. The United States maintained strong protectionism during most of the nineteenth century, imposing customs duties between 40 and 50% on imported goods. Despite these measures, international trade continued to grow without slowing. Paradoxically, foreign trade grew at a much faster rate during the protectionist phase of the first wave of globalization than during the free trade phase sparked by the United Kingdom.[2]:76–77
Unprecedented growth in foreign investment from the 1880s to the 1900s served as the core driver of financial globalization. The worldwide total of capital invested abroad amounted to US$44 billion in 1913 ($1.02 trillion in 2012 dollars[9]), with the greatest share of foreign assets held by the United Kingdom (42%), France (20%), Germany (13%), and the United States (8%). The Netherlands, Belgium, and Switzerland together held foreign investments on par with Germany at around 12%.[2]:77–78

Panic of 1907

A crowd forms on Wall Street during the Panic of 1907.
In October 1907, the United States experienced a bank run on the Knickerbocker Trust Company, forcing the trust to close on October 23, 1907, provoking further reactions. The panic was alleviated when U.S. Secretary of the Treasury George B. Cortelyou and John Pierpont "J.P." Morgan deposited $25 million and $35 million, respectively, into the reserve banks of New York City, enabling withdrawals to be fully covered. The bank run in New York led to a money market crunch which occurred simultaneously as demands for credit heightened from cereal and grain exporters. Since these demands could only be serviced through the purchase of substantial quantities of gold in London, the international markets became exposed to the crisis. The Bank of England had to sustain an artificially high discount lending rate until 1908. To service the flow of gold to the United States, the Bank of England organized a pool from among twenty-four different nations, for which the Banque de France temporarily lent £3 million (£305.6 million in 2012 pounds[10]) in gold.[2]:123–124

Birth of the U.S. Federal Reserve System: 1913

The United States Congress passed the Federal Reserve Act in 1913, giving rise to the Federal Reserve System. Its inception drew influence from the Panic of 1907, underpinning legislators' hesitance in trusting individual investors, such as John Pierpont Morgan, to serve again as a lender of last resort. The system's design also considered the findings of the Pujo Committee's investigation of the possibility of a money trust in which Wall Street's concentration of influence over national financial matters was questioned and in which investment bankers were suspected of unusually deep involvement in the directorates of manufacturing corporations. Although the committee's findings were inconclusive, the very possibility was enough to motivate support for the long-resisted notion of establishing a central bank. The Federal Reserve's overarching aim was to become the sole lender of last resort and to resolve the inelasticity of the United States' money supply during significant shifts in money demand. In addition to addressing the underlying issues that precipitated the international ramifications of the 1907 money market crunch, New York's banks were liberated from the need to maintain their own reserves and began undertaking greater risks. New access to rediscount facilities enabled them to launch foreign branches, bolstering New York's rivalry with London's competitive discount market.

Interwar period: 1915–1944

German infantry crossing a battlefield in France in August 1914.
British soldiers resting before the Battle of Mons with German troops along the French border in August 1914.
Economists have referred to the onset of World War I as the end of an age of innocence for foreign exchange markets, as it was the first geopolitical conflict to have a destabilizing and paralyzing impact. The United Kingdom declared war on Germany on August 4, 1914 following Germany's invasion of France and Belgium. In the weeks prior, the foreign exchange market in London was the first to exhibit distress. European tensions and increasing political uncertainty motivated investors to chase liquidity, prompting commercial banks to borrow heavily from London's discount market. As the money market tightened, discount lenders began rediscounting their reserves at the Bank of England rather than discounting new pounds sterling. The Bank of England was forced to raise discount rates daily for three days from 3% on July 30 to 10% by August 1. As foreign investors resorted to buying pounds for remittance to London just to pay off their newly maturing securities, the sudden demand for pounds led the pound to appreciate beyond its gold value against most major currencies, yet sharply depreciate against the French franc after French banks began liquidating their London accounts. Remittance to London became increasingly difficult and culminated in a record exchange rate of $6.50 USD/GBP. Emergency measures were introduced in the form of moratoria and extended bank holidays, but to no effect as financial contracts became informally unable to be negotiated and export embargoes thwarted gold shipments. A week later, the Bank of England began to address the deadlock in the foreign exchange markets by establishing a new channel for transatlantic payments whereby participants could make remittance payments to the U.K. by depositing gold designated for a Bank of England account with Canada's Minister of Finance, and in exchange receive pounds sterling at an exchange rate of $4.90. Approximately $104 million USD in remittances flowed through this channel in the next two months. However, pound sterling liquidity ultimately did not improve due to inadequate relief for merchant banks receiving sterling bills. As the pound sterling was the world's reserve currency and leading vehicle currency, market illiquidity and merchant banks' hesitance to accept sterling bills left currency markets paralyzed.[11]:23–24
The U.K. government attempted several measures to revive the London foreign exchange market, the most notable of which were implemented on September 5 to extend the previous moratorium through October and allow the Bank of England to temporarily loan funds to be paid back upon the end of the war in an effort to settle outstanding or unpaid acceptances for currency transactions. By mid-October, the London market began functioning properly as a result of the September measures. The war continued to present unfavorable circumstances for the foreign exchange market, such as the London Stock Exchange's prolonged closure, the redirection of economic resources to support a transition from producing exports to producing military armaments, and myriad disruptions of freight and mail. The pound sterling enjoyed general stability throughout World War I, in large part due to various steps taken by the U.K. government to influence the pound's value in ways that yet provided individuals with the freedom to continue trading currencies. Such measures included open market interventions on foreign exchange, borrowing in foreign currencies rather than in pounds sterling to finance war activities, outbound capital controls, and limited import restrictions.
In 1930, the Allied powers established the Bank for International Settlements (BIS). The principal purposes of the BIS were to manage the scheduled payment of Germany's reparations imposed by the Treaty of Versailles in 1919, and to function as a bank for central banks around the world. Nations may hold a portion of their reserves as deposits with the institution. It also serves as a forum for central bank cooperation and research on international monetary and financial matters. The BIS also operates as a general trustee and facilitator of financial settlements between nations.

Smoot–Hawley tariff of 1930

U.S. President Herbert Hoover signed the Smoot–Hawley Tariff Act into law on June 17, 1930. The tariff's aim was to protect agriculture in the United States, but congressional representatives ultimately raised tariffs on a host of manufactured goods resulting in average duties as high as 53% on over a thousand various goods. Twenty-five trading partners responded in kind by introducing new tariffs on a wide range of U.S. goods. Hoover was pressured and compelled to adhere to the Republican Party's 1928 platform, which sought protective tariffs to alleviate market pressures on the nation's struggling agribusinesses and reduce the domestic unemployment rate. The culmination of the Stock Market Crash of 1929 and the onset of the Great Depression heightened fears, further pressuring Hoover to act on protective policies against the advice of Henry Ford and over 1,000 economists who protested by calling for a veto of the act.[8]:175–176[15]:186–187[16]:43–44 Exports from the United States plummeted 60% from 1930 to 1933.[8]:118 Worldwide international trade virtually ground to a halt.[17]:125–126 The international ramifications of the Smoot-Hawley tariff, comprising protectionist and discriminatory trade policies and bouts of economic nationalism, are credited by economists with prolongment and worldwide propagation of the Great Depression.[3]:2[17]:108[18]:33

Formal abandonment of the Gold Standard

Income per capita throughout the Great Depression as viewed from an international perspective. Triangles mark points at which nations abandoned the gold standard by suspending gold convertibility or devaluing their currencies against gold.
The classical gold standard was established in 1821 by the United Kingdom as the Bank of England enabled redemption of its banknotes for gold bullion. France, Germany, the United States, Russia, and Japan each embraced the standard one by one from 1878 to 1897, marking its international acceptance. The first departure from the standard occurred in August 1914 when these nations erected trade embargoes on gold exports and suspended redemption of gold for banknotes. Following the end of World War I on November 11, 1918, Austria, Hungary, Germany, Russia, and Poland began experiencing hyperinflation. Having informally departed from the standard, most currencies were freed from exchange rate fixing and allowed to float. Most countries throughout this period sought to gain national advantages and bolster exports by depreciating their currency values to predatory levels. A number of countries, including the United States, made unenthusiastic and uncoordinated attempts to restore the former gold standard. The early years of the Great Depression brought about bank runs in the United States, Austria, and Germany, which placed pressures on gold reserves in the United Kingdom to such a degree that the gold standard became unsustainable. Germany became the first nation to formally abandon the post-World War I gold standard when the Dresdner Bank implemented foreign exchange controls and announced bankruptcy on July 15, 1931. In September 1931, the United Kingdom allowed the pound sterling to float freely. By the end of 1931, a host of countries including Austria, Canada, Japan, and Sweden abandoned gold. Following widespread bank failures and a hemorrhaging of gold reserves, the United States broke free of the gold standard in April 1933. France would not follow suit until 1936 as investors fled from the franc due to political concerns over Prime Minister Léon Blum's government.

Trade liberalization in the United States

The disastrous effects of the Smoot–Hawley tariff proved difficult for Herbert Hoover's 1932 re-election campaign. Franklin D. Roosevelt became the 32nd U.S. president and the Democratic Party worked to reverse trade protectionism in favor of trade liberalization. As an alternative to cutting tariffs across all imports, Democrats advocated for trade reciprocity. The U.S. Congress passed the Reciprocal Trade Agreements Act in 1934, aimed at restoring global trade and reducing unemployment. The legislation expressly authorized President Roosevelt to negotiate bilateral trade agreements and reduce tariffs considerably. If a country agreed to cut tariffs on certain commodities, the U.S. would institute corresponding cuts to promote trade between the two nations. Between 1934 and 1947, the U.S. negotiated 29 such agreements and the average tariff rate decreased by approximately one third during this same period. The legislation contained an important most-favored-nation clause, through which tariffs were equalized to all countries, such that trade agreements would not result in preferential or discriminatory tariff rates with certain countries on any particular import, due to the difficulties and inefficiencies associated with differential tariff rates. The clause effectively generalized tariff reductions from bilateral trade agreements, ultimately reducing worldwide tariff rates.

Rise of the Bretton Woods financial order: 1945

Assistant U.S. Treasury Secretary, Harry Dexter White (left) and John Maynard Keynes, honorary adviser to the U.K. Treasury at the inaugural meeting of the International Monetary Fund's Board of Governors in Savannah, Georgia, U.S., March 8, 1946.
As the inception of the United Nations as an intergovernmental entity slowly began formalizing in 1944, delegates from 44 of its early member states met at a hotel in Bretton Woods, New Hampshire for the United Nations Monetary and Financial Conference, now commonly referred to as the Bretton Woods conference. Delegates remained cognizant of the effects of the Great Depression, struggles to sustain the international gold standard during the 1930s, and related market instabilities. Whereas previous discourse on the international monetary system focused on fixed versus floating exchange rates, Bretton Woods delegates favored pegged exchange rates for their flexibility. Under this system, nations would peg their exchange rates to the U.S. dollar, which would be convertible to gold at $35 USD per ounce.[8]:448[19]:34[20]:3[21]:6 This arrangement is commonly referred to as the Bretton Woods system. Rather than maintaining fixed rates, nations would peg their currencies to the U.S. dollar and allow their exchange rates to fluctuate within a 1% band of the agreed-upon parity. To meet this requirement, central banks would intervene via sales or purchases of their currencies against the dollar.[13]:491–493[15]:296[22]:21 Members could adjust their pegs in response to long-run fundamental disequilibria in the balance of payments, but were responsible for correcting imbalances via fiscal and monetary policy tools before resorting to repegging strategies. The adjustable pegging enabled greater exchange rate stability for commercial and financial transactions which fostered unprecedented growth in international trade and foreign investment. This feature grew from delegates' experiences in the 1930s when excessively volatile exchange rates and the reactive protectionist exchange controls that followed proved destructive to trade and prolonged the deflationary effects of the Great Depression. Capital mobility faced de facto limits under the system as governments instituted restrictions on capital flows and aligned their monetary policy to support their pegs.
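To make the pegging mechanics concrete, the short Python sketch below checks whether a quoted exchange rate stays within a ±1% band around a hypothetical par value and indicates the direction of central-bank intervention once the band is breached; the parity and the quoted rates are illustrative assumptions, not historical figures.

# Minimal sketch of a Bretton Woods-style peg check; all values are illustrative.

PARITY_PER_USD = 4.20   # hypothetical par value: units of a domestic currency per U.S. dollar
BAND = 0.01             # the +/-1% fluctuation band agreed at Bretton Woods

def within_band(quote, parity=PARITY_PER_USD, band=BAND):
    """Return True if the quoted rate lies inside the band around parity."""
    lower, upper = parity * (1 - band), parity * (1 + band)
    return lower <= quote <= upper

def intervention(quote):
    """Describe the stabilizing action a central bank would take at the band's edge."""
    if within_band(quote):
        return "no intervention needed"
    if quote > PARITY_PER_USD:
        # Domestic currency too weak against the dollar: sell dollar reserves, buy domestic currency.
        return "sell USD reserves / buy domestic currency"
    # Domestic currency too strong against the dollar: buy dollars, sell domestic currency.
    return "buy USD / sell domestic currency"

for q in (4.18, 4.25, 4.10):
    print(q, "->", intervention(q))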
An important component of the Bretton Woods agreements was the creation of two new international financial institutions, the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development (IBRD). Collectively referred to as the Bretton Woods institutions, they became operational in 1947 and 1946 respectively. The IMF was established to support the monetary system by facilitating cooperation on international monetary issues, providing advisory and technical assistance to members, and offering emergency lending to nations experiencing repeated difficulties restoring the balance of payments equilibrium. Members would contribute funds to a pool according to their share of gross world product, from which emergency loans could be issued.[22]:21[27]:9–10[28]:20–22 Member states were authorized and encouraged to employ capital controls as necessary to manage payments imbalances and meet pegging targets, but prohibited from relying on IMF financing to cover particularly short-term capital hemorrhages.[24]:38 While the IMF was instituted to guide members and provide a short-term financing window for recurrent balance of payments deficits, the IBRD was established to serve as a type of financial intermediary for channeling global capital toward long-term investment opportunities and postwar reconstruction projects.[29]:22 The creation of these organizations was a crucial milestone in the evolution of the international financial architecture, and some economists consider it the most significant achievement of multilateral cooperation following World War II.[24]:39[30]:1–3 Since the establishment of the International Development Association (IDA) in 1960, the IBRD and IDA are together known as the World Bank. While the IBRD lends to middle-income developing countries, the IDA extends the Bank's lending program by offering concessional loans and grants to the world's poorest nations.[31]

General Agreement on Tariffs and Trade: 1947

In 1947, 23 countries concluded the General Agreement on Tariffs and Trade (GATT) at a UN conference in Geneva. Delegates intended the agreement to suffice while member states would negotiate creation of a UN body to be known as the International Trade Organization (ITO). As the ITO never became ratified, GATT became the de facto framework for later multilateral trade negotiations. Members emphasized trade reciprocity as an approach to lowering barriers in pursuit of mutual gains.[16]:46 The agreement's structure enabled its signatories to codify and enforce regulations for trading of goods and services.[32]:11 GATT was centered on two precepts: trade relations needed to be equitable and nondiscriminatory, and subsidizing non-agricultural exports needed to be prohibited. As such, the agreement's most favored nation clause prohibited members from offering preferential tariff rates to any nation that it would not otherwise offer to fellow GATT members. In the event of any discovery of non-agricultural subsidies, members were authorized to offset such policies by enacting countervailing tariffs.[13]:460 The agreement provided governments with a transparent structure for managing trade relations and avoiding protectionist pressures.[17]:108 However, GATT's principles did not extend to financial activity, consistent with the era's rigid discouragement of capital movements.[33]:70–71 The agreement's initial round achieved only limited success in reducing tariffs. While the U.S. reduced its tariffs by one third, other signatories offered much smaller trade concessions.

Resurgence of financial globalization

World reserves of foreign exchange and gold in billions of U.S. dollars in 2009.
Although the exchange rate stability sustained by the Bretton Woods system facilitated expanding international trade, this early success masked its underlying design flaw, wherein there existed no mechanism for increasing the supply of international reserves to support continued growth in trade.[22]:22 The system began experiencing insurmountable market pressures and deteriorating cohesion among its key participants in the late 1950s and early 1960s. Central banks needed more U.S. dollars to hold as reserves, but were unable to expand their money supplies if doing so meant exceeding their dollar reserves and threatening their exchange rate pegs. To accommodate these needs, the Bretton Woods system depended on the United States to run dollar deficits. As a consequence, the dollar's value began exceeding its gold backing. During the early 1960s, investors could sell gold for a greater dollar exchange rate in London than in the United States, signaling to market participants that the dollar was overvalued. Belgian-American economist Robert Triffin defined this problem now known as the Triffin dilemma, in which a country's national economic interests conflict with its international objectives as the custodian of the world's reserve currency.[19]:34–35
France voiced concerns over the artificially low price of gold in 1968 and called for returns to the former gold standard. Meanwhile, excess dollars flowed into international markets as the United States expanded its money supply to accommodate the costs of its military campaign in the Vietnam War. Its gold reserves were assaulted by speculative investors following its first current account deficit since the 19th century. In August 1971, President Richard Nixon suspended the exchange of U.S. dollars for gold as part of the Nixon Shock. The closure of the gold window effectively shifted the adjustment burdens of a devalued dollar to other nations. Speculative traders chased other currencies and began selling dollars in anticipation of these currencies being revalued against the dollar. These influxes of capital presented difficulties to foreign central banks, which then faced choosing among inflationary money supplies, largely ineffective capital controls, or floating exchange rates.[19]:34–35[34]:14–15 Following these woes surrounding the U.S. dollar, the dollar price of gold was raised to $38 USD per ounce and the Bretton Woods system was modified to allow fluctuations within an augmented band of 2.25% as part of the Smithsonian Agreement signed by the G-10 members in December 1971. The agreement delayed the system's demise for a further two years.[21]:6–7 The system's erosion was expedited not only by the dollar devaluations that occurred, but also by the oil crises of the 1970s which emphasized the importance of international financial markets in petrodollar recycling and balance of payments financing. Once the world's reserve currency began to float, other nations began adopting floating exchange rate regimes.[14]:5–7
The post-Bretton Woods financial order: 1976
Headquarters of the International Monetary Fund in Washington, D.C.
As part of the first amendment to its articles of agreement in 1969, the IMF developed a new reserve instrument called special drawing rights (SDRs), which could be held by central banks and exchanged among themselves and the Fund as an alternative to gold. SDRs entered service in 1970 originally as units of a market basket of sixteen major vehicle currencies of countries whose share of total world exports exceeded 1%. The basket's composition changed over time and presently consists of the U.S. dollar, euro, Japanese yen, Chinese yuan, and British pound. Beyond holding them as reserves, nations can denominate transactions among themselves and the Fund in SDRs, although the instrument is not a vehicle for trade. In international transactions, the currency basket's portfolio characteristic affords greater stability against the uncertainties inherent with free floating exchange rates.[18]:34–35[24]:50–51[25]:117[27]:10 Special drawing rights were originally equivalent to a specified amount of gold, but were not directly redeemable for gold and instead served as a surrogate in obtaining other currencies that could be exchanged for gold. The Fund initially issued 9.5 billion XDR from 1970 to 1972.[29]:182–183
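As an illustration of how a basket instrument such as the SDR is valued, the Python sketch below sums hypothetical currency amounts converted at assumed market rates; the amounts and exchange rates are placeholders for illustration only, not the IMF's official basket composition or weights.

# Sketch of valuing an SDR-style currency basket in U.S. dollars.
# Currency amounts and exchange rates are placeholders, not the IMF's official figures.

basket_amounts = {   # hypothetical units of each currency per one basket unit
    "USD": 0.58, "EUR": 0.38, "CNY": 1.02, "JPY": 11.90, "GBP": 0.086,
}
usd_per_unit = {     # assumed market exchange rates, U.S. dollars per unit of currency
    "USD": 1.00, "EUR": 1.08, "CNY": 0.14, "JPY": 0.0067, "GBP": 1.27,
}

def basket_value_in_usd(amounts, rates):
    """Sum each currency amount converted to dollars at the quoted rate."""
    return sum(amount * rates[ccy] for ccy, amount in amounts.items())

print(round(basket_value_in_usd(basket_amounts, usd_per_unit), 4))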
IMF members signed the Jamaica Agreement in January 1976, which ratified the end of the Bretton Woods system and reoriented the Fund's role in supporting the international monetary system. The agreement officially embraced the flexible exchange rate regimes that emerged after the failure of the Smithsonian Agreement measures. In tandem with floating exchange rates, the agreement endorsed central bank interventions aimed at clearing excessive volatility. The agreement retroactively formalized the abandonment of gold as a reserve instrument and the Fund subsequently demonetized its gold reserves, returning gold to members or selling it to provide poorer nations with relief funding. Developing countries and countries not endowed with oil export resources enjoyed greater access to IMF lending programs as a result. The Fund continued assisting nations experiencing balance of payments deficits and currency crises, but began imposing conditionality on its funding that required countries to adopt policies aimed at reducing deficits through spending cuts and tax increases, reducing protective trade barriers, and contractionary monetary policy.[18]:36[28]:47–48[35]:12–13
The second amendment to the articles of agreement was signed in 1978. It legally formalized the free-floating acceptance and gold demonetization achieved by the Jamaica Agreement, and required members to support stable exchange rates through macroeconomic policy. The post-Bretton Woods system was decentralized in that member states retained autonomy in selecting an exchange rate regime. The amendment also expanded the institution's capacity for oversight and charged members with supporting monetary sustainability by cooperating with the Fund on regime implementation.[24]:62–63[25]:138 This role is called IMF surveillance and is recognized as a pivotal point in the evolution of the Fund's mandate, which was extended beyond balance of payments issues to broader concern with internal and external stresses on countries' overall economic policies.[25]:148[30]:10–11
Under the dominance of flexible exchange rate regimes, the foreign exchange markets became significantly more volatile. In 1980, newly elected U.S. President Ronald Reagan's administration brought about increasing balance of payments deficits and budget deficits. To finance these deficits, the United States offered artificially high real interest rates to attract large inflows of foreign capital. As foreign investors' demand for U.S. dollars grew, the dollar's value appreciated substantially until reaching its peak in February 1985. The U.S. trade deficit grew to $160 billion in 1985 ($341 billion in 2012 dollars[9]) as a result of the dollar's strong appreciation. The G5 met in September 1985 at the Plaza Hotel in New York City and agreed that the dollar should depreciate against the major currencies to resolve the United States' trade deficit and pledged to support this goal with concerted foreign exchange market interventions, in what became known as the Plaza Accord. The U.S. dollar continued to depreciate, but industrialized nations became increasingly concerned that it would decline too heavily and that exchange rate volatility would increase. To address these concerns, the G7 (now G8) held a summit in Paris in 1987, where they agreed to pursue improved exchange rate stability and better coordinate their macroeconomic policies, in what became known as the Louvre Accord. This accord became the provenance of the managed float regime by which central banks jointly intervene to resolve under- and overvaluations in the foreign exchange market to stabilize otherwise freely floating currencies. Exchange rates stabilized following the embrace of managed floating during the 1990s, with a strong U.S. economic performance from 1997 to 2000 during the Dot-com bubble. After the 2000 stock market correction of the Dot-com bubble, the country's trade deficit grew, the September 11 attacks increased political uncertainties, and the dollar began to depreciate in 2001.
European Monetary System: 1979
Following the Smithsonian Agreement, member states of the European Economic Community adopted a narrower currency band of 1.125% for exchange rates among their own currencies, creating a smaller scale fixed exchange rate system known as the snake in the tunnel. The snake proved unsustainable as it did not compel EEC countries to coordinate macroeconomic policies. In 1979, the European Monetary System (EMS) phased out the currency snake. The EMS featured two key components: the European Currency Unit (ECU), an artificial weighted average market basket of European Union members' currencies, and the Exchange Rate Mechanism (ERM), a procedure for managing exchange rate fluctuations in keeping with a calculated parity grid of currencies' par values.[11]:130[18]:42–44[37]:185 The parity grid was derived from parities each participating country established for its currency with all other currencies in the system, denominated in terms of ECUs. The weights within the ECU changed in response to variances in the values of each currency in its basket. Under the ERM, if an exchange rate reached its upper or lower limit (within a 2.25% band), both nations in that currency pair were obligated to intervene collectively in the foreign exchange market and buy or sell the under- or overvalued currency as necessary to return the exchange rate to its par value according to the parity matrix. The requirement of cooperative market intervention marked a key difference from the Bretton Woods system. Similarly to Bretton Woods however, EMS members could impose capital controls and other monetary policy shifts on countries responsible for exchange rates approaching their bounds, as identified by a divergence indicator which measured deviations from the ECU's value. The central exchange rates of the parity grid could be adjusted in exceptional circumstances, and were modified every eight months on average during the system's initial four years of operation.[25]:160 During its twenty-year lifespan, these central rates were adjusted over 50 times.
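The parity-grid mechanics can be sketched in a few lines of Python: given hypothetical central rates against the ECU, each bilateral parity is the ratio of the two central rates, and a market cross rate is flagged once it leaves the ±2.25% band, at which point both central banks would be obliged to intervene. The currencies and central rates below are illustrative assumptions, not the historical EMS values.

# Sketch of an ERM-style parity grid built from hypothetical central rates against the ECU.

central_per_ecu = {"DEM": 2.05, "FRF": 6.90, "NLG": 2.31}   # illustrative central rates per 1 ECU
BAND = 0.0225                                               # the +/-2.25% fluctuation band

def parity(base, quote):
    """Bilateral central rate: units of `quote` currency per unit of `base` currency."""
    return central_per_ecu[quote] / central_per_ecu[base]

def breaches_band(base, quote, market_rate):
    """True if the market cross rate lies outside the band around its grid parity."""
    par = parity(base, quote)
    return not (par * (1 - BAND) <= market_rate <= par * (1 + BAND))

print(round(parity("DEM", "FRF"), 4))      # central FRF-per-DEM parity derived from the grid
print(breaches_band("DEM", "FRF", 3.48))   # True: both central banks would need to intervene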

Birth of the World Trade Organization: 1994

WTO Fourth Global Review of Aid for Trade: "Connecting to value chains" - 8–10 July 2013.[38]
The Uruguay Round of GATT multilateral trade negotiations took place from 1986 to 1994, with 123 nations becoming party to agreements achieved throughout the negotiations. Among the achievements were trade liberalization in agricultural goods and textiles, the General Agreement on Trade in Services, and agreements on intellectual property rights issues. The key manifestation of this round was the Marrakech Agreement signed in April 1994, which established the World Trade Organization (WTO). The WTO is a chartered multilateral trade organization, charged with continuing the GATT mandate to promote trade, govern trade relations, and prevent damaging trade practices or policies. It became operational in January 1995. Compared with its GATT secretariat predecessor, the WTO features an improved mechanism for settling trade disputes since the organization is membership-based and not dependent on consensus as in traditional trade negotiations. This function was designed to address prior weaknesses, whereby parties in dispute would invoke delays, obstruct negotiations, or fall back on weak enforcement.[8]:181[13]:459–460[16]:47 In 1997, WTO members reached an agreement which committed to softer restrictions on commercial financial services, including banking services, securities trading, and insurance services. These commitments entered into force in March 1999, consisting of 70 governments accounting for approximately 95% of worldwide financial services.[39]

Financial integration and systemic crises: 1980–present

Number of countries experiencing a banking crisis in each year since 1800. This covers 70 countries. The dramatic feature of this graph is the virtual absence of banking crises during the period of the Bretton Woods system, 1945 to 1971. This analysis is similar to Figure 10.1 in Rogoff and Reinhart (2009).[40]
Financial integration among industrialized nations grew substantially during the 1980s and 1990s, as did liberalization of their capital accounts.[24]:15 Integration among financial markets and banks rendered benefits such as greater productivity and the broad sharing of risk in the macroeconomy. The resulting interdependence also carried a substantive cost in terms of shared vulnerabilities and increased exposure to systemic risks.[41]:440–441 Accompanying financial integration in recent decades was a succession of deregulation, in which countries increasingly abandoned regulations over the behavior of financial intermediaries and simplified requirements of disclosure to the public and to regulatory authorities.[14]:36–37 As economies became more open, nations became increasingly exposed to external shocks. Economists have argued greater worldwide financial integration has resulted in more volatile capital flows, thereby increasing the potential for financial market turbulence. Given greater integration among nations, a systemic crisis in one can easily infect others. The 1980s and 1990s saw a wave of currency crises and sovereign defaults, including the 1987 Black Monday stock market crashes, 1992 European Monetary System crisis, 1994 Mexican peso crisis, 1997 Asian currency crisis, 1998 Russian financial crisis, and the 1998–2002 Argentine peso crisis. These crises differed in terms of their breadth, causes, and aggravations, among which were capital flights brought about by speculative attacks on fixed exchange rate currencies perceived to be mispriced given a nation's fiscal policy,[14]:83 self-fulfilling speculative attacks by investors expecting other investors to follow suit given doubts about a nation's currency peg,[42]:7 lack of access to developed and functioning domestic capital markets in emerging market countries,[30]:87 and current account reversals during conditions of limited capital mobility and dysfunctional banking systems.
Following research of systemic crises that plagued developing countries throughout the 1990s, economists have reached a consensus that liberalization of capital flows carries important prerequisites if these countries are to observe the benefits offered by financial globalization. Such conditions include stable macroeconomic policies, healthy fiscal policy, robust bank regulations, and strong legal protection of property rights. Economists largely favor adherence to an organized sequence of encouraging foreign direct investment, liberalizing domestic equity capital, and embracing capital outflows and short-term capital mobility only once the country has achieved functioning domestic capital markets and established a sound regulatory framework.[14]:25[24]:113 An emerging market economy must develop a credible currency in the eyes of both domestic and international investors to realize benefits of globalization such as greater liquidity, greater savings at higher interest rates, and accelerated economic growth. If a country embraces unrestrained access to foreign capital markets without maintaining a credible currency, it becomes vulnerable to speculative capital flights and sudden stops, which carry serious economic and social costs.[34]:xii
Countries sought to improve the sustainability and transparency of the global financial system in response to crises in the 1980s and 1990s. The Basel Committee on Banking Supervision was formed in 1974 by the G-10 members' central bank governors to facilitate cooperation on the supervision and regulation of banking practices. It is headquartered at the Bank for International Settlements in Basel, Switzerland. The committee has held several rounds of deliberation known collectively as the Basel Accords. The first of these accords, known as Basel I, took place in 1988 and emphasized credit risk and the assessment of different asset classes. Basel I was motivated by concerns over whether large multinational banks were appropriately regulated, stemming from observations during the 1980s Latin American debt crisis. Following Basel I, the committee published recommendations on new capital requirements for banks, which the G-10 nations implemented four years later. In 1999, the G-10 established the Financial Stability Forum (reconstituted by the G-20 in 2009 as the Financial Stability Board) to facilitate cooperation among regulatory agencies and promote stability in the global financial system. The Forum was charged with developing and codifying twelve international standards and implementation thereof. The Basel II accord was set in 2004 and again emphasized capital requirements as a safeguard against systemic risk as well as the need for global consistency in banking regulations so as not to competitively disadvantage banks operating internationally. It was motivated by what were seen as inadequacies of the first accord such as insufficient public disclosure of banks' risk profiles and oversight by regulatory bodies. Members were slow to implement it, with major efforts by the European Union and United States taking place as late as 2007 and 2008. In 2010, the Basel Committee revised the capital requirements in a set of enhancements to Basel II known as Basel III, which centered on a leverage ratio requirement aimed at restricting excessive leveraging by banks. In addition to strengthening the ratio, Basel III modified the formulas used to weight risk and compute the capital thresholds necessary to mitigate the risks of bank holdings, concluding the capital threshold should be set at 7% of the value of a bank's risk-weighted assets.
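As a rough illustration of the risk-weighting logic behind these accords, the sketch below computes risk-weighted assets for a hypothetical bank balance sheet and compares the resulting capital ratio against the 7% threshold mentioned above, alongside a simple unweighted leverage ratio; the exposures, capital figure, and risk weights are illustrative assumptions, not regulatory values.

# Sketch of a Basel-style capital check on a hypothetical balance sheet (figures in billions).
# Exposures and risk weights are illustrative assumptions, not regulatory values.

exposures = [                       # (asset class, exposure, risk weight)
    ("sovereign bonds", 40.0, 0.00),
    ("residential mortgages", 120.0, 0.35),
    ("corporate loans", 90.0, 1.00),
]
capital = 10.5                      # common equity capital

risk_weighted_assets = sum(amount * weight for _, amount, weight in exposures)
total_exposure = sum(amount for _, amount, _ in exposures)

capital_ratio = capital / risk_weighted_assets   # compared against the 7% threshold above
leverage_ratio = capital / total_exposure        # unweighted check on excessive leveraging

print(f"risk-weighted assets: {risk_weighted_assets:.1f}")
print(f"capital ratio: {capital_ratio:.2%}  (threshold: 7%)")
print(f"leverage ratio: {leverage_ratio:.2%}")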

Birth of the European Economic and Monetary Union: 1992

In February 1992, European Union countries signed the Maastricht Treaty which outlined a three-stage plan to accelerate progress toward an Economic and Monetary Union (EMU). The first stage centered on liberalizing capital mobility and aligning macroeconomic policies between countries. The second stage established the European Monetary Institute which was ultimately dissolved in tandem with the establishment in 1998 of the European Central Bank (ECB) and European System of Central Banks. Key to the Maastricht Treaty was the outlining of convergence criteria that EU members would need to satisfy before being permitted to proceed. The third and final stage introduced a common currency for circulation known as the Euro, adopted by eleven of then-fifteen members of the European Union in January 1999. In doing so, they disaggregated their sovereignty in matters of monetary policy. These countries continued to circulate their national legal tenders, exchangeable for euros at fixed rates, until 2002 when the ECB began issuing official Euro coins and notes. As of 2011, the EMU comprises 17 nations which have issued the Euro, and 11 non-Euro states.

Global financial crisis

Following the market turbulence of the 1990s financial crises and September 11 attacks on the U.S. in 2001, financial integration intensified among developed nations and emerging markets, with substantial growth in capital flows among banks and in the trading of financial derivatives and structured finance products. Worldwide international capital flows grew from $3 trillion to $11 trillion U.S. dollars from 2002 to 2007, primarily in the form of short-term money market instruments. The United States experienced growth in the size and complexity of firms engaged in a broad range of financial services across borders in the wake of the Gramm–Leach–Bliley Act of 1999 which repealed the Glass–Steagall Act of 1933, ending limitations on commercial banks' investment banking activity. Industrialized nations began relying more on foreign capital to finance domestic investment opportunities, resulting in unprecedented capital flows to advanced economies from developing countries, as reflected by global imbalances which grew to 6% of gross world product in 2007 from 3% in 2001.
The global financial crisis that erupted in 2007 and 2008 shared some of the key features exhibited by the wave of international financial crises in the 1990s, including accelerated capital influxes, weak regulatory frameworks, relaxed monetary policies, herd behavior during investment bubbles, collapsing asset prices, and massive deleveraging. The systemic problems originated in the United States and other advanced nations. Similarly to the 1997 Asian crisis, the global crisis entailed broad lending by banks undertaking unproductive real estate investments as well as poor standards of corporate governance within financial intermediaries. Particularly in the United States, the crisis was characterized by growing securitization of non-performing assets, large fiscal deficits, and excessive financing in the housing sector. While the real estate bubble in the U.S. triggered the financial crisis, the bubble was financed by foreign capital flowing from many different countries. As its contagious effects began infecting other nations, the crisis became a precursor for the global economic downturn now referred to as the Great Recession. In the wake of the crisis, total volume of world trade in goods and services fell 10% from 2008 to 2009 and did not recover until 2011, with an increased concentration in emerging market countries. The global financial crisis demonstrated the negative effects of worldwide financial integration, sparking discourse on how and whether some countries should decouple themselves from the system altogether.

Eurozone crisis

In 2009, a newly elected government in Greece revealed the falsification of its national budget data, and that its fiscal deficit for the year was 12.7% of GDP as opposed to the 3.7% espoused by the previous administration. This news alerted markets to the fact that Greece's deficit exceeded the eurozone's maximum of 3% outlined in the Economic and Monetary Union's Stability and Growth Pact. Investors concerned about a possible sovereign default rapidly sold Greek bonds. Given Greece's prior decision to embrace the euro as its currency, it no longer held monetary policy autonomy and could not intervene to depreciate a national currency to absorb the shock and boost competitiveness, as was the traditional solution to sudden capital flight. The crisis proved contagious when it spread to Portugal, Italy, and Spain (together with Greece these are collectively referred to as the PIGS). Ratings agencies downgraded these countries' debt instruments in 2010 which further increased the costliness of refinancing or repaying their national debts. The crisis continued to spread and soon grew into a European sovereign debt crisis which threatened economic recovery in the wake of the Great Recession. In tandem with the IMF, the European Union members assembled a €750 billion bailout for Greece and other afflicted nations. Additionally, the ECB pledged to purchase bonds from troubled eurozone nations in an effort to mitigate the risk of a banking system panic. The crisis is recognized by economists as highlighting the depth of financial integration in Europe, contrasted with the lack of fiscal integration and political unification necessary to prevent or decisively respond to crises. During the initial waves of the crisis, the public speculated that the turmoil could result in a disintegration of the eurozone and an abandonment of the euro. German Federal Minister of Finance Wolfgang Schäuble called for the expulsion of offending countries from the eurozone. Now commonly referred to as the Eurozone crisis, it has been ongoing since 2009 and most recently began encompassing the 2012–13 Cypriot financial crisis.

Implications of globalized capital

Balance of payments

The top five annual current account deficits and surpluses in billions of U.S. dollars for the year 2012 based on data from the Organisation for Economic Co-operation and Development.
The balance of payments accounts summarize payments made to or received from foreign countries. Receipts are considered credit transactions while payments are considered debit transactions. The balance of payments is a function of three components: transactions involving export or import of goods and services form the current account, transactions involving purchase or sale of financial assets form the financial account, and transactions involving unconventional transfers of wealth form the capital account. The current account summarizes three variables: the trade balance, net factor income from abroad, and net unilateral transfers. The financial account summarizes the value of exports versus imports of assets, and the capital account summarizes the value of asset transfers received net of transfers given. The capital account also includes the official reserve account, which summarizes central banks' purchases and sales of domestic currency, foreign exchange, gold, and SDRs for purposes of maintaining or utilizing bank reserves.
Because the balance of payments sums to zero, a current account surplus indicates a deficit in the asset accounts and vice versa. A current account surplus or deficit indicates the extent to which a country is relying on foreign capital to finance its consumption and investments, and whether it is living beyond its means. For example, assuming a capital account balance of zero (thus no asset transfers available for financing), a current account deficit of £1 billion implies a financial account surplus (or net asset exports) of £1 billion. A net exporter of financial assets is known as a borrower, exchanging future payments for current consumption. Further, a net export of financial assets indicates growth in a country's debt. From this perspective, the balance of payments links a nation's income to its spending by indicating the degree to which current account imbalances are financed with domestic or foreign financial capital, which illuminates how a nation's wealth is shaped over time. A healthy balance of payments position is important for economic growth. If countries experiencing a growth in demand have trouble sustaining a healthy balance of payments, demand can slow, leading to: unused or excess supply, discouraged foreign investment, and less attractive exports which can further reinforce a negative cycle that intensifies payments imbalances.
A country's external wealth is measured by the value of its foreign assets net of its foreign liabilities. A current account surplus (and corresponding financial account deficit) indicates an increase in external wealth while a deficit indicates a decrease. Aside from current account indications of whether a country is a net buyer or net seller of assets, shifts in a nation's external wealth are influenced by capital gains and capital losses on foreign investments. Having positive external wealth means a country is a net lender (or creditor) in the world economy, while negative external wealth indicates a net borrower (or debtor).
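The accounting identity described above can be made concrete with a small numerical sketch in Python: with the balance of payments summing to zero, a £1 billion current account deficit and a zero capital account imply a £1 billion financial account surplus and a corresponding fall in external wealth. The figures echo the worked example in the text and are purely illustrative.

# Sketch of the balance of payments identity using the £1 billion example above (billions of pounds).

current_account = -1.0   # a £1 billion current account deficit
capital_account = 0.0    # assume no unconventional wealth transfers

# The accounts sum to zero, so the financial account must offset the other two:
financial_account = -(current_account + capital_account)
assert current_account + financial_account + capital_account == 0

print(f"financial account (net asset exports): {financial_account:+.1f} bn")

# A current account deficit also means external wealth falls by the same amount.
change_in_external_wealth = current_account
print(f"change in external wealth: {change_in_external_wealth:+.1f} bn")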

Unique financial risks

Nations and international businesses face an array of financial risks unique to foreign investment activity. Political risk is the potential for losses from a foreign country's political instability or otherwise unfavorable developments, which manifests in different forms. Transfer risk emphasizes uncertainties surrounding a country's capital controls and balance of payments. Operational risk characterizes concerns over a country's regulatory policies and their impact on normal business operations. Control risk arises from uncertainties surrounding property and decision rights in the local operation of foreign direct investments.[18]:422 Credit risk implies lenders may face an absent or unfavorable regulatory framework that affords little or no legal protection of foreign investments. For example, foreign governments may commit to a sovereign default or otherwise repudiate their debt obligations to international investors without any legal consequence or recourse. Governments may decide to expropriate or nationalize foreign-held assets or enact contrived policy changes following an investor's decision to acquire assets in the host country. Country risk encompasses both political risk and credit risk, and represents the potential for unanticipated developments in a host country to threaten its capacity for debt repayment and repatriation of gains from interest and dividends.

Participants

The core economic functions of consumption, production, and investment have become highly globalized in recent decades. While consumers increasingly import foreign goods or purchase domestic goods produced with foreign inputs, businesses continue to expand production internationally to meet an increasingly globalized consumption in the world economy. International financial integration among nations has afforded investors the opportunity to diversify their asset portfolios by investing abroad.[18]:4–5 Consumers, multinational corporations, individual and institutional investors, and financial intermediaries (such as banks) are the key economic actors within the global financial system. Central banks (such as the European Central Bank or the U.S. Federal Reserve System) undertake open market operations in their efforts to realize monetary policy goals.[20]:13–15[22]:11–13,76 International financial institutions such as the Bretton Woods institutions, multilateral development banks and other development finance institutions provide emergency financing to countries in crisis, provide risk mitigation tools to prospective foreign investors, and assemble capital for development finance and poverty reduction initiatives.[24]:243 Trade organizations such as the World Trade Organization, Institute of International Finance, and the World Federation of Exchanges attempt to ease trade, facilitate the resolution of trade disputes, address economic affairs, promote standards, and sponsor research and statistics publications.

Regulatory bodies

Explicit goals of financial regulation include countries' pursuits of financial stability and the safeguarding of unsophisticated market players from fraudulent activity, while implicit goals include offering viable and competitive financial environments to world investors.[34]:57 A single nation with functioning governance, financial regulations, deposit insurance, emergency financing through discount windows, standard accounting practices, and established legal and disclosure procedures, can itself develop and grow a healthy domestic financial system. In a global context however, no central political authority exists which can extend these arrangements globally. Rather, governments have cooperated to establish a host of institutions and practices that have evolved over time and are referred to collectively as the international financial architecture.[14]:xviii[24]:2 Within this architecture, regulatory authorities such as national governments and intergovernmental organizations have the capacity to influence international financial markets. National governments may employ their finance ministries, treasuries, and regulatory agencies to impose tariffs and foreign capital controls or may use their central banks to execute a desired intervention in the open markets.[48]:17–21
Some degree of self-regulation occurs whereby banks and other financial institutions attempt to operate within guidelines set and published by multilateral organizations such as the International Monetary Fund or the Bank for International Settlements (particularly the Basel Committee on Banking Supervision and the Committee on the Global Financial System[55]).[27]:33–34 Further examples of international regulatory bodies are: the Financial Stability Board (FSB) established to coordinate information and activities among developed countries; the International Organization of Securities Commissions (IOSCO) which coordinates the regulation of financial securities; the International Association of Insurance Supervisors (IAIS) which promotes consistent insurance industry supervision; the Financial Action Task Force on Money Laundering which facilitates collaboration in battling money laundering and terrorism financing; and the International Accounting Standards Board (IASB) which publishes accounting and auditing standards. Public and private arrangements exist to assist and guide countries struggling with sovereign debt payments, such as the Paris Club and London Club. National securities commissions and independent financial regulators maintain oversight of their industries' foreign exchange market activities. Two examples of supranational financial regulators in Europe are the European Banking Authority (EBA) which identifies systemic risks and institutional weaknesses and may overrule national regulators, and the European Shadow Financial Regulatory Committee (ESFRC) which reviews financial regulatory issues and publishes policy recommendations.

Research organizations and other fora

Research and academic institutions, professional associations, and think-tanks aim to observe, model, understand, and publish recommendations to improve the transparency and effectiveness of the global financial system. For example, the independent non-partisan World Economic Forum facilitates the Global Agenda Council on the Global Financial System and Global Agenda Council on the International Monetary System, which report on systemic risks and assemble policy recommendations. The Global Financial Markets Association facilitates discussion of global financial issues among members of various professional associations around the world. The Group of Thirty (G30) formed in 1978 as a private, international group of consultants, researchers, and representatives committed to advancing understanding of international economics and global finance.

Future of the global financial system

The IMF has reported that the global financial system is on a path to improved financial stability, but faces a host of transitional challenges borne out by regional vulnerabilities and policy regimes. One challenge is managing the United States' disengagement from its accommodative monetary policy. Doing so in an elegant, orderly manner could be difficult as markets adjust to reflect investors' expectations of a new monetary regime with higher interest rates. Interest rates could rise too sharply if exacerbated by a structural decline in market liquidity from higher interest rates and greater volatility, or by structural deleveraging in short-term securities and in the shadow banking system (particularly the mortgage market and real estate investment trusts). Other central banks are contemplating ways to exit unconventional monetary policies employed in recent years. Some nations however, such as Japan, are attempting stimulus programs at larger scales to combat deflationary pressures. The Eurozone's nations implemented myriad national reforms aimed at strengthening the monetary union and alleviating stress on banks and governments. Yet some European nations such as Portugal, Italy, and Spain continue to struggle with heavily leveraged corporate sectors and fragmented financial markets in which investors face pricing inefficiency and difficulty identifying quality assets. Banks operating in such environments may need stronger provisions in place to withstand corresponding market adjustments and absorb potential losses. Emerging market economies face challenges to greater stability as bond markets indicate heightened sensitivity to monetary easing from external investors flooding into domestic markets, rendering exposure to potential capital flights brought on by heavy corporate leveraging in expansionary credit environments. Policymakers in these economies are tasked with transitioning to more sustainable and balanced financial sectors while still fostering market growth so as not to provoke investor withdrawal.
The global financial crisis and Great Recession prompted renewed discourse on the architecture of the global financial system. These events called to attention financial integration, inadequacies of global governance, and the emergent systemic risks of financial globalization. Since the establishment in 1945 of a formal international monetary system with the IMF empowered as its guardian, the world has undergone extensive changes politically and economically. This has fundamentally altered the paradigm in which international financial institutions operate, increasing the complexities of the IMF and World Bank's mandates. The lack of adherence to a formal monetary system has created a void of global constraints on national macroeconomic policies and a deficit of rule-based governance of financial activities. French economist and Executive Director of the World Economic Forum's Reinventing Bretton Woods Committee, Marc Uzan, has pointed out that some radical proposals such as a "global central bank or a world financial authority" have been deemed impractical, leading to further consideration of medium-term efforts to improve transparency and disclosure, strengthen emerging market financial climates, bolster prudential regulatory environments in advanced nations, and better moderate capital account liberalization and exchange rate regime selection in emerging markets. He has also drawn attention to calls for increased participation from the private sector in the management of financial crises and the augmenting of multilateral institutions' resources.
The Council on Foreign Relations' assessment of global finance notes that an excess of institutions with overlapping directives and limited scopes of authority, coupled with difficulty aligning national interests with international reforms, are the two key weaknesses inhibiting global financial reform. Nations do not presently enjoy a comprehensive structure for macroeconomic policy coordination, and global savings imbalances have abounded before and after the global financial crisis to the extent that the United States' status as the steward of the world's reserve currency was called into question. Post-crisis efforts to pursue macroeconomic policies aimed at stabilizing foreign exchange markets have yet to be institutionalized. The lack of international consensus on how best to monitor and govern banking and investment activity threatens the world's ability to prevent future global financial crises. The slow and often delayed implementation of banking regulations that meet Basel III criteria means most of the standards will not take effect until 2019, leaving global finance exposed to unregulated systemic risks in the interim. Despite Basel III and other efforts by the G20 to bolster the Financial Stability Board's capacity to facilitate cooperation and stabilizing regulatory changes, regulation exists predominantly at the national and regional levels.

Reform efforts

Former World Bank Chief Economist and former Chairman of the U.S. Council of Economic Advisers Joseph E. Stiglitz referred in the late 1990s to a growing consensus that something is wrong with a system having the capacity to impose high costs on a great number of people who are hardly even participants in international financial markets, neither speculating on international investments nor borrowing in foreign currencies. He argued that foreign crises have strong worldwide repercussions due in part to the phenomenon of moral hazard, particularly when many multinational firms deliberately invest in highly risky government bonds in anticipation of a national or international bailout. Although crises can be overcome by emergency financing, employing bailouts places a heavy burden on taxpayers living in the afflicted countries, and the high costs damage standards of living. Stiglitz has advocated finding means of stabilizing short-term international capital flows without adversely affecting long-term foreign direct investment which usually carries new knowledge spillover and technological advancements into economies.[66]
American economist and former Chairman of the Federal Reserve Paul Volcker has argued that the lack of global consensus on key issues threatens efforts to reform the global financial system. He has argued that quite possibly the most important issue is a unified approach to addressing failures of systemically important financial institutions, noting public taxpayers and government officials have grown disillusioned with deploying tax revenues to bail out creditors for the sake of stopping contagion and mitigating economic disaster. Volcker has expressed an array of potential coordinated measures: increased policy surveillance by the IMF and commitment from nations to adopt agreed-upon best practices, mandatory consultation from multilateral bodies leading to more direct policy recommendations, stricter controls on national qualification for emergency financing facilities (such as those offered by the IMF or by central banks), and improved incentive structures with financial penalties.
Governor of the Bank of England and former Governor of the Bank of Canada Mark Carney has described two approaches to global financial reform: shielding financial institutions from cyclic economic effects by strengthening banks individually, and defending economic cycles from banks by improving systemic resiliency. Strengthening financial institutions necessitates stronger capital requirements and liquidity provisions, as well as better measurement and management of risks. The G-20 agreed to new standards presented by the Basel Committee on Banking Supervision at its 2009 summit in Pittsburgh, Pennsylvania. The standards included leverage ratio targets to supplement other capital adequacy requirements established by Basel II. Improving the resiliency of the global financial system requires protections that enable the system to withstand singular institutional and market failures. Carney has argued that policymakers have converged on the view that institutions must bear the burden of financial losses during future financial crises, and such occurrences should be well-defined and pre-planned. He suggested other national regulators follow Canada in establishing staged intervention procedures and require banks to commit to what he termed "living wills" which would detail plans for an orderly institutional failure.[68]
World leaders at the 2010 G-20 summit in Seoul, South Korea, endorsed the Basel III standards for banking regulation.
At its 2010 summit in Seoul, South Korea, the G-20 collectively endorsed a new collection of capital adequacy and liquidity standards for banks recommended by Basel III. Andreas Dombret of the Executive Board of Deutsche Bundesbank has noted a difficulty in identifying institutions that constitute systemic importance via their size, complexity, and degree of interconnectivity within the global financial system, and that efforts should be made to identify a group of 25 to 30 indisputable globally systemic institutions. He has suggested they be held to standards higher than those mandated by Basel III, and that despite the inevitability of institutional failures, such failures should not drag with them the financial systems in which they participate. Dombret has advocated for regulatory reform that extends beyond banking regulations and has argued in favor of greater transparency through increased public disclosure and increased regulation of the shadow banking system.
President of the Federal Reserve Bank of New York and Vice Chairman of the Federal Open Market Committee William C. Dudley has argued that a global financial system regulated on a largely national basis is untenable for supporting a world economy with global financial firms. In 2011, he advocated five pathways to improving the safety and security of the global financial system: a special capital requirement for financial institutions deemed systemically important; a level playing field which discourages exploitation of disparate regulatory environments and beggar thy neighbour policies that serve "national constituencies at the expense of global financial stability"; superior cooperation among regional and national regulatory regimes with broader protocols for sharing information such as records for the trade of over-the-counter financial derivatives; improved delineation of "the responsibilities of the home versus the host country" when banks encounter trouble; and well-defined procedures for managing emergency liquidity solutions across borders including which parties are responsible for the risk, terms, and funding of such measures.


                                            International Finance Corporation  


The International Finance Corporation (IFC) is an international financial institution that offers investment, advisory, and asset-management services to encourage private-sector development in developing countries. The IFC is a member of the World Bank Group and is headquartered in Washington, D.C. It was established in 1956, as the private-sector arm of the World Bank Group, to advance economic development by investing in for-profit and commercial projects for poverty reduction and promoting development.[1][2][3] The IFC's stated aim is to create opportunities for people to escape poverty and achieve better living standards by mobilizing financial resources for private enterprise, promoting accessible and competitive markets, supporting businesses and other private-sector entities, and creating jobs and delivering necessary services to those who are poverty stricken or otherwise vulnerable.[4]
Since 2009, the IFC has focused on a set of development goals that its projects are expected to target. Its goals are to increase sustainable agriculture opportunities, improve healthcare and education, increase access to financing for microfinance and business clients, advance infrastructure, help small businesses grow revenues, and invest in climate health.
The IFC is owned and governed by its member countries but has its own executive leadership and staff that conduct its normal business operations. It is a corporation whose shareholders are member governments that provide paid-in capital and have the right to vote on its matters. Originally, it was more financially integrated with the World Bank Group, but later, the IFC was established separately and eventually became authorized to operate as a financially-autonomous entity and make independent investment decisions. It offers an array of debt and equity financing services and helps companies manage their risk exposures while refraining from participating in a management capacity. The corporation also advises companies on decision making, on evaluating their environmental and social impact, and on operating responsibly. It advises governments on building infrastructure and partnerships to further support private sector development.
The corporation is assessed by an independent evaluator each year. In 2011, its evaluation report recognized that its investments performed well and reduced poverty, but recommended that the corporation define poverty and expected outcomes more explicitly to better understand its effectiveness and approach poverty reduction more strategically. The corporation's total investments in 2011 amounted to $18.66 billion. It committed $820 million to advisory services for 642 projects in 2011, and held $24.5 billion worth of liquid assets. The IFC is in good financial standing and received the highest ratings from two independent credit rating agencies in 2010 and 2011.
The IFC comes under frequent criticism from NGOs that it is not able to track its money because of its use of financial intermediaries. For example, a 2015 report by Oxfam International and other NGOs, "The Suffering of Others," found that the IFC was not performing sufficient due diligence or managing risk in many of its investments in third-party lenders.[6]
Other criticism focuses on the IFC working excessively with large companies or wealthy individuals already able to finance their investments without help from public institutions such as the IFC, arguing that such investments do not have an adequate positive development impact. An example often cited by NGOs and critical journalists is the IFC granting financing to a Saudi prince for a five-star hotel in Ghana.


The World Bank and International Monetary Fund were designed by delegates at the Bretton Woods conference in 1944, and the World Bank, then consisting of only the International Bank for Reconstruction and Development, became operational in 1946. Robert L. Garner joined the World Bank in 1947 as a senior executive and expressed his view that private business could play an important role in international development. In 1950, Garner and his colleagues proposed establishing a new institution for the purpose of making private investments in the developing countries served by the Bank. The U.S. government encouraged the idea of an international corporation working in tandem with the World Bank to invest in private enterprises without accepting guarantees from governments, without managing those enterprises, and by collaborating with third party investors. When describing the IFC in 1955, World Bank President Eugene R. Black said that the IFC would only invest in private firms, rather than make loans to governments, and it would not manage the projects in which it invests.[8] In 1956 the International Finance Corporation became operational under the leadership of Garner. It initially had 12 staff members and $100 million ($844.9 million in 2012 dollars)[9] in capital. The corporation made its inaugural investment in 1957 by making a $2 million ($16.4 million in 2012 dollars)[9] loan to a Brazil-based affiliate of Siemens & Halske (now Siemens AG).[2] In 2007, the IFC bought an 18% stake in the Indian financial firm Angel Broking.[10] In December 2015 the IFC supported Greek banks with 150 million euros by buying shares in four of them: Alpha Bank (60 million), Eurobank (50 million), Piraeus Bank (20 million) and National Bank of Greece (20 million).[11]

Governance

The IFC is governed by its Board of Governors which meets annually and consists of one governor per member country (most often the country's finance minister or treasury secretary).[1] Each member typically appoints one governor and also one alternate.[12] Although corporate authority rests with the Board of Governors, the governors delegate most of their corporate powers and their authority over daily matters such as lending and business operations to the Board of Directors. The IFC's Board of Directors consists of 25 executive directors who meet regularly and work at the IFC's headquarters, and is chaired by the President of the World Bank Group.[13][14] The executive directors collectively represent all 184 member countries. When the IFC's Board of Directors votes on matters brought before it, each executive director's vote is weighted according to the total share capital of the member countries represented by that director.[13] The IFC's Executive Vice President and CEO oversees its overall direction and daily operations.[1] As of October 2012, Jin-Yong Cai serves as the Executive Vice President and CEO of the IFC, having been appointed by World Bank Group President Jim Yong Kim.[14] Cai is a Chinese citizen who formerly served as a managing director for Goldman Sachs and has over 20 years of financial sector experience.[15][16]
Although the IFC coordinates its activities in many areas with the other World Bank Group institutions, it generally operates independently as it is a separate entity with legal and financial autonomy, established by its own Articles of Agreement.[13] The corporation operates with a staff of over 3,400 employees, of which half are stationed in field offices across its member nations.

Functions

Investment services

The IFC's investment services consist of loans, equity, trade finance, syndicated loans, structured and securitized finance, client risk management services, treasury services, and liquidity management.[12] In its fiscal year 2010, the IFC invested $12.7 billion in 528 projects across 103 countries. Of that total investment commitment, approximately 39% ($4.9 billion) was invested into 255 projects across 58 member nations of the World Bank's International Development Association (IDA).[12]
The IFC makes loans to businesses and private projects generally with maturities of seven to twelve years.[12] It determines a suitable repayment schedule and grace period for each loan individually to meet borrowers' currency and cash flow requirements. The IFC may provide longer-term loans or extend grace periods if a project is deemed to warrant it.[17] Leasing companies and financial intermediaries may also receive loans from the IFC. Though loans have traditionally been denominated in hard currencies, the IFC has endeavored to structure loan products in local currencies.[18] Its disbursement portfolio included loans denominated in 25 local currencies in 2010, and 45 local currencies in 2011, funded largely through swap markets. Local financial markets development is one of IFC’s strategic focus areas. In line with its AAA rating, it has strict concentration, liquidity, asset-liability and other policies. The IFC committed to approximately $5.7 billion in new loans in 2010, and $5 billion in 2011.
Although the IFC's shareholders initially only allowed it to make loans, the IFC was authorized in 1961 to make equity investments, the first of which was made in 1962 by taking a stake in FEMSA, a former manufacturer of auto parts in Spain that is now part of Bosch Spain.[2][19] The IFC invests in businesses' equity either directly or via private equity funds, generally from five up to twenty percent of a company's total equity. IFC’s private equity portfolio currently stands at roughly $3.0 billion committed to about 180 funds. The portfolio is widely distributed across all regions including Africa, East Asia, South Asia, Eastern Europe, Latin America and the Middle East, and recently has invested in Small Enterprise Assistance Funds' (SEAF) Caucasus Growth Fund,[20] Aureos Capital's Kula Fund II (Papua New Guinea, Fiji, Pacific Islands)[21] and Leopard Capital’s Haiti Fund.[22] Other equity investments made by the IFC include preferred equity, convertible loans, and participation loans.[12] The IFC prefers to invest for the long-term, usually for a period of eight to fifteen years, before exiting through the sale of shares on a domestic stock exchange, usually as part of an initial public offering. When the IFC invests in a company, it does not assume an active role in management of the company.[23]
Through its Global Trade Finance Program, the IFC guarantees trade payment obligations of more than 200 approved banks in over 80 countries to mitigate risk for international transactions.[13] The Global Trade Finance Program provides guarantees to cover payment risks for emerging market banks regarding promissory notes, bills of exchange, letters of credit, bid and performance bonds, supplier credit for capital goods imports, and advance payments.[24] The IFC issued $3.46 billion in more than 2,800 guarantees in 2010, of which over 51% targeted IDA member nations.[12] In its fiscal year 2011, the IFC issued $4.6 billion in more than 3,100 guarantees. In 2009, the IFC launched a separate program for crisis response, known as its Global Trade Liquidity Program, which provides liquidity for international trade among developing countries. Since its establishment in 2009, the Global Trade Liquidity Program assisted with over $15 billion in trade in 2011.[13]
The IFC operates a Syndicated Loan Program in an effort to mobilize capital for development goals. The program was created in 1957 and as of 2011 has channeled approximately $38 billion from over 550 financial institutions toward development projects in over 100 different emerging markets. The IFC syndicated a total of $4.7 billion in loans in 2011, twice that of its $2 billion worth of syndications in 2010.[12][13] Due to banks retrenching from lending across borders in emerging markets, in 2009 the IFC started to syndicate parallel loans to the international financial institutions and other participants.[25]
To service clients without ready access to low-cost financing, the IFC relies on structured or securitized financial products such as partial credit guarantees, portfolio risk transfers, and Islamic finance.[13][26] The IFC committed $797 million in the form of structured and securitized financing in 2010.[12] For companies that face difficulty in obtaining financing due to a perception of high credit risk, the IFC securitizes assets with predictable cash flows, such as mortgages, credit cards, loans, corporate debt instruments, and revenue streams, in an effort to enhance those companies' credit.[27]
Financial derivative products are made available to the IFC's clients strictly for hedging interest rate risk, exchange rate risk, and commodity risk exposure. It serves as an intermediary between emerging market businesses and international derivatives market makers to increase access to risk management instruments.
The IFC fulfills a treasury role by borrowing international capital to fund lending activities. It is usually one of the first institutions to issue bonds or to do swaps in emerging markets denominated in those markets' local currencies. The IFC's new international borrowings amounted to $8.8 billion in 2010 and $9.8 billion in 2011. The IFC Treasury actively engages in liquidity management in an effort to maximize returns and assure that funding for its investments is readily available while managing risks to the IFC.

Advisory services

In addition to its investment activities the IFC provides a range of advisory services to support corporate decision making regarding business, environment, social impact, and sustainability. The IFC's corporate advice targets governance, managerial capacity, scalability, and corporate responsibility. It prioritizes the encouragement of reforms that improve the trade friendliness and ease of doing business in an effort to advise countries on fostering a suitable investment climate. It also offers advice to governments on infrastructure development and public-private partnerships. The IFC attempts to guide businesses toward more sustainable practices particularly with regards to having good governance, supporting women in business, and proactively combating climate change.[13]

Asset Management Company

The IFC established IFC Asset Management Company LLC (IFC AMC) in 2009 as a wholly owned subsidiary to manage all capital funds to be invested in emerging markets. The AMC manages capital mobilized by the IFC as well as by third parties such as sovereign or pension funds, and other development financing organizations. Despite being owned by the IFC, the AMC has investment decision autonomy and is charged with a fiduciary responsibility to the four individual funds under its management. It also aims to mobilize additional capital for IFC investments as it can make certain types of investments which the IFC cannot.[30] As of 2011, the AMC managed the IFC Capitalization Fund (Equity) Fund, L.P., the IFC Capitalization (Subordinated Debt) Fund, L.P., the IFC African, Latin American, and Caribbean Fund, L.P., and the Africa Capitalization Fund, Ltd.[31] The IFC Capitalization (Equity) Fund holds $1.3 billion in equity, while the IFC Capitalization (Subordinated Debt) Fund is valued at $1.7 billion. The IFC African, Latin American, and Caribbean Fund (referred to as the IFC ALAC Fund) was created in 2010 and is worth $1 billion. As of March 2012, the ALAC Fund has invested a total of $349.1 million into twelve businesses. The Africa Capitalization Fund was set up in 2011 to invest in commercial banks in both Northern and Sub-Saharan Africa and its commitments totaled $181.8 million in March 2012.[30] As of 2012, Gavin E.R. Wilson serves as CEO of the AMC.[14]

Financial performance

The IFC prepares consolidated financial statements in accordance with United States GAAP which are audited by KPMG. It reported income before grants to IDA members of $2.18 billion in fiscal year 2011, up from $1.95 billion in fiscal 2010 and $299 million in fiscal 2009. The increase in income before grants is ascribed to higher earnings from the IFC's investments and also from higher service fees. The IFC reported a partial offset from lower liquid asset trading income, higher administrative costs, and higher advisory service expenses. The IFC made $600 million in grants to IDA countries in fiscal 2011, up from $200 million in fiscal 2010 and $450 million in fiscal 2009. The IFC reported net income of $1.58 billion in fiscal year 2011, compared with net income of $1.75 billion in fiscal 2010 and a net loss of $151 million in fiscal 2009. The IFC's total capital amounted to $20.3 billion in 2011, of which $2.4 billion was paid-in capital from member countries, $16.4 billion was retained earnings, and $1.5 billion was accumulated other comprehensive income. The IFC held $68.49 billion in total assets in 2011.[31]
The IFC's return on average assets (GAAP basis) decreased from 3.1% in 2010 to 2.4% in 2011. Its return on average capital (GAAP basis) decreased from 10.1% in 2010 to 8.2% in 2011. The IFC's cash and liquid investments accounted for 83% of its estimated net cash requirements for fiscal years 2012 through 2014. Its external funding liquidity level grew from 190% in 2010 to 266% in 2011. It has a 2.6:1 debt-to-equity ratio and holds 6.6% in reserves against losses on loans to its disbursement portfolio. The IFC's deployable strategic capital decreased from 14% in 2010 to 10% in 2011 as a share of its total resources available, which grew from $16.8 billion in 2010 to $17.9 billion in 2011.[31]
In 2011, the IFC reported total funding commitments (consisting of loans, equity, guarantees, and client risk management) of $12.18 billion, slightly lower than its $12.66 billion in commitments in 2010. Its core mobilization, which consists of participation and parallel loans, structured finance, its Asset Management Company funds, and other initiatives, grew from $5.38 billion in 2010 to $6.47 billion in 2011. The IFC's total investment program was reported at a value of $18.66 billion for fiscal year 2011. Its advisory services portfolio included 642 projects valued at $820 million in 2011, compared to 736 projects at $859 million in 2010. The IFC held $24.5 billion in liquid assets in 2011, up from $21 billion in 2010.[31]
The IFC received credit ratings of AAA from Standard & Poor's in December 2012 and Aaa from Moody's Investors Service in November 2012.[32][33] S&P rated the IFC as having a strong financial standing with adequate capital and liquidity, cautious management policies, a high level of geographic diversification, and anticipated treatment as a preferred creditor given its membership in the World Bank Group. It noted that the IFC faces a weakness relative to other multilateral institutions of having higher risks due to its mandated emphasis on private sector investing and its income heavily affected by equity markets.

Green buildings in developing countries

The IFC has created a mass-market certification system for fast-growing emerging markets called EDGE ("Excellence in Design for Greater Efficiencies").[35] The IFC and the World Green Building Council[36] have partnered to accelerate green building growth in developing countries. The target is to scale up green buildings over a seven-year period until they account for 20% of the property market. Certification occurs when the EDGE standard is met, which requires 20% less energy, water, and materials than conventional homes.


                                       

International Financial Institutions

International Financial Institutions (IFIs) – such as the World Bank, the Inter-American Development Bank and the Asian Development Bank – are formal international lending agencies operating across country boundaries. They are the largest source of financial and technical assistance to developing countries, stimulating economic growth and development.


                                XXX  .  V0000  ADP Acquires Global Cash Card



Case study: electronic equipment for Global Cash Card in the US:

ADP Acquires Global Cash Card, Solidifies Leadership Position in Employee Payments and Extends Payroll Differentiation with Acquisition of Proprietary Digital Payments Processing Platform.

Acquisition increases flexibility to support the changing ways people get paid; ADP becomes the only human capital management provider with a proprietary digital payments processing platform.


ADP® today announced the acquisition of Global Cash Card, a leader in digital payments, including paycards and other electronic accounts. With this acquisition, ADP gains an industry-leading proprietary digital payment processing platform that enables innovation and added value services for clients and their workforces, as well as a large, diversified client base that has shown consistent growth. Paycards have been the fastest growing method of pay in recent years, in part because of their popularity with Millennials and Gen Z. After integrating Global Cash Card with ADP's existing paycard offer, the ALINE Card by ADP®, ADP will manage more than four million accounts on a single platform.
Founded in 2002 and headquartered in Irvine, California, Global Cash Card's offering is powered by one of the most advanced and flexible digital payments processing platforms in the industry. The company offers solutions for both Form W-2 employees and Form 1099 contractors, as well as online tools that help customers manage their digital accounts, including online bill pay, rewards plan enrollment, multi-purse capability for providing secondary account holders access to a portion of available balances, and an expense manager that allows customers to organize, categorize and budget their expenses. All of this helps workers by bringing together in one place both their wages and their most significant financial transactions and providing a more complete picture of their financial well-being. The company's proprietary platform is highly configurable based on customer needs, and is Payment Card Industry Data Security Standard (PCI DSS) compliant.
Carlos Rodriguez, president and CEO of ADP, said, "ADP pays 1-in-6 workers in the U.S. and our clients look to us as the market leader to offer solutions that help them better engage with their entire workforce. The acquisition of this established and profitable company helps us innovate around the essential service of delivering pay, and will enable us to provide new tools to consumers that help them manage their finances."
The functionality of digital accounts, including paycards, and ease of access to funds have made them particularly popular among Millennials and Gen Z. This also is true for the approximately 24.5 million U.S. households that are "underbanked" (2015 figures), given that account holders enjoy many of the standard features of a checking account, such as shopping and paying bills in stores, online, and through mobile apps. These accounts are also popular with employers because they provide a less expensive, more immediate and more secure option to deliver wages than paper checks.
With the acquisition of Global Cash Card, ADP will become the only human capital management provider with a proprietary digital payments processing platform and will enable ADP to offer digital accounts and flexible payment offerings across their existing base of more than 700,000 clients, while increasing the speed of implementation for new clients.
Commenting on the acquisition, Doug Politi, president of Added Value Services at ADP, said, "As the 'gig economy' changes the way people earn a living, so too does it change the way companies need to pay their workforce. We have been impressed with Global Cash Card's continuous innovation over the years, and are very excited to welcome Global Cash Card associates and experienced management team to the ADP family."
Joe Purcell, founder, president and CEO of Global Cash Card, said, "The Global Cash Card team is thrilled to join ADP. We've worked hard to develop a very happy and growing customer base, and I'm excited to see this payment processing technology reach a whole new level with the know-how, resources and market penetration of ADP behind it."
The financial terms of the transaction were not disclosed.
About ADP (NASDAQ: ADP)
Powerful technology plus a human touch. Companies of all types and sizes around the world rely on ADP cloud software and expert insights to help unlock the potential of their people. HR. Talent. Benefits. Payroll. Compliance. Working together to build a better workforce.

The ALINE Card by ADP is issued by MB Financial Bank, N.A., Member FDIC, pursuant to licenses from Visa U.S.A. Inc. and Mastercard International, Inc. The ALINE Card by ADP is also issued by Central National Bank, Enid, Oklahoma, Member FDIC, pursuant to a license from Visa U.S.A. Inc. ADP is a registered ISO of MB Financial Bank, N.A. and Central National Bank, Enid, Oklahoma.
ADP, the ADP logo, ALINE Card by ADP, and ADP A more human resource are registered trademarks of ADP, LLC. All other marks are the property of their respective owners.



            XXX  .  V000000  Data Acquisition and Signal Processing for Smart Sensors 

From simple thermistors to intelligent silicon microdevices with powerful capabilities to communicate information across networks, sensors play an important role in fields ranging from biomedical and chemical engineering to wireless communications. Introducing a new dependent count method for frequency signal processing, this book presents a practical approach to the design of signal processing sensors.
Modern advanced microsensor technologies require new and equally advanced methods of frequency signal processing in order to function at increasingly high speeds. The authors provide a comprehensive overview of data acquisition and signal processing methods for the new generation of smart and quasi-smart sensors. The practical approach of the text includes coverage of the design of signal processing methods for digital, frequency, period, duty-cycle and time interval sensors.
* Contains numerous practical examples illustrating the design of unique signal processing sensors and transducers
* Details traditional, novel, and state-of-the-art methods for frequency signal processing
* Covers the physical characteristics of smart sensors, development methods and application potential
* Outlines the concept, principles and nature of the method of dependent count (MDC): a unique method for frequency signal processing, developed by the authors
This text is a leading edge resource for measurement engineers, researchers and developers working in micro sensors, MEMS and microsystems, as well as advanced undergraduates and graduates in electrical and mechanical engineering.

example acquisition data in Method for Reading Sensors and Controlling :

This article presents a novel closed loop control architecture based on the audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks.
Keywords: sensors, smartphones, mobile devices, mechatronics, robotics, robot, control, actuators
 

1. Introduction and Motivation

Most robots and automation systems rely on processing units to control their behavior. Such processing units can be embedded processors (such as microcontrollers) or general purpose computers (such as PCs) with specialized Input/Output (I/O) accessories. One common practice is the use of USB devices with several I/O options connected to a PC. Another approach consists of using a microcontroller unit that runs a software algorithm to control the system. Frequently this microcontroller is connected to a computer to monitor and set parameters of the running system.
Although well established, these approaches require specific device drivers on the host computer and special configurations. Moreover, if the control algorithm has to be changed, the user must reprogram the microcontroller, which in some cases requires special hardware tools such as in-circuit programmers and debuggers.
Mobile devices such as cell phones, tablets and netbooks are widespread and getting cheaper due to their large production volumes, making them an interesting option for controlling mechatronics systems. These devices also include several sensors that can be used in a mechatronic system. Examples of sensors available in modern cell phones are the global positioning system receiver (GPS), 3-axis accelerometer, compass, liquid crystal display (LCD) with touchscreen, Internet access via WiFi or GPRS/3G services, camera, speakers, microphone, bluetooth module, light sensor and battery (some even have a gyroscope, barometer and stereo camera). If all these accessories were bought separately and installed in a device, the total cost would be higher than that of a device with all these features already included.
To use a mobile device as the main processing unit of a mechatronics system, a communication link must be established between the control unit and the system under control. Some works [] use the mobile device's RS-232 serial port signals while others use bluetooth [,]. In all these works, a microcontroller still needs to be used to communicate with the mobile device, to read sensors and to control actuators. The problem is that not all mobile devices have a serial port or a bluetooth communication interface.
Analyzing modern mobile devices, it is possible to note that most of them have a universal communication interface: audio channels, which can usually be accessed through standard 3.5 mm P2 connectors. If there are no connectors for external audio, this article also offers a solution using suction cups so the proposed system can be used with mobile devices whose speakers or microphones are built in. Details of this solution are discussed in Section 5.1.
In this article, a system that allows closed loop control using any mobile device that can produce and receive sounds was designed and built. In such a system, audio tones are used to encode actuator control commands and sensor states. The system cost is low because only a few parts are needed (these parts are widely used in the telephony industry, making them also easy to find on the market). Moreover, the system does not need any intermediate processing unit, making it easy to update the running algorithm and avoiding the need to reprogram microcontrollers.

1.1. Robots and Smartphones

As mentioned, mobile devices have several features that can be used in robotics. Some possibilities of using these features are briefly discussed in the following text. This list is, of course, non-exhaustive.
  • CPU: Modern smartphones already have processors with clock rates of 1 GHz or more. Some models also have multi-core processors. These processing units are capable of doing many complex and computationally intensive operations for autonomous robots navigation. Santos et al. [], for example, showed that it is feasible to execute complex robotics navigation algorithms in processors of older smartphones with 300 MHz clock speed;
  • Camera: can be used with a variety of algorithms for visual odometry, object recognition, robot attention (the ability to select a topic of interest []), obstacle detection and avoidance, object tracking, and others;
  • Compass: can be used to sense the robot’s direction of movement. The system can work with one or two encoders. If only one encoder is used, the compass is used to guarantee that the robot is going in the expected direction and to control the desired turning angles;
  • GPS: can be used to obtain the robot position in outdoor environments, altitude and speed;
  • Accelerometer: can be used to detect speed changes and consequently if the robot has hit an object in any direction (a virtual bumper). It can also detect the robot’s orientation. It is also possible to use Kalman filtering to do sensor fusion of the camera, encoders and accelerometer to get more accurate positioning;
  • Internet: WiFi or other Internet connection can be used to remotely monitor the robot and send commands to it. The robot can also access a cloud system to aid some decision making process and communicate with other robots;
  • Bluetooth: can be used to exchange information with nearby robots and for robot localization;
  • Bluetooth audio: As the standard audio input and output are used for the control system, a bluetooth headset can be paired with the mobile device, allowing the robot to receive voice commands and give synthesized voice feedback to the user. The Android voice recognizer worked well for both English and Portuguese. The user can press a button on the bluetooth headset and say a complex command such as a phrase. The Android system will then return a vector with the most probable phrases the user has said;
  • ROS: The Robot Operating System (ROS) [] from Willow Garage is already supported in mobile devices running Android using the ros-java branch. Using ROS and the system described in this article, a low cost robot can be built with all the advantages and features of ROS.

1.2. Contributions

The main contributions of this work are:
  • A novel system for controlling actuators using audio channels
  • A novel system for reading sensors information using audio channels
  • A closed loop control architecture using the above-mentioned items
  • Application of a camera and laser based distance measurement system for robotics
  • A low cost mobile robot controlled by smartphones and mobile devices using the techniques introduced in this work

1.3. Organization

This paper is structured as follows: Section 2 describes previous architectures for mechatronics systems control. Section 3 introduces the new technique. Section 4 presents the experimental results of the proposed system. Section 5 describes a case study in which the system is applied to build a low cost mobile robot, and Section 6 presents the final considerations.
 

2. Related Work

This section reviews some of the most relevant related works that use mobile devices to control robots and their communication interfaces.

2.1. Digital Data Interfaces

Santos et al. [] analyze the feasibility of using smartphones to execute a robot's autonomous navigation and localization algorithms. In the proposed system, the robot control algorithm is executed on the mobile phone and the motion commands are sent to the robot using bluetooth. Their experiments are made with mobile phones with processor clocks of 220 MHz and 330 MHz, and they conclude that complex navigation algorithms can be executed robustly on these devices even with soft real-time requirements. The tested algorithms are well-known: potential fields, particle filter and extended Kalman filter.
Another example of the use of smartphones to control robots is the open source project Cellbots [] which uses Android based phones to control mobile robots. The project requires a microcontroller that communicates with the phone via bluetooth or serial port and sends the electrical control signals to the motors. The problem is that not all mobile devices have bluetooth or serial ports. Moreover, in some cases the device has the serial port available only internally, requiring disassembly of the device to access the serial port signals. When using bluetooth, the costs are higher because an additional bluetooth module must be installed and connected to the microcontroller.
The work of Hess and Rohrig [] consists of using mobile phones to remotely control a robot. Their system can connect to the robot using TCP/IP interfaces or bluetooth. In the case of the TCP/IP sockets, the connection to the robot is made using an already existing wireless LAN (WiFi) infrastructure.
Park et al. [] describe user interface techniques for using PDAs or smartphones to remotely control robots. Again, the original robot controller is maintained and the mobile device is used simply as a remote control device. Their system commands are exchanged using WiFi wireless networks.

2.2. Analog Audio Interfaces

One interesting alternative is using a dedicated circuit to transform the audio output of the mobile device into a serial port signal [], but the problem with such an approach is that only unidirectional communication is possible, and still, as in the other cases, a microcontroller is needed to decode the serial signal and execute some action.
On the other hand, the telecommunications industry frequently uses the Dual Tone Multi Frequency (DTMF) system to exchange remote control commands between equipment. Section 3.1 contains a description of the DTMF system. The system is best known in telephony for sending the digits that a caller wants to dial to a switching office. DTMF usage to control robots is not new. There are some recent projects that use DTMF digit exchange to remotely control robots: Patil and Henry [] used a remote mobile phone to telecommand a robot. DTMF tones are sent from the mobile phone to the remote robot's phone and decoded by a specific integrated circuit, and the binary output is connected to an FPGA that controls the robot. Manikandan et al. [] proposed and built a robot that uses two cell phones. One phone is placed in the robot and another acts as a remote control. The DTMF audio produced by the keys pressed in the remote control phone is sent to the phone installed in the robot, and the audio output of this phone is connected to a DTMF decoder via the earphone output of the cell phone. The 4-bit DTMF output is then connected to a microcontroller that interprets the codes and executes the movements related to the keys pressed in the remote control phone. Sai and Sivaramakrishnan [] used the same setup, where two mobile phones are used, one located at the robot and another used as a remote control. The difference is that the system is applied to a mechanically different type of robot. Naskar et al. [] presented a work where a remote DTMF keypad is used to control a military robot. Some DTMF digits are even used to fire real guns. The main difference is that instead of transmitting the DTMF tones using a phone call, the tones are transmitted using a radio frequency link.
Still on DTMF-based control, Ladwa et al. [] proposed a system that can remotely control home appliances or robots via DTMF tones over telephone calls. The tones are generated by keys pressed on a remote phone keypad, received by a phone installed in the system under control, and decoded by a DTMF decoder circuit. A microcontroller then executes some pre-programmed action. A similar work is presented by Cho and Jeon [], where key presses on a remote phone are sent through a telephone call to a receiver cell phone modem. Its audio output is connected to a DTMF decoder chip which is then connected to a robot control board. As the user presses keys on the remote phone, the robot goes forward, backwards or turns according to the pressed key (a numerical digit that is represented by a DTMF code).
Recently, a startup company created a mobile robot that can be controlled by audio tones from mobile phones []. The system is limited to controlling two motors and does not have any feedback or sensor reading capability. An alternative is to use camera algorithms such as optical flow to implement visual odometry, but even so, such limitations make it difficult to build a complete mobile robot because of the lack of important sensory information such as bumpers and distance to objects. Also, the information provided on the company website does not make it clear whether the wheel speed can be controlled or which audio frequencies are used.
With the exception of the example in the last paragraph, all other mentioned examples and projects use DTMF tones to remotely control a distant robot over radio or telephone lines. This system, in contrast, uses DTMF tones to control actuators and read sensors in a scheme where the control unit is physically attached to the robot, or near the robot (connected by an audio cable). The advantage is that DTMF tones are very robust to interference and widely adopted, making it easy to find electrical components and software support for dealing with such system.

3. System Architecture

This section describes the proposed system architecture. Its main advantage is to provide a universal connection system to read sensors and control actuators of mechatronics systems. The data is exchanged using audio tones, allowing the technique to be used with any device that has audio input/output interfaces.

3.1. Theoretical Background

The DTMF system was created in the 1950s as a faster option to the (now obsolete) pulse dialing system. Its main purpose at that time was to send the digits that a caller wants to dial to the switching system of the telephone company. Although almost 60 years old, DTMF is still widely used in telecommunication systems and is still used in most new telephone designs [].
DTMF tones can be easily heard by pressing the keys on a phone during a telephone call. The system is composed of eight audio frequencies organized in a 4 × 4 matrix, which yields 16 digits. Table 1 shows these frequencies and the corresponding keys/digits. A valid digit is always composed of a pair of frequencies (one from the table columns and one from the table rows) transmitted simultaneously. For example, to transmit the digit 9, an audio signal containing the frequencies 852 Hz and 1,477 Hz would have to be generated.
Table 1. DTMF frequency pairs and corresponding digits. Adapted from the Audio Engineer’s Reference Book [].

              1209 Hz   1336 Hz   1477 Hz   1633 Hz
    697 Hz       1         2         3         A
    770 Hz       4         5         6         B
    852 Hz       7         8         9         C
    941 Hz       *         0         #         D
As DTMF was designed to exchange data via telephone lines that can be noisy, the use of 2 frequencies to uniquely identify a digit makes the system efficient and very robust to noise and other sounds that do not characterize a valid DTMF digit. In fact, when the DTMF system was designed, the frequencies were chosen to minimize tone pairs from natural sounds [].
The described robustness of DTMF has led to its use in a variety of current remote automation systems such as residential alarm monitoring, vehicle tracking systems and interactive voice response systems such as banks' automated answering menus that allow the user to execute interactive operations like “Press 1 to check your account balance; Press 2 to block your credit card”.
The wide adoption and reliability of the DTMF system led the semiconductor industry to develop low cost integrated circuits (ICs) that can encode and decode DTMF signals to and from 4-bit binary codes. The system proposed in this article uses such ICs to transmit and receive information. Both actuator control commands and sensor data are encoded using DTMF tones. The following sections describe the system design.
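To make the frequency-pair scheme of Table 1 concrete, the short sketch below synthesizes the tone for digit 9 from the 852 Hz and 1,477 Hz pair. It is an illustration only, not code from the original system; the sample rate and tone duration are assumed values.

    # Illustrative sketch: synthesize the DTMF tone for digit 9 (852 Hz + 1477 Hz).
    # Sample rate and duration are assumptions chosen for this example.
    import numpy as np

    FS = 8000          # sample rate in Hz (matches the rate used in Section 3.3)
    DURATION = 0.2     # tone length in seconds

    def dtmf_tone(low_hz, high_hz, fs=FS, duration=DURATION):
        """Return a normalized DTMF tone: the sum of the row and column sinusoids."""
        t = np.arange(int(fs * duration)) / fs
        tone = np.sin(2 * np.pi * low_hz * t) + np.sin(2 * np.pi * high_hz * t)
        return tone / np.max(np.abs(tone))   # normalize to the range [-1, 1]

    digit_9 = dtmf_tone(852, 1477)           # row/column pair for digit 9
    samples = np.int16(digit_9 * 32767)      # 16-bit PCM samples ready for playback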

3.2. Device Control

To control actuators, a mobile device generates a DTMF tone. The tone is decoded by a commercial DTMF decoder chip (such as the MT8870), converting the tone to a 4-bit binary word equivalent to the DTMF input. The decoded output remains present while the DTMF tone is present at the input. The resulting bits can feed a power circuit to control up to four independent binary (on/off) devices such as robot brakes, lights or a pneumatic gripper. Figure 1 shows the basic concept of the system.
Figure 1. DTMF decoder with four independent outputs controlled by audio from a mobile device.
The audio output from the mobile device can be directly connected to the input of the DTMF decoder, but in some specific cases an audio preamplifier should be used to enhance the audio amplitude.
Figure 2 shows a direct current (DC) motor control application where the 4-bit output of the decoder is connected to a motor control circuit (an H-bridge, for example, using the L298 commercial dual H-bridge IC). As 2 bits are required to control each motor, the system can control 2 DC motors independently. Table 2 shows the DTMF digits and corresponding motor states. Note that these states can be different according to the pin connections between the DTMF decoder and the H-bridge. In order to control the DC motor’s speed, the mobile device turns the DTMF signals on and off in a fixed frequency, mimicking a pulse width modulation (PWM) signal.
Figure 2. Dual motor control using one mono audio channel.
Table 2. DTMF digits and corresponding motor states.
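The exact contents of Table 2 depend on how the decoder's output pins are wired to the H-bridge inputs, so the mapping below is purely hypothetical: it only illustrates how a 4-bit decoded digit splits into two 2-bit motor commands (the pin assignment and state labels are assumptions).

    # Hypothetical mapping only: assumes decoder bits Q1/Q2 drive motor A and
    # Q3/Q4 drive motor B through an H-bridge; the real wiring may differ.
    MOTOR_STATES = {0b00: "coast", 0b01: "forward", 0b10: "reverse", 0b11: "brake"}

    def decode_motor_command(dtmf_code):
        """Split the decoder's 4-bit output into two 2-bit motor commands."""
        motor_a = MOTOR_STATES[dtmf_code & 0b11]         # bits Q1, Q2
        motor_b = MOTOR_STATES[(dtmf_code >> 2) & 0b11]  # bits Q3, Q4
        return motor_a, motor_b

    # Example: DTMF digit 6 decodes to binary 0110 -> motor A reverse, motor B forward.
    print(decode_motor_command(0b0110))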
To control more devices it is possible to take advantage of the fact that most audio outputs of mobile devices are stereo. Thus, generating different audio tones in the left and right channels doubles the number of controlled devices (8 different on/off devices or 4 DC motors). One interesting option, possible only in devices with USB Host feature, such as netbooks and desktop computers, is to add low cost USB multimedia sound devices, increasing the number of audio ports in the system.
Another possibility consists of connecting the control signals of servo-motors (the type used in model airplanes) directly to the output of the DTMF decoder. As each servo needs only one PWM input signal, each stereo audio output can drive up to eight servo-motors.
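As a rough sketch of the PWM approach described above (and of the pre-recorded tones that Section 4 later reports working best), the code below renders a DTMF tone gated on and off at a fixed rate into a WAV file that a mobile device could simply play back. The gating frequency, duty cycle and file name are assumed values for illustration, not figures from the original system.

    # Illustrative sketch: pre-render a PWM-gated DTMF tone to a WAV file.
    # Gating the tone on/off emulates the PWM drive described above; the decoder
    # output then pulses the H-bridge at the same duty cycle. GATE_HZ and DUTY
    # are assumptions for this example.
    import wave
    import numpy as np

    FS = 8000            # audio sample rate (Hz)
    GATE_HZ = 1000       # gating frequency; Section 4 reports good results above 1 kHz
    DUTY = 0.6           # fraction of each gate period the tone is ON (sets motor speed)

    def gated_dtmf(low_hz, high_hz, seconds=1.0):
        t = np.arange(int(FS * seconds)) / FS
        tone = np.sin(2 * np.pi * low_hz * t) + np.sin(2 * np.pi * high_hz * t)
        gate = ((t * GATE_HZ) % 1.0) < DUTY          # square-wave gate with duty cycle DUTY
        return np.int16(tone / 2 * 32767 * gate)     # 16-bit PCM, silent during OFF phase

    pcm = gated_dtmf(697, 1209)                      # DTMF digit 1 (697 Hz + 1209 Hz)
    with wave.open("motor_pwm_60pct.wav", "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(FS)
        wav.writeframes(pcm.tobytes())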

3.3. Sensor Reading

Most mechatronics systems and sensor networks need sensors to sense the surrounding environment and their own state in order to decide what to do next. To accomplish this task in this system, sensors are connected to the input of a DTMF encoder chip (such as the TCM5087). Each time a sensor state changes, the encoder generates a DTMF tone that is captured and analyzed by the mobile device. According to the digits received it is possible to know which sensor generated the tone. More details on how to identify which sensor and its value are provided in Section 5.1.
As shown in Figure 3, up to four sensors can be connected to a DTMF encoder that generates tones according to the sensors states. The generator’s output itself is connected to the audio input of the mobile device which continuously samples the audio input checking if the frequency pair that characterizes a DTMF digit is present in the signal. To accomplish this task, the discrete Fourier transform (DFT) is used, according to Equation (1).
X(m) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi n m / N}
(1)
where X(m) is the frequency magnitude of the signal under analysis at index m, x(n) is the input sequence in time (representing the signal) indexed by n, and N is the number of points of the DFT. N determines the resolution of the DFT and the number of samples to be analyzed. For performance reasons, a Fast Fourier Transform (FFT) [] is used to identify the frequency components of the input signal and consequently detect the DTMF digit generated by the DTMF generator that encodes sensor data. For clear and detailed information about these digital signal processing concepts, please refer to Lyons' book [] on digital signal processing.
Figure 3.
Sensors input using a DTMF generator.
To optimize the FFT computation it is necessary to specify adequate values for N and Fs (the sample rate of the input signal). From Table 1, the highest frequency present in a DTMF tone is 1,633 Hz. Applying the fundamental sampling theorem results in a minimum Fs of 3,266 Hz (the theorem states that the sample rate should be at least twice the highest frequency to be captured). For implementation convenience and better compatibility, an 8 kHz sample rate is used, which most mobile devices can provide.
The lower the number of points in the FFT, the faster the FFT is computed and the more digits per second can be recognized, leading to a higher sensor reading frequency. To compute the smallest adequate number of points for the FFT, Equation (2) is used.
f(m) = m \frac{F_s}{N}
(2)

In Equation (2), f(m) is each frequency under analysis, Fs is the sampling frequency (8 kHz) and N is the FFT's number of points, which is to be minimized. Using N = 256 results in an analysis resolution of about 31 Hz (8,000/256), which is enough to distinguish one DTMF frequency component from another. This is consistent with the DFT parameters used by Chitode to detect DTMF digits []. The work developed by Khan also uses N = 256 and Fs = 8 kHz to detect DTMF tones [].
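To make Equations (1) and (2) concrete, the sketch below evaluates the DFT magnitude only at the bins nearest to the eight DTMF frequencies (N = 256, Fs = 8 kHz) and selects the strongest low-group and high-group frequency. It is a simplified stand-in for the FFT-based detector described in the paper: framing, thresholding and noise rejection are omitted.

```java
// Direct evaluation of Equation (1) at the bins closest to the eight DTMF frequencies
// (N = 256, Fs = 8000 Hz, bin index from Equation (2)). Simplified: no noise threshold.
public class DtmfBins {
    static final int N = 256, FS = 8000;
    static final double[] LOW  = {697, 770, 852, 941};
    static final double[] HIGH = {1209, 1336, 1477, 1633};
    static final char[][] KEYS = {
        {'1','2','3','A'}, {'4','5','6','B'}, {'7','8','9','C'}, {'*','0','#','D'}};

    // |X(m)|^2 for the bin m closest to freq, computed directly from Equation (1)
    static double binPower(double[] x, double freq) {
        int m = (int) Math.round(freq * N / (double) FS);   // Equation (2) solved for m
        double re = 0, im = 0;
        for (int n = 0; n < N; n++) {
            double phase = 2 * Math.PI * n * m / N;
            re += x[n] * Math.cos(phase);
            im -= x[n] * Math.sin(phase);
        }
        return re * re + im * im;
    }

    static int strongest(double[] x, double[] freqs) {
        int best = 0;
        for (int i = 1; i < freqs.length; i++)
            if (binPower(x, freqs[i]) > binPower(x, freqs[best])) best = i;
        return best;
    }

    // Returns the most likely DTMF digit for one 256-sample frame
    static char detect(double[] frame) {
        return KEYS[strongest(frame, LOW)][strongest(frame, HIGH)];
    }

    public static void main(String[] args) {
        double[] frame = new double[N];                // synthesize digit '8' (852 + 1336 Hz)
        for (int n = 0; n < N; n++)
            frame[n] = Math.sin(2 * Math.PI * 852 * n / FS)
                     + Math.sin(2 * Math.PI * 1336 * n / FS);
        System.out.println("Detected digit: " + detect(frame));   // expected: 8
    }
}
```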
Later in this text, Section 5.1 and Table 3 explain the use of this technique to read four digital (on/off) sensors simultaneously.
Table 3.
Truth table for the 4 sensor inputs used in the case study robot.
One of the limitations of the described method is that it is restricted to binary (on/off) sensors. As shown in Section 5.1, this is enough for many applications, including measuring angles and speeds using incremental optical encoders. In any case, additional electronics could be used to encode analog signals and transmit them over the audio interface. As each DTMF digit encodes 4 bits, the transmission of an analog value converted with a 12-bit analog-to-digital converter would take 3 transmission cycles. It would also be possible to use digital signal multiplexing hardware to encode more information in the same system (with lower performance); the demultiplexing would be done in the mobile device by software.
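To make the three-cycle transfer concrete, a 12-bit conversion result can be split into three 4-bit nibbles, each sent as one DTMF digit. The following is a hypothetical sketch; the nibble order and framing are assumptions, not part of the paper.

```java
public class AdcOverDtmf {
    // Hypothetical packing of a 12-bit ADC reading into three 4-bit DTMF digits
    // (most-significant nibble first; the ordering is an illustrative assumption).
    static int[] pack(int adc12) {
        return new int[] { (adc12 >> 8) & 0xF, (adc12 >> 4) & 0xF, adc12 & 0xF };
    }

    // Reassembly on the mobile-device side
    static int unpack(int[] digits) {
        return (digits[0] << 8) | (digits[1] << 4) | digits[2];
    }

    public static void main(String[] args) {
        int[] digits = pack(2748);                     // 0xABC
        System.out.println(unpack(digits));            // prints 2748
    }
}
```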
 

4. Experimental Results

In order to evaluate the proposed system, an Android application was developed using the Android development kit, which is available for free. Experiments were executed in several mobile devices and desktop computers.
For the actuator control subsystem, experiments showed that generating PWM signals by software is possible, but the resulting signal shows large timing variations (a software-generated PWM with 1 ms ON time and 1 ms OFF time produced a real signal with 50 ms ON time and 50 ms OFF time). A better option is to play pre-recorded PWM DTMF tones, which yields reliable PWM at frequencies greater than 1 kHz. As mobile devices have mature software support for playing pre-recorded audio, the PWM plays smoothly with low processor usage. In these experiments it was also observed that another practical way of making fine speed adjustments is to control the audio output volume, which results in proportional speed changes in the motor(s).
Experiments on the sensor reading subsystem are based on the FFT. The experiments showed that in the worst case the FFT computation time is 17 ms, leading to a theoretical limit of up to 58.8 FFTs per second. Figure 4 shows experimental results of the system running on 3 different devices. The tested devices were an early Android-based phone, the HTC G1 with a 528 MHz ARM processor, an Android-based tablet computer with a dual-core 1 GHz ARM processor, and a 1 GHz PC netbook with an Intel Celeron processor (in this case a version of the Android operating system for the x86 architecture was used). The FFT was implemented in Java and executed in the virtual machine (Dalvik) of the Android system. Using the native development system for Android, thus bypassing the virtual machine, would improve these results. Another performance improvement could be achieved using the Goertzel algorithm.
Figure 4.
DTMF digit recognition performance in different devices.
From Figure 4 it is possible to note that even the device with the least processing power is able to handle about 40 DTMF digits per second with zero packet loss. There are several causes for the increasing packet loss that starts at 40 Hz in the plot. One of the causes is the different audio input timing [] of the different audio hardware of each device. Another cause is related to the task scheduler of the Android operating system (and the underlying Linux kernel), which can be non-deterministic when the CPU load is high.
As a reference for comparison, some performance tests were made with a Lego Mindstorms (TM) robotics kit that is commonly used in educational robotics and in some scientific research. When connected to a computer or smartphone via a Bluetooth wireless link, the maximum sensor reading rate of the Lego NXT brick is 20 Hz. If several sensors are used, the bandwidth is divided; for example, using 2 encoders and 2 touch sensors reduces the sensor reading rate to 5 Hz per sensor or less. If the NXT brick is connected to a computer using the USB port, then the maximum sensor reading frequency rises to 166 Hz. If two encoders and two touch sensors (bumpers) are used, then each sensor will be read at a rate of 41.5 Hz. The performance of the system proposed in this article is comparable to this commercial product, as a 40 Hz rate can be sustained for each sensor in a system with 4 sensors.
 

5. Case Study Application

The system described in Section 3 can be applied to several situations where a computing device needs to control actuators and read sensors, such as laboratory experiments, machine control and robotics. In this section, a mobile robot case study is described.

5.1. Low Cost Mobile Robot

As an application example, the presented technique was used to build a low cost educational mobile robot. The robot's frame, wheels, gears and two motors cost 24 US dollars, and the electronic parts cost another 6 US dollars, for a total of 30 US dollars to build the robot. As most people own a mobile phone or a smartphone, the assumption is that the control device will not have to be bought, because a mobile device that the user already owns will be used.
Even if the control device needed to be purchased, the option of using a smartphone would still be attractive, because the single board computers typically used in robots are more expensive than smartphones. Furthermore, smartphones include a camera, battery, Internet connection and a variety of sensors that would otherwise have to be bought separately and connected to the robot's computer. With multi-core smartphones running at clock speeds above 1 GHz and with 512 MB or 1 GB of RAM, they are a good alternative to traditional robot computers.
Important sensors in this kind of robot are the bumpers, to detect collisions, and the encoders, to compute odometry. Figure 5 shows a block diagram connecting bumpers and 2 wheel encoders to the DTMF generator. Table 3 shows a truth table with the possible states of each sensor and the corresponding DTMF digits. Instead of using commercial encoder discs, several encoder discs were designed and printed with a conventional laser printer. The discs were glued to the robot's wheels and a standard CNY70 light reflection sensor was mounted in front of each disc.
Figure 5.
Sensors connection in a mobile robot with differential drive.
As can be seen in Table 3, there is a unique DTMF digit corresponding to each possible combination of sensor states. Using basic binary arithmetic it is possible to obtain the individual state of each sensor. For example, from Table 3 it is known that the bumpers are bits 0 and 1. A bitwise AND operation with the binary mask 0001 filters out all other sensor states, and the result will be either 0 or 1, indicating the left bumper state. For the right bumper, the same AND operation can be applied with the binary mask 0010. Furthermore, the result of an AND operation with the binary mask 0011 equals 0011 only if both bumpers are activated at the same time. Using these types of comparisons it is then possible to know the state of each sensor. In the case of the optical encoders, the system's software monitors state transitions and increments a counter for each transition, keeping track of how many pulses each encoder has generated.
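A minimal sketch of this decoding logic follows. The bumper masks (bits 0 and 1) come from the text; assigning the two encoders to bits 2 and 3 is an assumption, since Table 3 is not reproduced here.

```java
public class SensorWord {
    // Masks for the 4-bit word delivered by the DTMF decoder. Bumpers on bits 0 and 1
    // follow the text; encoders on bits 2 and 3 are an assumed layout.
    static final int LEFT_BUMPER   = 0b0001;
    static final int RIGHT_BUMPER  = 0b0010;
    static final int LEFT_ENCODER  = 0b0100;
    static final int RIGHT_ENCODER = 0b1000;

    int lastWord = 0;
    boolean leftBumper, rightBumper;
    long leftPulses = 0, rightPulses = 0;

    // Called once per recognized DTMF digit (one 4-bit sensor word)
    void update(int word) {
        leftBumper  = (word & LEFT_BUMPER)  != 0;
        rightBumper = (word & RIGHT_BUMPER) != 0;
        // Count encoder transitions: one pulse per state change
        if ((word & LEFT_ENCODER)  != (lastWord & LEFT_ENCODER))  leftPulses++;
        if ((word & RIGHT_ENCODER) != (lastWord & RIGHT_ENCODER)) rightPulses++;
        lastWord = word;
    }
}
```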
As seen in Figure 5, up to four sensors can be connected to each mono audio channel, allowing closed loop control of up to 4 motors if 4 encoders are used. Using the number of pulses counted for each encoder it is possible to compute the displacement and speed of each wheel, as is done with other incremental encoders. This information can be used in classical odometry and localization systems to obtain the robot's position in Cartesian space [,].
To properly design a robot with the presented technique, a relation between the wheel dimensions and the maximum linear speed that can be measured is introduced here. In Equation (3), VMax is the maximum linear speed of the robot that can be measured, r is the radius of the wheel, c is the maximum digit-per-second detection capacity of the mobile device and s is the encoder disc resolution (the number of DTMF digits generated per complete wheel revolution).
V_{Max} = \frac{2\pi r c}{s}
(3)

Table 4 shows the distance measurement resolution and the maximum speed that can be measured according to Equation (3) for several encoder resolutions.
Table 4.
Encoder resolution, displacement measurement resolution and maximum speed that can be measured (considering r = 25 mm and c = 40 DTMF digits per second).
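As a worked example of Equation (3), take r = 25 mm and c = 40 digits per second (the values quoted for Table 4) and assume a hypothetical encoder disc with s = 16 digits per revolution:

V_{Max} = \frac{2\pi r c}{s} = \frac{2\pi \cdot 0.025\,\text{m} \cdot 40\,\text{s}^{-1}}{16} \approx 0.39\ \text{m/s}

and the corresponding displacement resolution is one wheel circumference divided by s, that is, 2\pi \cdot 25\,\text{mm} / 16 \approx 9.8 mm per detected digit.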
Figure 6 shows odometry experimental results for this low cost robot. The error bars are the standard deviation of the real displacement that occurred. The blue line shows the real traveled distance and the red line shows the distance measured by the mobile phone using the proposed technique. Each point in the graph is the average value of ten samples.
Figure 6.
Experimental odometry results. X axis is the real traveled distance manually measured with a tape measure. Y axis is the distance computed by the mobile device using the proposed system with data from the encoders.
According to McComb and Predko, odometry errors are unavoidable due to several factors, such as wheel slip and small measurement errors in the wheel radius, that accumulate over time. They state that a displacement of 6 to 9 meters leads to an odometry error of 15 centimeters [] or more, which is a percentage error of 1.6%–2.5%. The greatest odometry error of the proposed system was 3.7%, for a 74 cm displacement, while for 130 cm displacements the error was 1 centimeter (0.76%). These values show that the performance of the proposed system is consistent with the classical odometry errors described in the literature.
To close the control loop, the computed odometry information is sent to a classical PI (Proportional-Integral) controller whose set-point (or goal) is the desired distance to be traveled by the robot. The encoders are read at a 40 Hz rate; the position is computed and sent to the controller, which decides whether the robot should speed up, keep moving or stop. If any of the bumpers is activated in the meantime, the control loop is interrupted and the robot immediately stops.
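A minimal sketch of such a loop follows; the gains, the distance units and the normalized speed command are illustrative assumptions, not the authors' values.

```java
public class PiDistanceController {
    // Illustrative gains and the 40 Hz loop period mentioned in the text.
    static final double KP = 2.0, KI = 0.5, DT = 1.0 / 40.0;

    final double setpointCm;      // desired displacement
    double integral = 0;

    PiDistanceController(double setpointCm) { this.setpointCm = setpointCm; }

    // Called at 40 Hz with the odometry estimate; returns a speed command in [-1, 1]
    double update(double measuredCm, boolean bumperHit) {
        if (bumperHit) return 0;                       // collision: stop immediately
        double error = setpointCm - measuredCm;
        integral += error * DT;
        double u = KP * error + KI * integral;
        return Math.max(-1, Math.min(1, u / 100.0));   // crude normalization to [-1, 1]
    }
}
```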
Although most mobile devices can record and reproduce sounds, not all of them have physical connectors for both audio input and output. To solve this problem in one of the tested devices, which does not have an audio input connector, an earphone was attached near the built-in microphone of the device using a suction cup. In this particular case, an audio preamplifier must be used to generate tones with sufficient amplitude to be detected. The DTMF tones encoding sensor data are generated, amplified and sent to the earphone fixed very near the built-in microphone of the mobile device. It is worth mentioning that this scheme works reliably because the DTMF system was designed to avoid interference from natural sounds such as music and people's voices [].

5.2. Human Machine Interface

Users can control this robot from the web or using voice commands; both a web server and a voice recognition system were implemented. The web server is embedded into the application, therefore no intermediate computers or servers are needed. Any Internet-enabled device can access the web page and issue commands to move the robot forward or backward, or to turn. For debugging purposes the web server also shows variable values such as distance, encoder pulses and recognized DTMF digits from sensors.
The voice recognition system is straightforward to implement thanks to the Android API. When the user issues a voice command, the operating system recognizes it (in several languages) and passes to the robot's control application a vector of strings containing the most probable phrases said. The application just has to select the one that best fits the expected command. Example voice commands are “Walk 30 centimeters” or “Forward 1 meter”. The numbers said by the user are automatically converted to numeric values by the Android API, making it easy to implement software that makes the robot move a given distance using closed loop control.
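A minimal sketch of how the returned candidate phrases might be matched to a motion command is shown below; the keyword list and unit handling are illustrative assumptions, not the authors' grammar.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VoiceCommandParser {
    // Extracts a displacement in centimeters from phrases such as
    // "Walk 30 centimeters" or "Forward 1 meter". Keywords and the
    // first-match selection rule are illustrative assumptions.
    static final Pattern CMD =
        Pattern.compile("(?i)(walk|forward)\\s+(\\d+)\\s*(centimeters?|meters?)");

    static Integer parseCm(List<String> candidates) {
        for (String phrase : candidates) {             // candidates come from the recognizer
            Matcher m = CMD.matcher(phrase);
            if (m.find()) {
                int value = Integer.parseInt(m.group(2));
                return m.group(3).toLowerCase().startsWith("meter") ? value * 100 : value;
            }
        }
        return null;                                   // no candidate matched
    }

    public static void main(String[] args) {
        System.out.println(parseCm(List.of("Forward 1 meter")));     // 100
        System.out.println(parseCm(List.of("Walk 30 centimeters"))); // 30
    }
}
```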

5.3. Distance Measurement

An important sensor to aid the navigation of autonomous mobile robots is one that measures the distance from the robot to obstacles in front of it. This task is typically performed by ultrasound or laser sensors. Another approach is based on stereo vision, but its computational cost is high. To support distance measurement in this low cost robot, a laser module (laser pointer) is used to project a bright red dot on the object in front of the robot. The camera then captures a frame and uses the position of the projected dot on its image plane to compute the distance to the obstacle based on simple trigonometry. This method is described by Danko [] and better explained by Portugal-Zambrano and Mena-Chalco []. The algorithm assumes that the brightest pixels in the captured image are on the laser-projected dot.
Figure 7 depicts how the system works. A laser pointer parallel to the camera emits a focused red dot that is projected on an object at distance D from the robot. This red dot is reflected and projected onto the camera's image plane. The distance pfc (pixels from center) between the center of the image plane (on the optical axis) and the red dot in the image plane is a function of the distance D.
Figure 7.
Distance measurement system using a camera and a laser pointer. H is the distance between the camera optical axis and the laser pointer, D the distance between the camera and the object, theta is the angle between the camera’s optical axis and ...
Equation (4) shows how to compute the distance using the described system. The distance between the camera and the laser (H) is known beforehand, and the number of pixels from the image center to the red laser dot (pfc) is obtained from the image. The radians per pixel (rpc) and the radian offset (ro) are obtained by calibrating the system, which consists of taking several measurements of objects at known distances together with their pixel distances from the center (pfc). A linear regression algorithm then finds the best ro and rpc. Details on this calibration can be found in the work of Portugal-Zambrano and Mena-Chalco.
D = \frac{H}{\tan(pfc \cdot rpc + ro)}
(4)

As can be seen in Equation (4), the measurement range depends mainly on the baseline H, given by the distance between the laser and the camera, and on the number of pixels from the image center, pfc, which is limited by the camera resolution. This equation can be used to determine the measurement range. As the object gets farther away, its pfc tends to zero; assuming pfc to be zero simplifies Equation (4) to Equation (5), which gives the maximum distance that can be measured. In the same way, the minimum distance corresponds to a pfc of half the camera resolution (denoted R in Equation (6)), because the measurement is made from the dot to the center of the image. Equation (6) specifies the minimum measurement distance. Table 5 shows some possible range values computed using these equations.
D_{max} = \frac{H}{\tan(ro)}
(5)

D_{min} = \frac{H}{\tan\left(\frac{R}{2} \cdot rpc + ro\right)}
(6)

Table 5.
Measurement range for a VGA camera (640 × 480). All values in centimeters.

5.4. Block Diagram

Figure 8 shows a block diagram of the system. Each task is executed in a separate thread, so that reading sensors and controlling motors do not interfere with each other. A task planner module allows the system to integrate the distance measurement, voice commands and web interface. Running with all these subsystems, the system used between 10% and 45% of the processor on all devices, leaving room to execute complex algorithms embedded on the robot.
Figure 8.
Block diagram of the robot’s software.

5.5. Experimental Results

The algorithm implementation is straightforward: the system scans a region of the image for the group of pixels with the greatest values (255) in the red channel. The current implementation searches for a pattern of 5 pixels in a cross shape; the center of this cross gives the pfc value. Figure 9 shows the results for 3 different distances. The red dot found by the algorithm is marked by a green circle, and the green line shows the distance from the laser dot to the image center.
Figure 9.
Image seen by the robot’s camera of the same object at different distances. Note the distance of the laser dot to the image center (shown by the green line) when the object is at different distances.
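A minimal sketch of the pixel search described above is given below, assuming the laser dot is displaced horizontally from the image center and that a region of interest limits the rows to scan; both assumptions are illustrative, not taken from the paper.

```java
public class LaserDotFinder {
    // Searches a region of interest for the pixel whose 5-pixel cross (the pixel and its
    // four neighbours) has the largest summed red value, and returns its horizontal
    // offset from the image centre (pfc). ROI bounds are illustrative assumptions.
    static int findPfc(int[][] red, int roiTop, int roiBottom) {
        int width = red[0].length;
        int bestX = width / 2, bestScore = -1;
        for (int y = Math.max(1, roiTop); y < Math.min(red.length - 1, roiBottom); y++) {
            for (int x = 1; x < width - 1; x++) {
                int score = red[y][x] + red[y - 1][x] + red[y + 1][x]
                          + red[y][x - 1] + red[y][x + 1];
                if (score > bestScore) { bestScore = score; bestX = x; }
            }
        }
        return bestX - width / 2;                      // pixels from the image centre
    }
}
```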
The baseline used is 4.5 centimeters and, after executing a linear regression in a spreadsheet, the calibration values found are ro = 0.074 and rpc = 0.008579.
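With these calibration values, Equation (4) can be evaluated directly; a small sketch follows (the sample pfc value is hypothetical).

```java
public class LaserRange {
    // Calibration values from the text: H = 4.5 cm, ro = 0.074 rad, rpc = 0.008579 rad/pixel
    static final double H = 4.5, RO = 0.074, RPC = 0.008579;

    // Equation (4): distance in centimeters from the pixel offset pfc
    static double distanceCm(int pfc) {
        return H / Math.tan(pfc * RPC + RO);
    }

    public static void main(String[] args) {
        System.out.printf("pfc = 20 -> D = %.1f cm%n", distanceCm(20));   // about 18 cm
        System.out.printf("pfc = 0  -> Dmax = %.1f cm%n", distanceCm(0)); // about 61 cm, Equation (5)
    }
}
```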
Table 6 shows experimental results of the system. The average error is 2.55% and the maximum observed error is 8.5%, which happened at the limit of the measurement range. The range of operation goes from 15 cm to 60 cm, but it can be changed by modifying the distance H.
Table 6.
Distance measurement results using a mobile phone camera.
The advantage of such a system is that the required processing is very low: the system has to find the brightest red dot in a small, limited region of interest of the image and then compute the distance using simple trigonometric relations. The implementation computes the distance at a rate of 9 frames per second on the mobile device while also running the FFTs and the closed loop control system described earlier. This makes the approach an interesting solution for distance measurement in robotic systems.
Figures 10, 11 and 12 show photos of the robot under the control of different devices. Thanks to the portability of the Android system, the same software can be used on PC computers and on mobile devices with ARM processors. Although the proof of concept was developed using the Android operating system, the proposed architecture can be used with any system or programming language that can produce and record sounds. One should note that the main contribution of this work is the communication scheme, so these photos show a provisional robot assembly used to validate the proposed architecture for robotics.
Figure 10.
Robot under the control of a mobile phone. The audio input and output channels are connected in a single connector below the phone.
Figure 11.
Robot under the control of a tablet computer. The audio output is driven from a P2 connector attached to the earphone jack and the audio input is captured by the built-in microphone of the device. Note the suction cup holding an earphone near the microphone. ...
Figure 12.
Robot under the control of a netbook computer. Audio input and output channels are connected with independent P2 connectors. This is the most common case for computers.
 

6. Conclusions

This paper introduces a simple but universal control architecture that enables a wide variety of devices to control mechatronics and automation systems. The method can be used to implement closed loop control in mechatronic systems using the audio channels of computing devices, allowing the processing unit to be easily replaced without the need for pairing or special configuration. Several obsolete and current devices can be used to control robots, such as PDAs, phones and computers. Even an MP3 player could be used if control without feedback is sufficient: the sound produced by the player would drive the motors.
As an application example, the presented method is used to build a mobile robot with differential drive. The robot's complete cost, including frame, motors, sensors and electronics, is less than 30 US dollars (in small quantities), and the parts can be easily found in stores or on the Internet. The mentioned price does not include the mobile device.
The method can be used in several applications such as educational robotics, low cost robotics research platforms, telepresence robots, and autonomous and remotely controlled robots. In engineering courses it is also a motivation for students to learn digital signal processing theory and the other multidisciplinary fields involved in robotics.
Another interesting application of this system is to build sensor networks composed of smartphones that can gather data from their internal sensors and poll external sensors via audio tones, allowing sensor networks to be easily built and scaled using commercial off-the-shelf mobile devices instead of specific boards and development kits.



                          XXX  .  V0000000 Introduction to Data Acquisition  

Table of Contents

  1. Introduction
  2.  Transducers
  3.  Signals
  4.  Signal Conditioning
  5.  Data Acquisition Hardware
  6.  Driver and Application Software
  7. More Information

1. Introduction


Data acquisition involves gathering signals from measurement sources and digitizing the signals for storage, analysis, and presentation on a PC. Data acquisition systems come in many different PC technology forms to offer flexibility when choosing your system. You can choose from PCI, PXI, PCI Express, PXI Express, PCMCIA, USB, wireless, and Ethernet data acquisition for test, measurement, and automation applications. Consider the following five components when building a basic data acquisition system (Figure 1):
• Transducers and sensors
• Signals
• Signal conditioning
• DAQ hardware
• Driver and application software

Figure 1. Data Acquisition System


 

2.  Transducers



Data acquisition begins with the physical phenomenon to be measured. This physical phenomenon could be the temperature of a room, the intensity of a light source, the pressure inside a chamber, the force applied to an object, or many other things. An effective data acquisition system can measure all of these different phenomena.

A transducer is a device that converts a physical phenomenon into a measurable electrical signal, such as voltage or current. The ability of a data acquisition system to measure different phenomena depends on the transducers to convert the physical phenomena into signals measurable by the data acquisition hardware. Transducers are synonymous with sensors in data acquisition systems. There are specific transducers for many different applications, such as measuring temperature, pressure, or fluid flow. Table 1 shows a short list of some common phenomena and the transducers used to measure them.

Phenomenon                   Transducer
Temperature                  Thermocouple, RTD, Thermistor
Light                        Photo Sensor
Sound                        Microphone
Force and Pressure           Strain Gage, Piezoelectric Transducer
Position and Displacement    Potentiometer, LVDT, Optical Encoder
Acceleration                 Accelerometer
pH                           pH Electrode

Table 1. Phenomena and Existing Transducers

Different transducers have different requirements for converting phenomena into a measurable signal. Some transducers may require excitation in the form of voltage or current. Other transducers may require additional components and even resistive networks to produce a signal. Refer to ni.com/sensors for more information on transducers.
 

3.  Signals


The appropriate transducers convert physical phenomena into measurable signals. However, different signals need to be measured in different ways. For this reason, it is important to understand the different types of signals and their corresponding attributes. Signals can be categorized into two groups:

· Analog
· Digital


Analog Signals

An analog signal can exist at any value with respect to time. A few examples of analog signals include voltage, temperature, pressure, sound, and load. The three primary characteristics of an analog signal are level, shape, and frequency (Figure 2).

 
Figure 2. Primary Characteristics of an Analog Signal

Level
Because analog signals can take on any value, the level gives vital information about the measured analog signal. The intensity of a light source, the temperature in a room, and the pressure inside a chamber are all examples that demonstrate the importance of the level of a signal. When you measure the level of a signal, the signal generally does not change quickly with respect to time. The accuracy of the measurement, however, is very important. You should choose a data acquisition system that yields maximum accuracy to help with analog level measurements.

Shape
Some signals are named after their specific shapes - sine, square, sawtooth, and triangle. The shape of an analog signal can be as important as the level because by measuring the shape of an analog signal, you can further analyze the signal, including peak values, DC values, and slope. Signals where shape is of interest generally change rapidly with respect to time, but system accuracy is still important. The analysis of heartbeats, video signals, sounds, vibrations, and circuit responses are some applications involving shape measurements.

Frequency
All analog signals can be categorized by their frequencies. Unlike the level or shape of the signal, you cannot directly measure frequency. You must analyze the signal in software to determine the frequency information. This analysis is usually performed using the Fourier transform, typically computed with the fast Fourier transform (FFT) algorithm.

When frequency is the most important piece of information, you need to consider both accuracy and acquisition speed. Although the acquisition speed required to capture the frequency of a signal is lower than the speed required to capture its shape, you still must acquire the signal fast enough that you do not lose the pertinent information while acquiring the analog signal. The condition that stipulates this speed is known as the Nyquist sampling theorem: the sample rate must be at least twice the highest frequency component of interest. Speech analysis, telecommunication, and earthquake analysis are some examples of common applications where the frequency of the signal must be known.
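As a simple numerical illustration of the sampling requirement (the tone frequency and sample rates below are arbitrary examples, not values from the text):

```java
public class NyquistDemo {
    // Apparent (aliased) frequency of a sinusoid of frequency f sampled at fs:
    // the observed frequency folds back into the band [0, fs/2].
    static double apparentFrequency(double f, double fs) {
        double folded = f % fs;
        return Math.min(folded, fs - folded);
    }

    public static void main(String[] args) {
        // Sampling a 1 kHz tone well above twice its frequency preserves it...
        System.out.println(apparentFrequency(1000, 8000));  // 1000.0
        // ...while sampling below twice the signal frequency aliases it.
        System.out.println(apparentFrequency(1000, 1500));  // 500.0
    }
}
```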


Digital Signals

A digital signal cannot take on any value with respect to time. Instead, a digital signal has two possible levels: high and low. Digital signals generally conform to specifications that define the characteristics of the signal; they commonly conform to transistor-transistor logic (TTL). TTL specifications define a digital signal as low when the level falls between 0 and 0.8 V, and as high when the level falls between 2 and 5 V. The useful information that you can measure from a digital signal includes the state and the rate (Figure 3).
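A small sketch that applies the TTL thresholds quoted above to a measured voltage (the enum and method names are illustrative):

```java
public class TtlLevel {
    enum Logic { LOW, HIGH, UNDEFINED }

    // Classifies a measured voltage against the TTL thresholds given in the text:
    // 0 to 0.8 V reads as low, 2 to 5 V reads as high, anything else is undefined.
    static Logic classify(double volts) {
        if (volts >= 0.0 && volts <= 0.8) return Logic.LOW;
        if (volts >= 2.0 && volts <= 5.0) return Logic.HIGH;
        return Logic.UNDEFINED;
    }
}
```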


Figure 3. Primary Characteristics of a Digital Signal

State
Digital signals cannot take on any value with respect to time. The state of a digital signal is essentially the level of the signal - on or off, high or low. Monitoring the state of a switch - open or closed - is a common application showing the importance of knowing the state of a digital signal.

Rate
The rate of a digital signal defines how the digital signal changes state with respect to time. An example of measuring the rate of a digital signal includes determining how fast a motor shaft spins. Unlike frequency, the rate of a digital signal measures how often a portion of a signal occurs. A software algorithm is not required to determine the rate of a signal.
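A minimal sketch of this idea, counting rising edges in a buffer of sampled states and converting the count to a rate; the software-sampling approach is an illustrative assumption, since counter/timer hardware normally performs this directly.

```java
public class EdgeRate {
    // Estimates the rate of a digital signal by counting rising edges in a buffer of
    // sampled states and dividing by the observation time.
    static double rateHz(boolean[] samples, double sampleRateHz) {
        int edges = 0;
        for (int i = 1; i < samples.length; i++)
            if (samples[i] && !samples[i - 1]) edges++;   // low-to-high transition
        return edges * sampleRateHz / samples.length;
    }
}
```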
 

4.  Signal Conditioning



Sometimes transducers generate signals too difficult or too dangerous to measure directly with a data acquisition device. For instance, when dealing with high voltages, noisy environments, extreme high and low signals, or simultaneous signal measurement, signal conditioning is essential for an effective data acquisition system. It maximizes the accuracy of a system, allows sensors to operate properly, and guarantees safety.
It is important to select the right hardware for signal conditioning. You can choose from both modular and integrated hardware options (Figure 4) and use signal conditioning accessories in a variety of applications including the following:
· Amplification
· Attenuation
· Isolation
· Bridge completion
· Simultaneous sampling
· Sensor excitation
· Multiplexing
Other important criteria to consider with signal conditioning include packaging (modular versus integrated), performance, I/O count, advanced features, and cost. Use online tools at ni.com/signalconditioning to configure the best signal conditioning solution for your application.
Figure 4. Signal Conditioning Hardware Options
 
 

5.  Data Acquisition Hardware


Data acquisition hardware acts as the interface between the computer and the outside world. It primarily functions as a device that digitizes incoming analog signals so that the computer can interpret them. Other data acquisition functionality includes the following:

· Analog input/output
· Digital input/output
· Counter/timers
· Multifunction - a combination of analog, digital, and counter operations on a single device
National Instruments offers several hardware platforms for data acquisition. The most readily available platform is the desktop computer. NI provides PCI DAQ boards that plug into any desktop computer. In addition, NI makes DAQ modules for PXI/CompactPCI, a more rugged modular computer platform specifically for measurement and automation applications. For distributed measurements, the NI Compact FieldPoint platform delivers modular I/O, embedded operation, and Ethernet communication. For portable or handheld measurements, National Instruments DAQ devices for USB and PCMCIA work with laptops or Windows Mobile PDAs (Figure 5). In addition, National Instruments has launched DAQ devices for PCI Express, the next-generation PC I/O bus, and for PXI Express, the high-performance PXI bus.

Figure 5. National Instruments DAQ Hardware Options
The newest DAQ devices from National Instruments offer connectivity over wireless and cabled Ethernet. NI Wi-Fi DAQ devices combine IEEE 802.11g wireless or Ethernet communication, direct sensor connectivity, and the flexibility of NI LabVIEW software for remote monitoring of electrical, physical, mechanical, and acoustical signals.
 
Figure 6. Wi-Fi Data Acquisition
 

6.  Driver and Application Software


Driver Software

Software transforms the PC and the data acquisition hardware into a complete data acquisition, analysis, and presentation tool. Without software to control or drive the hardware, the data acquisition device does not work properly. Driver software is the layer of software for easily communicating with the hardware. It forms the middle layer between the application software and the hardware. Driver software also prevents a programmer from having to do register-level programming or complicated commands to access the hardware functions. NI offers two different software options:

· NI-DAQmx driver and additional measurement services software
· NI-DAQmx Base driver software

With the introduction of NI-DAQmx, National Instruments revolutionized data acquisition application development by greatly increasing the speed at which you can move from building a program to deploying a high-performance measurement application. The DAQ Assistant, included with NI-DAQmx, is a graphical, interactive guide for configuring, testing, and acquiring measurement data. With a single click, you can even generate code based on your configuration, making it easier and faster to develop complex operations. Because the DAQ Assistant is completely menu-driven, you make fewer programming errors and drastically decrease the time from setting up your data acquisition system to taking your first measurement.
NI-DAQmx Base offers a subset of NI-DAQmx functionality on Windows and Linux, Mac OS X, Windows Mobile, and Windows CE.

Application Software
The application layer can be either a development environment in which you build a custom application that meets specific criteria, or it can be a configuration-based program with preset functionality. Application software adds analysis and presentation capabilities to driver software. To choose the right application software, evaluate the complexity of the application, the availability of configuration-based software that fits the application, and the amount of time available to develop the application. If the application is complex or there is no existing program, use a development environment.
NI offers three development environment software products for creating complete instrumentation, acquisition, and control applications:
· LabVIEW with graphical programming methodology
· LabWindows™/CVI for traditional C programmers
· Measurement Studio for Visual Basic, C++, and .NET
With LabVIEW SignalExpress, NI has introduced a configuration-based software environment where programming is no longer a requirement. Using LabVIEW SignalExpress, you can make interactive measurements with NI Express technology.

Introduction to Physical Computing
Physical computing refers to the design and construction of physical systems that use a mix of software and hardware to sense and respond to the surrounding world. Such systems blend digital and physical processes into toys and gadgets, kinetic sculpture, functional sensing and assessment tools, mobile instruments, interactive wearables, and more. This is a project-based course that deals with all aspects of conceiving, designing and developing projects with physical computing: the application, the artifact, the computer-aided design environment, and the physical prototyping facilities. The course is organized around a series of practical hands-on exercises which introduce the fundamentals of circuits, embedded programming, sensor signal processing, simple mechanisms, actuation, and time-based behavior. The key objective is gaining an intuitive understanding of how information and energy move between the physical, electronic, and computational domains to create a desired behavior. The exercises provide building blocks for collaborative projects which utilize the essential skills and challenge students to not only consider how to make things, but also for whom we design, and why the making is worthwhile.


The Adaptive House is the focus of an advanced design studio based around the collaborative development of reality computing applications within a residential prototype. Reality computing encompasses a constellation of technologies focused around capturing reality (laser scanning, photogrammetry), working with spatial data (CAD, physical modeling, simulation), and using data to interact with and influence the physical world (augmented/virtual reality, projector systems, 3d printing, robotics). This studio will use reality computing to understand existing homes, define modes of augmentation, and influence the design of houses yet to be built through full scale prototyping. The objective of the course will be the production of a house that moves beyond the notion of being "smart," but is actively adapted towards its inhabitants' needs and capabilities.


Information and Communication Technologies and the Effects of Globalization: Twenty-First Century "Digital Slavery" for Developing Countries 


The main goal of this paper is to examine the ICT (Information and Communication Technology) revolution and the concept of globalization as they affect developing countries. Globalization as one of the reasons for a possible widening of the gap between the poor and the rich nations was examined, and the emerging concept of "digital slavery" was carefully evaluated. The wide gap in availability and use of ICTs across the world and the influences ICTs exert on globalization at the expense of developing countries were carefully examined, and suggestions and necessary policies were offered for developing countries to leapfrog the industrialization stage and transform their economies into high value-added information economies that can compete with the advanced countries on the global market. This is why it is important for Africa, in general, and Nigeria, in particular, to be aware of the implications, prepare to avoid the most telling consequences and prepare to meet its challenges.

Introduction

The information revolution and the extraordinary increase in the spread of knowledge have given birth to a new era--one of knowledge and information which directly affects the economic, social, cultural and political activities of all regions of the world, including Africa. Governments worldwide have recognized the role that Information and Communication Technologies could play in socio-economic development. A number of countries, especially those in the developed world and some in the developing world, are putting in place policies and plans designed to transform their economies into an information and knowledge economy. Countries like the USA, Canada, and a number of European countries, as well as Asian countries like India, Singapore, Malaysia, South Korea and Japan, South American countries like Brazil, Chile and Mexico, and Australia and Mauritius, among others, either already have comprehensive ICT policies and plans in place or are at an advanced stage of implementing these programmes across their economies and societies. Some of these countries see ICTs and their deployment for socio-economic development as one area where they can quickly establish global dominance and reap tremendous payoff in terms of wealth creation and the generation of high quality employment. On the other hand, some other countries regard the development and utilization of ICTs within their economy and society as a key component of their national vision to improve the quality of life, knowledge and international competitiveness.
As Faye {2000} has pointed out, ICTs are offering even less developed countries a window of opportunity to leapfrog the industrialization stage and transform their economies into high value-added information economies that can compete with the advanced economies on the global market. Technological innovation has contributed to globalization by supplying infrastructure for trans-world connections. According to Ajayi {2000}, the revolution taking place in information and communication technologies has been the central and driving force for the globalization process. Both developed and less-developed countries cannot afford to miss out on the opportunities these technologies are creating.
In practice, globalization benefits those with technology, resources, contacts, information and access to markets. It has a negative impact on the poor. The prediction is that the gap between the new winners and losers within a world economic order dominated by information and knowledge economies will be much larger than the development gap that now exists between the advanced nations and the less developed nations. African countries are at risk of being further marginalized if they fail to embrace these technologies to transform their economies. As pointed out by the Secretary-General of the United Nations, globalization can benefit humankind as a whole.
At the moment millions of people--perhaps even the majority of the human race--are being denied those benefits. They are poor not because they have too much globalization, but rather because they have too little--or none at all. Many people are actually suffering in different ways--I would say not from globalization itself, but from the failure to manage its adverse effects. Some have lost their jobs, others see their communities disintegrating, and some feel that their very identity is at stake {UN, 2000}. The most significant aspect of ICTs and globalization that should concern developing countries like Nigeria is the fact that it has led to unprecedented inequalities in the distribution of benefits between developed countries and the less developed. Present day globalization is not new, because history shows that a similar trend was witnessed in the 19th century and the earlier part of the 20th century {Adeboye, 2000}. What is different is the intensity and the magnitude of the inequalities that it generates.
In all these developments, there is the underlying assumption that globalization is good for all and that its benefits are shared out (even if not equally) all over the world. In reality, the more developed countries benefit while the least developed countries tend to remain impoverished and do not share in the benefits. The combined effect of the global fluidity of finance capital, the growth of foreign direct investment, and the emergence of global corporations has greatly undermined the economic and political sovereignty of states--especially the poor ones. It must be emphasized that the so-called globalized world is riddled with imperfections.
First, free trade is far from being free. In developing countries, trade distorting export subsidies and domestic support in agriculture make nonsense of the pretensions to free trade. Likewise, developed countries restrict the imports of labour-intensive products like textiles that would provide a major boost to exports of developing countries. Free movement of persons across national borders is severely restricted. Highly skilled personnel and those who have money to invest can cross borders fairly easily. The story is different for lower skilled people and particularly unskilled labour. These people can hardly move at all. As Onitiri {2001} has put it, you will find many of them in asylum camps in the developed countries, knocking at the gates for a chance to do the most menial job. All of the above point to the indices of digital slavery for developing countries. In effect, what we now call globalization is really a globalization of imperfections: restricted trade, restricted movement and absence of a world authority that can compensate effectively for marginalized areas and pockets of poverty in a globalised world.
The diffusion of ICT into Africa is proceeding at a snail's pace, such that the gap between the information-rich developed countries and Africa continues to increase every day. Africa has 13% of the world's population, but only 2% of the world's telephone lines and 1% of Internet connectivity measured in terms of the number of Internet hosts and Internet users. Consequently, most African countries, including Nigeria, have not been able to reap the abundant benefits of the global information society and the information economy in areas such as education, health, commerce, agriculture and rural development.
It is the objective of this paper to evaluate the effects of ICT in the globalization process and examine the emerging concept of 'Digital Slavery' as it is affecting developing countries. In addition, this paper will try to highlight and discuss the factors responsible for this concept of digital slavery. This paper attempts to make developing countries aware of and proactively anticipate the trends, consequences and implications as well as devise appropriate response. This paper will, finally, try to assess the benefits of globalization while minimizing the destabilizations, dislocation, disparities, distortion, disruptions and even the concept of digital slavery associated with the current global trends.

ICTs and Globalization

Information Communication Technology is basically an electronic based system of information transmission, reception, processing and retrieval, which has drastically changed the way we think, the way we live and the environment in which we live. It must be realized that globalization is not limited to the financial markets, but encompasses the whole range of social, political, economic and cultural phenomena. Information and communication technology revolution is the central and driving force for globalization and the dynamic change in all aspects of human existence is the key by-product of the present globalization period of ICT revolution. The world telecommunication system, the convergence of computer technology and telecommunications technology into the Information Technology, with all its components and activities, is distinctive in its extension and complexity- and is also undergoing a rapid and fundamental change. The results of this are that National boundaries between countries and continents become indistinct and the capacity to transfer and process information increases at an exceptional rate. The global information communication has been called "the world's largest machine," and it is very complex and difficult to visualize and understand in its different hardware and software subsystems. As Kofi Annan {1999} has put it, "the Internet holds the greatest promise humanity has known for long- distance learning and universal access to quality education... It offers the best chance yet for developing countries to take their rightful place in the global economy... And so our mission must be to ensure access as widely as possible. If we do not, the gulf between the haves and the have-nots will be the gulf between the technology-rich and the technology-poor".
ICTs are increasingly playing an important role in organizations and in society's ability to produce, access, adapt and apply information. They are being heralded as the tools for the post-industrial age, and the foundations for a knowledge economy, due to their ability to facilitate the transfer and acquisition of knowledge {Morale-Gomez and Melesse, 1998}. These views seem to be shared globally, irrespective of geographical location and difference in income level and wealth of the nation. ICT may not be the only cause of changes we are witnessing in today's business environment, but the rapid developments in ICT have given impetus to the current wave of globalization.
While trans-national corporations are reaping huge profits from the flexibility and opportunities offered by globalization, the level of poverty in the world is growing. At least, 2.8 billion people in the world, that is 45% of the world population, are living on less than $2 a day {Stigliz 2002}. Africa in particular is hit by the growth of poverty and economic crisis. The use and production of ICT plays an important role in the ability of nations to participate in global economic activities. Apart from facilitating the acquisition and absorption of knowledge, ICT could offer developing countries unprecedented opportunities to change educational systems, improve policy formulation and execution, and widen the range of opportunities for business and for the poor. It could also support the process of learning, knowledge networking, knowledge codification, teleworking, and science systems. ICT could be used to access global knowledge and communication with other people. However, over major parts of developing countries ICT is available only on a very limited scale, and this raises doubts about developing countries' ability to participate in the current ICT-induced global knowledge economy. There has also been concern that this unequal distribution of ICT may in fact further contribute to the marginalization of poor countries in relation to developed countries, and to disruptions of the social fabric. Hence, one can conclude that the concept of 'digital slavery' is inevitable for developing countries as far as ICT is concerned. The wide gap in the availability and use of ICT across the world, and the influences ICT exerts on globalization, raise questions about whether globalization entails homogeneity for organizations and societies in developing countries. It also raises questions about the feasibility and desirability of efforts to implement the development of ICT through the transfer of best practices from western industrialized countries to developing countries, and whether organizations can utilize ICT in accordance with the socio-cultural requirements of the contexts {Walshan, 2001}. Information and Communication Technology development is a global revolution. It has become a subject of great significance and concern to all mankind. Relevant studies have shown that the greatest impact of the ICT revolution will revolve around the 'Digital Divide' equation. The most important aspect of the ICT challenge is the need to plan, design and implement a National Information Infrastructure (NII) as the engine of economic growth and development.

Digital Slavery--Reality or Myth?

Slavery is a social institution which is defined by law and custom as the most absolute involuntary form of human servitude. It is a condition in which one human being was owned by another. A slave was considered by law as property, or chattel, and deprived of most of the rights ordinarily held by free persons. But it must be realized that there is no consensus on what a slave was or on how the institution of slavery should be defined. But it must be known that the slave usually had few rights and always fewer than his owner. The product of a slave's labour could be claimed by someone else, who also frequently had the right to control his physical production. Another characteristic of slavery is the fact that the slave was deprived of personal liberty and the right to move about geographically as he desired. There were likely to be limits on his capacity to make choices with regards to his occupation. At this juncture, one can rightly ask how the above characteristics of slavery fit in to this concept of 'digital slavery', which is the theme of this paper. Despite the undoubted benefits offered by ICTs, significant barriers to their effective use exist in both developed and developing countries. These barriers must be addressed to allow realization of ICTs' full potential. Some barriers may be endemic (e.g. the generation gap, learning processes and gaining experience in ICTs). The developing countries are faced with the problems of poor telecoms infrastructure, poor computer and general literacy, lack of awareness of the Internet and regulatory inadequacy that also hinder other applications of the Internet there. Technological gaps and uneven diffusion in technology are not new. "Older" innovations such as telephony and electricity are still far from evenly diffused - but what may be unprecedented is the potential size of the opportunity costs and benefits forgone by failure to participate in the new 'digital society.' Growth in the use of ICTs is highly uneven. There are significant disparities in access to and use of ICTs across countries. Developing countries risk being left further behind in terms of income, equality, development, voice and presence on an increasingly digitalized world stage. The image of globalization as a promise or threat is, in fact, one of the most powerful and persuasive images of our times {Veseth, 1998}. Yet, despite the vast literature on this subject and the ongoing discussion, globalization remains an ill-defined concept. Some view it as the international system that has succeeded the end of the Cold War, while others prefer to continue using the term "internationalization" to describe the current changes in the international economy. Though there is some agreement among scholars and experts that globalization is producing greater interconnections and interdependence, there seems to be little consensus on the degree of integration it engenders and on its pervasiveness. Different views have emerged on this issue.
As way of simplification, four different positions can be accounted for: "The first identifies globalization with an increasing homogenization within the global system, which would ultimately lead to assimilation. The second--the 'strong globalization view'--contends that homogeneity remains highly unlikely within the global system, but that a range of qualitative and quantitative changes have combined to introduce a new condition, or set of processes, into world affairs that warrant the novel term 'globalization'. The third position--the 'weak' globalization perspective--maintains that many of the undoubtedly important developments of recent decades signal a significant increase of internationalization within the international political economy that has complex but variable consequences for politics, economics and society, but that has not ushered in a distinctively new era in human affairs. The final-rejectionist--position defends the view that nothing of any great or irreversible significance has taken place" {Jones, 2000}. Most observers have dismissed the most radical views, i.e. that globalization is leading to assimilation or that it is not upon us. The crucial debate is thus between the "strong" and "weak" globalization positions.
In the midst of the worldwide economic boom, reports documenting modern-day slavery come from every corner of the globe. From Bangladesh to Brazil, from India to The Sudan, and even in the U.S., there are more people enslaved today than ever before in human history {Britannica, 2003}. As indicated in the earlier part of this paper, globalization always produces winners and losers. In all cases, those who win are those who trade in goods and services characterized by increasing returns. The pace and structure of globalization is usually dictated by the winners. In the late 19th century and pre-first World War years, it was driven by colonialism and gunboat diplomacy. The current one is driven by more subtle ideology propagated by the international financial institutions and the world trade organization through the influence of ICTs. It must be emphasized that trade liberation should lead to greater benefits for all if the free movement of goods and services is extended to the physical movement of people. In contrast, what happens is that it is driven by multinational corporations that locate different stages in the production/value chain in different parts of the world. It must also be realized that it is these multinational corporations that are the most important vehicles for transferring technology around the globe. Location is determined by cost advantage. The result is minimal inter-firm and inter-industry trade and integration. Labour migration, which helps to equalize factor costs in previous episode of liberation, is restricted to the highly skilled-computer software and hardware engineers and programmers. As Mule {2000}, observes, "In theory globalization can have a positive impact on agricultural growth. In practice globalization benefits those with technology, resources, contacts, information and access to markets. It has a negative impact on the poor". As Adeboye {2002} has revealed, much of the financial flow-over, 60 percent is speculative rather than developmental. What is really stated is the fact that globalization has always led to the de-industrialization of losers at the expense of winners. For example, China and India were just as industrialized as parts of Europe at the start of the first globalization. The manufacturing sectors of these economies vanished with Britain's market penetration following colonization. United States and European market penetration seems to have done the same thing to the economies of the south in the current round of globalization. Developing countries' export concentration is also very high. They not only trade in low value-added goods and services, but also they depend on one or a few export commodities for their export earnings. This aids greater marginalization. From the above, one can rightly say that the concept of 'digital slavery' in the 21st century is becoming a reality for the African countries. The main effect of the transformation engendered by globalization is that certain parts of the world, the developing world in general and Africa in particular, are being increasingly marginalized and subjected to the hegemonic control of the main actor on the world scene. It must be pointed out that the West has driven the globalization agenda in a way to gain disproportionate share of the benefits at the expense of the developing world.
In our contemporary world, many countries, mostly located in Europe, North America and parts of Asia, are highly industrialized and have an edge in modern science and technology. They also show similar patterns in levels of wealth, with stable governance structures. On the other hand, many countries located in Africa and most parts of Asia have not shown much improvement on most fronts of development, especially when considered on a global scale. They are characterized by war and famine, and corruption seriously undermines the development and functioning of public infrastructure. All these factors reinforce the concept of 'digital slavery,' which seriously threatens African countries. The most significant aspect of globalization that should concern us in Africa is the fact that it has led to unprecedented inequalities in the distribution of benefits between the developed countries and the less developed. Present-day globalization, as earlier noted, is not new, because history shows that a significant trend was witnessed in the 19th century and the earlier part of the 20th century. What is different is the intensity and the magnitude of the inequalities it generates. In all these developments, there is the underlying assumption that globalization is good for all and that its benefits are shared out, even if not equally, all over the world. On this assumption, the concept of 'digital slavery' advanced in this paper could be dismissed as a myth, since the originators and inventors of ICTs, the developed countries, supposedly allow and encourage the benefits of globalization to be shared throughout the world. It is also argued that ICTs, as a great social leveler, can erase cultural barriers, overwhelm economic inequalities, and even compensate for intellectual disparities; high technology can put unequal human beings on an equal footing, and that makes it the most potent democratizing tool ever devised. But it is hardly realized that globalization benefits different countries differently, with the more developed countries taking the lion's share of the benefits while the least developed tend to be impoverished and by-passed. It may be said that two different worlds co-exist. One is the world of the rich nations, whose populations have ample access to education, health services, clean water, unemployment benefits and social security. The other is a world characterized by abject poverty, lack of education, no access to health services and a lack of basic infrastructure to deliver social services. The combined effect of the global fluidity of finance capital, the growth of foreign direct investment (FDI) and the emergence of global corporations has greatly undermined the economic and political sovereignty of states, especially the poor ones. It is necessary to highlight certain pertinent issues that must be addressed in any discussion of globalization. The UNDP in 1999 reproduced figures showing that the gap between the richest and the poorest countries in per capita income terms was only 3:1 at the dawn of the Industrial Revolution in 1820, rising to 11:1 by the first episode of globalization in 1913. It grew to 35:1 in 1950, rising to 44:1 by 1973. After the commencement of the present round of globalization, this figure has acquired a staggering magnitude of 71:1.
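To make the widening concrete, the short Python sketch below simply tabulates the ratios cited above and the factor by which the gap multiplied between benchmark dates. It is illustrative arithmetic on the cited UNDP figures only; the year attached to the final 71:1 ratio is an assumption for display, since the text does not state one.

# Illustrative sketch: tabulate the richest-to-poorest per capita income ratios
# cited above (UNDP, 1999) and the implied widening between benchmark dates.
# The derived multipliers are simple arithmetic on the cited figures, not UNDP data.
ratios = [
    (1820, 3),   # dawn of the Industrial Revolution
    (1913, 11),  # first episode of globalization
    (1950, 35),
    (1973, 44),
    (1997, 71),  # present round of globalization (year assumed for illustration)
]

print(f"{'Year':>6} {'Ratio':>8} {'Widening vs. previous':>24}")
prev = None
for year, ratio in ratios:
    widening = f"x{ratio / prev:.1f}" if prev else "-"
    print(f"{year:>6} {ratio:>6}:1 {widening:>24}")
    prev = ratio

overall = ratios[-1][1] / ratios[0][1]
print(f"Overall widening, 1820 to {ratios[-1][0]}: roughly x{overall:.0f}")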
Accompanying this widening gap is the grave human cost in terms of malnutrition, morbidity and mortality {Murshed, 2000}. It is estimated that those living in abject poverty number over 700 million, the majority of whom are in Sub-Saharan Africa and East Asia. Since the beginning of the 1980s, most African countries have been facing severe economic crisis, with most macro-economic indicators pointing downwards. The continent is not only the least industrialized part of the world, but is also undergoing de-industrialization {Mkandawire, 1991}. Over the past decade, the UNDP human development index country rankings have shown that the 15-20 countries at the very bottom of the list are all in Africa. Moreover, Africa currently has the highest level of debt as a proportion of GDP, and it is the only region where food supply is declining. This is the most conclusive evidence of 'digital slavery' and of the marginalization of some groups and nations from the process of globalization. This is why it is important for Africa, and Nigeria in particular, to be aware of the implications and to be prepared to meet its challenges.
Many researchers have noted that globalization, far from delivering on these great promises, has been responsible for serious destabilizing forces associated with increasing poverty, negative rates of economic development, massive unemployment, instability of exchange rates and double-digit inflation. At the international level, the international financial system has been unable to cope with the risks and challenges of globalization. On this basis, one can say that developing countries, especially African countries, are gradually drifting into 'digital slavery.' The diffusion of ICTs into Africa has been at a snail's pace, so much so that the gap between the information-rich developed countries and African countries continues to widen every day. As pointed out by Stiglitz {2002}, the former Vice President and Chief Economist of the World Bank, the Western countries chose to keep quotas on a wide range of goods, from textiles to sugar, while forcing the developing countries to open their markets. The West has been subsidizing its agriculture, thus weakening the developing countries' capacity to compete, while at the same time forcing them to remove their own subsidies, thereby increasing their vulnerability to Western imports. He also recalls how the capital markets of developing countries were opened and subsequently subjected to speculative attacks, leading to a net outflow of resources and the weakening of currencies. From this analysis, one is tempted to conclude that 'digital slavery,' as far as African countries are concerned, is a reality.

The Way Forward

At the beginning of the 1990s, the leading economies of the world began to realize the importance of information and knowledge as valuable resources, both nationally and within organizations, and national information infrastructures were formulated to provide a foundation for an information economy. In order to assist African countries to face the challenges of the information society, and thus avoid their marginalization and the effects of 'digital slavery,' the United Nations Economic Commission for Africa (ECA) elaborated the African Information Society Initiative (AISI), as requested by member states. This initiative is an action framework to build Africa's information and communication infrastructure, and was adopted during the ECA's 22nd meeting of African Ministers in charge of planning and development (held in May 1996) under resolution 812 {XXXI}, entitled "Implementation of the African Information Society Initiative" {ECA, 1996}. These efforts led to the development of the National Information and Communication Infrastructure (NICI) process, whose policies, plans and strategies can be used to enhance the role of information and communication technologies in facilitating socio-economic development. For African countries to avoid 'digital slavery,' there is a need for a master plan and a strategy for implementing NICI. In addition, there is a need to establish a commission on ICTs to regulate the sector. This is also the time for government to encourage Nigerians in the diaspora to participate actively in ICT development. The various governments of African countries should declare access to ICT services a fundamental human right and should establish a timetable and guarantee an enabling environment for attracting the right level of investment.
The digital revolution offers Nigeria and other African countries the unique opportunity of participating actively in the world's latest developmental revolution. The information age has created new wealth and sustains one of the longest unbroken periods of growth in some economies, notably in North America and Europe. The biggest beneficiaries have always been countries that were quick to identify the strategic relevance of Information Technology (IT) to the rapid transformation of national economic development; IT has contributed significantly to establishing the economies of Asia in the league of Newly Industrialized Economies. African countries, and Nigeria especially, thus have a choice: either to convert their large populations and low-wage economies into a strong IT powerhouse in global terms {as net producers of IT products and services}, or once again to dash the hopes of the black race by sitting on their hands and ending up merely as a large market for the consumption of IT products and services that feeds the already booming economies of wiser countries. The latter would amount to self-inflicted 'digital slavery' that would condemn Nigeria and other African countries to another painful century of unshackling themselves from the effects of a slavery whose devastation would be greater than the earlier one.

African countries should draw up national IT policies that define the road map of what Africa wants out of this revolution and how it intends to achieve its objectives. Some informed sources are likely to point to the handicap imposed by the inefficiency of vital utilities and public infrastructure such as power supply, telecommunications and transportation, but one can equally counter such views with the successes that countries like India and Pakistan, with similar handicaps, have achieved. With this in mind, there is great potential in Nigeria and other African countries for the local development of IT brainpower for subsequent 'export' to bridge the IT skills gap in North America and Europe; there is also a need for the local manufacture of hardware components as well as the local development of software. What is now needed is to take the technology a step further by encouraging the local production of some of the components used for systems building: computers, motherboards, modems, monitors, casings/power supplies, keyboards/mice and add-in cards {Mirilla, 2000}. That no IT company has yet been able to climb to this level has more to do with the investment climate and the lack of policy guidelines from government than with the required technical skill. Government/private sector cooperation is crucial here. It is also important to note that no genuine foreign entrepreneur will come in to develop this sector for us; such an initiative must come from local operators, since foreign investors come in only when policies and infrastructure have been properly defined and established. The trend all over the world is moving from national monopoly in the ownership and management of the sector to a regime of deregulation and privatization that opens up the sector to private investors in order to engender healthy competition, resulting in greater efficiency, improved service quality, lower prices and general consumer satisfaction. Some African countries, like Nigeria, have quite rightly and widely adopted this trend. ICT ventures are highly capital intensive, but also highly profitable.
The government has substantial financial responsibilities in other sectors of national development, such as education, health-care delivery, public security and defense, and public administration. It therefore makes sense for government largely to divest itself of ICT infrastructure provision, while carefully guiding the sector's growth and development through policy and making effective use of ICT in the performance of its own legitimate functions.
Developing countries must look ahead and participate actively in building technological capabilities suited to their needs. Technology itself has a role to play here: just as new technologies create technological divides, new innovations offer ways of bridging them. Connectivity can build on existing infrastructure or bypass traditional means with technologies such as wireless, and the availability of free software is transforming the information technology industry.

Conclusion and Recommendations

Without any shadow of doubt, the entire world is going online. Although globalization is being propelled by rapid technological innovation, ICT is not the only technological force driving it. There are breakthroughs in biotechnology and new materials, as well as developments in ICT, of which firms and nations must be aware; they must proactively anticipate the trends, consequences and implications and devise appropriate responses. Many of us are aware that we live in a rapidly changing world, but most of us are still not conscious of the speed of change or of the interconnections between the many technological and other changes that are affecting our daily lives and that will have a profound effect on our future. Without this consciousness, it is going to be difficult to understand our present predicament fully and to proffer effective solutions that will ensure we have at least a standing place in the future world. The emergence of serious economic crises in most African countries in the 1980s, and the Western response of using international financial institutions, particularly the International Monetary Fund and the World Bank, to impose structural adjustment programmes, has led to tighter imperialist control of the continent. With globalization, as asserted by Ibrahim {2002}, the West is no longer ashamed of proclaiming the necessity of imperialism.

The conclusion is that certain steps have to be taken in order to access the benefits of globalization while minimizing the destabilization, dislocation, disparities, disruption, distortions and even the 'digital slavery' associated with current global trends. In other words, nations and enterprises are not powerless in determining their response to its risks and challenges. To manage the process of globalization, African countries and enterprises must develop mechanisms and institutional arrangements for creating awareness and understanding of the nature, pace, consequences and implications of the changes resulting from globalization. Special focused teams involving representatives of government, academia and the private sector must be formed to monitor, analyze and disseminate information on the trends, structure, consequences and implications of globalization and to recommend policy actions to all concerned. It is also necessary to use all sources of information, especially the Internet, to educate the youth and children to ensure that they do not miss the train of this global trend, and proactive measures should be taken to engage the media in order to disseminate information and views on globalization effectively. Central to the preparedness of any nation for globalization is a sharp focus on its youth and children, who own the future and who have been responsible for most of the phenomenal technological advances of the second half of the 20th century. A focus on children and youth is particularly called for in the case of Africa because of the high proportion of children in the population. A nation whose children miss the train of online connectivity, and the enormous explosion of learning that it holds, is doomed. An attack on poverty must start with an assault on ignorance, and a relevant nation is only guaranteed if its children are prepared for lifetime learning and given the means to be part of the global future, which is now.
The agenda for global preparedness would include a heavy emphasis on the development of telecommunications and Internet infrastructure. Since technology is a major driver of globalization, the centerpiece of preparedness must be a focus on investments that expand the technological capability of Nigeria: institutional development, Research and Development (R&D) spending, venture capital for innovative initiatives, and forward-looking educational curricula that prepare graduates for the challenges of globalization. Preparedness for globalization must also include the transformation of the public sector to meet the global challenge of managing a private sector led economy. If the private sector is moving online and relating to the Internet, government methods, service delivery, record keeping and information dissemination must engage with these new developments. Creative networking of the various initiatives within the country can help to catalyze the preparedness of the private sector, civil society and government for globalization. Also, to avoid 'digital slavery,' computer literacy and Internet connectivity must be addressed by targeted policies that deliver quick and sustainable results. The widely held view that countries cannot influence the force or pace of globalization, and that they should rather learn to live with or make the best of its impact and consequences, is an error; countries, through prudent and cooperative management, can proactively shape its pace and consequences. Of particular concern is the fact that the flow of foreign direct investment (FDI) that has escalated in recent decades has eluded the poorest nations of the world, particularly African countries: while globalization has multiplied the flow of FDI, less than one percent of the total has gone to Sub-Saharan Africa. Other issues of concern include the growing number of non-tariff barriers that developed countries impose on goods from developing countries while forcing them to open their own markets to the goods of the developed countries, and the use of Trade-Related Aspects of Intellectual Property Rights (TRIPS) and Trade-Related Investment Measures (TRIMs) to corner developing-country markets for the developed countries while ensuring little or no transfer of technology. The issue of subsidies and their treatment under World Trade Organization rules of international trade is also very pertinent: the developed countries use subsidies freely but penalize others who use them. It must be realized that the developed countries are able to achieve these feats because of the advantages of the ICTs at their disposal. All the above measures by the developed countries can therefore be termed 'digital slavery' for the developing countries, because of the latter's inability to make proper use of ICT facilities as the developed countries do. African countries must rise to these challenges of globalization and do everything possible to put an end to this 21st-century slavery. The developing countries should seek adequate representation at international meetings where these issues are discussed, and they should cooperate to coordinate research and apply available expertise in carrying out the factual data gathering and analysis that reveal the true impact of these issues on developing countries.
In addition, certain prerequisites, such as a reliable power supply to operate computers, a well-functioning telephone network to transmit data, foreign currency to import the technology, and computer-literate personnel, are necessary for the successful use of IT. It is disheartening to note that such infrastructural elements remain inadequate in many Sub-Saharan African countries.



  == MA THEOS INTERNATIONAL FINANCE ON ELECTRONIC PROCESSING MATIC ==
