Labour force
All the members of a particular organization or country who are able to work, viewed collectively.
‘a firm with a labour force of one hundred people’
Workforce
The workforce or labour force (labor force in American English; see spelling differences) is the labour pool in employment. It is generally used to describe those working for a single company or industry, but can also apply to a geographic region like a city, state, or country. Within a company, its value can be labelled as its "Workforce in Place". The workforce of a country includes both the employed and the unemployed. The labour force participation rate, LFPR (or economic activity rate, EAR), is the ratio between the labour force and the overall size of their cohort (national population of the same age range). The term generally excludes the employers or management, and can imply those involved in manual labour. It may also mean all those who are available for work.
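The LFPR is a simple ratio; the short sketch below computes it in Python with made-up figures (the population and employment numbers are purely illustrative, not real statistics).

```python
# Minimal sketch of the labour force participation rate (LFPR).
# All figures are made-up illustrative values, not real statistics.

employed = 155_000_000                 # people in work
unemployed = 6_000_000                 # people out of work but actively seeking it
cohort_population = 260_000_000        # national population of the same age range

labour_force = employed + unemployed   # the labour force counts both groups
lfpr = labour_force / cohort_population

print(f"Labour force: {labour_force:,}")   # Labour force: 161,000,000
print(f"LFPR: {lfpr:.1%}")                 # LFPR: 61.9%
```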
Formal and informal
Formal labour is any sort of employment that is structured and paid in a formal way.[1] Unlike the informal sector of the economy, formal labour within a country contributes to that country's gross national product.[2] Informal labour is labour that falls short of being a formal arrangement in law or in practice.[3] It can be paid or unpaid and it is always unstructured and unregulated.[4] Formal employment is more reliable than informal employment. Generally, the former yields higher income and greater benefits and securities for both men and women.[5]

Informal labour in the world
The contribution of informal labourers is immense. Informal labour is expanding globally, most significantly in developing countries.[6] According to a study done by Jacques Charmes, in the year 2000 informal labour made up 57% of non-agricultural employment, 40% of urban employment, and 83% of the new jobs in Latin America. That same year, informal labour made up 78% of non-agricultural employment, 61% of urban employment, and 93% of the new jobs in Africa.[7] Particularly after an economic crisis, labourers tend to shift from the formal sector to the informal sector. This trend was seen after the Asian economic crisis which began in 1997.[8]

Informal labour and gender
Gender is frequently associated with informal labour. Women are employed more often informally than they are formally, and informal labour is an overall larger source of employment for women than it is for men.[5] Women often work in the informal sector in occupations such as home-based work and street vending.[8] The Penguin Atlas of Women in the World shows that in the 1990s, 81% of women in Benin were street vendors, 55% in Guatemala, 44% in Mexico, 33% in Kenya, and 14% in India. Overall, 60% of women workers in the developing world are employed in the informal sector.[1] The specific percentages are 84% and 58% for women in Sub-Saharan Africa and Latin America respectively.[1] The percentages for men in both of these areas of the world are lower, amounting to 63% and 48% respectively.[1] In Asia, 65% of both women and men workers are employed in the informal sector.[9] Globally, a large percentage of women who are formally employed also work in the informal sector behind the scenes. These women make up the hidden work force.[9]
Agricultural and non-agricultural labour
Formal and informal labour can be divided into the subcategories of agricultural work and non-agricultural work. Martha Chen et al. believe these four categories of labour are closely related to one another.[10] A majority of agricultural work is informal, which the Penguin Atlas of Women in the World defines as unregistered or unstructured.[9] Non-agricultural work can also be informal. According to Martha Chen, informal labour makes up 48% of non-agricultural work in North Africa, 51% in Latin America, 65% in Asia, and 72% in Sub-Saharan Africa.[5]

Agriculture and gender
The agricultural sector of the economy has remained stable in recent years.[11] According to the Penguin Atlas of Women in the World, women make up 40% of the agricultural labour force in most parts of the world, while in developing countries they make up 67% of the agricultural workforce.[9] Joni Seager shows in her atlas that specific tasks within agricultural work are also gendered. For example, for the production of wheat in a village in Northwest China, men perform the ploughing, the planting, and the spraying, while women perform the weeding, the fertilising, the processing, and the storage.[9] In terms of food production worldwide, the atlas shows that women produce 80% of the food in Sub-Saharan Africa, 50% in Asia, 45% in the Caribbean, 25% in North Africa and the Middle East, and 25% in Latin America.[9] A majority of the work women do on the farm is considered housework and is therefore largely invisible in employment statistics.[9]
Paid and unpaid
Paid and unpaid work are also closely related to formal and informal labour. Some informal work is unpaid, or paid under the table.[10] Unpaid work can be work that is done at home to sustain a family, like child care work, or actual habitual daily labour that is not monetarily rewarded, like working the fields.[9] Unpaid workers have zero earnings, and although their work is valuable, its true value is hard to estimate; the debate over how to value it remains unresolved. Men and women tend to work in different areas of the economy, regardless of whether their work is paid or unpaid: women concentrate in the service sector, while men concentrate in the industrial sector.

Unpaid labour and gender
Women usually work fewer hours in income generating jobs than men do.[5] Often it is housework that is unpaid. Worldwide, women and girls are responsible for a great amount of household work.[9] The Penguin Atlas of Women in the World, published in 2008, stated that in Madagascar, women spend 20 hours per week on housework, while men spend only two.[9] In Mexico, women spend 33 hours and men spend 5 hours.[9] In Mongolia the housework hours amount to 27 and 12 for women and men respectively.[9] In Spain, women spend 26 hours on housework and men spend 4 hours.[9] Only in the Netherlands do men spend 10% more time than women do on activities within the home or for the household.[9]
The Penguin Atlas of Women in the World also stated that in developing countries, women and girls spend a significant amount of time fetching water for the week, while men do not. For example, in Malawi women spend 6.3 hours per week fetching water, while men spend 43 minutes. Girls in Malawi spend 3.3 hours per week fetching water, and boys spend 1.1 hours.[9] Even if women and men both spend time on household work and other unpaid activities, this work is also gendered.[5]
Unearned pay and gender
In the United Kingdom in 2014, two-thirds of workers on long-term sick leave were women, despite women constituting only half of the workforce, even after excluding maternity leave.

It May Surprise You Which Countries Are Replacing Workers With Robots the Fastest
Automation has been responsible for improvements in manufacturing productivity for decades.
Advanced robotics will accelerate this trend. Machines, after all, can perform many manufacturing tasks more efficiently, effectively and consistently than humans, leading to increased output, better quality and less waste. And machines don’t require health insurance, coffee breaks, maternity leave or sleep.
The industrial world realizes this and robot sales have been surging, increasing 29 percent in 2014 alone, according to the International Federation of Robotics.
Robots at the Hyundai factory in Asan, South Korea, on Jan. 20, 2015. (SeongJoon Cho/Bloomberg via Getty Images)
Of the more than 229,000 industrial robots sold in 2014 (the most recent statistics available), more than 57,000 were sold to Chinese manufacturers, 29,300 to Japanese companies, 26,200 to companies in the U.S., 24,700 to South Koreans and more than 20,000 to German companies. By comparison, robot sales in India totaled just 2,100, IFR reported.

None of this should surprise us. Automation makes little or no economic sense in countries where there is comparatively little manufacturing or where abundant cheap labor is readily available. The basic economic trade-off between the cost of labor and the cost of automation is the primary consideration. Labor laws, cultural considerations, the availability of capital and the age and skill levels of local workers also are important factors.
Consider an economy such as India’s. When you have 1.3 billion people who can make things cheaply, it doesn’t make a lot of economic sense to automate. In fact, Indians appear more likely to design and produce labor-saving robots and other such machinery than to use it in their factories.
Also important in the “buy” or “don’t buy” calculation involving robotics is the technical ability of machines vis-a-vis manual labor. Some jobs — think textile cutting and stitching, for example — simply need human hands, at least for now. This is good news, all else being equal, not only for China and India, but for other emerging market economies like Bangladesh, Indonesia, Pakistan, Thailand and Turkey, which are significant textile exporters.
But with wages increasing in emerging economies, we soon may see more robotics even in textiles, especially in cotton products where the raw material is shipped to countries like Pakistan, converted to textiles and sent back to the U.S.
What this all means is that the next stage of the robotic revolution — dubbed Industry 4.0 — will affect some countries more than others and some industries and job categories more than others. Every industry has certain unique jobs, each with its own required tasks. Some jobs can be automated, others not. Moreover, different tasks require different robotic functions — some of which will require very expensive robotic systems. All of this has to be considered.
A worker in Guangzhou, China on March 4, 2011. (AP Photo/Kin Cheung)
Last fall we took a close look at the world’s 25 largest manufacturing export economies to see which countries are most aggressively automating production and which are lagging. Nearly half of the countries — Brazil, China, the Czech Republic, India, Indonesia, Mexico, Poland, Russia, South Korea, Taiwan and Thailand — are generally considered emerging markets.

Surprisingly, the countries moving ahead most aggressively — installing more robots than would be expected given their productivity-adjusted labor costs — are emerging markets: Indonesia, South Korea, Taiwan and Thailand. Manufacturers in South Korea and Thailand, in particular, have been automating at a comparatively breakneck pace; the Indonesians and Taiwanese slightly slower.
The fast pace of automation in South Korea and Taiwan can be explained in part by higher-than-average wage increases, aging workforces and low unemployment rates. In a developing economy like Indonesia, the motive might be different — to improve quality so local factories can compete with those in Japan and the West.
Other countries that have been rapidly integrating robotics into manufacturing, but not quite as quickly, are Canada, China, Japan, Russia, the U.K. and the U.S. China is an interesting case because it’s automating rapidly despite the fact that Chinese wages are still comparatively low.
The reason for this, we believe, is that Chinese manufacturers see the writing on the wall. Labor costs have been increasing rapidly in China, a trend that is likely to continue. Moreover, with an aging workforce — complicated by the country’s decades-old though recently abandoned one-child policy — skill shortages appear on the horizon. And quality remains a big concern. The strategic use of robots can help compensate for these shortcomings.
Countries moving more slowly in the adoption of industrial robotics include Australia, the Czech Republic, Germany, Mexico and Poland. While there are other factors at play as well, labor regulations that require employers to justify layoffs and pay idled workers for long periods of time appear to be largely responsible for the slower pace.
German Chancellor Angela Merkel shakes hands with a robot at an industrial fair in 2006 alongside Indian Prime Minister Manmohan Singh. (AP Photo/Joerg Sarbach)
The slowest adopters of robotic technology among the 25 largest manufacturing exporters have been Austria, Belgium, Brazil, France, India, Italy, the Netherlands, Sweden, Spain and Switzerland. With the exception of India, all of these nations have aging work forces, some of the highest productivity-adjusted labor costs in the world and are likely to face serious skill shortages in coming years.

Under the circumstances, automation would appear to be a no-brainer — except that their governments, in effect, discourage it with various restrictions on replacing workers with machines, including years of mandatory severance pay in some cases.
India has been moving slowly because the economic balance still favors India’s abundant cheap labor. But there are bureaucratic hurdles as well. Indian companies with more than 100 employees must obtain permission from the government before they fire anyone. Imagine what that involves. This lack of flexibility not only discourages the automation of existing factories, it also sends a powerful signal that any company or investor considering automation needs to think twice — maybe three times — before sinking money into a new modern factory.
As the world’s emerging economies industrialize and labor costs rise — as is now happening in China and happened even earlier in South Korea and Taiwan — the picture is likely to change.
For now, however, workers in most developing economies would appear to have little to fear. Robots may be coming to their factories, but not any time soon.
Technological unemployment
Technological unemployment is the loss of jobs caused by technological change. Such change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation). Just as horses employed as prime movers were gradually made obsolete by the automobile, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanised looms. During World War II, Alan Turing's Bombe machine compressed and decoded thousands of man-years worth of encrypted data in a matter of hours. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills.
That technological change can cause short-term job losses is widely accepted. The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs, whereas pessimists contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s. Yet the issue of machines displacing human labour has been discussed since at least Aristotle's time.
Prior to the 18th century both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it became increasingly apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment.
The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included David Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century.
In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may be increasing worldwide. Oxford professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation.[1] However, on PBS NewsHour, they made clear that their findings do not necessarily imply future technological unemployment.[2] While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again.[3][4][5] A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue".[6] Regarding a recent claim by Treasury Secretary Steve Mnuchin that automation is not "going to have any kind of big effect on the economy for the next 50 or 100 years", McAfee says, "I don't talk to anyone in the field who believes that."[6] Recent technological innovations have the potential to render humans obsolete in professional, white-collar, low-skilled, and creative fields, and other "mental jobs".
Long term effects on employment
“There are more sectors losing jobs than creating jobs. And the general-purpose aspect of software technology means that even the industries and jobs that it creates are not forever.”
The concept of structural unemployment, a lasting level of joblessness that does not disappear even at the high point of the business cycle, became popular in the 1960s. For pessimists, technological unemployment is one of the factors driving the wider phenomena of structural unemployment. Since the 1980s, even optimistic economists have increasingly accepted that structural unemployment has indeed risen in advanced economies, but they have tended to blame this on globalisation and offshoring rather than technological change. Others claim a chief cause of the lasting increase in unemployment has been the reluctance of governments to pursue expansionary policies since the displacement of Keynesianism that occurred in the 1970s and early 80s. In the 21st century, and especially since 2013, pessimists have been arguing with increasing frequency that lasting worldwide technological unemployment is a growing threat.
Compensation effects
Compensation effects are labour-friendly consequences of innovation which "compensate" workers for job losses initially caused by new technology. In the 1820s, several compensation effects were described by Say in response to Ricardo's statement that long-term technological unemployment could occur. Soon after, a whole system of effects was developed by Ramsay McCulloch. The system was labelled "compensation theory" by Marx, who proceeded to attack the ideas, arguing that none of the effects were guaranteed to operate. Disagreement over the effectiveness of compensation effects has remained a central part of academic debates on technological unemployment ever since.[14][18]

Compensation effects include the following (a toy numerical sketch of one channel follows the list):
- By new machines. (The labour needed to build the new equipment that the innovation requires.)
- By new investments. (Enabled by the cost savings and therefore increased profits from the new technology.)
- By changes in wages. (In cases where unemployment does occur, this can cause a lowering of wages, thus allowing more workers to be re-employed at the now lower cost. On the other hand, sometimes workers will enjoy wage increases as their profitability rises. This leads to increased income and therefore increased spending, which in turn encourages job creation.)
- By lower prices. (Which then lead to more demand, and therefore more employment.) Lower prices can also help offset wage cuts, as cheaper goods will increase workers' buying power.
- By new products. (Where innovation directly creates new jobs.)
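As a rough illustration of why Marx's objection has force, the sketch below works through the "by lower prices" channel with made-up numbers: whether the demand stimulated by cheaper goods fully replaces the jobs directly displaced depends on an assumed price elasticity of demand, which nothing guarantees.

```python
# Toy sketch of one compensation channel ("by lower prices"), under
# assumed, illustrative numbers: a process innovation cuts unit labour
# needs, lowering the price, which raises demand and claws back jobs.

unit_labour_before = 1.0   # workers needed per unit of output
unit_labour_after = 0.5    # after the labour-saving innovation
output_before = 1000       # units demanded at the old price
price_elasticity = -1.5    # assumed demand elasticity (hypothetical)

# Suppose labour is the only cost, so price falls in proportion to unit labour.
price_change = unit_labour_after / unit_labour_before - 1.0   # -50%
demand_change = price_elasticity * price_change               # +75%
output_after = output_before * (1 + demand_change)            # 1750 units

jobs_before = output_before * unit_labour_before   # 1000 jobs
jobs_after = output_after * unit_labour_after      # 875 jobs

print(f"Jobs before: {jobs_before:.0f}, after: {jobs_after:.0f}")
# With elasticity -1.5, the demand expansion offsets most, but not all, of
# the direct job losses; with elasticity beyond -2.0 employment would rise.
```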
Many economists now pessimistic about technological unemployment accept that compensation effects did largely operate as the optimists claimed through most of the 19th and 20th century. Yet they hold that the advent of computerisation means that compensation effects are now less effective. An early example of this argument was made by Wassily Leontief in 1983. He conceded that after some disruption, the advance of mechanization during the Industrial Revolution actually increased the demand for labour as well as increasing pay due to effects that flow from increased productivity. While early machines lowered the demand for muscle power, they were unintelligent and needed large armies of human operators to remain productive. Yet since the introduction of computers into the workplace, there is now less need not just for muscle power but also for human brain power. Hence even as productivity continues to rise, the lower demand for human labour may mean less pay and employment. However, this argument is not fully supported by more recent empirical studies. One study, by Erik Brynjolfsson and Lorin M. Hitt in 2003, presents direct evidence of a positive short-term effect of computerisation on firm-level measured productivity and output growth. In addition, they find that the long-term productivity contribution of computerisation and technological changes might be even greater.
The Luddite fallacy
“If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.”
There are two underlying premises for why long-term difficulty could develop. The one that has traditionally been deployed is that ascribed to the Luddites (whether or not it is a truly accurate summary of their thinking), which is that there is a finite amount of work available and if machines do that work, there can be no other work left for humans to do. Economists call this the lump of labour fallacy, arguing that in reality no such limitation exists. However, the other premise is that it is possible for long-term difficulty to arise that has nothing to do with any lump of labour. In this view, the amount of work that can exist is infinite, but (1) machines can do most of the "easy" work, (2) the definition of what is "easy" expands as information technology progresses, and (3) the work that lies beyond "easy" (the work that requires more skill, talent, knowledge, and insightful connections between pieces of knowledge) may require greater cognitive faculties than most humans are able to supply, as point 2 continually advances. This latter view is the one supported by many modern advocates of the possibility of long-term, systemic technological unemployment.
Skill levels and technological unemployment
A common view among those discussing the effect of innovation on the labour market has been that it mainly hurts those with low skills, while often benefiting skilled workers. According to scholars such as Lawrence F. Katz, this may have been true for much of the twentieth century, yet in the 19th century, innovations in the workplace largely displaced costly skilled artisans and generally benefited the low skilled. While 21st-century innovation has been replacing some unskilled work, other low-skilled occupations remain resistant to automation, while white-collar work requiring intermediate skills is increasingly being performed by autonomous computer programs.[27][28][29]

Some recent studies, however, such as a 2015 paper by Georg Graetz and Guy Michaels, found that at least in the area they studied – the impact of industrial robots – innovation is boosting pay for highly skilled workers while having a more negative impact on those with low to medium skills.[30] A 2015 report by Carl Benedikt Frey, Michael Osborne and Citi Research agreed that innovation had been disruptive mostly to middle-skilled jobs, yet predicted that in the next ten years the impact of automation would fall most heavily on those with low skills.[31]
Geoff Colvin at Fortune argued that predictions about the kind of work a computer will never be able to do have proven inaccurate. A better approach to anticipating the skills on which humans will provide value would be to identify activities where we will insist that humans remain accountable for important decisions, such as with judges, CEOs, bus drivers and government leaders, or where human nature can only be satisfied by deep interpersonal connections, even if those tasks could be automated.[32]
In contrast, others see even skilled human labourers becoming obsolete. Oxford academics Carl Benedikt Frey and Michael A. Osborne have predicted that computerization could make nearly half of jobs redundant;[33] of the 702 professions assessed, they found a strong correlation between education and income and a job's susceptibility to automation, with office jobs and service work among the more at risk.[34] In 2012, Sun Microsystems co-founder Vinod Khosla predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine-learning medical diagnostic software.
Empirical findings
There has been a lot of empirical research that attempts to quantify the impact of technological unemployment, mostly done at the microeconomic level. Most existing firm-level research has found a labor-friendly nature of technological innovations. For example, German economists Stefan Lachenmaier and Horst Rottmann find that both product and process innovation have a positive effect on employment. Interestingly, they also find that process innovation has a more significant job creation effect than product innovation.[36] This result is supported by evidence in the United States as well, which shows that manufacturing firm innovations have a positive effect on the total number of jobs, not just limited to firm-specific behavior.[37]

At the industry level, however, researchers have found mixed results with regard to the employment effect of technological changes. A 2017 study on manufacturing and service sectors in 11 European countries suggests that positive employment effects of technological innovations only exist in the medium- and high-tech sectors. There also seems to be a negative correlation between employment and capital formation, which suggests that technological progress could potentially be labor-saving, given that process innovation is often incorporated in investment.[38]
Limited macroeconomic analysis has been done to study the relationship between technological shocks and unemployment. The small amount of existing research, however, suggests mixed results. Italian economist Marco Vivarelli finds that the labor-saving effect of process innovation seems to have affected the Italian economy more negatively than the United States. On the other hand, the job creating effect of product innovation could only be observed in the United States, not Italy.[39] Another study in 2013 finds a more transitory, rather than permanent, unemployment effect of technological change.
Measures of technological innovation
There have been four main approaches that attempt to capture and document technological innovation quantitatively. The first one, proposed by Jordi Gali in 1999 and further developed by Neville Francis and Valerie A. Ramey in 2005, is to use long-run restrictions in a Vector Autoregression (VAR) to identify technological shocks, assuming that only technology affects long-run productivity.[41][42]

The second approach is from Susanto Basu, John Fernald and Miles Kimball.[43] They create a measure of aggregate technology change with augmented Solow residuals, controlling for aggregate, non-technological effects such as non-constant returns and imperfect competition.
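For concreteness, here is a minimal sketch of the first, VAR-based approach in Python, in the spirit of Gali's long-run identification (a Blanchard-Quah-style scheme). The two-variable system, lag length, and synthetic data are illustrative assumptions, not the original specification.

```python
# Minimal sketch of long-run identification of a technology shock in a VAR,
# using statsmodels and synthetic data; all choices here are illustrative.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 400
# Synthetic stationary series standing in for [productivity growth, hours].
data = rng.standard_normal((T, 2))
data[1:] += 0.3 * data[:-1]  # add mild persistence

model = VAR(data)
res = model.fit(maxlags=4, ic="aic")

# Long-run (cumulative) impact of reduced-form shocks: F = (I - A(1))^-1,
# where A(1) is the sum of the estimated lag coefficient matrices.
k = 2
A1 = sum(res.coefs)                  # sum over lags of the A_j matrices
F = np.linalg.inv(np.eye(k) - A1)

# Identification: only the technology shock moves productivity in the long
# run, so the structural long-run matrix is lower triangular -> Cholesky.
sigma = res.sigma_u                  # reduced-form residual covariance
long_run_cov = F @ sigma @ F.T
C = np.linalg.cholesky(long_run_cov)  # structural long-run responses
B0 = np.linalg.inv(F) @ C             # impact responses to structural shocks

print("Long-run responses to [technology, non-technology] shocks:\n", C)
```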
The third method, initially developed by John Shea in 1999, takes a more direct approach and employs observable indicators such as Research and Development (R&D) spending, and number of patent applications.[44] This measure of technological innovation is very widely used in empirical research, since it does not rely on the assumption that only technology affects long-run productivity, and fairly accurately captures the output variation based on input variation. However, there are limitations with direct measures such as R&D. For example, since R&D only measures the input in innovation, the output is unlikely to be perfectly correlated with the input. In addition, R&D fails to capture the indeterminate lag between developing a new product or service, and bringing it to market.[45]
The fourth approach, constructed by Michelle Alexopoulos, looks at the number of new titles published in the fields of technology and computer science to reflect technological progress, which turns out to be consistent with R&D expenditure data.[46] Compared with R&D, this indicator captures the lag between changes in technology.
Welfare payments
The use of various forms of subsidies has often been accepted as a solution to technological unemployment even by conservatives and by those who are optimistic about the long-term effect on jobs. Welfare programmes have historically tended to be more durable once established, compared with other solutions to unemployment such as directly creating jobs with public works. Despite being the first person to create a formal system describing compensation effects, Ramsay McCulloch and most other classical economists advocated government aid for those suffering from technological unemployment, as they understood that market adjustment to new technology was not instantaneous and that those displaced by labour-saving technology would not always be able to immediately obtain alternative employment through their own efforts.

Basic income
Several commentators have argued that traditional forms of welfare payment may be inadequate as a response to the future challenges posed by technological unemployment, and have suggested a basic income as an alternative. People advocating some form of basic income as a solution to technological unemployment include Martin Ford,[120] Erik Brynjolfsson,[81] Robert Reich and Guy Standing. Reich has gone as far as to say the introduction of a basic income, perhaps implemented as a negative income tax, is "almost inevitable",[121] while Standing has said he considers that a basic income is becoming "politically essential".[122] Since late 2015, new basic income pilots have been announced in Finland, the Netherlands, and Canada. Further recent advocacy for basic income has arisen from a number of technology entrepreneurs, the most prominent being Sam Altman, president of Y Combinator.[123]

Skepticism about basic income includes both right and left elements, and proposals for different forms of it have come from all segments of the spectrum. For example, while the best-known proposed forms (with taxation and distribution) are usually thought of as left-leaning ideas that right-leaning people try to defend against, other forms have been proposed even by libertarians, such as von Hayek and Friedman. Republican president Nixon's Family Assistance Plan (FAP) of 1969, which had much in common with basic income, passed in the House but was defeated in the Senate.[124]
One objection to basic income is that it could be a disincentive to work, but evidence from older pilots in India, Africa, and Canada indicates that this does not happen and that a basic income encourages low-level entrepreneurship and more productive, collaborative work. Another objection is that funding it sustainably is a huge challenge. While new revenue-raising ideas have been proposed such as Martin Ford's wage recapture tax, how to fund a generous basic income remains a debated question, and skeptics have dismissed it as utopian. Even from a progressive viewpoint, there are concerns that a basic income set too low may not help the economically vulnerable, especially if financed largely from cuts to other forms of welfare.
To better address both the funding concerns and concerns about government control, one alternative model is that the cost and control would be distributed across the private sector instead of the public sector. Companies across the economy would be required to employ humans, but the job descriptions would be left to private innovation, and individuals would have to compete to be hired and retained. This would be a for-profit sector analog of basic income, that is, a market-based form of basic income. It differs from a job guarantee in that the government is not the employer (rather, companies are) and there is no aspect of having employees who "cannot be fired", a problem that interferes with economic dynamism. The economic salvation in this model is not that every individual is guaranteed a job, but rather just that enough jobs exist that massive unemployment is avoided and employment is no longer solely the privilege of only the very smartest or highly trained 20% of the population. Another option for a market-based form of basic income has been proposed by the Center for Economic and Social Justice (CESJ) as part of "a Just Third Way" (a Third Way with greater justice) through widely distributed power and liberty. Called the Capital Homestead Act,[128] it is reminiscent of James S. Albus's Peoples' Capitalism[68][69] in that money creation and securities ownership are widely and directly distributed to individuals rather than flowing through, or being concentrated in, centralized or elite mechanisms.
Education
Improved availability of quality education, including skills training for adults, is a solution that in principle at least is not opposed by any side of the political spectrum, and is welcomed even by those who are optimistic about the long-term employment effects of technology. Improved education paid for by government tends to be especially popular with industry.

Proponents of this brand of policy assert that higher-level, more specialized learning is a way to capitalize on the growing technology industry. Leading technology research university MIT published an open letter to policymakers advocating for the "reinvention of education", namely a shift "away from rote learning" and towards STEM disciplines.[129] Similar statements released by the U.S. President's Council of Advisors on Science and Technology (PCAST) have also been used to support this STEM emphasis in higher-education enrollment choices.[130] Education reform is also a part of the U.K. government's "Industrial Strategy", a plan announcing the nation's intent to invest millions into a "technical education system".[131] The proposal includes the establishment of a retraining program for workers who wish to adapt their skill sets. These suggestions combat the concerns over automation through policy choices aiming to meet the emerging needs of society. Academics who applaud such moves often note a gap between economic security and formal education,[132] a disparity exacerbated by the rising demand for specialized skills, and education's potential to reduce it.
However, several academics have also argued that improved education alone will not be sufficient to solve technological unemployment, pointing to recent declines in the demand for many intermediate skills, and suggesting that not everyone is capable of becoming proficient in the most advanced skills. Kim Taipale has said that "The era of bell curve distributions that supported a bulging social middle class is over... Education per se is not going to make up the difference."[133] In a 2011 op-ed piece, Paul Krugman, an economics professor and columnist for The New York Times, argued that better education would be an insufficient solution to technological unemployment, as technology "actually reduces the demand for highly educated workers".
Public works
Programmes of public works have traditionally been used as a way for governments to directly boost employment, though this has often been opposed by some, but not all, conservatives. Jean-Baptiste Say, although generally associated with free-market economics, advised that public works could be a solution to technological unemployment.[135] Some commentators, such as professor Mathew Forstater, have advised that public works and guaranteed jobs in the public sector may be the ideal solution to technological unemployment, as unlike welfare or guaranteed income schemes they provide people with the social recognition and meaningful engagement that comes with work.

For less developed economies, public works may be an easier solution to administer than universal welfare programmes.[23] As of 2015, calls for public works in the advanced economies have been less frequent even from progressives, due to concerns about sovereign debt. A partial exception is spending on infrastructure, which has been recommended as a solution to technological unemployment even by economists previously associated with a neoliberal agenda, such as Larry Summers.
Shorter working hours
In 1870, the average American worker clocked up about 75 hours per week. Just prior to World War II working hours had fallen to about 42 per week, and the fall was similar in other advanced economies. According to Wassily Leontief, this was a voluntary increase in technological unemployment. The reduction in working hours helped share out available work, and was favoured by workers who were happy to reduce hours to gain extra leisure, as innovation was at the time generally helping to increase their rates of pay.[23]

Further reductions in working hours have been proposed as a possible solution to unemployment by economists including John R. Commons, Lord Keynes and Luigi Pasinetti. Yet once working hours have reached about 40 hours per week, workers have been less enthusiastic about further reductions, both to prevent loss of income and because many value engaging in work for its own sake. Generally, 20th-century economists had argued against further reductions as a solution to unemployment, saying it reflects a lump of labour fallacy.[139] In 2014, Google's co-founder, Larry Page, suggested a four-day workweek, so that as technology continues to displace jobs, more people can find employment.
Broadening the ownership of technological assets
Several solutions have been proposed which don't fall easily into the traditional left-right political spectrum. This includes broadening the ownership of robots and other productive capital assets. Enlarging the ownership of technologies has been advocated by people including James S. Albus,[68][142] John Lanchester,[143] Richard B. Freeman,[126] and Noah Smith.[144] Jaron Lanier has proposed a somewhat similar solution: a mechanism where ordinary people receive "nano payments" for the big data they generate by their regular surfing and other aspects of their online presence.[145]

Structural changes towards a post-scarcity economy
The Zeitgeist Movement (TZM), The Venus Project (TVP), as well as various individuals and organizations, propose structural changes towards a form of post-scarcity economy in which people are 'freed' from their automatable, monotonous jobs, instead of 'losing' their jobs. In the system proposed by TZM, all jobs are either automated, abolished for bringing no true value to society (such as ordinary advertising), rationalized by more efficient, sustainable and open processes and collaboration, or carried out based on altruism and social relevance (see also: Whuffie), as opposed to compulsion or monetary gain. The movement also speculates that the free time made available to people will permit a renaissance of creativity, invention, community and social capital, as well as reducing stress.

Others
The threat of technological unemployment has occasionally been used by free-market economists as a justification for supply-side reforms, to make it easier for employers to hire and fire workers. Conversely, it has also been used as a reason to justify an increase in employee protection.

Economists including Larry Summers have advised that a package of measures may be needed. He advised vigorous cooperative efforts to address the "myriad devices" – such as tax havens, bank secrecy, money laundering, and regulatory arbitrage – which enable the holders of great wealth to avoid paying taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return. Summers suggested more vigorous enforcement of anti-monopoly laws; reductions in "excessive" protection for intellectual property; greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation; strengthening of collective bargaining arrangements; improvements in corporate governance; strengthening of financial regulation to eliminate subsidies to financial activity; easing of land-use restrictions that may cause estates to keep rising in value; better training for young people and retraining for displaced workers; and increased public and private investment in infrastructure development, such as energy production and transportation.
Michael Spence has advised that responding to the future impact of technology will require a detailed understanding of the global forces and flows technology has set in motion. Adapting to them "will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution".
Since the publication of their 2011 book Race Against the Machine, MIT professors Andrew McAfee and Erik Brynjolfsson have been prominent among those raising concern about technological unemployment. The two professors remain relatively optimistic, however, stating "the key to winning the race is not to compete against machines but to compete with machines".
Disruptive innovation

Disruptive innovation is a term in the field of business administration which refers to an innovation that creates a new market and value network and eventually disrupts an existing market and value network, displacing established market-leading firms, products, and alliances.[2] The term was defined and first analyzed by the American scholar Clayton M. Christensen and his collaborators beginning in 1995,[3] and has been called the most influential business idea of the early 21st century.[4]
Not all innovations are disruptive, even if they are revolutionary. For example, the first automobiles in the late 19th century were not a disruptive innovation, because early automobiles were expensive luxury items that did not disrupt the market for horse-drawn vehicles. The market for transportation essentially remained intact until the debut of the lower-priced Ford Model T in 1908.[5] The mass-produced automobile was a disruptive innovation, because it changed the transportation market, whereas the first thirty years of automobiles did not.
Disruptive innovations tend to be produced by outsiders and entrepreneurs, rather than existing market-leading companies. The business environment of market leaders does not allow them to pursue disruptive innovations when they first arise, because they are not profitable enough at first and because their development can take scarce resources away from sustaining innovations (which are needed to compete against current competition).[6] A disruptive process can take longer to develop than one following the conventional approach, and its risk is higher than that of more incremental or evolutionary forms of innovation, but once deployed in the market it achieves much faster penetration and a higher degree of impact on established markets.[7]
Beyond business and economics, disruptive innovations can also be considered to disrupt complex systems, including economic and business-related aspects.
The term disruptive technologies was coined by Clayton M. Christensen and introduced in his 1995 article Disruptive Technologies: Catching the Wave,[9] which he cowrote with Joseph Bower. The article is aimed at management executives who make the funding or purchasing decisions in companies, rather than the research community. He describes the term further in his book The Innovator's Dilemma.[10] Innovator's Dilemma explored the cases of the disk drive industry (which, with its rapid generational change, is to the study of business what fruit flies are to the study of genetics, as Christensen was advised in the 1990s[11]) and the excavating equipment industry (where hydraulic actuation slowly displaced cable-actuated movement). In his sequel with Michael E. Raynor, The Innovator's Solution,[12] Christensen replaced the term disruptive technology with disruptive innovation because he recognized that few technologies are intrinsically disruptive or sustaining in character; rather, it is the business model that the technology enables that creates the disruptive impact. However, Christensen's evolution from a technological focus to a business-modelling focus is central to understanding the evolution of business at the market or industry level. Christensen and Mark W. Johnson, who cofounded the management consulting firm Innosight, described the dynamics of "business model innovation" in the 2008 Harvard Business Review article "Reinventing Your Business Model".[13] The concept of disruptive technology continues a long tradition of identifying radical technical change in the study of innovation by economists, and the development of tools for its management at a firm or policy level.
The term “disruptive innovation” is misleading when it is used to refer to a product or service at one fixed point, rather than to the evolution of that product or service over time.
In the late 1990s, the automotive sector began to embrace a perspective of "constructive disruptive technology" by working with the consultant David E. O'Ryan, whereby the use of current off-the-shelf technology was integrated with newer innovation to create what he called "an unfair advantage". The process or technology change as a whole had to be "constructive" in improving the current method of manufacturing, yet disruptively impact the whole of the business case model, resulting in a significant reduction of waste, energy, materials, labor, or legacy costs to the user.
In keeping with the insight that what matters economically is the business model, not the technological sophistication itself, Christensen's theory explains why many disruptive innovations are not "advanced technologies", which the technology mudslide hypothesis would lead one to expect. Rather, they are often novel combinations of existing off-the-shelf components, applied cleverly to a small, fledgling value network.
The current theoretical understanding of disruptive innovation is different from what might be expected by default, an idea that Clayton M. Christensen called the "technology mudslide hypothesis". This is the simplistic idea that an established firm fails because it doesn't "keep up technologically" with other firms. In this hypothesis, firms are like climbers scrambling upward on crumbling footing, where it takes constant upward-climbing effort just to stay still, and any break from the effort (such as complacency born of profitability) causes a rapid downhill slide. Christensen and colleagues have shown that this simplistic hypothesis is wrong; it doesn't model reality. What they have shown is that good firms are usually aware of the innovations, but their business environment does not allow them to pursue them when they first arise, because they are not profitable enough at first and because their development can take scarce resources away from that of sustaining innovations (which are needed to compete against current competition). In Christensen's terms, a firm's existing value networks place insufficient value on the disruptive innovation to allow its pursuit by that firm. Meanwhile, start-up firms inhabit different value networks, at least until the day that their disruptive innovation is able to invade the older value network. At that time, the established firm in that network can at best only fend off the market share attack with a me-too entry, for which survival (not thriving) is the only reward.[6]

Christensen defines a disruptive innovation as a product or service designed for a new set of customers.
"Generally, disruptive innovations were technologically straightforward, consisting of off-the-shelf components put together in a product architecture that was often simpler than prior approaches. They offered less of what customers in established markets wanted and so could rarely be initially employed there. They offered a different package of attributes valued only in emerging markets remote from, and unimportant to, the mainstream."[14]Christensen argues that disruptive innovations can hurt successful, well-managed companies that are responsive to their customers and have excellent research and development. These companies tend to ignore the markets most susceptible to disruptive innovations, because the markets have very tight profit margins and are too small to provide a good growth rate to an established (sizable) firm.[15] Thus, disruptive technology provides an example of an instance when the common business-world advice to "focus on the customer" (or "stay close to the customer", or "listen to the customer") can be strategically counterproductive.
While Christensen argued that disruptive innovations can hurt successful, well-managed companies, O'Ryan countered that "constructive" integration of existing, new, and forward-thinking innovation could improve the economic benefits of these same well-managed companies, once decision-making management understood the systemic benefits as a whole.
Christensen distinguishes between "low-end disruption", which targets customers who do not need the full performance valued by customers at the high end of the market, and "new-market disruption", which targets customers who have needs that were previously unserved by existing incumbents.[16]
"Low-end disruption" occurs when the rate at which products improve exceeds the rate at which customers can adopt the new performance. Therefore, at some point the performance of the product overshoots the needs of certain customer segments. At this point, a disruptive technology may enter the market and provide a product that has lower performance than the incumbent but that exceeds the requirements of certain segments, thereby gaining a foothold in the market.
In low-end disruption, the disruptor is focused initially on serving the least profitable customer, who is happy with a good enough product. This type of customer is not willing to pay a premium for enhancements in product functionality. Once the disruptor has gained a foothold in this customer segment, it seeks to improve its profit margin. To get higher profit margins, the disruptor needs to enter the segment where the customer is willing to pay a little more for higher quality. To ensure this quality in its product, the disruptor needs to innovate. The incumbent will not do much to retain its share in a not-so-profitable segment, and will move up-market and focus on its more attractive customers. After a number of such encounters, the incumbent is squeezed into smaller markets than it was previously serving. And then, finally, the disruptive technology meets the demands of the most profitable segment and drives the established company out of the market.
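The overshoot-and-catch-up dynamic described above can be made concrete with a toy simulation. The trajectories and numbers below are hypothetical assumptions chosen only to show the mechanism: the incumbent improves past what mainstream customers can absorb, while the initially inferior disruptor closes the gap.

```python
# Illustrative sketch of low-end disruption dynamics: an incumbent and a
# disruptor both improve along assumed linear trajectories while customer
# needs grow more slowly. All numbers are hypothetical.

customer_need = 50                    # performance mainstream customers can absorb
need_growth = 2                       # per year
incumbent, incumbent_growth = 60, 6   # already overshooting, still pulling away
disruptor, disruptor_growth = 20, 8   # worse today, improving faster

for year in range(12):
    if disruptor >= customer_need:
        print(f"Year {year}: disruptor ({disruptor}) now meets mainstream "
              f"needs ({customer_need}); incumbent is at {incumbent}.")
        break
    customer_need += need_growth
    incumbent += incumbent_growth
    disruptor += disruptor_growth
# Prints at year 5: the disruptor reaches 60 while the incumbent, at 90,
# has overshot what mainstream customers can use by a wide margin.
```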
"New market disruption" occurs when a product fits a new or emerging market segment that is not being served by existing incumbents in the industry.
The extrapolation of the theory to all aspects of life has been challenged,[17][18] as has the methodology of relying on selected case studies as the principal form of evidence.[17] Jill Lepore points out that some companies identified by the theory as victims of disruption a decade or more ago, rather than being defunct, remain dominant in their industries today (including Seagate Technology, U.S. Steel, and Bucyrus).[17] Lepore questions whether the theory has been oversold and misapplied, as if it were able to explain everything in every sphere of life, including not just business but education and public institutions.
Disruptive technology
In 2009, Milan Zeleny described high technology as disruptive technology and raised the question of what is being disrupted. The answer, according to Zeleny, is the support network of the high technology.[19] For example, introducing electric cars disrupts the support network for gasoline cars (the network of gas and service stations). Such disruption is fully expected and therefore effectively resisted by support-net owners. In the long run, high (disruptive) technology bypasses, upgrades, or replaces the outdated support network. Questioning the concept of a disruptive technology, Haxell (2012) asks how such technologies get named and framed, pointing out that this is a positioned and retrospective act.[20][21]

Technology, being a form of social relationship[citation needed], always evolves. No technology remains fixed. Technology starts, develops, persists, mutates, stagnates, and declines, just like living organisms.[22] The evolutionary life cycle occurs in the use and development of any technology. A new high-technology core emerges and challenges existing technology support nets (TSNs), which are thus forced to coevolve with it. New versions of the core are designed and fitted into an increasingly appropriate TSN, with smaller and smaller high-technology effects. High technology becomes regular technology, with more efficient versions fitting the same support net. Finally, even the efficiency gains diminish, emphasis shifts to product tertiary attributes (appearance, style), and technology becomes TSN-preserving appropriate technology. This technological equilibrium state becomes established and fixated, resisting being interrupted by a technological mutation; then new high technology appears and the cycle repeats.
Regarding this evolving process of technology, Christensen said:
"The technological changes that damage established companies are usually not radically new or difficult from a technological point of view. They do, however, have two important characteristics: First, they typically present a different package of performance attributes—ones that, at least at the outset, are not valued by existing customers. Second, the performance attributes that existing customers do value improve at such a rapid rate that the new technology can later invade those established markets."[23]Joseph Bower[24] explained the process of how disruptive technology, through its requisite support net, dramatically transforms a certain industry.
"When the technology that has the potential for revolutionizing an industry emerges, established companies typically see it as unattractive: it’s not something their mainstream customers want, and its projected profit margins aren’t sufficient to cover big-company cost structure. As a result, the new technology tends to get ignored in favor of what’s currently popular with the best customers. But then another company steps in to bring the innovation to a new market. Once the disruptive technology becomes established there, smaller-scale innovation rapidly raise the technology’s performance on attributes that mainstream customers’ value."[25]The automobile was high technology with respect to the horse carriage; however, it evolved into technology and finally into appropriate technology with a stable, unchanging TSN. The main high-technology advance in the offing is some form of electric car—whether the energy source is the sun, hydrogen, water, air pressure, or traditional charging outlet. Electric cars preceded the gasoline automobile by many decades and are now returning to replace the traditional gasoline automobile. The printing press was a development that changed the way that information was stored, transmitted, and replicated. This allowed empowered authors but it also promoted censorship and information overload in writing technology.
Milan Zeleny described the above phenomenon.[26] He also wrote that:
"Implementing high technology is often resisted. This resistance is well understood on the part of active participants in the requisite TSN. The electric car will be resisted by gas-station operators in the same way automated teller machines (ATMs) were resisted by bank tellers and automobiles by horsewhip makers. Technology does not qualitatively restructure the TSN and therefore will not be resisted and never has been resisted. Middle management resists business process reengineering because BPR represents a direct assault on the support net (coordinative hierarchy) they thrive on. Teamwork and multi-functionality is resisted by those whose TSN provides the comfort of narrow specialization and command-driven work."Social media could be considered a disruptive innovation within sports. More specifically, the way that news in sports circulates nowadays versus the pre-internet era where sports news was mainly on T.V., radio, and newspapers. Social media has created a new market for sports that was not around before in the sense that players and fans have instant access to information related to sports.
High-technology effects
High technology is a technology core that changes the very architecture (structure and organization) of the components of the technology support net. High technology therefore transforms the qualitative nature of the TSN's tasks and their relations, as well as their requisite physical, energy, and information flows. It also affects the skills required, the roles played, and the styles of management and coordination—the organizational culture itself.

This kind of technology core is different from a regular technology core, which preserves the qualitative nature of flows and the structure of the support net and only allows users to perform the same tasks in the same way, but faster, more reliably, in larger quantities, or more efficiently. It is also different from an appropriate technology core, which preserves the TSN itself while the technology is implemented, allowing users to do the same thing in the same way at comparable levels of efficiency rather than improving performance.[28]
As for the difference between high technology and low technology, Milan Zeleny once said:
" The effects of high technology always breaks the direct comparability by changing the system itself, therefore requiring new measures and new assessments of its productivity. High technology cannot be compared and evaluated with the existing technology purely on the basis of cost, net present value or return on investment. Only within an unchanging and relatively stable TSN would such direct financial comparability be meaningful. For example, you can directly compare a manual typewriter with an electric typewriter, but not a typewriter with a word processor. Therein lies the management challenge of high technology. "[29]However, not all modern technologies are high technologies. They have to be used as such, function as such, and be embedded in their requisite TSNs. They have to empower the individual because only through the individual can they empower knowledge. Not all information technologies have integrative effects. Some information systems are still designed to improve the traditional hierarchy of command and thus preserve and entrench the existing TSN. The administrative model of management, for instance, further aggravates the division of task and labor, further specializes knowledge, separates management from workers, and concentrates information and knowledge in centers.
As knowledge surpasses capital, labor, and raw materials as the dominant economic resource, technologies are also starting to reflect this shift. Technologies are rapidly shifting from centralized hierarchies to distributed networks. Nowadays knowledge does not reside in a super-mind, super-book, or super-database, but in a complex relational pattern of networks brought forth to coordinate human action.
Practical example of disruption
In the practical world, the popularization of personal computers illustrates how knowledge contributes to ongoing technology innovation. The original centralized concept (one computer, many persons) is a knowledge-defying idea of the prehistory of computing, and its inadequacies and failures have become clearly apparent. The era of personal computing brought powerful computers "on every desk" (one person, one computer). This short transitional period was necessary for getting used to the new computing environment, but was inadequate from the vantage point of producing knowledge. Adequate knowledge creation and management come mainly from networking and distributed computing (one person, many computers). Each person's computer must form an access point to the entire computing landscape or ecology through the Internet of other computers, databases, and mainframes, as well as production, distribution, and retailing facilities, and the like. For the first time, technology empowers individuals rather than external hierarchies. It transfers influence and power to where they optimally belong: at the loci of the useful knowledge. Even though hierarchies and bureaucracies do not innovate, free and empowered individuals do; knowledge, innovation, spontaneity, and self-reliance are becoming increasingly valued and promoted.

Working time
Working time is the period of time that a person spends at paid labor. Unpaid labor such as personal housework or caring for children or pets is not considered part of the working week.
Many countries regulate the work week by law, such as stipulating minimum daily rest periods, annual holidays, and a maximum number of working hours per week. Working time may vary from person to person, often depending on location, culture, lifestyle choice, and the profitability of the individual's livelihood. For example, someone who is supporting children and paying a large mortgage will need to work more hours to meet basic costs of living than someone of the same earning power without children. Because fewer people than ever are having children,[1] choosing part-time work is becoming more popular.[2]
Standard working hours (or normal working hours) refers to legislation that limits working hours per day, per week, per month, or per year. If an employee needs to work overtime, the employer must pay overtime payments to the employee as required by law. Generally speaking, standard working hours of countries worldwide are around 40 to 44 hours per week (but not everywhere: from 35 hours per week in France[3] to up to 112 hours per week in North Korean labor camps[4]), and the additional overtime payments are around 25% to 50% above the normal hourly payments. Maximum working hours refers to the maximum number of hours an employee may work; an employee cannot work beyond the level specified in the maximum working hours law.
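To make the overtime arithmetic concrete, the sketch below computes weekly pay under a hypothetical standard-hours rule. The 40-hour standard and 25% premium are assumptions chosen from the ranges above, not any particular country's statute.

```python
# Hypothetical illustration of a standard-hours rule with an overtime premium.
# The 40-hour standard and 25% premium are assumed values, not statutory facts.
STANDARD_WEEKLY_HOURS = 40
OVERTIME_PREMIUM = 0.25  # 25% above the normal hourly rate

def weekly_pay(hours_worked: float, hourly_rate: float) -> float:
    """Pay the normal rate up to the standard, and the premium rate beyond it."""
    regular_hours = min(hours_worked, STANDARD_WEEKLY_HOURS)
    overtime_hours = max(hours_worked - STANDARD_WEEKLY_HOURS, 0)
    return (regular_hours * hourly_rate
            + overtime_hours * hourly_rate * (1 + OVERTIME_PREMIUM))

print(weekly_pay(44, 10.0))  # 40*10 + 4*12.50 = 450.0
```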
Since the 1960s, the consensus among anthropologists, historians, and sociologists has been that early hunter-gatherer societies enjoyed more leisure time than is permitted by capitalist and agrarian societies;[6][7] for instance, one camp of !Kung Bushmen was estimated to work two-and-a-half days per week, at around 6 hours a day.[8] Aggregated comparisons show that on average the working day was less than five hours.[6]
Subsequent studies in the 1970s examined the Machiguenga of the Upper Amazon and the Kayapo of northern Brazil. These studies expanded the definition of work beyond purely hunting-gathering activities, but the overall average across the hunter-gatherer societies studied was still below 4.86 hours per day, while the maximum was below 8 hours.[6] Popular perception is still aligned with the old academic consensus that hunter-gatherers worked far in excess of modern humans' forty-hour week.[7]
The industrial revolution made it possible for a larger segment of the population to work year-round, because this labor was not tied to the season and artificial lighting made it possible to work longer each day. Peasants and farm laborers moved from rural areas to work in urban factories, and working time during the year increased significantly.[9] Before collective bargaining and worker protection laws, there was a financial incentive for a company to maximize the return on expensive machinery by having long hours. Records indicate that work schedules as long as twelve to sixteen hours per day, six to seven days per week were practiced in some industrial sites.[citation needed]
Over the 20th century, work hours shortened by almost half, mostly due to rising wages brought about by renewed economic growth, with a supporting role from trade unions, collective bargaining, and progressive legislation. The workweek, in most of the industrialized world, dropped steadily, to about 40 hours after World War II. The limitation of working hours is also proclaimed by the Universal Declaration of Human Rights,[10] International Covenant on Economic, Social and Cultural Rights,[11] and European Social Charter.[12] The decline continued at a faster pace in Europe: for example, France adopted a 35-hour workweek in 2000. In 1995, China adopted a 40-hour week, eliminating half-day work on Saturdays (though this is not widely practiced). Working hours in industrializing economies like South Korea, though still much higher than the leading industrial countries, are also declining steadily.
Technology has also continued to improve worker productivity, permitting standards of living to rise as hours decline.[13] In developed economies, as the time needed to manufacture goods has declined, more working hours have become available to provide services, resulting in a shift of much of the workforce between sectors.
Economic growth in monetary terms tends to be concentrated in health care, education, government, criminal justice, corrections, and other activities that are regarded as necessary for society rather than those that contribute directly to the production of material goods.
In the mid-2000s, the Netherlands was the first country in the industrialized world where the overall average working week dropped to less than 30 hours.[14]
Gradual decrease in working hours
Most countries in the developed world have seen average hours worked decrease significantly.[15][16] For example, in the U.S. in the late 19th century it was estimated that the average work week was over 60 hours.[17] Today the average hours worked in the U.S. is around 33,[18] with the average man employed full-time for 8.4 hours per work day, and the average woman employed full-time for 7.7 hours per work day.[19] The front-runners for lowest average weekly work hours are the Netherlands with 27 hours[20] and France with 30 hours.[21] At current rates the Netherlands is set to become the first country to reach an average work week under 21 hours.[22] In a 2011 report of 26 OECD countries, Germany had the lowest average working hours per week at 25.6 hours.[23]

The New Economics Foundation has recommended moving to a 21-hour standard work week to address problems with unemployment, high carbon emissions, low well-being, entrenched inequalities, overworking, family care, and the general lack of free time.[24][25][26] Actual work week lengths have been falling in the developed world.[27]
Factors that have contributed to lowering average work hours and increasing standard of living have been:
- Technological advances in efficiency such as mechanization, robotics and information technology.
- Women's increased participation in paid work, where previously many were bound exclusively to homemaking and childrearing.
- Dropping fertility rates, which reduce the number of hours that must be worked to support children.
Workweek structure
The structure of the work week varies considerably for different professions and cultures. Among salaried workers in the western world, the work week often runs Monday to Friday or Saturday, with the weekend set aside as a time of personal work and leisure. Sunday is set aside in the western world because it is the Christian sabbath.

The traditional American business hours are 9:00 a.m. to 5:00 p.m., Monday to Friday, representing a workweek of five eight-hour days comprising 40 hours in total. These are the origin of the phrase 9-to-5, used to describe a conventional and possibly tedious job.[32] Used negatively, it connotes a tedious or unremarkable occupation. The phrase also indicates that a person is an employee, usually in a large company, rather than self-employed. More neutrally, it connotes a job with stable hours and low career risk, but still a position of subordinate employment. The actual time at work often varies between 35 and 48 hours in practice due to the inclusion, or exclusion, of breaks. In many traditional white-collar positions, employees were required to be in the office during these hours to take orders from the bosses, hence the relationship between this phrase and subordination. Workplace hours have become more flexible, but even so the phrase is still commonly used.
Several countries have adopted a workweek from Monday morning until Friday noon, either due to religious rules (observation of shabbat in Israel, whose workweek is Sunday to Friday afternoon) or the growing predominance of a 35–37.5-hour workweek in continental Europe. Several Muslim countries have a standard Sunday-through-Thursday or Saturday-through-Wednesday workweek, leaving Friday for religious observance and providing breaks for the daily prayer times.
Average annual hours actually worked per worker
OECD ranking
Differences among countries and regions
South Korea
South Korea has the fastest-shortening working hours in the OECD,[36] the result of the government's proactive move to lower working hours at all levels and increase leisure and relaxation time: it introduced the mandatory forty-hour, five-day working week in 2004 for companies with over 1,000 employees. Beyond regular working hours, it is legal to demand up to 12 hours of overtime during the week, plus another 16 hours on weekends.[citation needed] The 40-hour workweek expanded to companies with 300 employees or more in 2005, 100 or more in 2006, 50 or more in 2007, 20 or more in 2008, and to all workers nationwide in July 2011.[37] The government has continuously increased public holidays, to 16 days in 2013, more than the 10 days of the United States and double the United Kingdom's 8 days.[38] Despite these efforts, South Korea's work hours are still relatively long, averaging 2,163 hours per year in 2012.[39]

Japan
Work hours in Japan are decreasing, but many Japanese still work long hours.[40] Recently, Japan's Ministry of Health, Labour and Welfare (MHLW) issued a draft report recommending major changes to the regulations that govern working hours. The centerpiece of the proposal is an exemption from overtime pay for white-collar workers.[citation needed]

Japan has enacted an 8-hour work day and 40-hour work week (44 hours in specified workplaces). The overtime limits are 15 hours a week, 27 hours over two weeks, 43 hours over four weeks, 45 hours a month, 81 hours over two months, and 120 hours over three months; however, some workers get around these restrictions by working several hours a day without 'clocking in', whether physically or metaphorically.[41][citation needed] The overtime allowance must be no lower than 125% and no higher than 150% of the normal hourly rate.[42]

European countries
In most European Union countries, working time is gradually decreasing.[43] The European Union's working time directive imposes a 48-hour maximum working week that applies to every member state except the United Kingdom and Malta (which have an opt-out, meaning that UK-based employees may work longer than 48 hours if they wish, but they cannot be forced to do so).[44] France has enacted a 35-hour workweek by law, and similar results have been produced in other countries through collective bargaining. A major reason for the low annual hours worked in Europe is a relatively high amount of paid annual leave.[45] Fixed employment comes with four to six weeks of holiday as standard. In the UK, for example, full-time employees are entitled to 28 days of paid leave a year.[46]

Mexico
Mexican laws mandate a maximum of 48 hours of work per week, but they are rarely observed or enforced due to loopholes in the law, the volatility of labor rights in Mexico, and its underdevelopment relative to other member countries of the Organisation for Economic Co-operation and Development (OECD). Indeed, private sector employees often work overtime without receiving overtime compensation. Fear of unemployment and threats by employers explain in part why the 48-hour work week is disregarded.

Colombia
Articles 161 to 167 of the Substantive Work Code in Colombia provide for a maximum of 48 hours of work a week.[47]

Australia
In Australia, between 1974 and 1997 no marked change took place in the average amount of time spent at work by Australians of "prime working age" (that is, between 25 and 54 years of age). Throughout this period, the average time spent at work by prime working-age Australians (including those who did not spend any time at work) remained stable at between 27 and 28 hours per week. This unchanging average, however, masks a significant redistribution of work from men to women. Between 1974 and 1997, the average time spent at work by prime working-age Australian men fell from 45 to 36 hours per week, while the average time spent at work by prime working-age Australian women rose from 12 to 19 hours per week. In the period leading up to 1997, the amount of time Australian workers spent at work outside the hours of 9 a.m. to 5 p.m. on weekdays also increased.[48]

In 2009, a rapid increase in the number of working hours was reported in a study by The Australia Institute. The study found the average Australian worked 1,855 hours per year. According to Clive Hamilton of The Australia Institute, this surpasses even Japan. The Australia Institute believes that Australians work the highest number of hours in the developed world.[49]
From January 1, 2010, Australia enacted a 38-hour workweek in accordance with the Fair Work Act 2009, with an allowance for additional hours as overtime.[50]
The vast majority of full-time employees in Australia work additional overtime hours. A 2015 survey found that of Australia's 7.7 million full-time workers, 5 million put in more than 40 hours a week, including 1.4 million who worked more than 50 hours a week and 270,000 who put in more than 70 hours.[51]
United States
In 2006, the average man employed full-time worked 8.4 hours per work day, and the average woman employed full-time worked 7.7 hours per work day.[19] There is no mandatory minimum amount of paid time off for sickness or holiday, and the majority of jobs in America do not offer paid time off.[52] Because of the pressure of working, time is increasingly viewed as a commodity.[53]

Recent history
By 1946 the United States government had inaugurated the 40-hour work week for all federal employees.[54] Beginning in 1950, under the Truman Administration, the United States became the first known industrialized nation to explicitly (albeit secretly) and permanently forswear a reduction of working time. Given the military-industrial requirements of the Cold War, the authors of the then-secret National Security Council Report 68 (NSC-68)[55] proposed that the US government undertake a massive permanent national economic expansion that would let it "siphon off" a part of the economic activity produced to support an ongoing military buildup to contain the Soviet Union. In his 1951 Annual Message to the Congress, President Truman stated:

"In terms of manpower, our present defense targets will require an increase of nearly one million men and women in the armed forces within a few months, and probably not less than four million more in defense production by the end of the year. This means that an additional 8 percent of our labor force, and possibly much more, will be required by direct defense needs by the end of the year. These manpower needs will call both for increasing our labor force by reducing unemployment and drawing in women and older workers, and for lengthening hours of work in essential industries."[56]

According to the Bureau of Labor Statistics, the average non-farm private sector employee worked 34.5 hours per week as of June 2012.[57]
As President Truman’s 1951 message had predicted, the share of working women rose from 30 percent of the labor force in 1950 to 47 percent by 2000 – growing at a particularly rapid rate during the 1970s.[58] According to a Bureau of Labor Statistics report issued May 2002, "In 1950, the overall participation rate of women was 34 percent. ... The rate rose to 38 percent in 1960, 43 percent in 1970, 52 percent in 1980, and 58 percent in 1990 and reached 60 percent by 2000. The overall labor force participation rate of women is projected to attain its highest level in 2010, at 62 percent.”[58] The inclusion of women in the work force can be seen as symbolic of social progress as well as of increasing American productivity and hours worked.
Between 1950 and 2007, official price inflation was measured at 861 percent. President Truman, in his 1951 message to Congress, predicted correctly that his military buildup "will cause intense and mounting inflationary pressures." Using data provided by the United States Bureau of Labor Statistics, Erik Rauch has estimated productivity to have increased by nearly 400%.[59] According to Rauch, "if productivity means anything at all, a worker should be able to earn the same standard of living as a 1950 worker in only 11 hours per week."
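The arithmetic behind Rauch's figure is straightforward; the sketch below assumes his "nearly 400%" means output per hour rose to roughly 3.6 times its 1950 level (an interpretation, not Rauch's own calculation):

```python
# Rough arithmetic behind Rauch's estimate. The productivity ratio is an
# assumed reading of "increased by nearly 400%", not a figure from his work.
productivity_ratio = 3.6  # output per hour relative to 1950
hours_1950 = 40           # standard full-time week in 1950

hours_needed = hours_1950 / productivity_ratio
print(round(hours_needed, 1))  # ~11.1 hours/week for a 1950 standard of living
```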
In the United States, the working time for upper-income professionals has increased compared to 1965, while total annual working time for low-skill, low-income workers has decreased.[60] This effect is sometimes called the "leisure gap".
The average working time of married couples – of both spouses taken together – rose from 56 hours in 1969 to 67 hours in 2000.[61]
Overtime rules
Many professional workers put in longer hours than the forty-hour standard.[62] In professional industries like investment banking and large law firms, a forty-hour workweek is considered inadequate and may result in job loss or failure to be promoted.[63][64] Medical residents in the United States routinely work long hours as part of their training.

Workweek policies are not uniform in the U.S. Many compensation arrangements are legal, and three of the most common are wage, commission, and salary payment schemes. Wage earners are compensated on a per-hour basis, whereas salaried workers are compensated on a per-week or per-job basis, and commission workers get paid according to how much they produce or sell.
Under most circumstances, wage earners and lower-level employees may be legally required by an employer to work more than forty hours in a week; however, they are paid extra for the additional work. Many salaried workers and commission-paid sales staff are not covered by overtime laws. These are generally called "exempt" positions, because they are exempt from federal and state laws that mandate extra pay for extra time worked.[65] The rules are complex, but generally exempt workers are executives, professionals, or sales staff.[66] For example, school teachers are not paid extra for working extra hours. Business owners and independent contractors are considered self-employed, and none of these laws apply to them.
Generally, workers are paid time-and-a-half, or 1.5 times the worker's base wage, for each hour of work past forty. California also applies this rule to work in excess of eight hours per day,[67] but exemptions[68] and exceptions[69] significantly limit the applicability of this law.
In some states, firms are required to pay double-time, or twice the base rate, for each hour of work past 60, or each hour of work past 12 in one day in California, also subject to numerous exemptions and exceptions.[67] This provides an incentive for companies to limit working time, but makes these additional hours more desirable for the worker. It is not uncommon for overtime hours to be accepted voluntarily by wage-earning workers. Unions often treat overtime as a desirable commodity when negotiating how these opportunities shall be partitioned among union members.
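The stacked premiums described above lend themselves to a simple calculation; the sketch below applies the California-style daily thresholds from the text (time-and-a-half past 8 hours, double-time past 12) while deliberately ignoring the many exemptions and exceptions noted above:

```python
# Illustrative daily-pay calculation under California-style rules as described
# in the text: time-and-a-half for hours 8-12, double-time past 12.
# The exemptions and exceptions noted above are deliberately ignored.
def california_daily_pay(hours: float, base_rate: float) -> float:
    straight_hours = min(hours, 8)
    overtime_hours = min(max(hours - 8, 0), 4)   # hours 8 through 12
    double_hours = max(hours - 12, 0)            # hours past 12
    return base_rate * (straight_hours
                        + 1.5 * overtime_hours
                        + 2.0 * double_hours)

print(california_daily_pay(14, 20.0))  # 8*20 + 4*30 + 2*40 = 360.0
```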
Brazil
The work time in Brazil is 44 hours per week: usually 8 hours per day plus 4 hours on Saturday, or 8.8 hours per day. On-duty jobs with no meal break are 6 hours per day. Public servants work 40 hours a week.

It is worth noting that in Brazil meal time is not usually counted as work. There is a 1-hour break for lunch, and the work schedule is typically 8:00 or 9:00–noon and 13:00–18:00. In larger cities people eat lunch at or near the work site, while in smaller cities a sizable fraction of employees may go home for lunch.
A 30-day vacation is mandatory by law and there are about 13 to 15 holidays a year, depending on the municipality.
China
China adopted a 40-hour week, eliminating half-day work on Saturdays. However, this rule has never been truly enforced, and unpaid or underpaid overtime work is common practice in China.

Traditionally, Chinese have worked long hours, and this has led to many deaths from overwork, with the state media reporting in 2014 that 600,000 people were dying annually from overwork. Despite this, work hours have reportedly been falling for about three decades due to rising productivity, better labor laws, and the spread of the two-day weekend. The trend has affected both factories and white-collar companies that have been responding to growing demands for easier work schedules.
Hong Kong
Hong Kong has no legislation regarding maximum or normal working hours. The average weekly working time of full-time employees in Hong Kong is 49 hours.[74] According to the Price and Earnings Report 2012 conducted by UBS, while the global and regional averages were 1,915 and 2,154 hours per year respectively, the average working time in Hong Kong was 2,296 hours per year, the fifth-longest among the 72 countries under study.[75] In addition, in a survey conducted by the Public Opinion Study Group of the University of Hong Kong, 79% of respondents agreed that the problem of overtime work in Hong Kong is "severe", and 65% of respondents supported legislation on maximum working hours.[76] In Hong Kong, 70% of those surveyed do not receive any overtime remuneration.[77] These figures show that people in Hong Kong are concerned about working time issues. When Hong Kong implemented its minimum wage law in May 2011, Donald Tsang, Chief Executive of the Special Administrative Region, pledged that the government would standardize working hours in Hong Kong.[78]

On 26 November 2012, the Labour Department of the HKSAR released the "Report of the policy study on standard working hours". The report covers three major areas: (1) the regimes and experience of other places in regulating working hours, (2) the latest working time situations of employees in different sectors, and (3) estimates of the possible impact of introducing standard working hours in Hong Kong.[79] Depending on the selected parameters, from the loosest to the most stringent, the estimated increase in labour costs varies from 1.1 billion to 55 billion HKD, affecting between 957,100 (36.7% of total employees) and 2,378,900 (91.1%) employees.[74]
Various sectors of the community have expressed concerns about standard working hours in Hong Kong. Their views are summarized below:
Opinions from various sectors
Labor organizations
The Hong Kong Catholic Commission For Labour Affairs urges the government to legislate standard working hours in Hong Kong, and suggests a standard of 44 hours and a maximum of 54 working hours a week. The organization thinks that long working time adversely affects the family and social life and the health of employees; it also points out that the current Employment Ordinance regulates neither overtime pay, nor working time limits, nor rest-day pay, all of which could protect employees' rights.

Business community

Generally, the business sector agrees that it is important to achieve work-life balance, but it does not support legislation to regulate a working hours limit. It believes "standard working hours" is not the best way to achieve work-life balance, and that the root cause of the long working hours in Hong Kong is insufficient labor supply. The Managing Director of Century Environmental Services Group, Catherine Yan, said, "Employees may want to work more to obtain a higher salary due to financial reasons. If standard working hour legislation is passed, employers will need to pay a higher salary to employees, and hence the employers may choose to segment work tasks to employ more part-time employees instead of providing overtime pay to employees." She thinks this will lead to a situation in which employees need to find two part-time jobs to earn their living, wasting more time on transportation from one job to another.[80]
The Chairman of the Hong Kong General Chamber of Commerce, Chow Chung-kong, believes that it would be very difficult to implement standard working hours that apply "across the board", specifically to professions such as accountants and barristers.[81] In addition, he believes that standard working hours might decrease individual employees' working hours without increasing their actual income, and might also lead to an increase in the number of part-timers in the labor market.
According to a study conducted jointly by the Business, Economic and Public Affairs Research Centre and the Enterprise and Social Development Research Centre of Hong Kong Shue Yan University, 16% of surveyed companies believe that a standard working hours policy can be considered, while 55% think that it would be difficult to implement standard working hours in their businesses.[82]
The employer representative on the Labour Advisory Board, Stanley Lau, said that standard working hours would completely alter the business environment of Hong Kong, affect small and medium enterprises, and weaken the competitiveness of businesses. He believes that the government can encourage employers to pay overtime salary, and that there is no need to regulate standard working hours.[83]
Political parties
On 17–18 October 2012, Legislative Council members in Hong Kong debated the motion "legislation for the regulation of working hours". Cheung Kwok-che proposed the motion "That this Council urges the Government to introduce a bill on the regulation of working hours within this legislative session, the contents of which must include the number of standard weekly working hours and overtime pay".[84] As the motion was not passed by both the functional constituencies and the geographical constituencies, it was negatived.[85]

The Hong Kong Federation of Trade Unions suggested a standard 44-hour work week with overtime pay of 1.5 times the usual pay. It believes the regulation of standard working hours can prevent employers from forcing employees to work overtime without pay.[86]
Elizabeth Quat of the Democratic Alliance for the Betterment and Progress of Hong Kong (DAB) believed that standard working hours were a labor policy and not related to family-friendly policies. The Vice President of Young DAB, Wai-hung Chan, stated that standard working hours would impose limitations on small and medium enterprises. He thought that the government should discuss the topic with the public further before legislating standard working hours.
The Democratic Party suggested a 44-hour standard work week and compulsory overtime pay to help achieve the balance between work, rest and entertainment of people in Hong Kong.[87]
The Labour Party believed regulating working hours could help achieve work-life balance.[88] It suggested an 8-hour work day, a 44-hour standard work week, a 60-hour maximum work week, and overtime pay of 1.5 times the usual pay.[77]
Poon Siu-ping of the Federation of Hong Kong and Kowloon Labour Unions thought that it is possible to set a work-hour limit for all industries, and that regulating working hours would ensure overtime payment by employers and protect employees' health.
The Civic Party suggested "to actively study setting weekly standard working hours at 44 hours to align with family-friendly policies" in the 2012 LegCo election.[89]
Jeffery Lam, a member of Economic Synergy, believes that standard working hours would adversely affect productivity, strain the employer-employee relationship, and increase the pressure on businesses already suffering from labor shortages. He does not support the regulation of working hours under the current circumstances.
Government
Matthew Cheung Kin-chung, the Secretary for Labour and Welfare, said the Executive Council had already received the government report on working hours in June, and that the Labour Advisory Board and the LegCo's Manpower Panel would receive the report in late November and December respectively.[91] On 26 November 2012, the Labour Department released the report, which covered the regimes and experience of practicing standard working hours in selected regions, current work-hour situations in different industries, and the impact assessment of standard working hours. Matthew Cheung also mentioned that the government would form a select committee by the first quarter of 2013, including government officials, representatives of labor unions and employers' associations, academics, and community leaders, to investigate the related issues. He also said that it would "perhaps be unrealistic" to put forward a bill for standard working hours in the next one to two years.[92]

Academics
Yip Siu-fai, Professor of the Department of Social Work and Social Administration of HKU, said that professions such as nursing and accountancy involve long working hours, which may affect practitioners' social lives. He believes standard working hours could help make Hong Kong a family-friendly workplace and increase the fertility rate. Randy Chiu, Professor of the Department of Management of HKBU, said that standard working hours could prevent excessively long working hours among employees.[93] He also noted that Hong Kong currently has almost full employment, high rents, severe inflation, a recently implemented minimum wage, and exposure to a gloomy global economy; comprehensive consideration of the macroeconomic situation is therefore needed, and it may be inappropriate to transplant other countries' working time regulations to Hong Kong.[94]

Lee Shu-Kam, Associate Professor of the Department of Economics and Finance of HKSYU, believed standard working hours cannot achieve work-life balance. He referenced 1999 research on the US by the University of California, Los Angeles, and pointed out that in industries and regions where wage elasticity is low, the effect of standard working hours on lowering actual working time and increasing wages is limited: in regions where labor supply is inadequate, standard working hours can protect employees' benefits yet cause unemployment; but in regions, such as Japan, where that problem does not exist, standard working hours would only lead to unemployment.[95] In addition, he said the effect of standard working hours is similar to that of overtime pay: it makes employees favor overtime work more. In this sense, standard working hours do not serve their stated principle of shortening work time and increasing employees' recreation time.[96] He believed that the key is to help employees achieve work-life balance and create a win-win situation for employers and employees.
Francis Lui, Head and Professor of the Department of Economics of Hong Kong University of Science and Technology, believed that standard working hours may not lower working time but may instead increase unemployment. He used Japan as an example to illustrate that the implementation of standard working hours lowered productivity per head and weighed on the economy. He also said that even if standard working hours shortened employees' weekly working hours, employees might need to work for more years to earn a sufficient amount of money for retirement, i.e., delay their retirement age; total working time over a whole life might not change.[97]
Lok-sang Ho, Professor of Economics and Director of the Centre for Public Policy Studies of Lingnan University, pointed out that, as different employees perform different jobs under different degrees of pressure, it may not be appropriate to establish standard working hours in Hong Kong; he proposed a 50-hour maximum work week to protect workers' health.[98]
Singapore
Singapore mandates an 8-hour normal work day, a 44-hour normal working week, and a maximum 48-hour work week. Note that if an employee works no more than five days a week, the normal working day is 9 hours and the working week is 44 hours. Also, if the employee works fewer than 44 hours every alternate week, the 44-hour weekly limit may be exceeded in the other week; this must be pre-specified in the service contract, and the total must not exceed 48 hours per week or 88 hours in any two consecutive weeks. In addition, a shift worker can work up to 12 hours a day, provided that the average working hours per week do not exceed 44 over any three consecutive weeks. The overtime allowance per overtime hour must not be less than 1.5 times the employee's basic hourly rate.[99]
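These interlocking caps are easy to misread in prose; the sketch below encodes the weekly and fortnightly limits described above (a simplification that ignores the shift-worker averaging and contract pre-specification conditions):

```python
# Sketch of the Singapore weekly/fortnightly caps described above.
# A simplification: shift-work averaging and contract conditions are ignored.
MAX_WEEK = 48       # hours, absolute weekly maximum
MAX_FORTNIGHT = 88  # hours across any two consecutive weeks

def within_limits(week1_hours: float, week2_hours: float) -> bool:
    """Check a two-week schedule against the weekly and fortnightly caps."""
    return (max(week1_hours, week2_hours) <= MAX_WEEK
            and week1_hours + week2_hours <= MAX_FORTNIGHT)

print(within_limits(48, 40))  # True: a 48-hour week offset by a lighter week
print(within_limits(50, 30))  # False: exceeds the 48-hour weekly maximum
```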
Other countries

The Kapauku people of Papua think it is bad luck to work two consecutive days. The !Kung Bushmen work just two-and-a-half days per week, rarely more than six hours per day.[100]

The work week in Samoa is approximately 30 hours,[101] and although average annual Samoan cash income is relatively low, by some measures the Samoan standard of living is quite good.
In India, particularly in smaller companies, employees generally work 11 hours a day, 6 days a week, with no overtime paid for the extra time; enforcement of working-hours regulations is negligible. A typical office opens at 09:00 or 09:30 and officially ends the work day at about 19:00, although many workers, especially managers, stay later because of additional workload. Large Indian companies and MNC offices located in India, however, tend to follow a 5-day, 8-to-9-hours-per-day working schedule. The Government of India in some of its offices also follows a 5-day week.
In Nigeria, public servants work 35 hours per week.
Recent trends
Many modern workplaces are experimenting with accommodating changes in the workforce and the basic structure of scheduled work. Flextime allows office workers to shift their working time away from rush-hour traffic; for example, arriving at 10:00 a.m. and leaving at 6:00 p.m. Telecommuting permits employees to work from their homes or from satellite locations (not owned by the employer), eliminating or reducing long commute times in heavily populated areas. Zero-hour contracts establish work contracts without minimum-hour guarantees; workers are paid only for the hours they work.

Reconstructing work: Automation, artificial intelligence, and the essential role of humans
Some say that artificial intelligence threatens to automate away all the work that people do. But what if there's a way to rethink the concept of "work" that not only makes humans essential, but allows them to take fuller advantage of their uniquely human abilities?
Pessimist or optimist?
Will pessimistic predictions of the rise of the robots come true? Will humans be made redundant by artificial intelligence (AI) and robots, unable to find work and left to face a future defined by an absence of jobs? Or will the optimists be right? Will historical norms reassert themselves and technology create more jobs than it destroys, resulting in new occupations that require new skills and knowledge and new ways of working?
It is possible that the most effective use of AI is not simply as a means to automate more tasks, but as an enabler to achieve higher-level goals, to create more value. The advent of AI makes it possible—indeed, desirable—to reconceptualize work, not as a set of discrete tasks laid end to end in a predefined process, but as a collaborative problem-solving effort where humans define the problems, machines help find the solutions, and humans verify the acceptability of those solutions.
Constructing work
Pre-industrial work was constructed around the product, with skilled artisans taking responsibility for each aspect of its creation. Early factories (commonly called manufactories at the time) were essentially collections of artisans, all making the same product to realize sourcing and distribution benefits. In contrast, our current approach to work is based on Adam Smith’s division of labor,1 in the form of the task. Indeed, if we were to pick one idea as the foundation of the Industrial Revolution it would be this division of labor: Make the coil spring rather than the entire watch.

Specialization in a particular task made it worthwhile for workers to develop superior skills and techniques to improve their productivity. It also provided the environment for the task to be mechanized, capturing the worker’s physical actions in a machine to improve precision and reduce costs. Mechanization then begat automation when we replaced human power with water, then steam, and finally electric power, all of which increased capacity. Handlooms were replaced with power looms, and the artisanal occupation shifted from weaving to managing a collection of machines. Human computers responsible for calculating gunnery and astronomical tables were similarly replaced with analog and then digital computers and the teams of engineers required to develop the computer’s hardware and software. Word processors shifted responsibility for document production from the typing pool to the author, resulting in the growth of departmental IT. More recently, doctors responsible for interpreting medical images are being replaced by AI and its attendant team of technical specialists.2
This impressive history of industrial automation has resulted not only from the march of technology, but from the conception of work as a set of specialized tasks. Without specialization, problems wouldn’t have been formalized as processes, processes wouldn’t have been broken into well-defined tasks, and tasks wouldn’t have been mechanized and then automated. Because of this atomization of work into tasks (conceptually and culturally), jobs have come to be viewed largely as compartmentalized collections of tasks. (Typical corporate job descriptions and skills matrices take the form of lists of tasks.) Job candidates are selected based on their knowledge and skills, their ability to prosecute the tasks in the job description. A contemporary manifestation of this is the rise of task-based crowdsourcing sites—such as TaskRabbit3 and Kaggle,4 to name only two—that enable tasks to be commoditized and treated as piecework.
Does automation destroy or create jobs?
AI demonstrates the potential to replicate even highly complex, specialized tasks that only humans were once thought able to perform (while finding seemingly easy but more general tasks, such as walking or common sense reasoning, incredibly challenging). Unsurprisingly, some pundits worry that the age of automation is approaching its logical conclusion, with virtually all work residing in the ever-expanding domain of machines. These pessimists think that robotic process automation5 (RPA) and such AI solutions as autonomous vehicles will destroy jobs, relegating people to filling the few gaps left in the economy that AI cannot occupy. There may well be more jobs created in the short term to build, maintain, and enhance the technology, but not everyone will be able to gain the necessary knowledge, skills, and experience.6 For example, it seems unlikely that the majority of truck, bus, or taxi drivers supplanted by robots will be able to learn the software development skills required to build or maintain the algorithms replacing them.

Further, these pessimists continue, we must consider a near future where many (if not all) low-level jobs, such as the administrative and process-oriented tasks that graduates typically perform as the first step in their career, are automated. If the lower levels of the career ladder are removed, new graduates will likely struggle to enter professions, leaving a growing pool of human workers to compete for a diminishing number of jobs. Recent advances in AI prompt many to wonder just how long it will be before AI catches up with the majority of us. How far are we from a future where the only humans involved with a firm are its owners?
Of course, there is an alternative view. History teaches that automation, far from destroying jobs, can and usually does create net new jobs, and not just those for building the technology or training others in its use. This is because increased productivity and efficiency, and the consequent lowering of prices, has historically led to greater demand for goods and services. For example, as the 19th century unfolded, new technology (such as power looms) enabled more goods (cloth, for instance) to be produced with less effort;7 as a consequence, prices dropped considerably, thus increasing demand from consumers. Rising consumer demand not only drove further productivity improvements through progressive technological refinements, but also significantly increased demand for workers with the right skills.8 The optimistic view holds that AI, like other automation technologies before it, will operate in much the same way. By automating more and more complex tasks, AI could potentially reduce costs, lower prices, and generate more demand—and, in doing so, create more jobs.
The productivity problem and the end of a paradigm
Often overlooked in this debate is the assumption made by both camps that automation is about using machines to perform tasks traditionally performed by humans. And indeed, the technologies introduced during the Industrial Revolution progressively (though not entirely) did displace human workers from particular tasks.9 Measured in productivity terms, by the end of the Industrial Revolution, technology had enabled a weaver to increase by a factor of 50 the amount of cloth produced per day;10 yet a modern power loom, however much more efficient, executes the work in essentially the same way a human weaver does. This is a pattern that continues today: For example, we have continually introduced more sophisticated technology into the finance function (spreadsheets, word processing, and business intelligence tools are some common examples), but even the bots of modern-day robotic process automation complete tasks in the conventional way, filling in forms and sending emails as if a person were at the keyboard, while “exceptions” are still handled by human workers.

We are so used to viewing work as a series of tasks, automation as the progressive mechanization of those tasks, and jobs as collections of tasks requiring corresponding skills, that it is difficult to conceive of them otherwise. But there are signs that this conceptualization of work may be nearing the end of its useful life. One such major indication is the documented fact that technology, despite continuing advances, no longer seems to be achieving the productivity gains that characterized the years after the Industrial Revolution. Short-run productivity growth, in fact, has dropped from 2.82 percent (1920–1970) to 1.62 percent (1970–2014).11 Many explanations for this have been proposed, including measurement problems, our inability to keep up with the rapid pace of technological change, and the idea that the tasks being automated today are inherently “low productivity.”12 In The Rise and Fall of American Growth,13 Robert Gordon argues that today’s low-productivity growth environment is due to a material difference between the technologies invented between 1850 and 1980 and those invented more recently. Gordon notes that mean growth was 1.79 percent over 1870–1920,14 and proposes that what we are seeing today is a reversion to this mean.
None of these explanations is entirely satisfying. Measurement questions have been debated to little avail. And there is little evidence that technology is developing more rapidly today than in the past.15 Nor is there a clear reason for why, say, a finance professional managing a team of bots should not realize a similar productivity boost as a weaver managing a collection of power looms. Even Robert Gordon’s idea of one-time technologies, while attractive, must be taken with a grain of salt: It is always risky to underestimate human ingenuity.
One explanation that hasn't been considered, however, is that the industrial paradigm itself—where jobs are constructed from well-defined tasks—has simply run its course. We forget that jobs are a social construct, and that our view of what a job is results from a dialogue between capital and labor early in the Industrial Revolution. But what if we're heading toward a future where work is different, rather than an evolution of what we have today?
Suitable for neither human nor machine
Constructing work around a predefined set of tasks suits neither human nor machine. On one hand, we have workers complaining of monotonous work,16 unreasonable schedules, and unstable jobs.17 Cost pressure and a belief that humans are simply one way to prosecute a task lead many firms to slice the salami ever more finely, turning to contingent labor and using smaller (and therefore more flexible) units of time to schedule their staff. The reaction to this has been a growing desire to recut jobs and make them more human, designing new jobs that make the most of our human advantages (and thereby make us humans more productive). On the other hand, we have automation being deployed in a manner similar to human labor, which may also not be optimal.

The conundrum of low productivity growth might well be due to both under-utilized staff and under-utilized technology. Treating humans as task-performers, and a cost to be minimized, might be conventional wisdom, but Zeynep Ton found (and documented in her book The Good Jobs Strategy) that a number of firms across a range of industries—including well-known organizations such as Southwest Airlines, Toyota, Zappos, Wegmans, Costco, QuikTrip, and Trader Joe’s—were all able to realize above-average service, profit, and growth by crafting jobs that made the most of their employees’ inherent nature as social animals and creative problem-solvers.18 Similarly, our inability to realize the potential of many AI technologies might not be due to the limitations of the technologies themselves but, instead, to our insistence on treating them as independent mechanized task performers. To be sure, AI can be used to automate tasks. But its full potential may lie in putting it to a more substantial use.
There are historical examples of new technologies being used in a suboptimal fashion for years, sometimes decades, before their more effective use was realized.19 For example, using electricity in place of steam in the factory initially resulted only in a cleaner and quieter work environment. It drove a productivity increase only 30 years later, when engineers realized that electrical power was easier to distribute (via wires) than mechanical power (via shafts, belts, and pulleys). The single, centralized engine (and mechanical power distribution), which was a legacy of the steam age, was swapped for small engines directly attached to each machine (and electrical power distribution). This enabled the shop floor to be optimized for workflow rather than power distribution, delivering a sudden productivity boost.
A new line between human and machine
The question then arises: If AI’s full potential doesn’t lie in automating tasks designed for humans, what is its most appropriate use? Here, our best guidance comes from evidence that suggests human and machine intelligence are best viewed as complements rather than substitutes20—and that humans and AI, working together, can achieve better outcomes than either alone.21 The classic example is freestyle chess. When IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997, it was declared to be “the brain’s last stand.” Eight years later, it became clear that the story is considerably more interesting than “machine vanquishes man.” A competition called “freestyle chess” was held, allowing any combination of human and computer chess players to compete. The competition resulted in an upset victory that Kasparov later reflected upon:
The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process… Human strategic guidance combined with the tactical acuity of a computer was overwhelming.22
The lesson here is that human and machine intelligence are different in complementary, rather than conflicting, ways. While they might solve the same problems, they approach these problems from different directions. Machines find highly complex tasks easy, but stumble over seemingly simple tasks that any human can do. While the two might use the same knowledge, how they use it is different. To realize the most from pairing human and machine, we need to focus on how the two interact, rather than on their individual capabilities.
Tasks versus knowledge
Rather than focusing on the task, should we conceptualize work to focus on the knowledge, the raw material common to human and machine? To answer this question, we must first recognize that knowledge is predominantly a social construct,23 one that is treated in different ways by humans and machines.

Consider the group of things labeled “kitten.” Both human and robot learn to recognize “kitten” the same way:24 by considering a labeled set of exemplars (images).25 However, although kittens are clearly things in the world, the concept of “kitten”—the knowledge, the identification of the category, its boundaries, and label—is the result of a dialogue within a community.26
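To make this split concrete, here is a minimal sketch (not drawn from the article) of the machine’s side of the arrangement: an off-the-shelf classifier learns to apply a category from labeled exemplars, while the category itself comes from the human community that produced the labels. The two-number feature vectors and the use of scikit-learn are purely illustrative assumptions.

# A minimal sketch: a classifier learns a socially defined category
# ("kitten" = 1) from labeled exemplars. Features are hypothetical
# stand-ins for whatever an image pipeline would extract.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.9, 0.2], [0.8, 0.3], [0.1, 0.9], [0.2, 0.8]])  # exemplars
y = np.array([1, 1, 0, 0])  # labels agreed upon by the human community

clf = LogisticRegression().fit(X, y)

# The machine can now apply the category, but it did not create it;
# redefining "kitten" would require new labels from the community.
print(clf.predict([[0.85, 0.25]]))  # -> [1]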
Much of what we consider to be common sense is defined socially. Polite behavior, for example, is simply common convention among one’s culture, and different people and cultures can have quite different views on what is correct behavior (and what is inexcusable). How we segment customers; the metric system along with other standards and measures; how we decompose problems into business processes and the tasks they contain; how we measure business performance; how we define the rules of the road and drive cars; regulation and legislation in general; and the cliché of Eskimos having dozens, if not hundreds, of words for snow27 all exemplify knowledge that is socially constructed. Even walking—and the act of making a robot walk—is a social construct,28 as it was the community that identified “walking” as a phenomenon and gave it a name, ultimately motivating engineers to create a walking robot, and it’s something we and robots learn by observation and encouragement. There are many possible ways of representing the world and dividing up reality, to understand the nature and relation of things, and to interact with the world around us, and the representation we use is simply the one that we agreed on.29 Choosing one word or meaning above the others has as much to do with societal convention as ontological necessity.
Socially constructed knowledge can be described as encultured knowledge, as it is our culture that determines what is (and what isn’t) a kitten, just as it is culture that determines what is and isn’t a good job. (We might even say that knowledge is created between people, rather than within them.) Encultured knowledge extends all the way up to formal logic, math, and hard science. Identifying and defining a phenomenon for investigation is thus a social process, something researchers must do before practical work can begin. Similarly, the rules, structures, and norms that are used in math and logic are conventions that have been agreed upon over time.30 A fish is a fish insofar as we all call it a fish. Our concept of “fish” was developed in dialogue within the community. Consequently, our concept of fish drifts over time: In the past “fish” included squid (and some other, but not all, cephalopods), but not in current usage. The concepts that we use to think, theorize, decide, and command are defined socially, by our community, by the group, and evolve with the group.
Knowledge and understanding
How is this discussion of knowledge related to AI? Consider again the challenge of recognizing images containing kittens. Before either human or machine can recognize kittens, we need to agree on what a “kitten” is. Only then can we collect the set of labeled images required for learning.

The distinction between human and machine intelligence, then, is that the human community is constantly constructing new knowledge (labeled exemplars in the case of kittens) and tearing down the old, as part of an ongoing dialogue within the community. When a new phenomenon is identified that breaks the mold, new features and relationships are isolated and discussed, old ones reviewed, concepts shuffled, unlearning happens, and our knowledge evolves. The European discovery of the platypus in 1798 is a case in point.31 When Captain John Hunter sent a platypus pelt to Great Britain,32 many scientists’ initial hunch was that it was a hoax. One pundit even proposed that it might have been a novelty created by an Asian taxidermist (and invested time in trying to find the stitches).33 The European community didn’t know how to describe or classify the new thing. A discussion ensued, new evidence was sought, and features identified, with the community eventually deciding that the platypus wasn’t a fake, and our understanding of animal classification evolved in response.
Humans experience the world in all its gloriously messy and poorly defined nature, where concepts are ill-defined and evolving and relationships fluid. Humans are quite capable of operating in this confusing and noisy world; of reading between the lines; tapping into weak signals; observing the unusual and unnamed; and using their curiosity, understanding, and intuition to balance conflicting priorities and determine what someone actually meant or what is the most important thing to do. Indeed, as Zeynep Ton documented in The Good Jobs Strategy,34 empowering employees to use their judgment, to draw on their own experience and observations, to look outside the box, and to consider the context of the problem they are trying to understand (and solve), as well as the formal metrics, policies, and rules of the firm, enabled them to make wiser decisions and consequently deliver higher performance. Unfortunately, AI doesn’t factor in the unstated implications and repercussions, the context and nuance, of a decision or action in the way humans do.
It is this ability to refer to the context around an idea or problem—to craft more appropriate solutions, or to discover new knowledge to create (and learn)—that is uniquely human. Technology cannot operate in such an environment: It needs its terms specified and objectives clearly articulated, a well-defined and fully contextualized environment within which it can reliably operate. The problem must be identified and formalized, the inputs and outputs articulated, before technology can be leveraged. Before an AI can recognize kittens, for instance, we must define what a kitten is (by exemplar or via a formal description) and find a way to represent potential kittens that the AI can work with. Similarly, the recent boom in autonomous vehicles is due more to the development of improved sensors and hyper-accurate maps, which provide the AI with the dials and knobs it needs to operate, than the development of vastly superior algorithms.
It is through the social process of knowledge construction that we work together to identify a problem, define its boundaries and dependencies, and discover and eliminate the unknowns until we reach the point where a problem has been defined sufficiently for knowledge and skills to be brought to bear.
A bridge between human and machine
If we’re to draw a line between human and machine, then it is the distinction between creating and using knowledge. On one side is the world of the unknowns (both known and unknown), of fuzzy concepts that cannot be fully articulated, the land of the humans, where we work together to make sense of the world. The other side is where terms and definitions have been established, where the problem is known and all variables are quantified, and automation can be applied. The bridge between the two is the social process of knowledge creation.

Consider the question of what a “happy retirement” is: We all want one, but we typically can’t articulate what it is. It’s a vague and subjective concept with a circular definition: A happy retirement is one in which you’re happy. Before we can use an AI-powered robo-advisor to create our investment portfolio, we need to take our concept of a “happy retirement” through grounding the concept (“what will actually make me happy, as opposed to what I think will make me happy?”), establishing reasonable expectations (“what can I expect to fund?”), to attitudes and behaviors (“how much can I change my habits, how and where I spend my money, to free up cash to invest?”), before we reach the quantifiable data against which a robo-advisor can operate (investment goals, income streams, and appetite for risk). Above quantifiable investment goals and income streams is the social world, where we need to work with other people to discover what our happy retirement might be, to define the problem and create the knowledge. Below is where automation—with its greater precision and capacity for consuming data—can craft our ultimate investment strategy. Ideally there is interaction between the two layers—as with freestyle chess—with automation enabling the humans to play what-if games and explore how the solution space changes depending on how they shape the problem definition.
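As a rough sketch of the layer below the line, assuming the fuzzy concept has already been reduced to quantified inputs, a toy allocation rule (standing in for a real robo-advisor, which would be far more sophisticated) might look like this; the glide-path constants are invented for illustration.

# A toy stand-in for a robo-advisor operating on quantified inputs.
# The constants and the linear blend are illustrative assumptions.
def allocate_portfolio(years_to_retirement: float, risk_appetite: float) -> dict:
    """risk_appetite in [0, 1]; returns illustrative equity/bond weights."""
    glide = min(years_to_retirement / 40, 1.0)  # longer horizon: more equity
    equity = round(0.3 + 0.6 * (0.5 * glide + 0.5 * risk_appetite), 2)
    return {"equity": equity, "bonds": round(1.0 - equity, 2)}

# The what-if interaction between the layers: vary the human-shaped
# inputs and watch how the machine-crafted solution changes.
print(allocate_portfolio(30, 0.7))  # long horizon, high appetite: equity-heavy
print(allocate_portfolio(10, 0.2))  # short horizon, low appetite: bond-heavy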
Reconstructing work
The foundation of work in the pre-industrial, craft era was the product. In the industrial era it is the task, and the specialized knowledge and skills required to execute a step in a production process. Logically, the foundation of post-industrial work will be the problem—the goal to be achieved35—one step up from the solution provided by a process.

If we’re to organize work around problems and successfully integrate humans and AI into the same organization, then it is management of the problem definition—rather than the task as part of a process to deliver a solution—that becomes our main concern.36 Humans take responsibility for shaping the problem—the data to consider, what good looks like, the choices to act on—which they do in collaboration with those around them, and their skill in doing this will determine how much additional value the solution creates. Automation (including AI) will support the humans by augmenting them with a set of digital behaviors37 (where a behavior is the way in which one acts in response to a particular situation or stimulus) that replicate specific human behaviors, but with the ability to leverage more data and provide more precise answers while not falling prey to the various cognitive biases to which we humans are prone. Finally, humans will evaluate the appropriateness and completeness of the solution provided and will act accordingly.
Indeed, if automation in the industrial era was the replication of tasks previously isolated and defined for humans, then in the post-industrial era, automation might be the replication of isolated and well-defined behaviors that were previously unique to humans.
Integrating humans and AI
Consider the challenge of eldercare. A recent initiative in the United Kingdom is attempting to break down the silos in which specialized health care professionals currently work.38 Each week, the specialists involved with a single patient—health care assistant, physiotherapist, occupational therapist, and so on—gather to discuss the patient. Each specialist brings his or her own point of view and domain knowledge to the table, but as a group they can build a more comprehensive picture of how best to help the patient by integrating observations from their various specialties as well as discussing more tacit observations that they might have made when interacting with the patient. By moving the focus from the tasks to be performed to the problem to be defined―how to improve the patient’s quality of life―the first phase of the project saw significant improvements in patient outcomes over the first nine months.

Integrating AI (and other digital) tools into this environment to augment the humans might benefit the patient even more by providing better and more timely decisions and avoiding cognitive biases, resulting in an even higher quality of care. To do this, we could create a common digital workspace where the team can capture its discussions; a whiteboard (or blackboard) provides a suitable metaphor, as it’s easy to picture the team standing in front of the board discussing the patient while using the board to capture important points or share images, charts, and other data. A collection of AI (and non-AI) digital behaviors would also be integrated directly into this environment. While the human team stands in front of the whiteboard, the digital behaviors stand behind it, listening to the team’s discussion and watching as notes and data are captured, and reacting appropriately, or even responding to direct requests.
Data from tests and medical monitors could be fed directly to the board, with predictive behaviors keeping a watchful eye on data streams to determine if something unfortunate is about to happen (similar to how electrical failures can be predicted by looking for characteristic fluctuations in power consumption, or how AI can be used to provide early warning of struggling students by observing patterns in communication, attendance, and assignment submission), flagging possible problems to enable the team to step in before an event and prevent it, rather than after. A speech-to-text behavior creates a transcription of the ensuing discussion so that what was discussed is easily searchable and referenceable. A medical image—an MRI perhaps—is ordered to explore a potential problem further, with the resulting image delivered directly to the board, where it is picked up by a cancer-detection behavior to highlight possible problems for the team’s specialist to review. With a diagnosis in hand, the team works with a genetic drug-compatibility39 behavior to find the best possible response for this patient and a drug-conflict40 behavior that studies the patient’s history, prescriptions, and the suggested interventions to determine how they will fit in the current care regime, and explore the effectiveness of different possible treatment strategies. Once a treatment strategy has been agreed on, a planning behavior41 converts the strategy into a detailed plan—taking into account the urgency, sequencing, and preferred providers for each intervention—listing the interventions to take place and when and where each should take place, along with the data to be collected, updating the plan should circumstances change, such as a medical imaging resource becoming available early due to a cancellation.
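To ground the predictive behavior just described, here is a minimal sketch, on the simplifying assumption that “watching a data stream” can be reduced to flagging readings that drift far from a rolling baseline; the window size and threshold are invented, and a real clinical system would use far richer models.

# A minimal sketch of a predictive behavior watching a monitor feed:
# flag readings far from the patient's recent baseline for human review.
from collections import deque
from statistics import mean, stdev

def watch_stream(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations from a rolling baseline, a cue to step in before an event."""
    baseline = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(baseline) >= 5:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value  # flagged for the care team, not auto-acted on
        baseline.append(value)

# Example: a stable vital sign with one sudden spike.
stream = [72, 71, 73, 72, 74, 73, 72, 71, 73, 110, 72]
print(list(watch_stream(stream)))  # -> [(9, 110)]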
Ideally, we want to populate this problem-solving environment with a comprehensive collection of behaviors. These behaviors might be predictive, flagging possible events before they happen. They might enable humans to explore the problem space, as the chess computer is used in freestyle chess, or the drug-compatibility and drug-conflict AIs in the example above. They might be analytical, helping us avoid our cognitive biases. They might be used to solve the problem, such as when the AI planning engine takes the requirements from the treatment strategy and the availability constraints from the resources the strategy requires, and creates a detailed plan for execution. Or they might be a combination of all of these. These behaviors could also include non-AI technologies, such as calculators, enterprise applications such as customer relationship management (CRM) (to determine insurance options for the patient), or even physical automations and non-technological solutions such as checklists.42
Uniquely human
It’s important to note that scenarios similar to the eldercare example just mentioned exist across a wide range of both blue- and white-collar jobs. The Toyota Production System is a particularly good blue-collar example, where work on the production line is oriented around the problem of improving the process used to manufacture cars, rather than the tasks required to assemble a car.

One might assume that the creation of knowledge is the responsibility of academy-anointed experts. In practice, as Toyota found, it is the people at the coalface, finding and chipping away at problems, who create the bulk of new knowledge.43 It is our inquisitive nature that leads us to try and explain the world around us, creating new knowledge and improving the world in the process. Selling investment products, as we’ve discussed, can be reframed to focus on determining what a happy retirement might look like for this particular client, and guiding the client to his or her goal. Electric power distribution might be better thought of as the challenge of improving a household’s ability to manage its power consumption. The general shift from buying products to consuming services44 provides a wealth of similar opportunities to help individuals improve how they consume these services, be they anything from toilet paper subscriptions45 through cars46 and eldercare (or other medical and health services) to jet engines,47 while internally these same firms will have teams focused on improving how these services are created.
Advances (and productivity improvements) are typically made by skilled and curious practitioners solving problems, whether it was weavers in a mill finding and sharing a faster (but more complex) method of joining a broken thread in a power loom or diagnosticians in the clinic noticing that white patches sometimes appear on the skin when melanomas regress spontaneously.48 The chain of discovery starts at the coalface with our human ability to notice the unusual or problematic—to swim through the stream of the unknowns and of fuzzy concepts that cannot be fully articulated. This is where we collaborate to make sense of the world and create knowledge, whether it be the intimate knowledge of what a happy retirement means for an individual, or grander concepts that help shape the world around us. It is this ability to collectively make sense of the world that makes us uniquely human and separates us from the robots—and it cuts across all levels of society.
If we persist in considering a job to be little more than a collection of related tasks, where value is determined by the knowledge and skill required to prosecute them, then we should expect that automation will eventually consume all available work, as we must assume that any well-defined task, no matter how complex, will eventually be automated. This comes at a high cost, for while machines can learn, they don’t, in themselves, create new knowledge. An AI tool might discover patterns in data, but it is the humans who noticed that the data set was interesting and then inferred meaning from the patterns discovered by the machine. As we relegate more and more tasks to machines, we are also eroding the connection between the problems to be discovered and the humans who can find and define them. Our machines might be able to learn, getting better at doing what they do, but they won’t be able to reconceive what ought to be done, or think outside their algorithmic box.
Conclusion
At the beginning of this article, we asked if the pessimists or optimists would be right. Will the future of work be defined by a lack of suitable jobs for much of the population? Or will historical norms reassert themselves, with automation creating more work than it destroys? Both of these options are quite possible since, as we often forget, work is a social construct, and it is up to us to decide how it should be constructed.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems. The difficulty is in defining production as a problem to be solved, rather than a process to be streamlined. To do this, we must first establish the context for the problem (or contexts, should we decompose a large production into a set of smaller interrelated problems). Within each context, we need to identify what is known and what is unknown and needs to be discovered. Only then can we determine for each problem whether human or machine, or human and machine, is best placed to move the problem forward.
Reframing work, changing the foundation of how we organize work from task to be done to problem to be solved (and the consequent reframing of automation from the replication of tasks to the replication of behaviors) might provide us with the opportunity to jump from the industrial productivity improvement S-curve49 to a post-industrial one. What drove us up the industrial S-curve was the incremental development of automation for more and more complex tasks. The path up the post-industrial S-curve might be the incremental development of automation for more and more complex behaviors.
The challenge, though, is to create not just jobs, but good jobs that make the most of our human nature as creative problem identifiers. It was not clear what a good job was at the start of the Industrial Revolution. Henry Ford’s early plants were experiencing nearly 380 percent turnover and 10 percent daily absenteeism from work,50 and it took a negotiation between capital and labor to determine what a good job should look like, and then a significant amount of effort to create the infrastructure, policies, and social institutions to support these good jobs. If we’re to change the path we’re on, if we’re to choose the third option and construct work around problems whereby we can make the most of our own human abilities and those of the robots, then we need a conscious decision to engage in a similar dialogue.
Will Robots Make Humans Unnecessary?
A lone researcher recently made a remarkable discovery that may save millions of lives. She identified a chemical compound that effectively targets a key growth enzyme in Plasmodium vivax, the microscopic parasite responsible for most of the world's malaria cases. The scientist behind this new weapon against one of humanity's great biological foes didn't expect praise, a bonus check, or even so much as a hearty pat on the back for her efforts. In fact, "she" lacks the ability to expect anything.
This breakthrough came courtesy of Eve, a "robotic scientist" that resides at the University of Manchester's Automation Lab. Eve was designed to find new disease-fighting drugs faster and cheaper than her human peers. She achieves this by using advanced artificial intelligence to form original hypotheses about which compounds will murder malicious microbes (while sparing human patients) and then conducting controlled experiments on disease cultures via a pair of specialized robotic arms.
Eve is still under development, but her proven efficacy guarantees that Big Pharma will begin to "recruit" her and her automated ilk in place of comparatively measured human scientists who demand annoying things like "monetary compensation," "safe work environments," and "sleep."
If history is any guide, human pharmaceutical researchers won't disappear entirely—at least not right away. What will probably happen is that the occupation will follow the path of so many others (assembly line worker, highway toll taker, bank teller) in that the ratio of humans to non-sentient entities will tilt dramatically.
Machines outperforming humans is a tale as old as the Industrial Revolution. But as this process takes hold in the exponentially evolving Information Age, many are beginning to question if human workers will be necessary at all.
The Brand New Thing That Is Happening
The Luddites were an occasionally violent group of 19th-century English textile workers who raged against the industrial machines that were beginning to replace human workers. The Luddites' anxieties were certainly understandable, if—as history would eventually bear out—misguided. Rather than crippling the economy, the mechanization the Luddites feared actually improved the standard of living for most Brits. New positions that took advantage of these rising technologies and the cheaper wares they produced (eventually) supplanted the jobs that were lost.
Fast-forward to today and "Luddite" has become a derogatory term used to describe anyone with an irrational fear or distrust of new technology. The so-called "Luddite fallacy" has become near-dogma among economists as a way to describe and dismiss the fear that new technologies will eat up all the jobs and leave nothing in their place. So, perhaps the HR assistant who's been displaced by state-of-the-art applicant tracking software or the cashier who got the boot in exchange for a self-checkout kiosk can take solace in the fact that the bomb that just blew up in their lives was clearing the way for a new higher-skill job in their future. And why shouldn't that be the case? This technology-employment paradigm has been validated by the past 200 or so years of history.
Yet some economists have openly pondered if the Luddite fallacy might have an expiration date. The concept only holds true when workers are able to retrain for jobs in other parts of the economy that are still in need of human labor. So, in theory, there could very well come a time when technology becomes so pervasive and evolves so quickly that human workers are no longer able to adapt fast enough.
One of the earliest predictions of this personless workforce came courtesy of an English economist who famously observed, "We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labor outrunning the pace at which we can find new uses for labor."
That economist was John Maynard Keynes, and the excerpt was from his 1930 essay "Economic Possibilities for our Grandchildren." Well, here we are some 85 years later (and had Keynes had any grandchildren they'd be well into retirement by now, if not moved on to that great job market in the sky), and the "disease" he spoke of never materialized. It might be tempting to say that Keynes's prediction was flat-out wrong, but there is reason to believe that he was just really early.
Fears of technological unemployment have ebbed and flowed through the decades, but recent trends are spurring renewed debate as to whether we may—in the not-crazy-distant future—be innovating ourselves toward unprecedented economic upheaval. This past September in New York City, there was even a World Summit on Technological Unemployment that featured economic heavies like Robert Reich (Secretary of Labor during the Clinton administration), Larry Summers (Secretary of the Treasury, also under Clinton), and Nobel Prize–winning economist Joseph Stiglitz.
So why might 2016 be so much more precarious than 1930? Today, particularly disruptive technologies like artificial intelligence, robotics, 3D printing, and nanotechnology are not only steadily advancing, but the data clearly shows that their rate of advancement is increasing (the most famous example of which is Moore's Law's near-flawless record of describing how computer processors grow exponentially brawnier with each generation). Furthermore, as the technologies develop independently, they will hasten the development of other segments (for example, artificial intelligence might program 3D printers to create the next generation of robots, which in turn will build even better 3D printers). It's what futurist and inventor Ray Kurzweil has described as the Law of Accelerating Returns: Everything is getting faster—faster.
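To see what "getting faster, faster" means in plain arithmetic, here is a back-of-the-envelope calculation; the two-year doubling period follows Moore's Law's rough cadence, and the figures are illustrative, not measurements.

# Back-of-the-envelope arithmetic for exponential improvement:
# capability after a given number of years with a fixed doubling period.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # ~32x in one decade
print(growth_factor(20))  # ~1,024x in two: each decade multiplies, not adds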
The evolution of recorded music illustrates this point. It's transformed dramatically over the past century, but the majority of that change has occurred in just the past two decades. Analog discs were the most important medium for more than 60 years before they were supplanted by CDs and cassettes in the 1980s, only to be taken over two decades later by MP3s, which are now rapidly being replaced by streaming audio. This is the type of acceleration that permeates modernity.
"I believe we're reaching an inflection point," explains software entrepreneur and author of the book Rise of the Robots, Martin Ford (read the full interview here)."Specifically in the way that machines—algorithms—are starting to pick up cognitive tasks. In a limited sense, they're starting to think like people. It's not like in agriculture, where machines were just displacing muscle power for mechanical activities. They're starting to encroach on that fundamental capability that sets us apart as a species—the ability to think. The second thing [that is different than the Industrial Revolution] is that information technology is so ubiquitous. It's going to invade the entire economy, every employment sector. So there isn't really a safe haven for workers. It's really going to impact across the board. I think it's going to make virtually every industry less labor-intensive. "
To what extent this fundamental shift will take place—and on what timescale—is still very much up for debate. Even if there isn't the mass economic cataclysm some fear, many of today's workers are completely unprepared for a world in which it's not only the steel-driving John Henrys who find that machines can do their job better (and for far cheaper), but the Michael Scotts and Don Drapers, too. A white-collar job and a college degree no longer offer any protection from automation.
If I Only Had a Brain
There is one technology in particular that stands out as a disruption super-tsunami in waiting. Machine learning is a subfield of AI that makes it possible for computers to perform complex tasks for which they weren't specifically programmed—indeed, for which they couldn't be programmed—by enabling them to both gather information and utilize it in useful ways.
Machine learning is how Pandora knows what songs you'll enjoy before you do. It's how Siri and other virtual assistants are able to adapt to the peculiarities of your voice commands. It even rules over global finances (high-frequency trading algorithms now account for more than three-quarters of all stock trades; one venture capital firm, Deep Knowledge Ventures, has gone so far as to appoint an algorithm to its board of directors).
Another notable example—and one that will itself displace thousands, if not millions, of human jobs—is the software used in self-driving cars. We may think of driving as a task involving a simple set of decisions (stop at a red light, make two lefts and a right to get to Bob's house, don't run over anybody), but the realities of the road demand that drivers make lots of decisions—far more than could ever be accounted for in a single program. It would be difficult to write code that could handle, say, the wordless negotiation between two drivers who simultaneously arrive at a four-way-stop intersection, let alone the proper reaction to a family of deer galloping into heavy traffic. But machines are able to observe human behavior and use that data to approximate a proper response to a novel situation.
"People tried just imputing all the rules of the road, but that doesn't work," explains Pedro Domingos, professor of computer science at the University of Washington and author of The Master Algorithm. "Most of what you need to know about driving are things that we take for granted, like looking at the curve in a road you've never seen before and turning the wheel accordingly. To us, this is just instinctive, but it's difficult to teach a computer to do that. But [one] can learn by observing how people drive. A self-driving car is just a robot controlled by a bunch of algorithms with the accumulated experience of all the cars it has observed driving before—and that's what makes up for a lack of common sense."
Mass adoption of self-driving cars is still many years away, but by all accounts they are quite capable at what they do right now (though Google's autonomous car apparently still has trouble discerning the difference between a deer and a plastic bag blowing in the wind). That's truly amazing when you look at what computers were able to achieve only a decade ago. With the prospect of accelerating evolution, we can only imagine what tasks they will be able to take on in another 10 years.
Is There a There There?
No one disagrees that technology will continue to achieve once-unthinkable feats, but the idea that mass technical unemployment is an inevitable result of these advancements remains controversial. Many economists maintain an unshakable faith in The Market and its ability to provide jobs regardless of what robots and other assorted futuristic machines happen to be zooming around. There is, however, one part of the economy where technology has, beyond the shadow of any doubt, pushed humanity aside: manufacturing.
Between 1975 and 2011, manufacturing output in the U.S. more than doubled (and that's despite NAFTA and the rise of globalization), while the number of (human) workers employed in manufacturing positions decreased by 31 percent. This dehumanizing of manufacturing isn't just a trend in America—or even rich Western nations—it's a global phenomenon. It found its way into China, too, where manufacturing output increased by 70 percent between 1996 and 2008 even as manufacturing employment declined by 25 percent over the same period.
There's a general consensus among economists that our species' decreasing relevance in manufacturing is directly attributable to technology's ability to make more stuff with fewer people. And what business wouldn't trade an expensive, lunch-break-addicted human workforce for a fleet of never-call-out-sick machines? (Answer: all the ones driven into extinction by the businesses that did.)
The $64 trillion question is whether this trend will be replicated in the services sector that more than two-thirds of U.S. employees now call their occupational home. And if it does, where will all those human workers move on to next?
"There's no doubt that automation is already having an effect on the labor market," says James Pethokoukis, a fellow with the libertarian-leaning American Enterprise Institute. "There's been a lot of growth at high-end jobs, but we've lost a lot of middle-skill jobs—the kind where you can create a step-by-step description of what those jobs are, like bank tellers or secretaries or front-office people."
It may be tempting to discount fears about technological unemployment when we see corporate profits routinely hitting record highs. Even the unemployment rate in the U.S. has fallen back to pre-economic-train-crash levels. But we should keep in mind that participation in the labor market remains mired at the lowest levels seen in four decades. There are numerous contributing factors here (not the least of which is the retiring baby boomers), but some of it is surely due to people so discouraged with their prospects in today's job market that they simply "peace out" altogether.
Another important plot development to consider is that even among those with jobs, the fruits of this increased productivity are not shared equally. Between 1973 and 2013, average U.S. worker productivity in all sectors increased an astounding 74.4 percent, while hourly compensation only increased 9.2 percent. It's hard not to conclude that human workers are simply less valuable than they once were.
So What Now, Humans?
Let's embark on a thought experiment and assume that technological unemployment is absolutely happening and its destructive effects are seeping into every employment nook and economic cranny. (To reiterate: This is far from a consensus viewpoint.) How should society prepare? Perhaps we can find a way forward by looking to our past.
Nearly two centuries ago, as the nation entered the Industrial Revolution, it also engaged in a parallel revolution in education known as the Common School Movement. In response to the economic upheavals of the day, society began to promote the radical concept that all children should have access to a basic education regardless of their family's wealth (or lack thereof). Perhaps most important, students in these new "common schools" were taught standardized skills and adherence to routine, which helped them go on to become capable factory workers.
"This time around we have the digital revolution, but we haven't had a parallel revolution in our education system," says economist and Education Evolution founder Lauren Paer. "There's a big rift between the modern economy and our education system. Students are being prepared for jobs in the wrong century. Adaptability will probably be the most valuable skill we can learn. We need to promote awareness of a landscape that is going to change quickly."
In addition to helping students learn to adapt—in other words, learn to learn—Paer encourages schools to place more emphasis on cultivating the soft skills in which "humans have a natural competitive advantage over machines," she says. "Things like asking questions, planning, creative problem solving, and empathy—those skills are very important for sales, it's very important for marketing, not to mention in areas that are already exploding, like eldercare."
One source of occupational hope lies in the fact that even as technology removes humanity from many positions, it can also help us retrain for new roles. Thanks to the Internet, there are certainly more ways to access information than ever before. Furthermore (if not somewhat ironically), advancing technologies can open new opportunities by lowering the bar to positions that previously required years of training; people without medical degrees might be able to handle preliminary emergency room diagnoses with the aid of an AI-enabled device, for example.
So, perhaps we shouldn't view these bots and bytes as interlopers out to take our jobs, but rather as tools that can help us do our jobs better. In fact, we may not have any other course of action: barring a global Amish-style rejection of progress, increasingly capable and sci-fabulous technologies are going to come online. That's a given; the workers who learn to embrace them will fare best.
"There will be a lot of jobs that won't disappear, but they will change because of machine learning," says Domingos. "I think what everyone needs to do is look at how they can take advantage of these technologies. Here's an analogy: A human can't win a race against a horse, but if you ride a horse, you'll go a lot further. We all know that Deep Blue beat Kasparov and then computers became the best chess players in the world—but that's actually not correct. The current world champions are what we call 'centaurs,' that's a team of a human and a computer. A human and a computer actually complement each other very well. And, as it turns out, human-computer teams beat all solely human or solely computer competitors. I think this is a good example of what's going to happen in a lot of areas."
Technologies such as machine learning can indeed help humans—at least those with the technical know-how—excel. Take the example of Cory Albertson, a "professional" fantasy sports bettor who has earned millions from daily gaming sites using hand-crafted algorithms to stake out an advantage over human competitors whose strategies are often based on little more than what they gleaned from last night's SportsCenter. Also, consider the previously mentioned stock-trading algorithms that have enabled financial players to amass fortunes on the market. In the case of these so-called "algo-trading" scenarios, the algorithms do all the heavy lifting and rapid trading, but carbon-based humans are still in the background implementing the investment strategies.
Of course, even with the most robust educational reform and distributed technical expertise, accelerating change will probably push a substantial portion of the workforce to the sidelines. There are only so many people who will be able to use coding magic to their benefit. And that type of disparity can only turn out badly.
One possible solution many economists have proposed is some form of universal basic income (UBI), aka just giving people money. As you might expect, this concept has the backing of many on the political left, but it's also had notable supporters on the right (libertarian economic rock star Friedrich Hayek famously endorsed the concept). Still, many in the U.S. are positively allergic to anything with even the faintest aroma of "socialism."
"It's really not socialism—quite the opposite," comments Ford, who supports the idea of a UBI at some point down the road to counter the inability of large swaths of society to earn a living the way they do today. "Socialism is about having the government take over the economy, owning the means of production, and—most importantly—allocating resources…. And that's actually the opposite of a guaranteed income. The idea is that you give people enough money to survive on and then they go out and participate in the market just as they would if they were getting that money from a job. It's actually a free market alternative to a safety net."
The exact shape of a Homo sapiens safety net depends on whom you ask. Paer endorses a guaranteed jobs program, possibly in conjunction with some form of UBI, while "the conservative version would be through something like a negative income tax," according to Pethokoukis. "If you're making $15 per hour and we as a society think you should be making $20 per hour, then we would close the gap. We would cut you a check for $5 per hour."
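Pethokoukis's negative income tax example is simple enough to write down directly; the sketch below merely restates his $15-versus-$20 arithmetic and is not a policy design.

# The negative income tax as arithmetic: if the market wage falls short
# of the socially agreed target, the gap is paid out per hour worked.
def wage_subsidy(market_wage: float, target_wage: float) -> float:
    """Per-hour check cut to the worker (zero once the wage meets the target)."""
    return max(target_wage - market_wage, 0.0)

print(wage_subsidy(15.0, 20.0))  # -> 5.0, matching the example in the text
print(wage_subsidy(22.0, 20.0))  # -> 0.0, no subsidy above the target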
In addition to maintaining workers' livelihoods, the very nature of work might need to be re-evaluated. Alphabet CEO Larry Page has suggested the implementation of a four-day workweek in order to allow more people to find employment. This type of shift isn't so pie-in-the-sky when you consider that, in the late 19th century, the average American worker logged nearly 75 hours per week, but the workweek evolved in response to new political, economic, and technological forces. There's no real reason that another shift of this magnitude couldn't (or wouldn't) happen again.
If policies like these seem completely unattainable in America's current gridlock-choked political atmosphere, that's because they most certainly are. If mass technological unemployment does begin to manifest itself as some anticipate, however, it will bring about a radical new economic reality that would demand a radical new political response.
Toward the Star Trek Economy
Nobody knows what the future holds. But that doesn't mean it isn't fun to play the "what if" game. What if no one can find a job? What if everything comes under control of a few trillionaires and their robot armies? And, most interesting of all: What if we're asking the wrong questions altogether?
What if, after a tumultuous transition period, the economy evolves beyond anything we would recognize today? If technology continues on its current trajectory, it inevitably leads to a world of abundance. In this new civilization 2.0, machines will conceivably be able to answer just about any question and make just about everything available. So, what does that mean for us lowly humans?
"I think we're heading towards a world where people will be able to spend their time doing what they enjoy doing, rather than what they need to be doing," Planetary Ventures CEO, X-Prize cofounder, and devoted techno-optimist Peter Diamandis told me when I interviewed him last year. "There was a Gallup Poll that said something like 70 percent of people in the United States don't enjoy their job—they work to put food on the table and get health insurance to survive. So, what happens when technology can do all that work for us and allow us to actually do what we enjoy with our time?"
It's easy to imagine a not-so-distant future where automation takes over all the dangerous and boring jobs that humans do now only because they have to. Surely there are drudging elements of your workday that you wouldn't mind outsourcing to a machine so you could spend more time with the parts of your job that you do care about.
One glass-half-full vision could look something like the galaxy portrayed in Star Trek: The Next Generation, where abundant food replicators and a post-money economy replaced the need to do... well, anything. Anyone in Starfleet could have chosen to spend all their time playing 24th-century video games without the fear of starvation or homelessness, but they decided a better use of their time would be spent exploring the unknown. Captain Picard and the crew of the USS Enterprise didn't work because they feared what would happen if they didn't—they worked because they wanted to.
Nothing is inevitable, of course. A thousand things could divert us from this path. But if we ever do reach a post-scarcity world, then humanity will be compelled to undergo a radical reevaluation of its values. And maybe that's not the worst thing that could happen to us.
Perhaps we shouldn't fear the idea that all the jobs are disappearing. Perhaps we should celebrate the hope that nobody will have to work again.
Views of Technology
Technology may be defined as the application of organized knowledge to practical tasks by ordered systems of people and machines.5 There are several advantages to such a broad definition. “Organized knowledge” allows us to include technologies based on practical experience and invention as well as those based on scientific theories. The “practical tasks” can include both the production of material goods (in industry and agriculture, for instance) and the provision of services (by computers, communications media, and biotechnologies, among others). Reference to “ordered systems of people and machines” directs attention to social institutions as well as to the hardware of technology. The breadth of the definition also reminds us that there are major differences among technologies.
2. Opportunity for Choice. Individual choice has a wider scope today than ever before because technology has produced new options not previously available and a greater range of products and services. Social and geographical mobility allow a greater choice of jobs and locations. In an urban industrial society, a person's options are not as limited by parental or community expectations as they were in a small-town agrarian society. The dynamism of technology can liberate people from static and confining traditions to assume responsibility for their own lives. Birth control techniques, for example, allow a couple to choose the size and timing of their family. Power over nature gives greater opportunity for the exercise of human freedom.6
3. More Leisure. Increases in productivity have led to shorter working hours. Computers and automation hold the promise of eliminating much of the monotonous work typical of earlier industrialism. Through most of history, leisure and cultural pursuits have been the privilege of the few, while the mass of humanity was preoccupied with survival. In an affluent society there is time for continuing education, the arts, social service, sports, and participation in community life. Technology can contribute to the enrichment of human life and the flowering of creativity. Laborsaving devices free us to do what machines cannot do. Proponents of this viewpoint say that people can move beyond materialism when their material needs are met.
4. Improved Communications. With new forms of transportation, one can in a few hours travel to distant cities that once took months to reach. With electronic technologies (radio, television, computer networks, and so on), the speed, range, and scope of communication have vastly increased. The combination of visual images and auditory messages has an immediacy not found in the linear sequence of the printed word. These new media offer the possibility of instant worldwide communication, greater interaction, understanding, and mutual appreciation in the “global village.” It has been suggested that by dialing coded numbers on telephones hooked into computer networks, citizens could participate in an instant referendum on political issues. According to its defenders, technology brings psychological and social benefits as well as material progress.
In part 2 we will encounter optimistic forecasts of each of the particular technologies examined. In agriculture, some experts anticipate that the continuing Green Revolution and the genetic engineering of new crops will provide adequate food for a growing world population. In the case of energy, it is claimed that breeder reactors and fusion will provide environmentally benign power to replace fossil fuels. Computer enthusiasts anticipate the Information Age in which industry is automated and communications networks enhance commercial, professional, and personal life. Biotechnology promises the eradication of genetic diseases, the improvement of health, and the deliberate design of new species—even the modification of humanity itself. In subsequent chapters we will examine each of these specific claims as well as the general attitudes they reveal.
A postindustrial society, it is said, is already beginning to emerge. In this new society, according to the sociologist Daniel Bell, power will be based on knowledge rather than property. The dominant class will be scientists, engineers, and technical experts; the dominant institutions will be intellectual ones (universities, industrial laboratories, and research institutes). The economy will be devoted mainly to services rather than material goods. Decisions will be made on rational-technical grounds, marking “the end of ideology.” There will be a general consensus on social values; experts will coordinate social planning, using rational techniques such as decision theory and systems analysis. This will be a future-oriented society, the age of the professional managers, the technocrats.9 A bright picture of the coming technological society has been given by many “futurists,” including Buckminster Fuller, Herman Kahn, and Alvin Toffler.10
Samuel Florman is an articulate engineer and author who has written extensively defending technology against its detractors. He insists that the critics have romanticized the life of earlier centuries and rural societies. Living standards were actually very low, work was brutal, and roles were rigidly defined. People have much greater freedom in technological societies. The automobile, for example, enables people to do what they want and enhances geographical and class mobility. People move to cities because they prefer life there to “the tedium and squalor of the countryside.” Florman says that worker alienation in industry is rare, and many people prefer the comfortable monotony of routine tasks to the pressures of decision and accountability. Technology is not an independent force out of control; it is the product of human choice, a response to public demand expressed through the market place.11
Florman grants that technology often has undesirable side effects, but he says that these are amenable to technological solutions. One of his heroes is Benjamin Franklin, who “proposed technological ways of coping with the unpleasant consequences of technology.”12 Florman holds that environmental and health risks are inherent in every technical advance. Any product or process can be made safer, but always at an economic cost. Economic growth and lower prices for consumers are often more important than additional safety, and absolute safety is an illusory goal. Large-scale systems are usually more efficient than small-scale ones. It is often easier to find a “technical fix” for a social problem than to try to change human behavior or get agreement on political policies.13
Florman urges us to rely on the judgment of experts in decisions about technology. He says that no citizen can be adequately informed about complex technical questions such as acid rain or radioactive waste disposal. Public discussion of these issues only leads to anxiety and erratic political actions. We should rely on the recommendations of experts on such matters.14 Florman extols the “unquenchable spirit” and “irrepressible human will” evident in technology: For all our apprehensions, we have no choice but to press ahead. We must do so, first, in the name of compassion. By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business. We simply cannot sleep while there are masses to feed and diseases to conquer, seas to explore and heavens to survey.15
Some theologians have also given very positive appraisals of technology. They see it as a source not only of higher living standards but also of greater freedom and creative expression. In his earlier writings, Harvey Cox held that freedom to master and shape the world through technology liberates us from the confines of tradition. Christianity brought about the desacralization of nature and allowed it to be controlled and used for human welfare.16 Norris Clarke saw technology as an instrument of human fulfillment and self-expression in the use of our God-given intelligence to transform the world. Liberation from bondage to nature, he says, is the victory of spirit over matter. As cocreators with God we can celebrate the contribution of reason to the enrichment of human life.17 Other theologians have affirmed technology as an instrument of love and compassion in relieving human suffering—a modern response to the biblical command to feed the hungry and help the neighbor in need.
The Jesuit paleontologist Pierre Teilhard de Chardin, writing in the early years of nuclear power, computers, and molecular biology, expressed a hopeful vision of the technological future. He envisioned computers and electronic communication in a network of interconnected consciousness, a global layer of thought that he called “the noosphere.” He defended eugenics, “artificial neo-life,” and the remodeling of the human organism by manipulation of the genes. With this new power over heredity, he said, we can replace the crude forces of natural selection and “seize the tiller” to control the direction of future evolution. We will have total power over matter, “reconstructing the very stuff of the universe.” He looked to a day of interplanetary travel and the unification of our own planet, based on intellectual and cultural interaction.18
Here was an inspiring vision of a planetary future in which technology and spiritual development would be linked together. Teilhard affirmed the value of secular life in the world and the importance of human efforts in “building the earth” as we cooperate in the creative work of God. Technology is participation in divine creativity. He rejected any note of despair, which would cut the nerve of constructive action. At times he seemed to have unlimited confidence in humanity's capacity to shape its own destiny. But his confidence really lay in the unity, convergence, and ascent of the cosmic process of which humanity and technology are manifestations. The ultimate source of that unity and ascent is God as known in the Christ whose role is cosmic. For Teilhard eschatological hope looks not to an intervention discontinuous from history, but to the fulfillment of a continuing process to which our own actions contribute.
Teilhard's writings present us with a magnificent sweep of time from past to future. But they do not consider the institutional structures of economic power and self-interest that now control the directions of technological development. Teilhard seldom acknowledged the tragic hold of social injustice on human life. He was writing before the destructive environmental impacts of technology were evident. When Teilhard looked to the past, he portrayed humanity as an integral part of the natural world, interdependent with other creatures. But when he looked to the future, he expected that because of our technology and our spirituality we will be increasingly separated from other creatures. Humanity will move beyond dependence on the organic world. Though he was ultimately theocentric (centered on God), and he talked about the redemption of the whole cosmos, many of his images are anthropocentric (centered on humanity) and imply that other forms of life are left behind in the spiritualization of humankind that technology will help to bring about.
Second, environmental destruction is symptomatic of a deeper problem: alienation from nature. The idea of human domination of nature has many roots. Western religious traditions have often drawn a sharp line between humanity and other creatures (see chapter 3). Economic institutions treat nature as a resource for human exploitation. But technological enthusiasts contribute to this devaluation of the natural world if they view it as an object to be controlled and manipulated. Many engineers are trained in the physical sciences and interpret living things in mechanistic rather than ecological terms. Others spend their entire professional lives in the technosphere of artifacts, machines, electronics, and computers, cut off from the world of nature. To be sure, sensitivity to nature is sometimes found among technological optimists, but it is more frequently found among the critics of technology.
Third, technology has contributed to the concentration of economic and political power. Only relatively affluent groups or nations can afford the latest technology; the gaps between rich and poor have been perpetuated and in many cases increased by technological developments. In a world of limited resources, it also appears impossible for all nations to sustain the standards of living of industrial nations today, much less the higher standards that industrial nations expect in the future. Affluent nations use a grossly disproportionate share of the world's energy and resources. Commitment to justice within nations also requires a more serious analysis of the distribution of the costs and benefits of technology. We will find many technologies in which one group enjoys the benefits while another group is exposed to the risks and social costs.
Fourth, large-scale technologies typical of industrial nations today are particularly problematic. They are capital-intensive rather than labor-intensive, and they add to unemployment in many parts of the world. Large-scale systems tend to be vulnerable to error, accident, or sabotage. The near catastrophe at the Three Mile Island nuclear plant in 1979 and the Chernobyl disaster in 1986 were the products of human errors, faulty equipment, poor design, and unreliable safety procedures. Nuclear energy is a prime example of a vulnerable, centralized, capital-intensive technology. Systems in which human or mechanical failures can be disastrous are risky even in a stable society, quite apart from additional risks under conditions of social unrest. The large scale of many current systems is as much the product of government subsidies, tax and credit policies, and particular corporate interests as of any inherent economies of scale.
Fifth, greater dependence on experts for policy decisions would not be desirable. The technocrats claim that their judgments are value free; the technical elite is supposedly nonpolitical. But those with power seldom use it rationally and objectively when their own interests are at stake. When social planners think they are deciding for the good of all—whether in the French or Russian revolutions or in the proposed technocracy of the future—the assumed innocence of moral intentions is likely to be corrupted in practice. Social controls over the controllers are always essential. I will suggest that the most important form of freedom is participation in the decisions affecting our lives. We will return in chapter 8 to this crucial question: How can both experts and citizens contribute to technological policy decisions in a democracy?
Lastly, we must question the linear view of the science-technology-society relationship, which is assumed by many proponents of optimistic views. Technology is taken to be applied science, and it is thought to have an essentially one-way impact on society. The official slogan of the Century of Progress exposition in Chicago in 1933 was: “Science Finds—Industry Applies—Man Conforms.” This has been called “the assembly-line view” because it pictures science at the start of the line and a stream of technological products pouring off the end of the line.19 If technology is fundamentally benign, there is no need for government interference except to regulate the most serious risks. Whatever guidance is needed for technological development is supplied by the expression of consumer preferences through the marketplace. In this view, technologies develop from the “push” of science and the “pull” of economic profits.
I accept the basic framework of private ownership in a free market economy, but I believe it has severe limitations that require correction through political processes. When wealth is distributed unevenly, the luxuries of a few people carry much more weight in the marketplace than the basic needs of many others. Many of the social and environmental costs of industrial processes are not included in market prices. Because long-term consequences are discounted at the current interest rate, they are virtually ignored in economic decisions. Our evaluation of technology, in short, must encompass questions of justice, participation, environmental protection, and long-term sustainability, as well as short-term economic efficiency.
2. Narrow Criteria of Efficiency. Technology leads to rational and efficient organization, which requires fragmentation, specialization, speed, and the maximization of output. The criterion is efficiency in achieving a single goal or a narrow range of objectives; side effects and human costs are ignored. Quantitative criteria tend to crowd out qualitative ones. The worker becomes the servant of the machine, adjusting to its schedule and tempo, adapting to its requirements. Meaningful work roles exist for only a small number of people in industrial societies today. Advertising creates demand for new products, whether or not they fill real needs, in order to stimulate a larger volume of production and a consumer society.
3. Impersonality and Manipulation. Relationships in a technological society are specialized and functional. Genuine community and interpersonal interaction are threatened when people feel like cogs in a well-oiled machine. In a bureaucracy, the goals of the organization are paramount and responsibility is diffused, so that no one feels personally responsible. Moreover, technology has created subtle ways of manipulating people and new techniques of electronic surveillance and psychological conditioning. When the technological mentality is dominant, people are viewed and treated like objects.
4. Uncontrollability. Separate technologies form an interlocking system, a total, mutually reinforcing network that seems to lead a life of its own. “Run-away technology” is said to be like a vehicle out of control, with a momentum that cannot be stopped. Some critics assert that technology is not just a set of adaptable tools for human use but an all-encompassing form of life, a pervasive structure with its own logic and dynamic. Its consequences are unintended and unforeseeable. Like the sorcerer's apprentice who found the magic formula to make his broom carry water but did not know how to make it stop, we have set in motion forces that we cannot control. The individual feels powerless facing a monolithic system.
5. Alienation of the Worker. The worker's alienation was a central theme in the writing of Karl Marx. Under capitalism, he said, workers do not own their own tools or machines, and they are powerless in their work life. They can sell their labor as a commodity, but their work is not a meaningful form of self-expression. Marx held that such alienation is a product of capitalist ownership and would disappear under state ownership. He was optimistic about the use of technology in a communist economic order, and thus he belongs with the third group below, the contextualists, but his idea of alienation has influenced the pessimists.
More recent writers point out that alienation has been common in state-managed industrial economies too and seems to be a product of the division of labor, rationalization of production, and hierarchical management in large organizations, regardless of the economic system. Studs Terkel and others have found in interviews that resentment, frustration, and a sense of powerlessness are widespread among American industrial workers. This contrasts strongly with the greater work autonomy, job satisfaction, and commitment to work found in the professions, skilled trades, and family-owned farms.21
Other features of technological development since World War II have evoked widespread concern. The allocation of more than two-thirds of the U.S. federal research and development budget to military purposes has diverted expertise from environmental problems and urgent human needs. Technology also seems to have contributed to the impoverishment of human relationships and a loss of community. The youth counterculture of the 1970s was critical of technology and sought harmony with nature, intensity of personal experience, supportive communities, and alternative life-styles apart from the prevailing industrial order. While many of its expressions were short-lived, many of its characteristic attitudes, including disillusionment with technology, have persisted among some of the younger generation.22
The political scientist Langdon Winner has given a sophisticated version of the argument that technology is an autonomous system that shapes all human activities to its own requirements. It makes little difference who is nominally in control—elected politicians, technical experts, capitalist executives, or socialist managers—if decisions are determined by the demands of the technical system. Human ends are then adapted to suit the techniques available rather than the reverse. Winner says that large-scale systems are self-perpetuating, extending their control over resources and markets and molding human life to fit their own smooth functioning. Technology is not a neutral means to human ends but an all-encompassing system that imposes its patterns on every aspect of life and thought.25
The philosopher Hans Jonas is impressed by the new scale of technological power and its influence on events distant in time and place. Traditional Western ethics have been anthropocentric and have considered only short-range consequences. Technological change has its own momentum, and its pace is too rapid for trial-and-error readjustments. Now genetics gives us power over humanity itself. Jonas calls for a new ethic of responsibility for the human future and for nonhuman nature. We should err on the side of caution, adopting policies designed to avert catastrophe rather than to maximize short-run benefits. “The magnitude of these stakes, taken with the insufficiency of our predictive knowledge, leads to the pragmatic rule to give the prophecy of doom priority over the prophecy of bliss.”26 We should seek “the least harm,” not “the greatest good.” We have no right to tamper genetically with human nature or to accept policies that entail even the remote possibility of the extinction of humanity in a nuclear holocaust.
Another philosopher, Albert Borgmann, does not want to return to a pretechnological past, but he urges the selection of technologies that encourage genuine human fulfillment. Building on the ideas of Heidegger, he holds that authentic human existence requires the engagement and depth that occur when simple things and practices focus our attention and center our lives. We have let technology define the good life in terms of production and consumption, and we have ended with mindless labor and mindless leisure. A fast-food restaurant replaces the family meal, which was an occasion of communication and celebration. The simple pleasures of making music, hiking and running, gathering with friends around the hearth, or engaging in creative and self-reliant work should be our goals. Borgmann thinks that some large-scale capital-intensive industry is needed (especially in transportation and communication), but he urges the development of small-scale labor-intensive, locally owned enterprises (in arts and crafts, health care, and education, for example). We should challenge the rule of technology and restrict it to the limited role of supporting the humanly meaningful activities associated with a simpler life.27
In Technology and Power, the psychologist David Kipnis maintains that those who control a technology have power over other people and that this affects personal attitudes as well as social structures. Power holders interpret technological superiority as moral superiority and tend to look down on weaker parties. Kipnis shows that military and transportation technologies fed the conviction of colonists that they were superior to colonized peoples. Similarly, medical knowledge and specialization have led doctors to treat patients as impersonal cases and to keep patients at arm's length with a minimum of personal communication. Automation gave engineers and managers increased power over workers, who no longer needed special skills. In general, “power corrupts” and leads people to rationalize their use of power for their own ends. Kipnis claims that the person with technological knowledge often has not only a potent instrument of control but also a self-image that assumes superiority over people who lack that knowledge and the concomitant opportunities to make decisions affecting their lives.28
Some Christian groups are critical of the impact of technology on human life. The Amish, for example, have resolutely turned their backs on radios, television, and even automobiles. By hard work, community cooperation, and frugal ways, they have prospered in agriculture and have continued their distinctive life-styles and educational patterns. Many theologians who do not totally reject technology criticize its tendency to generate a Promethean pride and a quest for unlimited power. The search for omnipotence is a denial of creaturehood. Unqualified devotion to technology as a total way of life, they say, is a form of idolatry. Technology is finally thought of as the source of salvation, the agent of secularized redemption.29 In an affluent society, a legitimate concern for material progress readily becomes a frantic pursuit of comfort, a total dedication to self-gratification. Such an obsession with things distorts our basic values as well as our relationships with other persons. Exclusive dependence on technological rationality also leads to a truncation of experience, a loss of imaginative and emotional life, and an impoverishment of personal existence.
Technology is imperialistic and addictive, according to these critics. The optimists may think that, by fulfilling our material needs, technology liberates us from materialism and allows us to turn to intellectual, artistic, and spiritual pursuits. But it does not seem to be working out that way. Our material wants have escalated and appear insatiable. Yesterday's luxuries are today's necessities. The rich are usually more anxious about their future than the poor. Once we allow technology to define the good life, we have excluded many important human values from consideration.
Several theologians have expressed particular concern for the impact of technology on religious life. Paul Tillich claims that the rationality and impersonality of technological systems undermine the personal presuppositions of religious commitment.30 Gabriel Marcel believes that the technological outlook pervades our lives and excludes a sense of the sacred. The technician treats everything as a problem that can be solved by manipulative techniques without personal involvement. But this misses the mystery of human existence, which is known only through involvement as a total person. The technician treats other people as objects to be understood and controlled.31 Martin Buber contrasts the I–It relation of objective detachment with the I–Thou relation of mutuality, responsiveness, and personal involvement. If the calculating attitude of control and mastery dominates a person's life, it excludes the openness and receptivity that are prerequisites of a relationship to God or to other persons.32 P. H. Sun holds that a high-tech environment inhibits the life of prayer. Attitudes of power and domination are incompatible with the humility and reverence that prayer requires.33
Third, technology can be the servant of human values. Life is indeed impoverished if the technological attitudes of mastery and power dominate one's outlook. Calculation and control do exclude mutuality and receptivity in human relationships and prevent the humility and reverence that religious awareness requires. But I would submit that the threat to these areas of human existence comes not from technology itself but from preoccupation with material progress and unqualified reliance on technology. We can make decisions about technology within a wider context of human and environmental values.
The interlocking structure of technologically based government agencies and corporations, sometimes called the “technocomplex,” is wider than the “military-industrial complex.” Many companies are virtually dependent on government contracts. The staff members of regulatory agencies, in turn, are mainly recruited from the industries they are supposed to regulate. We will see later that particular legislative committees, government agencies, and industries have formed three-way alliances to promote such technologies as nuclear energy or pesticides. Networks of industries with common interests form lobbies of immense political power. For example, U.S. legislation supporting railroads and public mass transit systems was blocked by a coalition of auto manufacturers, insurance companies, oil companies, labor unions, and the highway construction industry. But citizens can also influence the direction of technological development. Public opposition to nuclear power plants was as important as rising costs in stopping plans to construct new plants in almost all Western nations.
The historian Arnold Pacey gives many examples of the management of technology for power and profit. This is most clearly evident in the defense industries with their close ties to government agencies. But often the institutional biases associated with expertise are more subtle. Pacey gives as one example the Western experts in India and Bangladesh who in the 1960s advised the use of large drilling rigs and diesel pumps for wells, imported from the West. By 1975, two thirds of the pumps had broken down because the users lacked the skills and maintenance networks to operate them. Pacey calls for greater public participation and a more democratic distribution of power in the decisions affecting technology. He also urges the upgrading of indigenous technologies, the exploration of intermediate-scale processes, and greater dialogue between experts and users. Need-oriented values and local human benefits would then play a larger part in technological change.35
How, then, do Western Marxists view the human effects of technology in Soviet history? Reactions vary, but many would agree with Bernard Gendron that in the Soviet Union workers were as alienated, factories as hierarchically organized, experts as bureaucratic, and pollution and militarism as rampant as in the United States. But Gendron insists that the Soviet Union did not follow Marx's vision. The means of production were controlled by a small group within the Communist party, not by the workers. Gendron maintains that in a truly democratic socialism, technology would be humane and work would not be alienating.37 Most commentators hold that the demise of communism in eastern Europe and the Soviet Union was a product of both its economic inefficiency and its political repression. It remains to be seen whether any distinctive legacy from Marxism will remain there after the economic and political turmoil of the early nineties.
We have seen that a few theologians are technological optimists, while others have adopted pessimistic positions. A larger number, however, see technology as an ambiguous instrument of social power. As an example, consider Norman Faramelli, an engineer with theological training, who writes in a framework of Christian ideas: stewardship of creation, concern for the dispossessed, and awareness of the corrupting influence of power. He distrusts technology as an instrument of corporate profit, but he believes it can be reoriented toward human liberation and ecological balance. Technology assessment and the legislative processes of democratic politics, he holds, can be effective in controlling technology. But Faramelli also advocates restructuring the economic order to achieve greater equality in the distribution of the fruits of technology.38 Similar calls for the responsible use of technology in the service of basic human needs have been issued by task forces and conferences of the National Council of Churches and by the World Council of Churches (WCC).39 According to one summary of WCC documents, “technological society is to be blessed for its capacity to meet basic wants, chastised for its encouragement of inordinate wants, transformed until it serves communal wants.”40
Egbert Schuurman, a Calvinist engineer from Holland, rejects many features of current technology but holds that it can be transformed and redeemed to be an instrument of God's love serving all creatures. Western thought since the Renaissance has increasingly encouraged “man the master of nature”; secular and reductionistic assumptions have prevailed. Schuurman says that technology was given a messianic role as the source of salvation, and under the rule of human sin it has ended by enslaving us so we are “exiles in Babylon.” But we can be converted to seek God's Kingdom, which comes as a gift, not by human effort. Receiving it in joy and love, and responding in obedience, we can cooperate in meaningful service of God and neighbor. Schuurman holds that technology can be redirected to advance both material and spiritual well-being. It has “a magnificent future” if it is incorporated into God's work of creation and redemption. A liberated technology could do much to heal the brokenness of nature and society. Unfortunately, he gives us few examples of what such a technology would be like or how we can work to promote it.41
The American theologian Roger Shinn has written extensively on Christian ethics and gives attention to the structures of political and economic power within which technological decisions are made. He agrees with the pessimists that various technologies reinforce each other in interlocking systems, and he acknowledges that large-scale technologies lead to the concentration of economic and political power. But he argues that when enough citizens are concerned, political processes can be effective in guiding technology toward human welfare. Policy changes require a combination of protest, political pressure, and the kind of new vision that the biblical concern for social justice can provide.42
This third position seems to me more consistent with the biblical outlook than either of the alternatives. Preoccupation with technology does become a form of idolatry, a denial of the sovereignty of God, and a threat to distinctively human existence. But technology directed to genuine human needs is a legitimate expression of humankind's creative capacities and an essential contribution to its welfare. In a world of disease and hunger, technology rightly used can be a far-reaching expression of concern for persons. The biblical understanding of human nature is realistic about the abuses of power and the institutionalization of self-interest. But it also is idealistic in its demands for social justice in the distribution of the fruits of technology. It brings together celebration of human creativity and suspicion of human power.
The attitudes toward technology outlined in this chapter can be correlated with the typology of historic Christian attitudes toward society set forth by H. Richard Niebuhr.43 At the one extreme is accommodation to society. Here society is considered basically good and its positive potentialities are affirmed. Niebuhr cites the example of liberal theologians of the nineteenth century who had little to say concerning sin, revelation, or grace. They were confident about human reason, scientific and technological knowledge, and social progress. They would side with our first group, those who are optimistic about technology.
At the opposite extreme, Niebuhr describes Christian groups advocating withdrawal from society. They believe that society is basically sinful. The Christian perfectionists, seeking to maintain their purity and to practice radical obedience, have withdrawn into monasteries or into separate communities, as the Mennonites and Amish have done. They would tend to side with our second group, the critics of technology.
Niebuhr holds that the majority of Christians are in three movements that fall between the extremes of accommodation and withdrawal. A synthesis of Christianity and society has been advocated historically by the Roman Catholic Church. Aquinas held that there is both a revealed law, known through scripture and the church, and a natural law, built into the created order and accessible to human reason. Church and state have different roles but can cooperate for human welfare in society. This view encourages a qualified optimism about social change (and, I suggest, about technology).
Another option is the view of Christian life and society as two separate realms, as held in the Lutheran tradition. Here there is a compartmentalization of spiritual and temporal spheres and different standards for personal and public life. Sin is prevalent in all life, but in personal life it is overcome by grace; gospel comes before law as the Christian responds in faith and in love of neighbor. In the public sphere, however, sin must be restrained by the secular structures of authority and order. This view tends to be more pessimistic about social change, but it does not advocate withdrawal from society.
The final option described by Niebuhr is a transformation of society by Christian values. This position has much in common with the Catholic view and shares its understanding that God is at work in history, society, and nature as well as in personal life and the church. But it is more skeptical about the exercise of power by the institutional church, and it looks instead to the activity of the layperson in society. Calvin, the Reformed and Puritan traditions, the Anglicans, and the Methodists all sought a greater expression of Christian values in public life. They had great respect for the created world ordered by God, and they called for social justice and the redirection of cultural life. This position holds that social change (including the redirection of technology) is possible, but it is difficult because of the structures of group self-interest and institutional power. I favor this last option and will develop it further in subsequent chapters.
2. Technological Determinism. Several degrees and types of determinism can be distinguished. Strict determinism asserts that only one outcome is possible. A more qualified claim is that there are very strong tendencies present in technological systems, but these could be at least partly counteracted if enough people were committed to resisting them. Again, technology may be considered an autonomous interlocking system, which develops by its own inherent logic and extends its control over social institutions. Or the more limited claim is made that the development and deployment of technology in capitalist societies follows only one path, but the outcomes might be different in other economic systems. In all these versions, science is itself driven primarily by technological needs. Technology is either the “independent variable” on which other variables are dependent, or it is the overwhelmingly predominant force in historical change.
Technological determinists will be pessimists if they hold that the consequences of technology are on balance socially and environmentally harmful. Moreover, any form of determinism implies a limitation of human freedom and technological choice. However, some determinists retain great optimism about the consequences of technology. On the other hand, pessimists do not necessarily accept determinism, even in its weaker form. They may acknowledge the presence of technological choices but expect such choices to be missed because they are pessimistic about human nature and institutionalized greed. They may be pessimistic about our ability to respond to a world of global inequities and scarce resources. Nevertheless, determinism and pessimism are often found together among the critics of technology.
3. Contextual Interaction. Here there are six arrows instead of two, representing the complex interactions between science, technology, and society. Social and political forces affect the design as well as the uses of particular technologies. Technologies are not neutral because social goals and institutional interests are built into the technical designs that are chosen. Because there are choices, public policy decisions about technology play a larger role here than in either of the other views. Contextualism is most common among our third group, those who see technology as an ambiguous instrument of social power.
Contextualists also point to the diversity of science-technology interactions. Sometimes a technology was indeed based on recent scientific discoveries. Biotechnology, for example, depends directly on recent research in molecular biology. In other cases, such as the steam engine or the electric power system, innovations occurred with very little input from new scientific discoveries. A machine or process may have been the result of creative practical innovation or the modification of an existing technology. As Frederick Ferré puts it, science and technology in the modern world are both products of the combination of theoretical and practical intelligence, and “neither gave birth to the other.”44 Technology has its own distinctive problems and builds up its own knowledge base and professional community, though it often uses science as a resource to draw on. The reverse contribution of technology to science is also often evident. The work of astronomers, for instance, has been dependent on a succession of new technologies, from optical telescopes to microwave antennae and rockets. George Wise writes, “Historical studies have shown that the relations between science and technology need not be those of domination and subordination. Each has maintained its distinctive knowledge base and methods while contributing to the other and to its patrons as well.”45
In the previous volume, I discussed the “social construction of science” thesis, in which it is argued that not only the direction of scientific development but also the concepts and theories of science are determined by cultural assumptions and interests. I concluded that the “strong program” among sociologists and philosophers of science carries this historical and cultural relativism too far, and I defended a reformulated understanding of objectivity, which gives a major role to empirical data while acknowledging the influence of society on interpretive paradigms.
The case for “the social construction of technology” seems to me much stronger. Values are built into particular technological designs. There is no one “best way” to design a technology. Different individuals and groups may define a problem differently and may have diverse criteria of success. Bijker and Pinch show that in the late nineteenth century inventors constructed many different types of bicycles. Controversies developed about the relative size of front and rear wheels, seat location, air tires, brakes, and so forth. Diverse users were envisioned (workers, vacationers, racers, men and women) and diverse criteria (safety, comfort, speed, and so forth). In addition, the bicycle carried cultural meanings, affecting a person's self-image and social status. There was nothing logically or technically necessary about the model that finally won out and is now found around the world.46
The historian John Staudenmaier writes that
Strong gender divisions are present among employees of technology-related companies. When telephones were introduced, women were the switchboard operators and record keepers, while men designed and repaired the equipment and managed the whole system. Typesetting in large printing frames once required physical strength and mechanical skills and was a male occupation. But men continued to exclude women from compositors’ unions when typesetting, and more recently computer formatting, came to require only typing skills.48 Today most computer designers and programmers are men, while in offices most of the data are entered at computer keyboards by women. With many middle-level jobs eliminated, these lower-level jobs often become dead ends for women.49 A study of three computerized industries in Britain found that women were the low-paid operators, while only men understood and controlled the equipment, and men almost never worked under the supervision of women.50
Note that contextualism allows for a two-way interaction between technology and society. When technology is treated as merely one form of cultural expression among others, its distinctive characteristics may be ignored. In some renditions, the ways in which technology shapes culture are forgotten while the cultural forces on technology are scrutinized. The impact of technology on society is particularly important in the transfer of a technology to a new cultural setting in a developing country. Some Third World authors have been keenly aware of technology as an instrument of power, and they portray a two-way interaction between technology and society across national boundaries.
The pessimists typically make personal fulfillment their highest priority, and they interpret fulfillment in terms of human relationships and community life rather than material possessions. They are concerned about individual rights and the dignity of persons. They hold that meaningful work is as important as economic productivity in policies for technology. The pessimists are dedicated to resource sustainability and criticize the high levels of consumption in industrial societies today. They often advocate respect for all creatures and question the current technological goal of mastery of nature.
The contextualists are more likely to give prominence to social justice because they interpret technology as both a product and an instrument of social power. For them the most important forms of freedom are opportunities for participation in political processes and in work-related decisions. They are less concerned about economic growth than about how that growth is distributed and who receives the costs and the benefits. Contextualists often seek environmental protection because they are aware of the natural as well as the social contexts in which technologies operate.
I am most sympathetic with the contextualists, though I am indebted to many of the insights of the pessimists. Four issues seem to me particularly important in analyzing the differences among the positions outlined above.
1. Defense of the Personal. The pessimists have defended human values in a materialistic and impersonal society. The place to begin, they say, is one's own life. Each of us can adopt individual life-styles more consistent with human and environmental values. Moreover, strong protest and vivid examples are needed to challenge the historical dominance of technological optimism and the disproportionate resource consumption of affluent societies. I admire these critics for defending individuality and choice in the face of standardization and bureaucracy. I join them in upholding the significance of personal relationships and a vision of personal fulfillment that goes beyond material affluence. I affirm the importance of the spiritual life, but I do not believe that it requires a rejection of technology. The answer to the destructive features of technology is not less technology, but technology of the right kind.
2. The Role of Politics. Differing models of social change are implied in the three positions. The first group usually assumes a free market model. Technology is predominantly beneficial, and the reduction of any undesirable side effects is itself a technical problem for the experts. Government intervention is needed only to regulate the most harmful impacts. Writers mentioned in the second section, by contrast, typically adopt some variant of technological determinism. Technology is dehumanizing and uncontrollable. They see run-away technology as an autonomous and all-embracing system that molds all of life, including the political sphere, to its requirements. The individual is helpless within the system. The views expressed in the third section presuppose a “social conflict” model. Technology influences human life but is itself part of a cultural system; it is an instrument of social power serving the purposes of those who control it. It does systematically impose distinctive forms on all areas of life, but these can be modified through political processes. Whereas the first two groups give little emphasis to politics, the third, with which I agree, holds that conflicts concerning technology must be resolved primarily in the political arena.
3. The Redirection of Technology. I believe that we should neither accept uncritically the past directions of technological development nor reject technology in toto but redirect it toward the realization of human and environmental values. In the past, technological decisions have usually been governed by narrowly economic criteria, to the neglect of environmental and human costs. In a later chapter we will look at technology assessment, a procedure designed to use a broad range of criteria to evaluate the diverse consequences of an emerging technology—before it has been deployed and has developed the vested interests and institutional momentum that make it seem uncontrollable. I will argue that new policy priorities concerning agriculture, energy, resource allocation, and the redirection of technology toward basic human needs can be achieved within democratic political institutions. The key question will be: What decision-making processes and what technological policies can contribute to human and environmental values?
4. The Scale of Technology. Appropriate technology can be thought of as an attempt to achieve some of the material benefits of technology outlined in the first section without the destructive human costs discussed in the second section, most of which result from large-scale centralized technologies. Intermediate-scale technology allows decentralization and greater local participation in decisions. The decentralization of production also allows greater use of local materials and often a reduction of impact on the environment. Appropriate technology does not imply a return to primitive and prescientific methods; rather, it seeks to use the best science available toward goals different from those that have governed industrial production in the past.
Industrial technology was developed when capital and resources were abundant, and we continue to assume these conditions. Automation, for example, is capital-intensive and labor saving. Yet in developing nations capital is scarce and labor is abundant. The technologies needed there must be relatively inexpensive and labor-intensive. They must be of intermediate scale so that jobs can be created in rural areas and small towns, to slow down mass migration to the cities. They must fulfill basic human needs, especially for food, housing, and health. Alternative patterns of modernization are less environmentally and socially destructive than the path that we have followed. It is increasingly evident that many of these goals are desirable also in industrial nations. I will suggest that we should develop a mixture of large- and intermediate-scale technologies, which will require deliberate encouragement of the latter.
The redirection of technology will be no easy task. Contemporary technology is so tightly tied to industry, government, and the structures of economic power that changes in direction will be difficult to achieve. As the critics of technology recognize, the person who tries to work for change within the existing order may be absorbed by the establishment. But the welfare of humankind requires a creative technology that is economically productive, ecologically sound, socially just, and personally fulfilling.
Insurance of Electronics
Insurance for electronics provides, within the applicable policy terms, comprehensive protection for stationary electronic equipment and for information, communication, and medical technology equipment, or sets thereof. Other electrotechnical and electronic equipment and apparatus can also be insured, as can portable equipment or equipment fixed in a vehicle.
Under insurance for electronics, indemnity for insured software is paid only when the loss of, or alteration to, the software resulted from insured damage to the data carriers on which it was stored.
You can report a claim under insurance against liability for damages by phone, in writing, or online.
This breakthrough came courtesy of Eve, a "robotic scientist" that resides at the University of Manchester's Automation Lab. Eve was designed to find new disease-fighting drugs faster and cheaper than her human peers. She achieves this by using advanced artificial intelligence to form original hypotheses about which compounds will murder malicious microbes (while sparing human patients) and then conducting controlled experiments on disease cultures via a pair of specialized robotic arms.
Eve is still under development, but her proven efficacy guarantees that Big Pharma will begin to "recruit" her and her automated ilk in place of comparatively measured human scientists who demand annoying things like "monetary compensation," "safe work environments," and "sleep."
If history is any guide, human pharmaceutical researchers won't disappear entirely—at least not right away. What will probably happen is that the occupation will follow the path of so many others (assembly line worker, highway toll taker, bank teller) in that the ratio of humans to non-sentient entities will tilt dramatically.
Machines outperforming humans is a tale as old as the Industrial Revolution. But as this process takes hold in the exponentially evolving Information Age, many are beginning to question whether human workers will be necessary at all.
The Brand New Thing That Is Happening
The Luddites were an occasionally violent group of 19th-century English textile workers who raged against the industrial machines that were beginning to replace human workers. The Luddites' anxieties were certainly understandable, if—as history would eventually bear out—misguided. Rather than crippling the economy, the mechanization the Luddites feared actually improved the standard of living for most Brits. New positions that took advantage of these rising technologies and the cheaper wares they produced (eventually) supplanted the jobs that were lost.
Fast-forward to today and "Luddite" has become a derogatory term used to describe anyone with an irrational fear or distrust of new technology. The so-called "Luddite fallacy" has become near-dogma among economists as a way to describe and dismiss the fear that new technologies will eat up all the jobs and leave nothing in their place. So, perhaps the HR assistant who's been displaced by state-of-the-art applicant tracking software or the cashier who got the boot in exchange for a self-checkout kiosk can take solace in the fact that the bomb that just blew up in their lives was merely clearing the way for a new higher-skill job in their future. And why shouldn't that be the case? This technology-employment paradigm has been validated by the past 200 or so years of history.
Yet some economists have openly pondered if the Luddite fallacy might have an expiration date. The concept only holds true when workers are able to retrain for jobs in other parts of the economy that are still in need of human labor. So, in theory, there could very well come a time when technology becomes so pervasive and evolves so quickly that human workers are no longer able to adapt fast enough.
One of the earliest predictions of this personless workforce came courtesy of an English economist who famously observed (PDF), "We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labor outrunning the pace at which we can find new uses for labor."
That economist was John Maynard Keynes, and the excerpt was from his 1930 essay "Economic Possibilities for our Grandchildren." Well, here we are some 85 years later (and had Keynes had any grandchildren they'd be well into retirement by now, if not moved on to that great job market in the sky), and the "disease" he spoke of never materialized. It might be tempting to say that Keynes's prediction was flat-out wrong, but there is reason to believe that he was just really early.
Fears of technological unemployment have ebbed and flowed through the decades, but recent trends are spurring renewed debate as to whether we may—in the not-crazy-distant future—be innovating ourselves toward unprecedented economic upheaval. This past September in New York City, there was even a World Summit on Technological Unemployment that featured economic heavies like Robert Reich (Secretary of Labor during the Clinton administration), Larry Summers (Secretary of the Treasury, also under Clinton), and Nobel Prize–winning economist Joseph Stiglitz.
So why might 2016 be so much more precarious than 1930? Today, particularly disruptive technologies like artificial intelligence, robotics, 3D printing, and nanotechnology are not only steadily advancing, but the data clearly shows that their rate of advancement is increasing (the most famous example of which is Moore's Law's near-flawless record of describing how computer processors grow exponentially brawnier with each generation). Furthermore, as the technologies develop independently, they will hasten the development of other segments (for example, artificial intelligence might program 3D printers to create the next generation of robots, which in turn will build even better 3D printers). It's what futurist and inventor Ray Kurzweil has described as the Law of Accelerating Returns: Everything is getting faster—faster.
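To see why "everything is getting faster—faster" is more than a slogan, a minimal sketch helps. The snippet below (Python, with an invented baseline and an assumed fixed two-year doubling period; the numbers are illustrative, not taken from this article) shows how an exponential curve concentrates most of its growth in its latest stretch:

```python
# Illustrative sketch of exponential growth under a fixed doubling period,
# in the spirit of the Moore's Law pattern cited above. The baseline value
# and the two-year doubling period are assumptions chosen for illustration.

def capability(years, doubling_period=2.0, start=1.0):
    """Capability after `years` years, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

for y in (0, 10, 20, 30, 40):
    print(f"year {y:2d}: {capability(y):>12,.0f}x baseline")

# The unsettling property: the absolute growth between years 30 and 40
# dwarfs all the growth of the first 30 years combined.
```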
The evolution of recorded music illustrates this point. It's transformed dramatically over the past century, but the majority of that change has occurred in just the past two decades. Analog discs were the most important medium for more than 60 years before they were supplanted by CDs and cassettes in the 1980s, only to be taken over two decades later by MP3s, which are now rapidly being replaced by streaming audio. This is the type of acceleration that permeates modernity.
"I believe we're reaching an inflection point," explains software entrepreneur and author of the book Rise of the Robots, Martin Ford (read the full interview here)."Specifically in the way that machines—algorithms—are starting to pick up cognitive tasks. In a limited sense, they're starting to think like people. It's not like in agriculture, where machines were just displacing muscle power for mechanical activities. They're starting to encroach on that fundamental capability that sets us apart as a species—the ability to think. The second thing [that is different than the Industrial Revolution] is that information technology is so ubiquitous. It's going to invade the entire economy, every employment sector. So there isn't really a safe haven for workers. It's really going to impact across the board. I think it's going to make virtually every industry less labor-intensive. "
To what extent this fundamental shift will take place—and on what timescale—is still very much up for debate. Even if there isn't the mass economic cataclysm some fear, many of today's workers are completely unprepared for a world in which it's not only the steel-driving John Henrys who find that machines can do their job better (and for far cheaper), but the Michael Scotts and Don Drapers, too. A white-collar job and a college degree no longer offer any protection from automation.
If I Only Had a Brain
There is one technology in particular that stands out as a disruption super-tsunami in waiting. Machine learning is a subfield of AI that makes it possible for computers to perform complex tasks for which they weren't specifically programmed—indeed, for which they couldn't be programmed—by enabling them to both gather information and utilize it in useful ways.
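As a concrete, deliberately tiny illustration of that definition, here is a toy classifier in Python. Nothing below comes from the article; the task and data are invented. The point is only that no rule like "a high skip rate means the user dislikes the song" is ever written down—the program infers the boundary from labeled examples:

```python
# Toy 1-nearest-neighbor classifier: the program receives labeled examples
# rather than explicit rules, and labels a new case by analogy to the
# closest example it has seen. Data and task are hypothetical.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`.
    `examples` is a list of ((x, y), label) pairs."""
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(examples, key=lambda ex: sq_dist(ex[0], query))
    return label

# (hours listened per week, skip rate) -> did the user like similar songs?
training = [((9.0, 0.10), "liked"), ((7.5, 0.20), "liked"),
            ((1.0, 0.90), "disliked"), ((2.0, 0.70), "disliked")]

print(nearest_neighbor(training, (8.0, 0.15)))  # -> liked
print(nearest_neighbor(training, (1.5, 0.80)))  # -> disliked
```

Real systems replace this toy distance rule with models trained on millions of examples, but the division of labor is the same: humans supply examples, and the machine supplies the rule.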
Machine learning is how Pandora knows what songs you'll enjoy before you do. It's how Siri and other virtual assistants are able to adapt to the peculiarities of your voice commands. It even rules over global finances (high-frequency trading algorithms now account for more than three-quarters of all stock trades; one venture capital firm, Deep Knowledge Ventures, has gone so far as to appoint an algorithm to its board of directors).
Another notable example—and one that will itself displace thousands, if not millions, of human jobs—is the software used in self-driving cars. We may think of driving as a task involving a simple set of decisions (stop at a red light, make two lefts and a right to get to Bob's house, don't run over anybody), but the realities of the road demand that drivers make lots of decisions—far more than could ever be accounted for in a single program. It would be difficult to write code that could handle, say, the wordless negotiation between two drivers who simultaneously arrive at a four-way-stop intersection, let alone the proper reaction to a family of deer galloping into heavy traffic. But machines are able to observe human behavior and use that data to approximate a proper response to a novel situation.
"People tried just imputing all the rules of the road, but that doesn't work," explains Pedro Domingos, professor of computer science at the University of Washington and author of The Master Algorithm. "Most of what you need to know about driving are things that we take for granted, like looking at the curve in a road you've never seen before and turning the wheel accordingly. To us, this is just instinctive, but it's difficult to teach a computer to do that. But [one] can learn by observing how people drive. A self-driving car is just a robot controlled by a bunch of algorithms with the accumulated experience of all the cars it has observed driving before—and that's what makes up for a lack of common sense."
Mass adoption of self-driving cars is still many years away, but by all accounts they are quite capable at what they do right now (though Google's autonomous car apparently still has trouble discerning the difference between a deer and a plastic bag blowing in the wind). That's truly amazing when you look at what computers were able to achieve only a decade ago. With the prospect of accelerating evolution, we can only imagine what tasks they will be able to take on in another 10 years.
Is There a There There?
No one disagrees that technology will continue to achieve once-unthinkable feats, but the idea that mass technological unemployment is an inevitable result of these advancements remains controversial. Many economists maintain an unshakable faith in The Market and its ability to provide jobs regardless of what robots and other assorted futuristic machines happen to be zooming around. There is, however, one part of the economy where technology has, beyond the shadow of any doubt, pushed humanity aside: manufacturing.
Between 1975 and 2011, manufacturing output in the U.S. more than doubled (and that's despite NAFTA and the rise of globalization), while the number of (human) workers employed in manufacturing positions decreased by 31 percent. This dehumanizing of manufacturing isn't just a trend in America—or even rich Western nations—it's a global phenomenon. It found its way into China, too, where manufacturing output increased by 70 percent between 1996 and 2008 even as manufacturing employment declined by 25 percent over the same period.
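A quick back-of-the-envelope check of what those figures imply for output per worker. Treating "more than doubled" as exactly 2x is an assumption (the text gives no precise multiple), but even that conservative reading is striking:

```python
# Implied output per manufacturing worker, from the figures quoted above.
# "More than doubled" is conservatively treated as exactly 2x.

us_output = 2.00       # U.S. output multiple, 1975 -> 2011 (assumed 2x)
us_workers = 1 - 0.31  # U.S. manufacturing employment fell 31%
print(f"U.S. output per worker: ~{us_output / us_workers:.1f}x the 1975 level")

cn_output = 1.70       # China output multiple, 1996 -> 2008
cn_workers = 1 - 0.25  # employment fell 25% over the same period
print(f"China output per worker: ~{cn_output / cn_workers:.1f}x the 1996 level")
```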
There's a general consensus among economists that our species' decreasing relevance in manufacturing is directly attributable to technology's ability to make more stuff with fewer people. And what business wouldn't trade an expensive, lunch-break-addicted human workforce for a fleet of never-call-out-sick machines? (Answer: all the ones driven into extinction by the businesses that did.)
The $64 trillion question is whether this trend will be replicated in the services sector that more than two-thirds of U.S. employees now call their occupational home. And if it does, where will all those human workers move on to next?
"There's no doubt that automation is already having an effect on the labor market," says James Pethokoukis, a fellow with the libertarian-leaning American Enterprise Institute. "There's been a lot of growth at high-end jobs, but we've lost a lot of middle-skill jobs—the kind where you can create a step-by-step description of what those jobs are, like bank tellers or secretaries or front-office people."
It may be tempting to discount fears about technological unemployment when we see corporate profits routinely hitting record highs. Even the unemployment rate in the U.S. has fallen back to pre-economic-train-crash levels. But we should keep in mind that participation in the labor market remains mired at the lowest levels seen in four decades. There are numerous contributing factors here (not the least of which is the retiring baby boomers), but some of it is surely due to people so discouraged with their prospects in today's job market that they simply "peace out" altogether.
Another important plot development to consider is that even among those with jobs, the fruits of this increased productivity are not shared equally. Between 1973 and 2013, average U.S. worker productivity in all sectors increased an astounding 74.4 percent, while hourly compensation only increased 9.2 percent. It's hard not to conclude that human workers are simply less valuable than they once were.
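The size of that wedge is easy to make explicit. Using only the two percentages quoted above, pay that had tracked productivity would now be roughly 60 percent higher than actual pay:

```python
# The productivity-pay wedge implied by the figures above (1973-2013).
productivity_growth = 0.744  # +74.4% productivity, all sectors
compensation_growth = 0.092  # +9.2% hourly compensation

wedge = (1 + productivity_growth) / (1 + compensation_growth) - 1
print(f"pay tracking productivity would be ~{wedge:.0%} above actual pay")
```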
So What Now, Humans?
Let's embark on a thought experiment and assume that technological unemployment is absolutely happening and its destructive effects are seeping into every employment nook and economic cranny. (To reiterate: This is far from a consensus viewpoint.) How should society prepare? Perhaps we can find a way forward by looking to our past.
Nearly two centuries ago, as the nation entered the Industrial Revolution, it also engaged in a parallel revolution in education known as the Common School Movement. In response to the economic upheavals of the day, society began to promote the radical concept that all children should have access to a basic education regardless of their family's wealth (or lack thereof). Perhaps most important, students in these new "common schools" were taught standardized skills and adherence to routine, which helped them go on to become capable factory workers.
"This time around we have the digital revolution, but we haven't had a parallel revolution in our education system," says economist and Education Evolution founder Lauren Paer. "There's a big rift between the modern economy and our education system. Students are being prepared for jobs in the wrong century. Adaptability will probably be the most valuable skill we can learn. We need to promote awareness of a landscape that is going to change quickly."
In addition to helping students learn to adapt—in other words, learn to learn—Paer encourages schools to place more emphasis on cultivating the soft skills in which "humans have a natural competitive advantage over machines," she says. "Things like asking questions, planning, creative problem solving, and empathy—those skills are very important for sales, it's very important for marketing, not to mention in areas that are already exploding, like eldercare."
One source of occupational hope lies in the fact that even as technology removes humanity from many positions, it can also help us retrain for new roles. Thanks to the Internet, there are certainly more ways to access information than ever before. Furthermore (and somewhat ironically), advancing technologies can open new opportunities by lowering the bar to positions that previously required years of training; people without medical degrees might be able to handle preliminary emergency room diagnoses with the aid of an AI-enabled device, for example.
So, perhaps we shouldn't view these bots and bytes as interlopers out to take our jobs, but rather as tools that can help us do our jobs better. In fact, we may not have any other course of action—barring a global Amish-style rejection of progress, increasingly capable and sci-fabulous technologies are going to come online. That's a given; the workers who learn to embrace them will fare best.
"There will be a lot of jobs that won't disappear, but they will change because of machine learning," says Domingos. "I think what everyone needs to do is look at how they can take advantage of these technologies. Here's an analogy: A human can't win a race against a horse, but if you ride a horse, you'll go a lot further. We all know that Deep Blue beat Kasparov and then computers became the best chess players in the world—but that's actually not correct. The current world champions are what we call 'centaurs,' that's a team of a human and a computer. A human and a computer actually complement each other very well. And, as it turns out, human-computer teams beat all solely human or solely computer competitors. I think this is a good example of what's going to happen in a lot of areas."
Technologies such as machine learning can indeed help humans—at least those with the technical know-how—excel. Take the example of Cory Albertson, a "professional" fantasy sports bettor who has earned millions from daily gaming sites using hand-crafted algorithms to stake out an advantage over human competitors whose strategies are often based on little more than what they gleaned from last night's SportsCenter. Also, consider the previously mentioned stock-trading algorithms that have enabled financial players to amass fortunes on the market. In the case of these so-called "algo-trading" scenarios, the algorithms do all the heavy lifting and rapid trading, but carbon-based humans are still in the background devising the investment strategies.
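Albertson's actual models aren't public, so treat the following as a purely hypothetical sketch of the general approach: given salary costs and projected points (every name and number here is invented), even a crude value screen—expected points per dollar of salary cap—is a more disciplined edge than drafting on highlight-reel memories.

```python
# A deliberately simple sketch of value-based screening, NOT Albertson's
# actual method: rank players by projected points per unit of salary.
# All names and numbers below are invented for illustration.

players = [
    {"name": "Player A", "salary": 9000, "projected_points": 48.0},
    {"name": "Player B", "salary": 6500, "projected_points": 39.0},
    {"name": "Player C", "salary": 4800, "projected_points": 21.5},
    {"name": "Player D", "salary": 7200, "projected_points": 30.1},
]

# Value = expected output per dollar of salary-cap budget spent.
for p in players:
    p["value"] = p["projected_points"] / p["salary"]

for p in sorted(players, key=lambda p: p["value"], reverse=True):
    print(f'{p["name"]}: {1000 * p["value"]:.2f} projected points per $1,000')
```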
Of course, even with the most robust educational reform and distributed technical expertise, accelerating change will probably push a substantial portion of the workforce to the sidelines. There are only so many people who will be able to use coding magic to their benefit. And that type of disparity can only turn out badly.
One possible solution many economists have proposed is some form of universal basic income (UBI), aka just giving people money. As you might expect, this concept has the backing of many on the political left, but it's also had notable supporters on the right (libertarian economic rock star Friedrich Hayek famously endorsed the concept). Still, many in the U.S. are positively allergic to anything with even the faintest aroma of "socialism."
"It's really not socialism—quite the opposite," comments Ford, who supports the idea of a UBI at some point down the road to counter the inability of large swaths of society to earn a living the way they do today. "Socialism is about having the government take over the economy, owning the means of production, and—most importantly—allocating resources…. And that's actually the opposite of a guaranteed income. The idea is that you give people enough money to survive on and then they go out and participate in the market just as they would if they were getting that money from a job. It's actually a free market alternative to a safety net."
The exact shape of a Homo sapiens safety net depends on whom you ask. Paer endorses a guaranteed jobs program, possibly in conjunction with some form of UBI, while "the conservative version would be through something like a negative income tax," according to Pethokoukis. "If you're making $15 per hour and we as a society think you should be making $20 per hour, then we would close the gap. We would cut you a check for $5 per hour."
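Pethokoukis's example is easy to make concrete. Below is a minimal sketch of the arithmetic: the flat gap-closing version he describes, plus the tapered variant that most negative-income-tax proposals actually use (the 50 percent phase-out rate is an assumption for illustration, not his number).

```python
def gap_closing_topup(hourly_wage: float, target_wage: float) -> float:
    """The flat version Pethokoukis describes: society covers the full
    gap between the actual wage and the socially agreed target."""
    return max(0.0, target_wage - hourly_wage)

def tapered_topup(hourly_wage: float, target_wage: float,
                  phaseout: float = 0.5) -> float:
    """A Friedman-style negative income tax tapers the subsidy instead,
    so every extra dollar earned still raises take-home pay.
    The 50% phase-out rate is an assumption for illustration."""
    return max(0.0, (target_wage - hourly_wage) * phaseout)

# Pethokoukis's numbers: $15/hour actual wage, $20/hour target.
print(gap_closing_topup(15, 20))   # 5.0 -> a $5/hour check
print(tapered_topup(15, 20))       # 2.5 -> a $2.50/hour check
```

The taper matters because a full top-up to the target wage would leave workers with little reason to seek raises below $20 per hour; phasing the subsidy out gradually preserves that incentive.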
In addition to maintaining workers' livelihoods, the very nature of work might need to be re-evaluated. Alphabet CEO Larry Page has suggested the implementation of a four-day workweek in order to allow more people to find employment. This type of shift isn't so pie-in-the-sky when you consider that, in the late 19th century, the average American worker logged nearly 75 hours per week, but the workweek evolved in response to new political, economic, and technological forces. There's no real reason that another shift of this magnitude couldn't (or wouldn't) happen again.
If policies like these seem completely unattainable in America's current gridlock-choked political atmosphere, that's because they most certainly are. If mass technological unemployment does begin to manifest itself as some anticipate, however, it will bring about a radical new economic reality that would demand a radical new political response.
Toward the Star Trek Economy
Nobody knows what the future holds. But that doesn't mean it isn't fun to play the "what if" game. What if no one can find a job? What if everything comes under control of a few trillionaires and their robot armies? And, most interesting of all: What if we're asking the wrong questions altogether?
What if, after a tumultuous transition period, the economy evolves beyond anything we would recognize today? If technology continues on its current trajectory, it inevitably leads to a world of abundance. In this new civilization 2.0, machines will conceivably be able to answer just about any question and make just about everything available. So, what does that mean for us lowly humans?
"I think we're heading towards a world where people will be able to spend their time doing what they enjoy doing, rather than what they need to be doing," Planetary Ventures CEO, X-Prize cofounder, and devoted techno-optimist Peter Diamandis told me when I interviewed him last year. "There was a Gallup Poll that said something like 70 percent of people in the United States don't enjoy their job—they work to put food on the table and get health insurance to survive. So, what happens when technology can do all that work for us and allow us to actually do what we enjoy with our time?"
It's easy to imagine a not-so-distant future where automation takes over all the dangerous and boring jobs that humans do now only because they have to. Surely there are drudging elements of your workday that you wouldn't mind outsourcing to a machine so you could spend more time with the parts of your job that you do care about.
One glass-half-full vision could look something like the galaxy portrayed in Star Trek: The Next Generation, where abundant food replicators and a post-money economy replaced the need to do... well, anything. Anyone in Starfleet could have chosen to spend all their time playing 24th-century video games without the fear of starvation or homelessness, but they decided their time would be better spent exploring the unknown. Captain Picard and the crew of the USS Enterprise didn't work because they feared what would happen if they didn't—they worked because they wanted to.
Nothing is inevitable, of course. A thousand things could divert us from this path. But if we ever do reach a post-scarcity world, then humanity will be compelled to undergo a radical reevaluation of its values. And maybe that's not the worst thing that could happen to us.
Perhaps we shouldn't fear the idea that all the jobs are disappearing. Perhaps we should celebrate the hope that nobody will have to work again.
Views of Technology
Appraisals of modern technology diverge widely. Some see it as the beneficent source of higher living standards, improved health, and better communications. They claim that any problems created by technology are themselves amenable to technological solutions. Others are critical of technology, holding that it leads to alienation from nature, environmental destruction, the mechanization of human life, and the loss of human freedom. A third group asserts that technology is ambiguous, its impacts varying according to the social context in which it is designed and used, because it is both a product and a source of economic and political power.4
In this chapter, views of technology are grouped under three headings: Technology as Liberator, Technology as Threat, and Technology as Instrument of Power. In each case the underlying assumptions and value judgments are examined. I will indicate why I agree with the third of these positions, which emphasizes the social construction and use of particular technologies. The issues cut across disciplines; I draw from the writings of engineers, historians, sociologists, political scientists, philosophers, and theologians. The human and environmental values relevant to the appraisal of technology are further analyzed in chapters 2 and 3. These three chapters provide the ethical categories and principles for examining policy decisions about particular technologies in later chapters.

Technology may be defined as the application of organized knowledge to practical tasks by ordered systems of people and machines.5 There are several advantages to such a broad definition. “Organized knowledge” allows us to include technologies based on practical experience and invention as well as those based on scientific theories. The “practical tasks” can include both the production of material goods (in industry and agriculture, for instance) and the provision of services (by computers, communications media, and biotechnologies, among others). Reference to “ordered systems of people and machines” directs attention to social institutions as well as to the hardware of technology. The breadth of the definition also reminds us that there are major differences among technologies.
I. TECHNOLOGY AS LIBERATOR
Throughout modern history, technological developments have been enthusiastically welcomed because of their potential for liberating us from hunger, disease, and poverty. Technology has been celebrated as the source of material progress and human fulfillment.
1. THE BENEFITS OF TECHNOLOGY
Defenders of technology point out that four kinds of benefits can be distinguished if one looks at its recent history and considers its future:
1. Higher Living Standards. New drugs, better medical attention, and improved sanitation and nutrition have more than doubled the average life span in industrial nations within the past century. Machines have released us from much of the backbreaking labor that in previous ages absorbed most of people's time and energy. Material progress represents liberation from the tyranny of nature. The ancient dream of a life free from famine and disease is beginning to be realized through technology. The standard of living of low-income families in industrial societies has doubled in a generation, even though relative incomes have changed little. Many people in developing nations now look on technology as their principal source of hope. Productivity and economic growth, it is said, benefit everyone in the long run.

2. Opportunity for Choice. Individual choice has a wider scope today than ever before because technology has produced new options not previously available and a greater range of products and services. Social and geographical mobility allow a greater choice of jobs and locations. In an urban industrial society, a person's options are not as limited by parental or community expectations as they were in a small-town agrarian society. The dynamism of technology can liberate people from static and confining traditions to assume responsibility for their own lives. Birth control techniques, for example, allow a couple to choose the size and timing of their family. Power over nature gives greater opportunity for the exercise of human freedom.6
3. More Leisure. Increases in productivity have led to shorter working hours. Computers and automation hold the promise of eliminating much of the monotonous work typical of earlier industrialism. Through most of history, leisure and cultural pursuits have been the privilege of the few, while the mass of humanity was preoccupied with survival. In an affluent society there is time for continuing education, the arts, social service, sports, and participation in community life. Technology can contribute to the enrichment of human life and the flowering of creativity. Laborsaving devices free us to do what machines cannot do. Proponents of this viewpoint say that people can move beyond materialism when their material needs are met.
4. Improved Communications. With new forms of transportation, one can in a few hours travel to distant cities that once took months to reach. With electronic technologies (radio, television, computer networks, and so on), the speed, range, and scope of communication have vastly increased. The combination of visual images and auditory messages has an immediacy not found in the linear sequence of the printed word. These new media offer the possibility of instant worldwide communication, greater interaction, understanding, and mutual appreciation in the “global village.” It has been suggested that by dialing coded numbers on telephones hooked into computer networks, citizens could participate in an instant referendum on political issues. According to its defenders, technology brings psychological and social benefits as well as material progress.
In part 2 we will encounter optimistic forecasts of each of the particular technologies examined. In agriculture, some experts anticipate that the continuing Green Revolution and the genetic engineering of new crops will provide adequate food for a growing world population. In the case of energy, it is claimed that breeder reactors and fusion will provide environmentally benign power to replace fossil fuels. Computer enthusiasts anticipate the Information Age in which industry is automated and communications networks enhance commercial, professional, and personal life. Biotechnology promises the eradication of genetic diseases, the improvement of health, and the deliberate design of new species—even the modification of humanity itself. In subsequent chapters we will examine each of these specific claims as well as the general attitudes they reveal.
2. OPTIMISTIC VIEWS OF TECHNOLOGY
Let us look at some authors who have expressed optimism regarding technology. Melvin Kranzberg, a prominent historian of technology, has presented a very positive picture of the technological past and future. He argues that urban industrial societies offer more freedom than rural ones and provide greater choice of occupations, friends, activities, and life-styles. The work week has been cut in half, and human wants have been dramatically fulfilled.7

Emanuel Mesthene, former director of the Harvard Program in Technology and Society, grants that every technology brings risks as well as benefits, but he says that our task is the rational management of risk. Some technologies poison the environment, but others reduce pollution. A new technology may displace some workers but it also creates new jobs. Nineteenth-century factories and twentieth-century assembly lines did involve dirty and monotonous work, but the newer technologies allow greater creativity and individuality.8
A postindustrial society, it is said, is already beginning to emerge. In this new society, according to the sociologist Daniel Bell, power will be based on knowledge rather than property. The dominant class will be scientists, engineers, and technical experts; the dominant institutions will be intellectual ones (universities, industrial laboratories, and research institutes). The economy will be devoted mainly to services rather than material goods. Decisions will be made on rational-technical grounds, marking “the end of ideology.” There will be a general consensus on social values; experts will coordinate social planning, using rational techniques such as decision theory and systems analysis. This will be a future-oriented society, the age of the professional managers, the technocrats.9 A bright picture of the coming technological society has been given by many “futurists,” including Buckminster Fuller, Herman Kahn, and Alvin Toffler.10
Samuel Florman is an articulate engineer and author who has written extensively defending technology against its detractors. He insists that the critics have romanticized the life of earlier centuries and rural societies. Living standards were actually very low, work was brutal, and roles were rigidly defined. People have much greater freedom in technological societies. The automobile, for example, enables people to do what they want and enhances geographical and class mobility. People move to cities because they prefer life there to “the tedium and squalor of the countryside.” Florman says that worker alienation in industry is rare, and many people prefer the comfortable monotony of routine tasks to the pressures of decision and accountability. Technology is not an independent force out of control; it is the product of human choice, a response to public demand expressed through the marketplace.11
Florman grants that technology often has undesirable side effects, but he says that these are amenable to technological solutions. One of his heroes is Benjamin Franklin, who “proposed technological ways of coping with the unpleasant consequences of technology.”12 Florman holds that environmental and health risks are inherent in every technical advance. Any product or process can be made safer, but always at an economic cost. Economic growth and lower prices for consumers are often more important than additional safety, and absolute safety is an illusory goal. Large-scale systems are usually more efficient than small-scale ones. It is often easier to find a “technical fix” for a social problem than to try to change human behavior or get agreement on political policies.13
Florman urges us to rely on the judgment of experts in decisions about technology. He says that no citizen can be adequately informed about complex technical questions such as acid rain or radioactive waste disposal. Public discussion of these issues only leads to anxiety and erratic political actions. We should rely on the recommendations of experts on such matters.14 Florman extols the “unquenchable spirit” and “irrepressible human will” evident in technology: For all our apprehensions, we have no choice but to press ahead. We must do so, first, in the name of compassion. By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business. We simply cannot sleep while there are masses to feed and diseases to conquer, seas to explore and heavens to survey.15
Some theologians have also given very positive appraisals of technology. They see it as a source not only of higher living standards but also of greater freedom and creative expression. In his earlier writings, Harvey Cox held that freedom to master and shape the world through technology liberates us from the confines of tradition. Christianity brought about the desacralization of nature and allowed it to be controlled and used for human welfare.16 Norris Clarke saw technology as an instrument of human fulfillment and self-expression in the use of our God-given intelligence to transform the world. Liberation from bondage to nature, he says, is the victory of spirit over matter. As cocreators with God we can celebrate the contribution of reason to the enrichment of human life.17 Other theologians have affirmed technology as an instrument of love and compassion in relieving human suffering—a modern response to the biblical command to feed the hungry and help the neighbor in need.
The Jesuit paleontologist Pierre Teilhard de Chardin, writing in the early years of nuclear power, computers, and molecular biology, expressed a hopeful vision of the technological future. He envisioned computers and electronic communication in a network of interconnected consciousness, a global layer of thought that he called “the noosphere.” He defended eugenics, “artificial neo-life,” and the remodeling of the human organism by manipulation of the genes. With this new power over heredity, he said, we can replace the crude forces of natural selection and “seize the tiller” to control the direction of future evolution. We will have total power over matter, “reconstructing the very stuff of the universe.” He looked to a day of interplanetary travel and the unification of our own planet, based on intellectual and cultural interaction.18
Here was an inspiring vision of a planetary future in which technology and spiritual development would be linked together. Teilhard affirmed the value of secular life in the world and the importance of human efforts in “building the earth” as we cooperate in the creative work of God. Technology is participation in divine creativity. He rejected any note of despair, which would cut the nerve of constructive action. At times he seemed to have unlimited confidence in humanity's capacity to shape its own destiny. But his confidence really lay in the unity, convergence, and ascent of the cosmic process of which humanity and technology are manifestations. The ultimate source of that unity and ascent is God as known in the Christ whose role is cosmic. For Teilhard eschatological hope looks not to an intervention discontinuous from history, but to the fulfillment of a continuing process to which our own actions contribute.
Teilhard's writings present us with a magnificent sweep of time from past to future. But they do not consider the institutional structures of economic power and self-interest that now control the directions of technological development. Teilhard seldom acknowledged the tragic hold of social injustice on human life. He was writing before the destructive environmental impacts of technology were evident. When Teilhard looked to the past, he portrayed humanity as an integral part of the natural world, interdependent with other creatures. But when he looked to the future, he expected that because of our technology and our spirituality we will be increasingly separated from other creatures. Humanity will move beyond dependence on the organic world. Though he was ultimately theocentric (centered on God), and he talked about the redemption of the whole cosmos, many of his images are anthropocentric (centered on humanity) and imply that other forms of life are left behind in the spiritualization of humankind that technology will help to bring about.
3. A REPLY TO THE OPTIMISTS
Subsequent chapters will point to inadequacies of these views, but some major criticisms can be summarized here.
First, the environmental costs and human risks of technology are dismissed too rapidly. The optimists are confident that technical solutions can be found for environmental problems. Of course, pollution abatement technologies can treat many of the effluents of industry, but often unexpected, indirect, or delayed consequences occur. The effects of carcinogens may not show up for twenty-five years or more. The increased death rates among shipyard workers exposed to asbestos in the early 1940s were not evident until the late 1960s. Toxic wastes may contaminate groundwater decades after they have been buried. The hole in the ozone layer caused by the release of chlorofluorocarbons had not been anticipated by any scientists. Above all, soil erosion and massive deforestation threaten the biological resources essential for human life, and global warming from our use of fossil fuels threatens devastating changes in world climates.

Second, environmental destruction is symptomatic of a deeper problem: alienation from nature. The idea of human domination of nature has many roots. Western religious traditions have often drawn a sharp line between humanity and other creatures (see chapter 3). Economic institutions treat nature as a resource for human exploitation. But technological enthusiasts contribute to this devaluation of the natural world if they view it as an object to be controlled and manipulated. Many engineers are trained in the physical sciences and interpret living things in mechanistic rather than ecological terms. Others spend their entire professional lives in the technosphere of artifacts, machines, electronics, and computers, cut off from the world of nature. To be sure, sensitivity to nature is sometimes found among technological optimists, but it is more frequently found among the critics of technology.
Third, technology has contributed to the concentration of economic and political power. Only relatively affluent groups or nations can afford the latest technology; the gaps between rich and poor have been perpetuated and in many cases increased by technological developments. In a world of limited resources, it also appears impossible for all nations to sustain the standards of living of industrial nations today, much less the higher standards that industrial nations expect in the future. Affluent nations use a grossly disproportionate share of the world's energy and resources. Commitment to justice within nations also requires a more serious analysis of the distribution of the costs and benefits of technology. We will find many technologies in which one group enjoys the benefits while another group is exposed to the risks and social costs.
Fourth, large-scale technologies typical of industrial nations today are particularly problematic. They are capital-intensive rather than labor-intensive, and they add to unemployment in many parts of the world. Large-scale systems tend to be vulnerable to error, accident, or sabotage. The near catastrophe at the Three Mile Island nuclear plant in 1979 and the Chernobyl disaster in 1986 were the products of human errors, faulty equipment, poor design, and unreliable safety procedures. Nuclear energy is a prime example of a vulnerable, centralized, capital-intensive technology. Systems in which human or mechanical failures can be disastrous are risky even in a stable society, quite apart from additional risks under conditions of social unrest. The large scale of many current systems is as much the product of government subsidies, tax and credit policies, and particular corporate interests as of any inherent economies of scale.
Fifth, greater dependence on experts for policy decisions would not be desirable. The technocrats claim that their judgments are value free; the technical elite is supposedly nonpolitical. But those with power seldom use it rationally and objectively when their own interests are at stake. When social planners think they are deciding for the good of all—whether in the French or Russian revolutions or in the proposed technocracy of the future—the assumed innocence of moral intentions is likely to be corrupted in practice. Social controls over the controllers are always essential. I will suggest that the most important form of freedom is participation in the decisions affecting our lives. We will return in chapter 8 to this crucial question: How can both experts and citizens contribute to technological policy decisions in a democracy?
Lastly, we must question the linear view of the science-technology-society relationship, which is assumed by many proponents of optimistic views. Technology is taken to be applied science, and it is thought to have an essentially one-way impact on society. The official slogan of the Century of Progress exposition in Chicago in 1933 was: “Science Finds—Industry Applies—Man Conforms.” This has been called “the assembly-line view” because it pictures science at the start of the line and a stream of technological products pouring off the end of the line.19 If technology is fundamentally benign, there is no need for government interference except to regulate the most serious risks. Whatever guidance is needed for technological development is supplied by the expression of consumer preferences through the marketplace. In this view, technologies develop from the “push” of science and the “pull” of economic profits.
I accept the basic framework of private ownership in a free market economy, but I believe it has severe limitations that require correction through political processes. When wealth is distributed unevenly, the luxuries of a few people carry much more weight in the marketplace than the basic needs of many others. Many of the social and environmental costs of industrial processes are not included in market prices. Because long-term consequences are discounted at the current interest rate, they are virtually ignored in economic decisions. Our evaluation of technology, in short, must encompass questions of justice, participation, environmental protection, and long-term sustainability, as well as short-term economic efficiency.
II. TECHNOLOGY AS THREAT
At the opposite extreme are the critics of modern technology who see it as a threat to authentic human life. We will confine ourselves here to criticisms of the human rather than environmental consequences of technology.
1. THE HUMAN COSTS OF TECHNOLOGY
Five characteristics of industrial technology seem to its critics particularly inimical to human fulfillment.20
1. Uniformity in a Mass Society. Mass production yields standardized products, and mass media tend to produce a uniform national culture. Individuality is lost and local or regional differences are obliterated in the homogeneity of industrialization. Nonconformity hinders efficiency, so cooperative and docile workers are rewarded. Even the interactions among people are mechanized and objectified. Human identity is defined by roles in organizations. Conformity to a mass society jeopardizes spontaneity and freedom. According to the critics, there is little evidence that an electronic, computerized, automated society will produce more diversity than earlier industrialism did.

2. Narrow Criteria of Efficiency. Technology leads to rational and efficient organization, which requires fragmentation, specialization, speed, and the maximization of output. The criterion is efficiency in achieving a single goal or a narrow range of objectives; side effects and human costs are ignored. Quantitative criteria tend to crowd out qualitative ones. The worker becomes the servant of the machine, adjusting to its schedule and tempo, adapting to its requirements. Meaningful work roles exist for only a small number of people in industrial societies today. Advertising creates demand for new products, whether or not they fill real needs, in order to stimulate a larger volume of production and a consumer society.
3. Impersonality and Manipulation. Relationships in a technological society are specialized and functional. Genuine community and interpersonal interaction are threatened when people feel like cogs in a well-oiled machine. In a bureaucracy, the goals of the organization are paramount and responsibility is diffused, so that no one feels personally responsible. Moreover, technology has created subtle ways of manipulating people and new techniques of electronic surveillance and psychological conditioning. When the technological mentality is dominant, people are viewed and treated like objects.
4. Uncontrollability. Separate technologies form an interlocking system, a total, mutually reinforcing network that seems to lead a life of its own. “Run-away technology” is said to be like a vehicle out of control, with a momentum that cannot be stopped. Some critics assert that technology is not just a set of adaptable tools for human use but an all-encompassing form of life, a pervasive system with its own logic and dynamic. Its consequences are unintended and unforeseeable. Like the sorcerer's apprentice who found the magic formula to make his broom carry water but did not know how to make it stop, we have set in motion forces that we cannot control. The individual feels powerless facing a monolithic system.
5. Alienation of the Worker. The worker's alienation was a central theme in the writing of Karl Marx. Under capitalism, he said, workers do not own their own tools or machines, and they are powerless in their work life. They can sell their labor as a commodity, but their work is not a meaningful form of self-expression. Marx held that such alienation is a product of capitalist ownership and would disappear under state ownership. He was optimistic about the use of technology in a communist economic order, and thus he belongs with the third group below, the contextualists, but his idea of alienation has influenced the pessimists.
More recent writers point out that alienation has been common in state-managed industrial economies too and seems to be a product of the division of labor, rationalization of production, and hierarchical management in large organizations, regardless of the economic system. Studs Terkel and others have found in interviews that resentment, frustration, and a sense of powerlessness are widespread among American industrial workers. This contrasts strongly with the greater work autonomy, job satisfaction, and commitment to work found in the professions, skilled trades, and family-owned farms.21
Other features of technological development since World War II have evoked widespread concern. The allocation of more than two-thirds of the U.S. federal research and development budget to military purposes has diverted expertise from environmental problems and urgent human needs. Technology also seems to have contributed to the impoverishment of human relationships and a loss of community. The youth counterculture of the 1970s was critical of technology and sought harmony with nature, intensity of personal experience, supportive communities, and alternative life-styles apart from the prevailing industrial order. While many of its expressions were short-lived, many of its characteristic attitudes, including disillusionment with technology, have persisted among some of the younger generation.22
2. RECENT CRITICS OF TECHNOLOGY
To the French philosopher and social critic Jacques Ellul, technology is an autonomous and uncontrollable force that dehumanizes all that it touches. The enemy is “technique”—a broad term Ellul uses to refer to the technological mentality and structure that he sees pervading not only industrial processes, but also all social, political, and economic life affected by them. Efficiency and organization, he says, are sought in all activities. The machine enslaves people when they adapt to its demands. Technology has its own inherent logic and inner necessity. Rational order is everywhere imposed at the expense of spontaneity and freedom.
Ellul ends with a technological determinism, since technique is self-perpetuating, all-pervasive, and inescapable. Any opposition is simply absorbed as we become addicted to the products of technology. Public opinion and the state become the servants of technique rather than its masters. Technique is global, monolithic, and unvarying among diverse regions and nations. Ellul offers us no way out, since all our institutions, the media, and our personal lives are totally in its grip. He holds that biblical ethics can provide a viewpoint transcending society from which to judge the sinfulness of the technological order and can give us the motivation to revolt against it, but he holds out little hope of controlling it.23 Some interpreters see in Ellul's recent writings a very guarded hope that a radical Christian freedom that rejects cultural illusions of technological progress might in the long run lead to the transformation rather than the rejection of technology. But Ellul does not spell out such a transformation because he holds that the outcome is in God's hands, not ours, and most of his writings are extremely pessimistic about social change.24

The political scientist Langdon Winner has given a sophisticated version of the argument that technology is an autonomous system that shapes all human activities to its own requirements. It makes little difference who is nominally in control—elected politicians, technical experts, capitalist executives, or socialist managers—if decisions are determined by the demands of the technical system. Human ends are then adapted to suit the techniques available rather than the reverse. Winner says that large-scale systems are self-perpetuating, extending their control over resources and markets and molding human life to fit their own smooth functioning. Technology is not a neutral means to human ends but an all-encompassing system that imposes its patterns on every aspect of life and thought.25
The philosopher Hans Jonas is impressed by the new scale of technological power and its influence on events distant in time and place. Traditional Western ethics have been anthropocentric and have considered only short-range consequences. Technological change has its own momentum, and its pace is too rapid for trial-and-error readjustments. Now genetics gives us power over humanity itself. Jonas calls for a new ethic of responsibility for the human future and for nonhuman nature. We should err on the side of caution, adopting policies designed to avert catastrophe rather than to maximize short-run benefits. “The magnitude of these stakes, taken with the insufficiency of our predictive knowledge, leads to the pragmatic rule to give the prophecy of doom priority over the prophecy of bliss.”26 We should seek “the least harm,” not “the greatest good.” We have no right to tamper genetically with human nature or to accept policies that entail even the remote possibility of the extinction of humanity in a nuclear holocaust.
Another philosopher, Albert Borgmann, does not want to return to a pretechnological past, but he urges the selection of technologies that encourage genuine human fulfillment. Building on the ideas of Heidegger, he holds that authentic human existence requires the engagement and depth that occur when simple things and practices focus our attention and center our lives. We have let technology define the good life in terms of production and consumption, and we have ended with mindless labor and mindless leisure. A fast-food restaurant replaces the family meal, which was an occasion of communication and celebration. The simple pleasures of making music, hiking and running, gathering with friends around the hearth, or engaging in creative and self-reliant work should be our goals. Borgmann thinks that some large-scale capital-intensive industry is needed (especially in transportation and communication), but he urges the development of small-scale labor-intensive, locally owned enterprises (in arts and crafts, health care, and education, for example). We should challenge the rule of technology and restrict it to the limited role of supporting the humanly meaningful activities associated with a simpler life.27
In Technology and Power, the psychologist David Kipnis maintains that those who control a technology have power over other people and this affects personal attitudes as well as social structures. Power holders interpret technological superiority as moral superiority and tend to look down on weaker parties. Kipnis shows that military and transportation technologies fed the conviction of colonists that they were superior to colonized peoples. Similarly, medical knowledge and specialization have led doctors to treat patients as impersonal cases and to keep patients at arm's length with a minimum of personal communication. Automation gave engineers and managers increased power over workers, who no longer needed special skills. In general, “power corrupts” and leads people to rationalize their use of power for their own ends. Kipnis claims that the person with technological knowledge often has not only a potent instrument of control but also a self-image that assumes superiority over people who lack that knowledge and the concomitant opportunities to make decisions affecting their lives.28
Some Christian groups are critical of the impact of technology on human life. The Amish, for example, have resolutely turned their backs on radios, television, and even automobiles. By hard work, community cooperation, and frugal ways, they have prospered in agriculture and have continued their distinctive life-styles and educational patterns. Many theologians who do not totally reject technology criticize its tendency to generate a Promethean pride and a quest for unlimited power. The search for omnipotence is a denial of creaturehood. Unqualified devotion to technology as a total way of life, they say, is a form of idolatry. Technology is finally thought of as the source of salvation, the agent of secularized redemption.29 In an affluent society, a legitimate concern for material progress readily becomes a frantic pursuit of comfort, a total dedication to self-gratification. Such an obsession with things distorts our basic values as well as our relationships with other persons. Exclusive dependence on technological rationality also leads to a truncation of experience, a loss of imaginative and emotional life, and an impoverishment of personal existence.
Technology is imperialistic and addictive, according to these critics. The optimists may think that, by fulfilling our material needs, technology liberates us from materialism and allows us to turn to intellectual, artistic, and spiritual pursuits. But it does not seem to be working out that way. Our material wants have escalated and appear insatiable. Yesterday's luxuries are today's necessities. The rich are usually more anxious about their future than the poor. Once we allow technology to define the good life, we have excluded many important human values from consideration.
Several theologians have expressed particular concern for the impact of technology on religious life. Paul Tillich claims that the rationality and impersonality of technological systems undermine the personal presuppositions of religious commitment.30 Gabriel Marcel believes that the technological outlook pervades our lives and excludes a sense of the sacred. The technician treats everything as a problem that can be solved by manipulative techniques without personal involvement. But this misses the mystery of human existence, which is known only through involvement as a total person. The technician treats other people as objects to be understood and controlled.31

Martin Buber contrasts the I–It relation of objective detachment with the I–Thou relation of mutuality, responsiveness, and personal involvement. If the calculating attitude of control and mastery dominates a person's life, it excludes the openness and receptivity that are prerequisites of a relationship to God or to other persons.32 P. H. Sun holds that a high-tech environment inhibits the life of prayer. Attitudes of power and domination are incompatible with the humility and reverence that prayer requires.33
3. A REPLY TO THE PESSIMISTS
In replying to these authors, we may note first that there are great variations among technologies, which are ignored when they are lumped together and condemned wholesale. Computerized offices differ greatly from steel mills and auto assembly lines, even if they share some features in common. One survey of journal articles finds that philosophers and those historians who trace broad trends (in economic and urban history, for example) often claim that technology determines history, whereas the historians or sociologists who make detailed studies of particular technologies are usually aware of the diversity of social, political, and economic interests that affect the design of a machine and its uses.34 I will maintain that the uses of any technology vary greatly depending on its social contexts. To be sure, technological systems are interlocked, but they do not form a monolithic system impervious to political influence or totally dominating all other social forces. In particular, technology assessment and legislation offer opportunities for controlling technology, as we shall see.
Second, technological pessimists neglect possible avenues for the redirection of technology. The “inevitability” or “inherent logic” of technological developments is not supported by historical studies. We will note below some cases in which there were competing technical designs and the choice among them was affected by various political and social factors. Technological determinism underestimates the diversity of forces that contribute to technological change. Unrelieved pessimism undercuts human action and becomes a self-fulfilling prophecy. If we are convinced that nothing can be done to improve the system, we will indeed do nothing to try to improve it. This would give to the commercial sponsors of technology the choices that are ours as responsible citizens.

Third, technology can be the servant of human values. Life is indeed impoverished if the technological attitudes of mastery and power dominate one's outlook. Calculation and control do exclude mutuality and receptivity in human relationships and prevent the humility and reverence that religious awareness requires. But I would submit that the threat to these areas of human existence comes not from technology itself but from preoccupation with material progress and unqualified reliance on technology. We can make decisions about technology within a wider context of human and environmental values.
III. TECHNOLOGY AS INSTRUMENT OF POWER
A third basic position holds that technology is neither inherently good nor inherently evil but is an ambiguous instrument of power whose consequences depend on its social context. Some technologies seem to be neutral if they can be used for good or evil according to the goals of the users. A knife can be used for surgery or for murder. An isotope separator can enrich uranium for peaceful nuclear reactors or for aggression with nuclear weapons. But historical analysis suggests that most technologies are already molded by particular interests and institutional goals. Technologies are social constructions, and they are seldom neutral because particular purposes are already built into their design. Alternative purposes would lead to alternative designs. Yet most designs still allow some choice as to how they are deployed.
1. TECHNOLOGY AND POLITICAL POWER
Like the authors in the previous group, those in this group are critical of many features of current technology. But they offer hope that technology can be used for more humane ends, either by political measures for more effective guidance within existing institutions or by changes in the economic and political systems themselves.
The people who make most of the decisions about technology today are not a technical elite or technocrats trying to run society rationally or disinterested experts whose activity was supposed to mark “the end of ideology.” The decisions are made by managers dedicated to the interests of institutions, especially industrial corporations and government bureaucracies. The goals of research are determined largely by the goals of institutions: corporate profits, institutional growth, bureaucratic power, and so forth. Expertise serves the interests of organizations and only secondarily the welfare of people or the environment.

The interlocking structure of technologically based government agencies and corporations, sometimes called the “technocomplex,” is wider than the “military-industrial complex.” Many companies are virtually dependent on government contracts. The staff members of regulatory agencies, in turn, are mainly recruited from the industries they are supposed to regulate. We will see later that particular legislative committees, government agencies, and industries have formed three-way alliances to promote such technologies as nuclear energy or pesticides. Networks of industries with common interests form lobbies of immense political power. For example, U.S. legislation supporting railroads and public mass transit systems was blocked by a coalition of auto manufacturers, insurance companies, oil companies, labor unions, and the highway construction industry. But citizens can also influence the direction of technological development. Public opposition to nuclear power plants was as important as rising costs in stopping plans to construct new plants in almost all Western nations.
The historian Arnold Pacey gives many examples of the management of technology for power and profit. This is most clearly evident in the defense industries with their close ties to government agencies. But often the institutional biases associated with expertise are more subtle. Pacey gives as one example the Western experts in India and Bangladesh who in the 1960s advised the use of large drilling rigs and diesel pumps for wells, imported from the West. By 1975, two thirds of the pumps had broken down because the users lacked the skills and maintenance networks to operate them. Pacey calls for greater public participation and a more democratic distribution of power in the decisions affecting technology. He also urges the upgrading of indigenous technologies, the exploration of intermediate-scale processes, and greater dialogue between experts and users. Need-oriented values and local human benefits would then play a larger part in technological change.35
2. THE REDIRECTION OF TECHNOLOGY
The political scientist Victor Ferkiss expresses hope about the redirection of technology. He thinks that both the optimists and the pessimists have neglected the diversity among different technologies and the potential role of political structures in reformulating policies. In the past, technology has been an instrument of profit, and decisions have been motivated by short-run private interests. Freedom understood individualistically became license for the economically powerful. Individual rights were given precedence over the common good, despite our increasing interdependence. Choices that could only be made and enforced collectively—such as laws concerning air and water pollution—were resisted as infringements on free enterprise. But Ferkiss thinks that economic criteria can be subordinated to such social criteria as ecological balance and human need. He believes it is possible to combine centralized, systemwide planning in basic decisions with decentralized implementation, cultural diversity, and citizen participation.36
There is a considerable range of views among contemporary Marxists. Most share Marx's conviction that technology is necessary for solving social problems but that under capitalism it has been an instrument of exploitation, repression, and dehumanization. In modern capitalism, according to Marxists, corporations dominate the government and political processes serve the interests of the ruling class. The technical elite likewise serves the profits of the owners. Marxists grant that absolute standards of living have risen for everyone under capitalist technology. But relative inequalities have increased, so that class distinctions and poverty amidst luxury remain. Marxists assign justice a higher priority than freedom. Clearly they blame capitalism rather than technology for these evils of modern industrialism. They believe that alienation and inequality will disappear and technology will be wholly benign when the working class owns the means of production. The workers, not the technologists, are the agents of liberation. Marxists are thus as critical as the pessimists concerning the consequences of technology within capitalism but as enthusiastic as the optimists concerning its potentialities—within a proletarian economic order.

How, then, do Western Marxists view the human effects of technology in Soviet history? Reactions vary, but many would agree with Bernard Gendron that in the Soviet Union workers were as alienated, factories as hierarchically organized, experts as bureaucratic, and pollution and militarism as rampant as in the United States. But Gendron insists that the Soviet Union did not follow Marx's vision. The means of production were controlled by a small group within the Communist party, not by the workers. Gendron maintains that in a truly democratic socialism, technology would be humane and work would not be alienating.37 Most commentators hold that the demise of communism in eastern Europe and the Soviet Union was a product of both its economic inefficiency and its political repression. It remains to be seen whether any distinctive legacy from Marxism will remain there after the economic and political turmoil of the early nineties.
We have seen that a few theologians are technological optimists, while others have adopted pessimistic positions. A larger number, however, see technology as an ambiguous instrument of social power. As an example, consider Norman Faramelli, an engineer with theological training, who writes in a framework of Christian ideas: stewardship of creation, concern for the dispossessed, and awareness of the corrupting influence of power. He distrusts technology as an instrument of corporate profit, but he believes it can be reoriented toward human liberation and ecological balance. Technology assessment and the legislative processes of democratic politics, he holds, can be effective in controlling technology. But Faramelli also advocates restructuring the economic order to achieve greater equality in the distribution of the fruits of technology.38 Similar calls for the responsible use of technology in the service of basic human needs have been issued by task forces and conferences of the National Council of Churches and by the World Council of Churches (WCC).39 According to one summary of WCC documents, “technological society is to be blessed for its capacity to meet basic wants, chastised for its encouragement of inordinate wants, transformed until it serves communal wants.”40
Egbert Schuurman, a Calvinist engineer from Holland, rejects many features of current technology but holds that it can be transformed and redeemed to be an instrument of God's love serving all creatures. Western thought since the Renaissance has increasingly encouraged “man the master of nature”; secular and reductionistic assumptions have prevailed. Schuurman says that technology was given a messianic role as the source of salvation, and under the rule of human sin it has ended by enslaving us so we are “exiles in Babylon.” But we can be converted to seek God's Kingdom, which comes as a gift, not by human effort. Receiving it in joy and love, and responding in obedience, we can cooperate in meaningful service of God and neighbor. Schuurman holds that technology can be redirected to advance both material and spiritual well-being. It has “a magnificent future” if it is incorporated into God's work of creation and redemption. A liberated technology could do much to heal the brokenness of nature and society. Unfortunately, he gives us few examples of what such a technology would be like or how we can work to promote it.41
The American theologian Roger Shinn has written extensively on Christian ethics and gives attention to the structures of political and economic power within which technological decisions are made. He agrees with the pessimists that various technologies reinforce each other in interlocking systems, and he acknowledges that large-scale technologies lead to the concentration of economic and political power. But he argues that when enough citizens are concerned, political processes can be effective in guiding technology toward human welfare. Policy changes require a combination of protest, political pressure, and the kind of new vision that the biblical concern for social justice can provide.42
This third position seems to me more consistent with the biblical outlook than either of the alternatives. Preoccupation with technology does become a form of idolatry, a denial of the sovereignty of God, and a threat to distinctively human existence. But technology directed to genuine human needs is a legitimate expression of humankind's creative capacities and an essential contribution to its welfare. In a world of disease and hunger, technology rightly used can be a far-reaching expression of concern for persons. The biblical understanding of human nature is realistic about the abuses of power and the institutionalization of self-interest. But it also is idealistic in its demands for social justice in the distribution of the fruits of technology. It brings together celebration of human creativity and suspicion of human power.
The attitudes toward technology outlined in this chapter can be correlated with the typology of historic Christian attitudes toward society set forth by H. Richard Niebuhr.43 At the one extreme is accommodation to society. Here society is considered basically good and its positive potentialities are affirmed. Niebuhr cites the example of liberal theologians of the nineteenth century who had little to say concerning sin, revelation, or grace. They were confident about human reason, scientific and technological knowledge, and social progress. They would side with our first group, those who are optimistic about technology.
At the opposite extreme, Niebuhr describes Christian groups advocating withdrawal from society. They believe that society is basically sinful. The Christian perfectionists, seeking to maintain their purity and to practice radical obedience, have withdrawn into monasteries or into separate communities, as the Mennonites and Amish have done. They would tend to side with our second group, the critics of technology.
Niebuhr holds that the majority of Christians are in three movements that fall between the extremes of accommodation and withdrawal. A synthesis of Christianity and society has been advocated historically by the Roman Catholic Church. Aquinas held that there is both a revealed law, known through scripture and the church, and a natural law, built into the created order and accessible to human reason. Church and state have different roles but can cooperate for human welfare in society. This view encourages a qualified optimism about social change (and, I suggest, about technology).
Another option is the view of Christian life and society as two separate realms, as held in the Lutheran tradition. Here there is a compartmentalization of spiritual and temporal spheres and different standards for personal and public life. Sin is prevalent in all life, but in personal life it is overcome by grace; gospel comes before law as the Christian responds in faith and in love of neighbor. In the public sphere, however, sin must be restrained by the secular structures of authority and order. This view tends to be more pessimistic about social change, but it does not advocate withdrawal from society.
The final option described by Niebuhr is a transformation of society by Christian values. This position has much in common with the Catholic view and shares its understanding that God is at work in history, society, and nature as well as in personal life and the church. But it is more skeptical about the exercise of power by the institutional church, and it looks instead to the activity of the layperson in society. Calvin, the Reformed and Puritan traditions, the Anglicans, and the Methodists all sought a greater expression of Christian values in public life. They had great respect for the created world ordered by God, and they called for social justice and the redirection of cultural life. This position holds that social change (including the redirection of technology) is possible, but it is difficult because of the structures of group self-interest and institutional power. I favor this last option and will develop it further in subsequent chapters.
III. THE SOCIAL CONSTRUCTION OF TECHNOLOGY
How are science, technology, and society related? Three views have been proposed (see Fig. 1).
Fig. 1. Views of the Interactions of Science, Technology, and Society.
1. Linear Development. In linear development it is assumed that science leads to technology, which in turn has an essentially one-way impact on society. The deployment of technology is primarily a function of the marketplace. This view is common among the optimists. They consider technology to be predominantly beneficial, and therefore little government regulation or public policy choice is needed; consumers can influence technological development by expressing their preferences through the marketplace.

2. Technological Determinism. Several degrees and types of determinism can be distinguished. Strict determinism asserts that only one outcome is possible. A more qualified claim is that there are very strong tendencies present in technological systems, but these could be at least partly counteracted if enough people were committed to resisting them. Again, technology may be considered an autonomous interlocking system, which develops by its own inherent logic, extending to the control of social institutions. Or the more limited claim is made that the development and deployment of technology in capitalist societies follows only one path, but the outcomes might be different in other economic systems. In all these versions, science is itself driven primarily by technological needs. Technology is either the “independent variable” on which other variables are dependent, or it is the overwhelmingly predominant force in historical change.
Technological determinists will be pessimists if they hold that the consequences of technology are on balance socially and environmentally harmful. Moreover, any form of determinism implies a limitation of human freedom and technological choice. However, some determinists retain great optimism about the consequences of technology. On the other hand, pessimists do not necessarily accept determinism, even in its weaker form. They may acknowledge the presence of technological choices but expect such choices to be missed because they are pessimistic about human nature and institutionalized greed. They may be pessimistic about our ability to respond to a world of global inequities and scarce resources. Nevertheless, determinism and pessimism are often found together among the critics of technology.
3. Contextual Interaction. Here there are six arrows instead of two, representing the complex interactions between science, technology, and society. Social and political forces affect the design as well as the uses of particular technologies. Technologies are not neutral because social goals and institutional interests are built into the technical designs that are chosen. Because there are choices, public policy decisions about technology play a larger role here than in either of the other views. Contextualism is most common among our third group, those who see technology as an ambiguous instrument of social power.
Contextualists also point to the diversity of science-technology interactions. Sometimes a technology was indeed based on recent scientific discoveries. Biotechnology, for example, depends directly on recent research in molecular biology. In other cases, such as the steam engine or the electric power system, innovations occurred with very little input from new scientific discoveries. A machine or process may have been the result of creative practical innovation or the modification of an existing technology. As Frederick Ferré puts it, science and technology in the modern world are both products of the combination of theoretical and practical intelligence, and “neither gave birth to the other.”44 Technology has its own distinctive problems and builds up its own knowledge base and professional community, though it often uses science as a resource to draw on. The reverse contribution of technology to science is also often evident. The work of astronomers, for instance, has been dependent on a succession of new technologies, from optical telescopes to microwave antennae and rockets. George Wise writes, “Historical studies have shown that the relations between science and technology need not be those of domination and subordination. Each has maintained its distinctive knowledge base and methods while contributing to the other and to its patrons as well.”45
In the previous volume, I discussed the “social construction of science” thesis, in which it is argued that not only the direction of scientific development but also the concepts and theories of science are determined by cultural assumptions and interests. I concluded that the “strong program” among sociologists and philosophers of science carries this historical and cultural relativism too far, and I defended a reformulated understanding of objectivity, which gives a major role to empirical data while acknowledging the influence of society on interpretive paradigms.
The case for “the social construction of technology” seems to me much stronger. Values are built into particular technological designs. There is no one “best way” to design a technology. Different individuals and groups may define a problem differently and may have diverse criteria of success. Bijker and Pinch show that in the late nineteenth century inventors constructed many different types of bicycles. Controversies developed about the relative size of front and rear wheels, seat location, air tires, brakes, and so forth. Diverse users were envisioned (workers, vacationers, racers, men and women), with diverse criteria (safety, comfort, speed, and so forth). In addition, the bicycle carried cultural meanings, affecting a person's self-image and social status. There was nothing logically or technically necessary about the model that finally won out and is now found around the world.46
The historian John Staudenmaier writes that
contextualism is rooted in the proposition that technical designs cannot be meaningfully interpreted in abstraction from their human context. The human fabric is not an envelope around a culturally neutral artifact. The values and world views, the intelligence and stupidity, the biases and vested interests of those who design, accept and maintain a technology are embedded in the technology itself.47
Both the linear and the determinist view imply that technology determines work organization. It is said that the technologies of the Industrial Revolution imposed their own requirements and made repetitive tasks inevitable. The contextualists reply that the design of a technology is itself affected by social relations. The replacement of workers by machines was intended not only to reduce labor costs but also to assert greater control by management over labor. For instance, the spinning mule helped to break the power of labor unions among skilled textile workers in nineteenth-century England. Some examples of design choices for agricultural harvesters, nuclear reactors, and computer-controlled manufacturing are discussed in later chapters.
Other contextualists have pointed to the role of technology in the subordination of women. Engineering was once considered heavy and dirty work unsuitable for women, but long after it became a clean and intellectual profession, there are still few women in it. Technology has been an almost exclusively male preserve, reflected in toys for boys, the expectations of parents and teachers, and the vocational choices and job opportunities open to men and women. Most technologies are designed by men and add to the power of men.

Strong gender divisions are present among employees of technology-related companies. When telephones were introduced, women were the switchboard operators and record keepers, while men designed and repaired the equipment and managed the whole system. Typesetting in large printing frames once required physical strength and mechanical skills and was a male occupation. But men continued to exclude women from compositors’ unions when newer typesetting, and more recently computer formatting, required only typing and formatting skills.48 Today most computer designers and programmers are men, while in offices most of the data are entered at computer keyboards by women. With many middle-level jobs eliminated, these lower-level jobs often become dead ends for women.49 A study of three computerized industries in Britain found that women were the low-paid operators, while only men understood and controlled the equipment, and men almost never worked under the supervision of women.50
Note that contextualism allows for a two-way interaction between technology and society. When technology is treated as merely one form of cultural expression among others, its distinctive characteristics may be ignored. In some renditions, the ways in which technology shapes culture are forgotten while the cultural forces on technology are scrutinized. The impact of technology on society is particularly important in the transfer of a technology to a new cultural setting in a developing country. Some Third World authors have been keenly aware of technology as an instrument of power, and they portray a two-way interaction between technology and society across national boundaries.
IV. CONCLUSIONS
Let me try to summarize these three views of technology in relation to the conflicting values (identified in italics) that are discussed in the next two chapters. There are many variations within each of the three broad positions outlined above, but each represents a distinctive emphasis among these values.
The optimists stress the contribution of technology to economic development. They hold that greater productivity improves standards of living and makes food and health more widely available. For most of them, the most important form of participatory freedom is the economic freedom of the marketplace, though in general they are also committed to political democracy. These authors say that social justice and environmental protection should not be ignored, but they must not be allowed to jeopardize economic goals. The optimists usually evaluate technology in a utilitarian framework, seeking to maximize the balance of benefits over costs.

The pessimists typically make personal fulfillment their highest priority, and they interpret fulfillment in terms of human relationships and community life rather than material possessions. They are concerned about individual rights and the dignity of persons. They hold that meaningful work is as important as economic productivity in policies for technology. The pessimists are dedicated to resource sustainability and criticize the high levels of consumption in industrial societies today. They often advocate respect for all creatures and question the current technological goal of mastery of nature.
The contextualists are more likely to give prominence to social justice because they interpret technology as both a product and an instrument of social power. For them the most important forms of participatory freedom are opportunities for participation in political processes and in work-related decisions. They are less concerned about economic growth than about how that growth is distributed and who bears the costs and receives the benefits. Contextualists often seek environmental protection because they are aware of the natural as well as the social contexts in which technologies operate.
I am most sympathetic with the contextualists, though I am indebted to many of the insights of the pessimists. Four issues seem to me particularly important in analyzing the differences among the positions outlined above.
1. Defense of the Personal. The pessimists have defended human values in a materialistic and impersonal society. The place to begin, they say, is one's own life. Each of us can adopt individual life-styles more consistent with human and environmental values. Moreover, strong protest and vivid examples are needed to challenge the historical dominance of technological optimism and the disproportionate resource consumption of affluent societies. I admire these critics for defending individuality and choice in the face of standardization and bureaucracy. I join them in upholding the significance of personal relationships and a vision of personal fulfillment that goes beyond material affluence. I affirm the importance of the spiritual life, but I do not believe that it requires a rejection of technology. The answer to the destructive features of technology is not less technology, but technology of the right kind.
2. The Role of Politics. Differing models of social change are implied in the three positions. The first group usually assumes a free market model. Technology is predominantly beneficial, and the reduction of any undesirable side effects is itself a technical problem for the experts. Government intervention is needed only to regulate the most harmful impacts. Writers mentioned in the second section, by contrast, typically adopt some variant of technological determinism. Technology is dehumanizing and uncontrollable. They see runaway technology as an autonomous and all-embracing system that molds all of life, including the political sphere, to its requirements. The individual is helpless within the system. The views expressed in the third section presuppose a “social conflict” model. Technology influences human life but is itself part of a cultural system; it is an instrument of social power serving the purposes of those who control it. It does systematically impose distinctive forms on all areas of life, but these can be modified through political processes. Whereas the first two groups give little emphasis to politics, the third, with which I agree, holds that conflicts concerning technology must be resolved primarily in the political arena.
3. The Redirection of Technology. I believe that we should neither accept uncritically the past directions of technological development nor reject technology in toto but redirect it toward the realization of human and environmental values. In the past, technological decisions have usually been governed by narrowly economic criteria, to the neglect of environmental and human costs. In a later chapter we will look at technology assessment, a procedure designed to use a broad range of criteria to evaluate the diverse consequences of an emerging technology—before it has been deployed and has developed the vested interests and institutional momentum that make it seem uncontrollable. I will argue that new policy priorities concerning agriculture, energy, resource allocation, and the redirection of technology toward basic human needs can be achieved within democratic political institutions. The key question will be: What decision-making processes and what technological policies can contribute to human and environmental values?
4. The Scale of Technology. Appropriate technology can be thought of as an attempt to achieve some of the material benefits of technology outlined in the first section without the destructive human costs discussed in the second section, most of which result from large-scale centralized technologies. Intermediate-scale technology allows decentralization and greater local participation in decisions. The decentralization of production also allows greater use of local materials and often a reduction of impact on the environment. Appropriate technology does not imply a return to primitive and prescientific methods; rather, it seeks to use the best science available toward goals different from those that have governed industrial production in the past.
Industrial technology was developed when capital and resources were abundant, and we continue to assume these conditions. Automation, for example, is capital-intensive and labor-saving. Yet in developing nations capital is scarce and labor is abundant. The technologies needed there must be relatively inexpensive and labor-intensive. They must be of intermediate scale so that jobs can be created in rural areas and small towns, to slow down mass migration to the cities. They must fulfill basic human needs, especially for food, housing, and health. Alternative patterns of modernization are less environmentally and socially destructive than the path that we have followed. It is increasingly evident that many of these goals are desirable also in industrial nations. I will suggest that we should develop a mixture of large- and intermediate-scale technologies, which will require deliberate encouragement of the latter.
The redirection of technology will be no easy task. Contemporary technology is so tightly tied to industry, government, and the structures of economic power that changes in direction will be difficult to achieve. As the critics of technology recognize, the person who tries to work for change within the existing order may be absorbed by the establishment. But the welfare of humankind requires a creative technology that is economically productive, ecologically sound, socially just, and personally fulfilling.
Insurance of electronics
Insurance for electronics provides, within the appropriate policy terms, comprehensive protection for stationary electronic equipment and information, communication and medical technology equipment, or sets thereof. Other electrotechnical and electronic equipment and apparatus can also be insured, as well as portable equipment or fixed equipment in a vehicle.
The object of insurance for electronics
It is only possible to insure items which were in working condition at the time of arranging the policy, and were put into operation in accordance with legal regulations and manufacturer's requirements; i.e. the insured item has been tested, and after the test run was either prepared to be handed over at the place of insurance, or is already in use there.
Insurance for electronics is arranged for cases of sudden damage to or destruction of insured equipment by a random event. The insurance covers damage by fire and by natural hazards, including damage due to the extinguishing of fires, demolition, cleaning activities or loss of insured equipment incurred in connection with certain insured hazards, theft or vandalism.
Included in insurance for electronics are data carriers (but only where they are not replaceable by the user – e.g. all kinds of hard drives) and software, as long as it is necessary for the basic functioning of the insured item.
Supplementary insurance with insurance of electronics
Within insurance for electronics, supplementary cover can be arranged for, among other things:
- costs of cleaning, decontamination and removal of scrap
- costs of relocation and security
- costs of erecting scaffolding, costs of rescue, or costs of putting provisional arrangements into place
- overtime, and work on Sundays, during holidays and at night
- costs of air transport
- disruption or restriction of operations due to a breakdown of the insured machinery
Indemnification under insurance for electronics
In cases of destruction, loss or theft of the insured equipment, the insurer will pay out an amount corresponding to the approximate cost of the replacement of the equipment by the same device or a similar one, minus the value of the usable remaining parts of the equipment and the stipulated deductible.
In the case of damage to the insured equipment, the insurer will pay out a sum corresponding to the appropriate cost of repair of the damaged equipment, minus the value of the usable residual parts of the replaced equipment and the stipulated deductible. Indemnification will not exceed the amount necessary for replacement of the equipment by the same device, or a similar one.
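As a rough illustration of this settlement arithmetic, here is a minimal sketch in Python. The function names and figures are hypothetical examples, not part of any actual policy wording, and a real settlement would depend on the specific policy terms.

```python
def indemnity_total_loss(replacement_cost, usable_parts_value, deductible):
    """Destruction, loss or theft: pay the cost of replacing the item
    with the same or a similar device, minus the value of the usable
    remaining parts and the stipulated deductible."""
    return max(replacement_cost - usable_parts_value - deductible, 0)


def indemnity_repair(repair_cost, residual_parts_value, deductible,
                     replacement_cost):
    """Damage: pay the cost of repair, minus the residual value of the
    replaced parts and the deductible, capped at the cost of replacing
    the item with the same or a similar device."""
    payout = repair_cost - residual_parts_value - deductible
    return max(min(payout, replacement_cost), 0)


# Hypothetical example: a damaged server with a 500 deductible.
print(indemnity_repair(repair_cost=2400, residual_parts_value=150,
                       deductible=500, replacement_cost=3000))  # 1750
```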
The insurer will also indemnify, within the insurance for electronics, reasonable costs proven to have been incurred by the beneficiary due to:
- temporary repairs, as long as they do not increase the cost of the full repair,
- dismantling and assembly of damaged equipment,
- speedy and express transport of spare parts, if normal transport is not efficient enough
Within the insurance for electronics, indemnity for insured software will be paid only where the loss of, or alteration to, the software was caused by insured damage to the data carriers on which the data was saved.
You can report a claim under insurance for electronics by phone, in writing or online.
Electronic Equipment Insurance
Many trades rely extensively on electronic equipment. With this reliance comes the need to protect against material damage and subsequent financial losses.
Our Electronic Equipment policy is specifically designed to meet these needs for both owners and hirers of equipment.
Key Features
- Covers a variety of equipment from audio visual equipment through to medical equipment.
- Covers owned equipment and hired-in equipment as well as data media.
- Worldwide data media cover.
- Further cover for resultant additional expenditure or financial loss can be purchased.
Computer
Our policy for computer equipment, software, data and cyber threats has been designed to suit the latest developments in computer technology.
We offer extensive cover for a large range of items, from smartphones and sat-navs to laptops and servers.
Computer is available as standalone cover from Allianz Engineering or you can select it as an optional section on our propositions for medium to large businesses, including:
Speak to your broker to find out more about any of these propositions.
Key Features
- Cover for loss, theft or damage to computers, supporting equipment, software and data
- Designed to protect against traditional and new risks, including fire, flood, data corruption and cyber attacks
- Breakdown cover even if no maintenance agreement is in place
- Computer equipment in transit within the EU and elsewhere in the world will be safeguarded
- Limit increase of 10% where equipment is replaced with greener items following an insured loss (see the sketch after this list)
- Costs relating to the recompiling or repurchasing of data and software following loss, damage or corruption can be covered
- 'Seek, Destroy and Protect' available - cover that includes the expense of employing a cyber security consultant to protect your business
- 'Business Interruption' cover available for financial loss as a result of disrupted computer operations within your business.
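To make the 10% “greener items” limit increase concrete, here is a minimal, hypothetical sketch; the function and figures below are illustrative assumptions, not the insurer's actual settlement method.

```python
def replacement_limit(sum_insured, greener_replacement=False):
    """Settlement limit for a replaced item, increased by 10% when the
    item is replaced with a greener (more energy-efficient) model."""
    return sum_insured * (1.10 if greener_replacement else 1.00)


# Hypothetical example: a 2,000 item replaced with a greener model.
print(replacement_limit(2000, greener_replacement=True))  # 2200.0
```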