machine translation

145 results

pages: 174 words: 56,405

Machine Translation
by Thierry Poibeau
Published 14 Sep 2017

ISBN: 978-0-262-53421-5. eISBN: 9780262342438. ePub Version 1.0.

Table of Contents:
Series page
Title page
Copyright page
Series Foreword
Acknowledgments
1 Introduction
2 The Trouble with Translation
3 A Quick Overview of the Evolution of Machine Translation
4 Before the Advent of Computers…
5 The Beginnings of Machine Translation: The First Rule-Based Systems
6 The 1966 ALPAC Report and Its Consequences
7 Parallel Corpora and Sentence Alignment
8 Example-Based Machine Translation
9 Statistical Machine Translation and Word Alignment
10 Segment-Based Machine Translation
11 Challenges and Limitations of Statistical Machine Translation
12 Deep Learning Machine Translation
13 The Evaluation of Machine Translation Systems
14 The Machine Translation Industry: Between Professional and Mass-Market Applications
15 Conclusion: The Future of Machine Translation
Glossary
Bibliography and Further Reading
Index
About the Author

List of Tables:
Table 1 Example of possible translations in French for the English word “motion”

List of Illustrations:
Figure 1 The Necker cube, the famous optical illusion published by Louis Albert Necker in 1832.

See also Part-of-speech tagging
Literary text, 11, 12, 100, 154, 197–199
Logical form, 55, 58, 60, 85, 179
Logos Corporation, 88
Machine learning, 175, 181, 183, 236. See also Deep learning
Machine translation evaluation. See Evaluation
Machine translation industry. See Machine translation market
Machine translation market, 89, 221–246, 247–251
Machine translation quality. See Evaluation
Machine translation systems
  Apertium, 172
  Ariane-78 system, 85
  Babelfish, 227, 228
  Bing Translation, 33, 36, 194, 226–229, 231 (see also Microsoft)
  Google Translation, 33, 36, 159, 167–168, 172, 190–194, 225–229, 232, 234, 240, 248–250 (see also Google)
  IBM WebSphere, 232
  Metal, 87
  Météo (TAUM Météo), 84, 87
  Microsoft Translation (see Bing Translation)
  Systranet, 226, 228 (see also Systran)
  TAUM Météo (see Météo)
  Watson, 241
Maintenance applications, 243
Maltese, 212, 213
Manual correction, 138.

A website (http://www.statmt.org) gives access to a large amount of information on the domain, including research papers, tutorials, links to free software, and so on.
Chapter 10: Segment-Based Machine Translation
The previous website (http://www.statmt.org) is probably the best source of information for recent trends related to statistical machine translation, of which segment-based machine translation is part.
Chapter 11: Challenges and Limitations of Statistical Machine Translation
See http://www.statmt.org, as for chapter 10 above. Kenneth Church (2011). “A pendulum swung too far.” Linguistic Issues in Language Technology, 6(5).
Chapter 12: Deep Learning Machine Translation
The book by Goodfellow et al., although technical, offers an affordable and comprehensible introduction to deep learning.

pages: 350 words: 98,077

Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
Published 14 Oct 2019

Even though statistical machine-translation systems had very little knowledge of syntax in either language, on the whole these methods produced better translations than the earlier rule-based approaches. Google Translate—probably the most widely used automated-translation program—employed these kinds of statistical machine-translation methods from the time of its launch in 2006 until 2016, at which time Google researchers had developed what they claimed was a superior translation method based on deep learning, called neural machine translation. Soon after, neural machine translation was adopted for all state-of-the-art machine-translation programs.

These systems often include several tricks to improve their performance, such as inputting the original sentence both forward and backward, as well as mechanisms for focusing attention on different parts of the sentence at different time steps.9 Evaluating Machine Translation After Google Translate launched its neural machine translation in 2016, the company claimed that the new approach was “bridging the gap between human and machine translation.”10 Other large tech companies, sprinting to catch up, created their own online machine-translation programs, similarly based on the encoder-decoder architecture that I described above. These companies, and the tech media covering them, have enthusiastically promoted these translation services.
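To make the “attention” idea mentioned above concrete, here is a minimal Python sketch (not any production system’s code; the dimensions and random values are invented for illustration): at each decoding step, the decoder scores every encoder state against its current state, turns the scores into weights, and takes a weighted average of the encoder states.

import numpy as np

def attention(decoder_state, encoder_states):
    # Dot-product attention: score each source position by its
    # similarity to the current decoder state, then softmax.
    scores = encoder_states @ decoder_state       # one score per source word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # attention weights sum to 1
    context = weights @ encoder_states            # weighted mix of encoder states
    return weights, context

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 8))          # 5 source words, 8-dim states
decoder_state = rng.normal(size=8)
weights, context = attention(decoder_state, encoder_states)
print(weights.round(2), context.shape)            # weights over 5 words; (8,)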

I also used Google Translate to help interpret our landlady’s often confusing replies, and while the program’s translations gave me a fairly clear sense of her meaning, the English it produced was full of errors, large and small. I still cringe when I imagine what my French messages looked like to our landlady. In 2016, Google launched a new “neural machine translation” system, which the company claims has achieved “the largest improvements to date for machine translation quality,”1 but the caliber of machine-translation systems remains far below that of capable human translators. Spurred in part by the U.S.-Soviet Cold War, automated translation—particularly between English and Russian—was one of the earliest AI projects. Early approaches to automated translation were enthusiastically promoted by the mathematician Warren Weaver in 1947: “One naturally wonders if the problem of translation could conceivably be treated as a problem in cryptography.”

pages: 301 words: 89,076

The Globotics Upheaval: Globalisation, Robotics and the Future of Work
by Richard Baldwin
Published 10 Jan 2019

Way back in the old days—which means 2015 on the digitech calendar—the language barrier and telecom limits restricted telemigration to a few sectors and source countries. Foreign freelancers had to speak “good-enough English,” and they were limited to modular tasks. Telemigrants were common in web development, and a few back-office jobs, but little else. Things are different now in two ways. Machine Translation and the Talent Tsunami First, machine translation unleashed a talent tsunami. Since machine translation went mainstream in 2017, anyone with a laptop, internet connection, and skills can potentially telecommute to US and European offices. This is amplified by the rapid spread of excellent internet connections. This means that people living in countries where ten dollars an hour is a decent middle-class income will soon be your workmates or potential replacements.

The defendant was happy to proceed without a human translator since Google Translate is now so accurate. In June 2017, the US Army paid Raytheon four million dollars for a machine translation package that lets soldiers converse with Iraqi Arabic and Pashto speakers as well as read foreign-language documents and digital media on their smartphones and laptops. Machine translation used to be a joke. A famous example, related by Google’s director of research Peter Norvig, was what old-school machine translators did with the phrase, “the spirit is willing but the flesh is weak.” Translated into Russian and then back to English, it turned into “the vodka is good but the meat is rotten.”7 Even as recently as 2015, it was little more than a party trick, or a very rough first draft.

According to Google, which uses humans to score machine translations on a scale from zero (complete nonsense) to six (perfect), the AI-trained algorithm “Google Translate” got a grade of 3.6 in 2015—far worse than the average human translator, who gets scores around 5.1. In 2016, Google Translate hits numbers like 5.8 And the capabilities are advancing in leaps and bounds. As is true of almost everything globots do, machine translation is not as good as expert humans, but it is a whole lot cheaper and a whole lot more convenient. Expert human translators, in particular, are quick to heap scorn on the talents of machine translation. The Atlantic Monthly, for instance, published an article in 2018 by Douglas Hofstadter doing just this.9 Hofstadter is a very sophisticated observer with very high standards when it comes to machine translation.

pages: 274 words: 73,344

Found in Translation: How Language Shapes Our Lives and Transforms the World
by Nataly Kelly and Jost Zetzsche
Published 1 Oct 2012

(For a high-tech version of the old telephone or gossip game, go to www.translationparty.com, a site that keeps on translating between Japanese and English until an equilibrium is reached.) One famous example of machine translation gone awry is actually an urban legend. As the story goes, the sentence “The spirit is willing, but the flesh is weak” was plugged into a machine translation system to be rendered into Russian. Allegedly, the computer produced “The vodka is strong, but the meat is rotten” in Russian. This tale has never been substantiated, but it’s not completely inconceivable. The story probably serves a good purpose as a warning that generic machine translation cannot and should not be blindly trusted. Parlez-Vous C++? Anyone who’s taken a language course in school knows how hard it is to learn a foreign language.

GPHIN’s developers soon realized that the daily diet of approximately four thousand original articles with potentially relevant content could be handled only with a mixture of computerized or machine translation and appropriate human oversight. So the developers chose several software programs to automatically translate information in the various language combinations. Once the articles are machine-translated, the system either rejects them as irrelevant, flags them for analysis by humans, or publishes them immediately to alert the worldwide meta-government and government subscribers of a potential threat.
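In code, the triage step described here might look something like the following sketch (entirely hypothetical: GPHIN’s actual rules, thresholds, and interfaces are not given in the excerpt, so the function names and cutoffs below are invented):

def triage(article, translate, relevance_score):
    # Machine-translate the article, then route it: reject as irrelevant,
    # flag for human analysis, or publish an alert immediately.
    text = translate(article)
    score = relevance_score(text)     # 0.0 = clearly irrelevant, 1.0 = clear threat
    if score < 0.2:
        return "reject"
    if score < 0.8:
        return "flag_for_human_review"
    return "publish_alert"

# Toy usage with stand-in functions:
print(triage("...", translate=lambda a: a, relevance_score=lambda t: 0.9))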

Indeed, unique visitor data from third-party comScore shows a significant increase in Wikipedia’s traffic in the global south, which typically refers to developing countries in the Southern Hemisphere.5 What does it take for Wikipedia to get a new language off the ground? Language communities within Wikipedia generally want to create their own projects in their own languages. Using automatic translation tools to translate content from other Wikipedias is possible but does not always work very well. “Efforts to machine-translate Wikipedia articles and then bring in volunteers to build on top of those machine translations have not been particularly successful,” Jay explains. Wikipedia currently boasts more than twenty million articles across all languages. Most of its new content growth comes from the non-English projects. Many of these languages have millions of speakers.

The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do
by Erik J. Larson
Published 5 Apr 2021

Marvin Minsky, too, declared in 1967 that “within a generation, the problem of creating ‘artificial intelligence’ will be substantially solved.”2 But machine translation was a different ballgame, as researchers soon discovered. Having begun with a simplistic assumption, that language could be understood by analyzing words in large texts (called corpora) using statistical techniques, they were quickly proven wrong. Computers made automatic translation possible, but the results were far from high quality. Even programs working in specific domains such as biomedical literature were not fail-proof, and the failures were often embarrassingly incorrect and mindless. Machine translation researchers, in response, expanded their approach by exploring methods for “parsing” sentences, or finding syntactic structure in them, using new and powerful “transformational” grammars developed by a young MIT linguist who was soon to be world-famous—Noam Chomsky.

They found ways to, for example, extract names and other patterns from web pages (a capability called entity recognition); to disambiguate polysemous (multi-sense) words such as bank; to perform web-specific tasks like ranking and retrieving web pages (the famous example being Google’s PageRank, which Larry Page and Sergey Brin developed as Stanford graduate students in the 1990s); to classify news stories and other web pages by topic; to filter spam for email; and to serve up spontaneous product recommendations on commerce sites like Amazon. The list goes on and on. The shift away from linguistics and rule-based approaches to data-driven or “empirical” methods seemed to liberate AI from those early, cloudy days of work on machine translation, when seemingly endless problems with capturing meaning and context plagued engineering efforts. In fact, machine translation itself was later cracked by a group of IBM researchers using a statistical (that is, not grammar-based) approach that was essentially an ingenious application of Claude Shannon’s early work on information theory. Called the “noisy channel” approach, it viewed sentences from a source language (say, French) and a target language (say, English) as an information exchange in which bad translations constituted a form of noise—making it the system’s task to reduce the noise in the translation channel between source and target sentences.
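Stated in the standard notation (a textbook formulation of the noisy-channel idea, added here for clarity rather than quoted from the source): given a French sentence f, the system searches for the English sentence

\hat{e} \;=\; \arg\max_{e} P(e \mid f) \;=\; \arg\max_{e} P(f \mid e)\, P(e)

where P(e) is a language model of fluent English, P(f | e) is the translation (channel) model, and the constant denominator P(f) can be dropped from the maximization.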

SUCCESS … OR NOT The success of contemporary systems like Google Translate on the once puzzling problem of machine translation is often touted as evidence that AI will succeed, given enough time and the right ideas. The truth is more sobering. While it turns out that some problems in natural language understanding can be addressed with statistical or machine learning approaches, the original concerns of Bar-Hillel and others regarding semantics (meaning) and pragmatics (context) have proven to be well-founded. Machine translation, which had seemed like a difficult natural language problem, could be adequately accomplished with simple statistical analysis, given large corpora (datasets) in different languages.

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

Tip Embeddings are also useful for representing categorical attributes that can take on a large number of different values, especially when there are complex similarities between values. For example, consider professions, hobbies, dishes, species, brands, and so on. You now have almost all the tools you need to implement a machine translation system. Let’s look at this now. An Encoder–Decoder Network for Machine Translation Let’s take a look at a simple machine translation model10 that will translate English sentences to French (see Figure 14-15). Figure 14-15. A simple machine translation model The English sentences are fed to the encoder, and the decoder outputs the French translations. Note that the French translations are also used as inputs to the decoder, but pushed back by one step.
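A minimal sketch of the data layout this paragraph describes (illustrative Python, not Géron’s actual code; the token names are invented): the decoder’s input is the target sentence shifted right by one step, starting from a start-of-sequence token, while its training target is the same sentence followed by an end-of-sequence token.

SOS, EOS = "<sos>", "<eos>"

def make_training_pair(english_tokens, french_tokens):
    encoder_inputs = english_tokens                 # fed to the encoder
    decoder_inputs = [SOS] + french_tokens          # targets pushed back one step
    decoder_targets = french_tokens + [EOS]         # what the decoder must predict
    return encoder_inputs, decoder_inputs, decoder_targets

enc, dec_in, dec_out = make_training_pair(
    ["I", "drink", "milk"], ["Je", "bois", "du", "lait"])
print(enc)      # ['I', 'drink', 'milk']
print(dec_in)   # ['<sos>', 'Je', 'bois', 'du', 'lait']
print(dec_out)  # ['Je', 'bois', 'du', 'lait', '<eos>']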


Schmidhuber (2000).
7 “Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation,” K. Cho et al. (2014).
8 A 2015 paper by Klaus Greff et al., “LSTM: A Search Space Odyssey,” seems to show that all LSTM variants perform roughly the same.
9 For more details, check out Christopher Olah’s great post, or Sebastian Ruder’s series of posts.
10 “Sequence to Sequence learning with Neural Networks,” I. Sutskever et al. (2014).
11 The bucket sizes used in the tutorial are different.
12 “On Using Very Large Target Vocabulary for Neural Machine Translation,” S. Jean et al. (2015).
13 “Neural Machine Translation by Jointly Learning to Align and Translate,” D.

pages: 502 words: 107,510

Natural Language Annotation for Machine Learning
by James Pustejovsky and Amber Stubbs
Published 14 Oct 2012

Languages: Various URL: http://gate.ac.uk/
Illinois NLP tools Modality: Written Use: Part-of-speech tagging, chunking, coreference, Named Entity tagging, semantic role labeling Language: Language-independent URL: http://cogcomp.cs.illinois.edu/page/software
MADA + TOKAN Production status: Existing–used Modality: Written Use: Tokenization, diacritization, morphological disambiguation, part-of-speech tagging, stemming, and lemmatization Language: Arabic URL: http://www1.ccls.columbia.edu/~cadim/
MorphAdorner Modality: Written Use: Tokenizing text, recognizing sentence boundaries, extracting names and places Language: English URL: http://morphadorner.northwestern.edu/
NLTK Modality: Written Use: Classification, tokenization, stemming, tagging, parsing, semantic reasoning, machine learning Language: English URL: http://nltk.org/
RACAI web service Modality: Written Use: Tokenization, sentence splitting, C-tagset part-of-speech tagging, MSD-tagset part-of-speech tagging, lemmatization, identify language (57 languages) Languages: English, Romanian URL: http://www.racai.ro/webservices
Stanford NLP tools Modality: Written Use: Parsing, part-of-speech tagging, Named Entity recognition, word segmentation, tokenizing, temporal tagging, topic modeling Language: English URL: http://nlp.stanford.edu/software/
Phonetic annotation
FOLKER Modality: Speech Use: Transcription Language: Language-independent URL: http://agd.ids-mannheim.de/folker_en.shtml
Julius Modality: Speech Use: Speech Recognition/understanding Language: Japanese URL: http://julius.sourceforge.jp/
SPPAS (SPeech Phonetization Alignment and Syllabification) Modality: Speech Use: Automatic phonetic transcription and segmentation Language: Basque URL: http://www.lpl-aix.fr/~bigi/sppas/
Part-of-speech taggers/syntactic parsers
Alpino Modality: Written Use: Dependency parser Language: Dutch URL: http://www.let.rug.nl/vannoord/alp/Alpino/
Apertium-kir Modality: Written Use: Machine Translation Languages: Various URL: http://sourceforge.net/projects/apertium/
Automatic Syntactic Analysis for Polish Language (ASA-PL) Modality: Written Use: Syntactic analysis Language: Polish URL: http://seagrass.man.poznan.pl/~michzimny/asa-pl/
Berkeley Parser Modality: Written Use: Parsing Languages: Various URL: http://code.google.com/p/berkeleyparser/
BitPar Modality: Software Use: Syntactic parsing Language: English URL: http://www.ims.uni-stuttgart.de/tcl/SOFTWARE/BitPar.html
C&C Toolkit Modality: Written Use: Parsing, tagging Language: English URL: http://svn.ask.it.usyd.edu.au/trac/candc/wiki
Charniak Parser Modality: Written Use: Parsing Language: English URL: http://www.cs.brown.edu/people/ec/#software
CombiTagger Modality: Written Use: Part-of-speech tagging Language: Language-independent URL: http://combitagger.sourceforge.net
Dependency Shift Reduce parser (DeSR) Modality: Written Use: Dependency parsing Language: English URL: http://sites.google.com/site/desrparser/
DepPattern Production status: Newly created–in progress Modality: Written Use: Grammar compiler, part-of-speech tagger, dependency-based parser Languages: English, Spanish, Galician, French, Portuguese URL: http://gramatica.usc.es/pln/tools/deppattern.html
DeSR Modality: Written Use: Parsing Languages: Various URL: http://desr.sourceforge.net/doc/
Enju Modality: Written Use: Syntactic parser Language: English URL: http://www-tsujii.is.s.u-tokyo.ac.jp/enju/
Granska tagger Modality: Written Use: Part-of-speech tagger Language: Swedish URL: http://www.csc.kth.se/tcs/humanlang/tools.html
Greek POS Tagger Modality: Written Use: Part-of-speech tagger Language: Koine Greek URL: http://nlp.cs.aueb.gr/software.html
Hunpos Modality: Written Use: Part-of-speech tagger Languages: Various URL: http://mokk.bme.hu/resources/hunpos
IceNLP Modality: Written Use: Tokenization, part-of-speech tagging, parsing Language: Icelandic URL: http://icenlp.sourceforge.net/
J-Safran (Java Syntaxico-semantic French Analyser) Modality: Written Use: Syntactic dependency parsing Language: French URL: http://rapsodis.loria.fr/jsafran/index.html
KyTea, the Kyoto Text Analysis Toolkit Modality: Written Use: Word segmentation, tagging Languages: Japanese, Chinese URL: http://www.phontron.com/kytea
LGTagger Modality: Written Use: Part-of-speech tagging Language: French URL: http://igm.univ-mlv.fr/~mconstan/research/software/
Linguistica Modality: Written Use: Morpheme recognition Language: Language-independent URL: http://linguistica.uchicago.edu/linguistica.html
Link Parser Modality: Written Use: Syntactic parsing Language: English URL: http://www.link.cs.cmu.edu/link/
LX-Parser Modality: Written Use: Text parsing Language: Portuguese URL: http://lxparser.di.fc.ul.pt
MaltParser Modality: Written Use: Parsing Language: Language-independent URL: http://maltparser.org/
MiniPar Modality: Written Use: Parsing Language: English URL: http://www.cs.ualberta.ca/~lindek/minipar.htm
Mogura Modality: Written Use: Syntactic parsing Language: English URL: http://www-tsujii.is.s.u-tokyo.ac.jp/enju/
Morče Modality: Written Use: Czech morphological tagger Language: Czech URL: http://ufal.mff.cuni.cz/morce/
MSTParser (maximum spanning tree parser) Modality: Written Use: Parsing Language: Language-independent URL: http://sourceforge.net/projects/mstparser
MuNPEx Modality: Written Use: Noun phrase extraction Languages: English, German, French, Spanish URL: http://www.semanticsoftware.info/munpex
Pantera-tagger Modality: Not applicable Use: Morphosyntactic tagging Languages: Various URL: http://code.google.com/p/pantera-tagger/
RASP Modality: Written Use: Parsing Language: English URL: http://www.informatics.sussex.ac.uk/research/groups/nlp/rasp/
RelEx Modality: Written Use: Semantic dependency parsing Language: English URL: http://opencog.org/wiki/RelEx
SEMAFOR 2.0 Modality: Written Use: Shallow semantic parsing Language: English URL: http://www.ark.cs.cmu.edu/SEMAFOR/
Shalmaneser Modality: Written Use: Automatic semantic parsing Languages: English, German URL: http://www.coli.uni-saarland.de/projects/salsa/shal/
SVMTool Modality: Written Use: Part-of-speech tagging and chunking Languages: Various URL: http://www.lsi.upc.edu/~nlp/SVMTool/
SET (Syntax in Elements of Text) Modality: Written Use: Syntactic parsing Language: Czech URL: http://nlp.fi.muni.cz/trac/set
TreeTagger Modality: Written Use: Part-of-speech tagging Language: Language-independent URL: http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/
upparse Modality: Written Use: Partial parsing Language: Language-independent URL: http://elias.ponvert.net/upparse
WMBT Modality: Not applicable Use: Morphosyntactic tagging Language: Polish URL: http://nlp.pwr.wroc.pl/redmine/projects/wmbt/wiki
Word clAss taGGER (WAGGER) Modality: Written Use: Part-of-speech tagging Languages: English, Portuguese URL: http://www.inf.pucrs.br/afonso.sales/wagger
Tokenizers/chunkers/stemmers
JOS ToTaLe text analyser Modality: Written Use: Morphological disambiguation and lemmatization Language: Slovenian URL: http://nl.ijs.si/jos/analyse/
MMSEG Modality: Written Use: Word segmentation Language: Chinese URL: http://code.google.com/p/pymmseg-cpp/
MorphTagger Modality: Written Use: Morpheme annotation Language: Arabic URL: http://www-i6.informatik.rwth-aachen.de/~mansour/MorphSegmenter/
Snowball Modality: Written Use: Stemming Language: English URL: http://snowball.tartarus.org/download.php
Yet Another Multipurpose CHunk Annotator (YamCha) Modality: Written Use: Text chunking Language: English URL: http://chasen.org/~taku/software/yamcha/
Other
BART Anaphora Resolution Toolkit Modality: Written Use: Coreference/anaphora resolution Language: English URL: http://www.bart-coref.org/
GIZA++ Modality: Written Use: Machine Translation Languages: Various URL: http://code.google.com/p/giza-pp/
Google Translate Modality: Written Use: Machine Translation Languages: Various URL: http://www.translate.google.com
HeidelTime Modality: Written Use: Temporal expression tagger Languages: Various URL: http://dbs.ifi.uni-heidelberg.de/heideltime
Illinois Coreference Package Modality: Written Use: Coreference resolution Language: English URL: http://cogcomp.cs.illinois.edu/page/software_view/18
MAZEA-Web Modality: Written Use: Rhetorical structure annotation Language: Discourse English URL: http://www.nilc.icmc.usp.br/mazea-web/
TARSQI Toolkit Modality: Written Use: Temporal expression and event tagging, temporal linking Language: English URL: http://www.timeml.org/site/tarsqi/toolkit/
Machine Learning Resources
GATE (General Architecture for Text Engineering) Modality: Written Use: Corpus creation and management, automatic annotation, manual correction of annotation, part-of-speech tagging, Named Entity recognition, word sense disambiguation, etc.

(While systems such as Siri for the iPhone are a good start to this process, it’s clear that Siri doesn’t fully understand all of natural language, just a subset of key phrases.) Summarization This area includes applications that can take a collection of documents or emails and produce a coherent summary of their content. Such programs also aim to provide snap “elevator summaries” of longer documents, and possibly even turn them into slide presentations. Machine Translation The holy grail of NLP applications, this was the first major area of research and engineering in the field. Programs such as Google Translate are getting better and better, but the real killer app will be the BabelFish that translates in real time when you’re looking for the right train to catch in Beijing.

The British National Corpus (BNC) is compiled and released as the largest corpus of English to date (100 million words). The Text Encoding Initiative (TEI) is established to develop and maintain a standard for the representation of texts in digital form. 2000s: As the World Wide Web grows, more data is available for statistical models for Machine Translation and other applications. The American National Corpus (ANC) project releases a 22-million-word subcorpus, and the Corpus of Contemporary American English (COCA) is released (400 million words). Google releases its Google N-gram Corpus of 1 trillion word tokens from public web pages. The corpus holds n-grams of up to five words, along with their frequencies. 2010s: International standards organizations, such as ISO, begin to recognize and co-develop text encoding formats that are being used for corpus annotation efforts.
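As a sketch of what an n-gram corpus stores (a few lines of illustrative Python; the sentence is a toy example, not taken from the Google corpus):

import collections

def ngrams(tokens, n):
    # All contiguous runs of n tokens.
    return zip(*(tokens[i:] for i in range(n)))

tokens = "we transmit thoughts by means of speech".split()
counts = collections.Counter()
for n in range(1, 6):                  # unigrams through 5-grams
    counts.update(ngrams(tokens, n))
print(counts[("by", "means", "of")])   # 1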

pages: 304 words: 82,395

Big Data: A Revolution That Will Transform How We Live, Work, and Think
by Viktor Mayer-Schonberger and Kenneth Cukier
Published 5 Mar 2013

IBM demo, words, and quotation—IBM, “701 Translator,” press release, IBM archives, January 8, 1954 (http://www-03.ibm.com/ibm/history/exhibits/701/701_translator.html). See also John Hutchins, “The First Public Demonstration of Machine Translation: The Georgetown-IBM System, 7th January 1954,” November 2005 (http://www.hutchinsweb.me.uk/GU-IBM-2005.pdf).
IBM Candide—Adam L. Berger et al., “The Candide System for Machine Translation,” Proceedings of the 1994 ARPA Workshop on Human Language Technology, 1994 (http://aclweb.org/anthology-new/H/H94/H94-1100.pdf).
History of machine translation—Yorick Wilks, Machine Translation: Its Scope and Limits (Springer, 2008), p. 107.
Candide’s millions of texts versus Google’s billions of texts—Och interview with Cukier, December 2009.

“Mi pyeryedayem mislyi posryedstvom ryechyi,” was entered into the IBM 701 machine via punch cards, and out came “We transmit thoughts by means of speech.” The sixty sentences were “smoothly translated,” according to an IBM press release celebrating the occasion. The director of the research program, Leon Dostert of Georgetown University, predicted that machine translation would be “an accomplished fact” within “five, perhaps three years hence.” But the initial success turned out to be deeply misleading. By 1966 a committee of machine-translation grandees had to admit failure. The problem was harder than they had realized it would be. Teaching computers to translate is about teaching them not just the rules, but the exceptions too. Translation is not just about memorization and recall; it is about choosing the right words from many alternatives.

A British physicist developed near-winning algorithms to predict insurance claims and identify defective used cars. A Singaporean actuary led a competition to predict biological responses to chemical compounds. Meanwhile, at Google’s machine-translation group, the engineers celebrate their translations of languages that no one in the office speaks. Similarly, statisticians at Microsoft’s machine-translation unit relish trotting out an old quip: that the quality of translations increases whenever a linguist leaves the team. To be sure, subject-area experts won’t die out. But their supremacy will ebb. From now on, they must share the podium with the big-data geeks, just as princely causation must share the limelight with humble correlation.

pages: 392 words: 108,745

Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think
by James Vlahos
Published 1 Mar 2019

The CIA was excited and wanted to aggressively pursue machine translation. But the demonstration revealed challenges as much as potential. The system knew only 250 words. It had to be fed sentences that were grammatically simple: in the third person only, for example. They couldn’t contain conjunctions or pose questions. Still, Dostert promised the moon. Within three to five years, he predicted, automated translations between many languages “may well be an accomplished fact.” If only. A dozen years later, a 1966 report from the National Academy of Sciences concluded that machine translation—and every part of teaching computers to intelligently work with natural language—was proving to be no less complex than particle physics.

Some of the latest generative techniques were derived from advances in machine translation, so let’s detour briefly to explain those. The classic technique is for computers to start by analyzing sentences in a source language. The sentences are then transformed phrase by phrase into an interlingua, a machine-readable digital halfway house that encodes the linguistic information. Finally, sentences are converted from the interlingua into the target human language following all of the definitions and grammatical rules of that language. This process, known as interlingua-based machine translation, is every bit as onerous as it sounds.
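The pipeline reads more simply in toy code (a deliberately tiny sketch; real interlingua systems used rich grammars and lexicons, and the four-word dictionaries below are invented):

# Analysis: source words -> interlingua concepts; generation: concepts -> target words.
french_analysis = {"je": ("PRON", "1sg"), "bois": ("VERB", "drink"),
                   "du": ("DET", "partitive"), "lait": ("NOUN", "milk")}
english_generation = {("PRON", "1sg"): "I", ("VERB", "drink"): "drink",
                      ("DET", "partitive"): "some", ("NOUN", "milk"): "milk"}

def translate(sentence):
    interlingua = [french_analysis[w] for w in sentence.lower().split()]  # analysis
    return " ".join(english_generation[c] for c in interlingua)          # generation

print(translate("Je bois du lait"))   # I drink some milk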

Rabiner, “Automatic Speech Recognition—A Brief History of the Technology Development,” unpublished academic research paper, 2004, https://goo.gl/AB5DTi. 70 “that they were hearing something of startling scientific import”: Thomas Williams, “Our Exhibits at Two Fairs,” Bell Telephone Quarterly XIX, 1940, http://bit.ly/2FwjEwz. 71 “Things whirr.”: W. John Hutchins, ed., Early Years in Machine Translation (Amsterdam: John Benjamins Publishing Company, 2000), 113, https://goo.gl/Y7Z2yv. 72 “may well be an accomplished fact”: W. John Hutchins, “Milestones in machine translation,” Language Today, no. 16 (January 1999): 19–20, https://goo.gl/RCGeKx. 72 “should be spent hardheadedly”: “Language and Machines: Computers in Translational Linguistics,” National Academy of Sciences research report, no. 1416, 1966, https://goo.gl/DwXymV. 73 Weizenbaum recounted a typical exchange: Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (New York: W.

pages: 284 words: 84,169

Talk on the Wild Side
by Lane Greene
Published 15 Dec 2018

A translating machine was seen as proof that the breakthroughs could come on both sides, and must have seemed as otherworldly as Sputnik itself. So how far has machine translation come? Anyone who generously gave machine translation not five but Turing’s 50 years, and looked at options like “BabelFish” available online around 2004, will have noticed that computer translation still had a very long way to go. After that early wave of optimism in the 1950s, scientists realised that it was a lot harder than they had expected to get computers to deal with natural human language. Progress was so slow that it became a joke in the scientific community that true machine translation was five years away, and always would be.

The same can be done with short phrases; now Candide can see that grand homme tends to get translated as “great man” in Hansard, and homme grand tends to get translated as “large man”. This is the heart of a statistical machine-translation system: making good guesses, based on lots of past data, about what chunks will translate as what. But the system needs a second component, too. French and English syntax differ quite a bit, and the ideal output will be a good English text, so statistical machine-translation systems also need a “language model”: essentially a model of what good English looks like. In other words, the translation engine is trained on lots of good-quality bilingual text like Hansard, but the language model is trained only on monolingual text (English, in this example).
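The two components can be made concrete with a toy example (invented probabilities, not Candide’s real tables): the phrase table supplies translation probabilities learned from bilingual text, the language model supplies fluency probabilities learned from monolingual English, and the system picks the candidate with the best product.

phrase_table = {   # P(english | french), estimated from bilingual text such as Hansard
    "grand homme": {"great man": 0.7, "tall man": 0.2, "large man": 0.1},
    "homme grand": {"large man": 0.6, "tall man": 0.3, "great man": 0.1},
}
language_model = {  # P(english), estimated from monolingual English text
    "great man": 0.5, "tall man": 0.3, "large man": 0.2,
}

def best_translation(french_phrase):
    candidates = phrase_table[french_phrase]
    # Score = translation probability times language-model fluency.
    return max(candidates, key=lambda e: candidates[e] * language_model[e])

print(best_translation("grand homme"))   # great man
print(best_translation("homme grand"))   # large man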

Progress was so slow that it became a joke in the scientific community that true machine translation was five years away, and always would be. Today, though, language technologies are no longer hopeless. Not just machine translation but speech recognition, speech synthesis, and the ability to carry out basic spoken commands have gone from the pages of science fiction to the very real kitchens, bedrooms and pockets of many ordinary people. The digital assistants in smartphones (Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant), and their counter-top home-based equivalents (Amazon’s Alexa, Google Home), using only voice as their input, are no longer mere curiosities. They can now give useful output to questions like “What’s the weather going to be like tomorrow in Seattle?”

pages: 396 words: 107,814

Is That a Fish in Your Ear?: Translation and the Meaning of Everything
by David Bellos
Published 10 Oct 2011

Here he establishes easy and useful communication with the persons who have also descended from their towers.3 That dream of “easy and useful communication” with all our fellow humans in the “great open basement” that is the common foundation of human life expresses an ancient and primarily religious view of language and meaning that has proved very hard to escape, despite its manifestly hypothetical nature. For what language would humans use to communicate with one another in the “great open basement”? The language of pure meaning. At later stages in the adventure of machine translation and modern linguistics, it came to be called “interlingua” or “the invariant core” of meaning and thought that a communication in any language encodes. The task that machine-translation pioneers set themselves was therefore almost identical to the task of the translator as expressed by many modern theorists and philosophers: to discover and implement the purely hypothetical language that all people really speak in the great open basement of their souls.

The English says pretty much what the German says. Is it poetry? That’s a judgment everyone makes independently, by criteria that have absolutely nothing to do with the quality of the translation. This one, in fact, wasn’t done by a poet or by a translator. It was done (with a little help from a friend) by a machine translation service available for free on the Internet. Personal, quasi-biographical reasons for valuing poems are probably very common. We may say that we treasure a line or a rhyme or a lyric “in and for itself,” but it’s easier to demonstrate that poems often get attached to us, or we get attached to poems, in contexts that endow the attachment with personal emotion.

TWENTY-THREE The Adventure of Automated Language-Translation Machines The reluctance of European peoples to retain Latin or to adopt some other transmission language—such as Esperanto—for the dissemination of important information has created a costly and difficult set of translation tasks, carried out under time pressures unimaginable in earlier ages. Now that nearly all other aspects of news transmission are carried out not by couriers but by electronic devices, it seems natural to ask why the core activity itself cannot be handled likewise, by automatic translation machines. Although it is still in its infancy, machine translation has had an eventful and uneven history. It first arose in dramatic historical circumstances and in response to an overriding political need. It wasn’t initiated by an explicit act of political will, like the language rules of the European Union, but its launching ground was the climate of terror at the start of the Cold War.

pages: 364 words: 99,897

The Industries of the Future
by Alec Ross
Published 2 Feb 2016

As the amount of data that informs translation grows exponentially, the machines will grow exponentially more accurate and be able to parse the smallest detail. Whenever the machine translations get it wrong, users can flag the error—and that data too will be incorporated into future attempts. We just need more data, more computing power, and better software. These will come with the passage of time and will fill in the communication gaps in areas including pronunciation and interpreting a spoken response. The most interesting innovations in machine translation will come with the human interface. In ten years, a small earpiece will whisper what is being said to you in your native language near simultaneously to the foreign language being spoken.

When you respond, your language will be translated into the language of your counterpart either through his or her own earpiece or amplified by a speaker on your phone, watch, or whatever the personal device of 2025 is. Today’s translation tools also tend to move only between two languages. Try to engage in any sort of machine translation exercise involving three languages, and it is an incoherent mess. In the future, the number of languages being spoken will not matter. You could host a dinner party with eight people at the table speaking eight different languages, and the voice in your ear will always be whispering the one language you want to hear. Universal machine translation will accelerate globalization on a massive scale. While the current stage of globalization was propelled in no small part by the adoption of English as the lingua franca for business—to the point where there are twice as many nonnative English speakers as native speakers—the next wave of globalization will open up communication even more broadly by removing the need for a shared language.

Big data applied to translation will change that. It will take economically isolated parts of the world and help fold them into the global economy. As with any new technology, the rise of universal machine translation will also have its downsides—and two in particular come to mind. The first is the near-obliteration of a profession. The only professional translators in ten years are going to be the people who work on the translation software. Most machine translation programs (such as Google’s) continue to rely heavily on human translations, but once the data sets of translations are large enough, the translators won’t be needed.

pages: 414 words: 109,622

Genius Makers: The Mavericks Who Brought A. I. To Google, Facebook, and the World
by Cade Metz
Published 15 Mar 2021

“But I now believe intelligence is something we can re-create in our lifetime.” In the days that followed his Japanese lunch with Larry Page, as he typed up a formal pitch for the Google founder, this became a pillar of his proposal. He told Page that deep learning would not only provide image recognition and machine translation and natural language understanding, but would also push machines toward true intelligence. Before the year was out, the project was approved. It was called Project Marvin, in a nod to Marvin Minsky. Any irony was unintended. * * * — GOOGLE was headquartered in Mountain View, California, about forty miles south of San Francisco along Highway 101, at the southernmost edge of San Francisco Bay.

“He is somebody who is not afraid to believe,” says Sergey Levine, a robotics researcher who worked alongside Sutskever at Google during these years. “There are many people who are not afraid, but he especially is not afraid.” By the time Sutskever joined Google, deep learning had remade both speech and image recognition. The next big step was machine translation—technology that could instantly translate any language into any other. This was a harder problem. It didn’t involve identifying a single thing, like a dog in a photo. It involved taking a sequence of things (like the words that make up a sentence) and converting it into another sequence (the translation of that sentence).

All were years away from overhauling the field completely, if only because the task of drug discovery is prodigiously difficult and time-consuming. Dahl’s discovery amounted to a tweak rather than a transforming breakthrough. But the potential of neural networks quickly galvanized research across the medical field. When Ilya Sutskever published the paper that remade machine translation—known as the Sequence to Sequence paper—he said it was not really about translation. When Jeff Dean and Greg Corrado read it, they agreed. They decided it was an ideal way of analyzing healthcare records. If researchers fed years of old medical records into the same kind of neural network, they decided, it could learn to recognize signs that illness was on the way.

pages: 268 words: 109,447

The Cultural Logic of Computation
by David Golumbia
Published 31 Mar 2009

By the time funding for such projects had largely dried up in the late 1960s—perhaps in the face of the pullback from Vietnam and Chomsky’s outspoken opposition to it—Chomsky writes that “machine translation and related enterprises . . . seemed to me pointless as well as probably quite hopeless,” but that he is “surprised” to “read repeated and confident accounts of how work in generative grammar developed out of an interest in computers, machine translation, and related matters. At least as far as my own work is concerned, this is quite false” (Chomsky 1973, 40). What is false for Chomsky nevertheless seems true for the field as a whole and for the logico-philosophical and ideological structures that underlie it.

There were few so benighted as to question the possibility, in fact the immediacy, of a final solution to the problem of converting speech into writing by available engineering technique. And just a few years later, it was jubilantly discovered that machine translation and automatic abstracting were just around the corner” (3). 3. The editors of Machine Translation write in a footnote to “Translation”: “When [Weaver] sent [the memorandum] to some 200 of his acquaintances in various fields, it was literally the first suggestion that most had ever seen that language translation by computer techniques might be possible” (Booth and Locke, in Weaver 1949, 15). 4.

Chomsky’s logic papers and the Chomsky hierarchy are rarely mentioned in linguistics per se, and computerization is rarely mooted as a test of Chomsky’s theories. Yet from the outset the idea of computerization has been notably close to the heart of Chomsky’s program, not merely because Chomsky’s work was housed for years in MIT’s office of Machine Translation, but for some of the cultural reasons we have been discussing. What, after all, is a “transformation,” in Chomsky’s terms? One of the most plausible accounts is that it is precisely an algorithm, which is to say, a structure that is logically identical to (and often actually is) a computer program; as John Goldsmith, a leading practitioner of both computational linguistics (CL) and mainstream linguistics, has recently put it, “generative grammar is, more than it is anything else, a plea for the case that an insightful theory of language can be based on algorithmic explanation” (Goldsmith 2004, 1).

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA’s 30-year investment in AI. Every day, ride hailing companies such as Uber and mapping services such as Google Maps provide driving directions for hundreds of millions of users, quickly plotting an optimal route taking into account current and predicted future traffic conditions. Machine translation: Online machine translation systems now enable the reading of documents in over 100 languages, including the native languages of over 99% of humans, and render hundreds of billions of words per day for hundreds of millions of users. While not perfect, they are generally adequate for understanding. For closely related languages with a great deal of training data (such as French and English) translations within a narrow domain are close to the level of a human (Wu et al., 2016b).

CNNs have been applied to a wide range of vision tasks, from self-driving cars to grading cucumbers.8 Driving, which is covered in Section 27.7.6 and in several sections of Chapter 26, is among the most demanding of vision tasks: not only must the algorithm detect, localize, track, and recognize pigeons, paper bags, and pedestrians, but it has to do it in real time with near-perfect accuracy. 22.8.2 Natural language processing Deep learning has also had a huge impact on natural language processing (NLP) applications such as machine translation and speech recognition. Some advantages of deep learning for these applications include the possibility of end-to-end learning, the automatic generation of internal representations for the meanings of words, and the interchangeability of learned encoders and decoders. End-to-end learning refers to the construction of entire systems as a single, learned function f. For example, an f for machine translation might take as input an English sentence S_E and produce an equivalent Japanese sentence S_J = f(S_E).
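As a loose illustration of that "single, learned function f", here is a minimal encoder-decoder sketch in Keras. The vocabulary sizes and dimensions are invented for illustration, and a production system would add attention, tokenization, and beam-search decoding.

```python
import tensorflow as tf
from tensorflow.keras import layers

src_vocab, tgt_vocab, dim = 5000, 5000, 256  # hypothetical sizes

# Encoder: compress the English sentence S_E into a fixed state.
enc_in = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(src_vocab, dim)(enc_in)
_, h, c = layers.LSTM(dim, return_state=True)(enc_emb)

# Decoder: generate the Japanese sentence S_J from that state.
dec_in = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(tgt_vocab, dim)(dec_in)
dec_seq = layers.LSTM(dim, return_sequences=True)(dec_emb, initial_state=[h, c])
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

# One trainable function f: S_E -> S_J, learned end to end.
model = tf.keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```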

As with speech recognition, the introduction of deep recurrent neural networks led to a large improvement, with about 2/3 of listeners saying that the neural WaveNet system (van den Oord et al., 2016a) sounded more natural than the previous nonneural system. Machine translation transforms text in one language to another. Systems are usually trained using a bilingual corpus: a set of paired documents, where one member of the pair is in, say, English, and the other is in, say, French. The documents do not need to be annotated in any way; the machine translation system learns to align sentences and phrases and then when presented with a novel sentence in one language, can generate a translation to the other.

pages: 169 words: 41,887

Literary Theory for Robots: How Computers Learned to Write
by Dennis Yi Tenen
Published 6 Feb 2024

For instance: Histories of the Turing machine and the Turing test often neglect the direct influence owed to Ludwig Wittgenstein’s lectures, along with the presence of Margaret Masterman, the pioneer of machine translation, in the same classroom. Masterman’s universal thesaurus harkens back to Wilkins and other universal-­language makers, central to a whole separate and important branch of AI—­machine translation. It would require more than a chapter of its own, orthogonal to the direction of our travel. Aspects of encryption used for diplomacy or military communications would also lead to entirely different exit points.

Emergence somehow happens by the critical accumulation of smarts, where intelligence sometimes bubbles up, pulling along with it the more intangible silk of sentience, awareness, and conscience. Leaving the world of culinary and automotive hardware, I return to the bundle of specifically textual technologies, such as chatbots, machine translators, named entity extractors, automatic text summarizers, speech recognizers, spam filters, search engines, spell-­checkers, classifiers, auto-­taggers, sentence autocompletion, and story generators. (Let’s not forget, too, that whatever is meant by “AI” includes other, non-­verbalizing tech as well, from weather-­prediction models to drug-­discovery algorithms, robotics, and image classifiers.)

Paul, 1971), 3–­16. 84 Roman Jakobson, who helped coin: Roman Jakobson, Selected Writings, II: Word and Language (The Hague, NL: Mouton, 1971), 711. 87 Victor Yngve, another prominent MIT linguist: Victor H. Yngve, “Random Generation of English Sentences” (paper presented at the International Conference on Machine Translation of Languages and Applied Language Analysis, National Physical Laboratory, Teddington, UK, September 5–­8, 1961), 66–­80. 92 “No more than about seven”: Yngve, “Random Generation,” 66–­80. 94 The computer program complying Meehan’s: James Richard Meehan, “The Metanovel: Writing Stories by Computer” (PhD diss., Yale University, Department of Computer Science, September 1976).

pages: 301 words: 85,126

AIQ: How People and Machines Are Smarter Together
by Nick Polson and James Scott
Published 14 May 2018

Other NLP systems have improved rapidly, too, and for the same reason. Take machine translation. For many years, there was a cottage industry of internet memes devoted to errors made by Google Translate, of which you can find hundreds scattered across the web. For example, some wise guy realized back in 2011 that when translating from English to Vietnamese, “Will Justin Bieber ever reach puberty?” became “Justin Bieber will never reach puberty.”22 This kind of syntactical error was a classic failure mode of older machine-translation algorithms. They’d get the words mostly right, but they’d often bungle the word order in the target language, producing something that was either wrong or nonsensical.

Lowerre, “The HARPY Speech Recognition System,” Ph.D. thesis, Department of Computer Science, Carnegie Mellon University, 1976. 22.  “10 Inexplicable Google Translate Fails,” https://www.searchenginepeople.com/blog/10-google-translate-fails.html. 23.  For details of the method as well as extensive accuracy evaluations, see Yonghui Wu et al., “Google’s Neural Machine Translation System: Bridging the Gap Between Human and Machine Translation,” October 8, 2016, https://arxiv.org/abs/1609.08144. 24.  Peter Norvig, “On Chomsky and the Two Cultures of Statistical Learning,” http://norvig.com/chomsky.html. 25.  If you want to get really technical, these “probe words” are really called “context vectors.”

Once you’ve used a data set to find a good prediction rule, then any time you encounter a new input, you can plug it in to predict the corresponding output—just like you can plug your age into the equation “MHR = 220 − Age” and read off a prediction for your maximum heart rate. Here’s a bit of lingo. In AI, prediction rules are often referred to as “models”—for example, a “face-recognition model” for taking an input of an image and outputting a person’s identity, or a “machine-translation model” for taking an English sentence as an input and outputting a Spanish translation. The process of using data to find a good prediction rule is often called “training the model.” We like the word “training” here, because it evokes the incremental benefits in fitness that accrue with each new gym workout—or in the case of a model in AI, the incremental improvements in prediction that accrue with each new data point.
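To make the lingo concrete, here is a minimal sketch of "training" and then using a straight-line prediction rule; the (age, heart-rate) pairs below are invented for illustration only.

```python
import numpy as np

# Made-up (age, max heart rate) training pairs, for illustration only.
ages = np.array([20, 30, 40, 50, 60])
mhr = np.array([199, 192, 181, 170, 162])

# "Training the model": fit the line that best matches the data.
slope, intercept = np.polyfit(ages, mhr, deg=1)

# The learned prediction rule, analogous to MHR = 220 - Age.
def predict_mhr(age):
    return slope * age + intercept

print(round(predict_mhr(45)))  # plug in a new input, read off the output
```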

pages: 626 words: 167,836

The Technology Trap: Capital, Labor, and Power in the Age of Automation
by Carl Benedikt Frey
Published 17 Jun 2019

For example, the idea of artificial neural networks (that is, layers of computational units that mimic how neurons connect in the brain) has been around since the 1980s, but the networks performed poorly due to constraints imposed by computational resources. So up until recently, machine translations relied on algorithms that analyzed phrases word by word from millions of human translations. However, phrase-based machine translations suffered from some serious shortcomings. In particular, the narrow focus meant that the algorithm often lost the broader context. A solution to this problem has been found in so-called deep learning, which uses artificial neural networks with more layers. These advances allow machine translators to better capture the structure of complex sentences. Neural Machine Translation (NMT), as it is called, used to be computationally expensive both in training and in translation inference.

Cisco, 2018, “Cisco Visual Networking Index: Forecast and Trends, 2017–2022,” (San Jose, CA: Cisco), https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/complete-white-paper-c11-481360.html. 9. P. Lyman and H. R. Varian, 2003, “How Much Information?,” berkeley.edu/research/projects/how-much-info-2003. 10. A. Tanner, 2007, “Google Seeks World of Instant Translations,” Reuters, March 27. 11. Y. Wu et al., 2016, “Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation,” preprint, submitted October 8, https://arxiv.org/pdf/1609.08144.pdf. 12. I. M. Cockburn, R. Henderson, and S. Stern, 2018, “The Impact of Artificial Intelligence on Innovation” (Working Paper 24449, National Bureau of Economic Research, Cambridge, MA). 13.

A Study of War. Vol. 1. Chicago: University of Chicago Press. Wrigley, E.A. 2010. Energy and the English Industrial Revolution. Cambridge: Cambridge University Press. Wu, Y., M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, et al. 2016. “Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.” Preprint, submitted September 26. https://arxiv.org/abs/1609.08144. Xiong, W., L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke. 2017. “The Microsoft 2017 Conversational Speech Recognition System.” Microsoft AI and Research Technical Report MSR-TR-2017-39, August 2017.

pages: 1,079 words: 321,718

Surfaces and Essences
by Douglas Hofstadter and Emmanuel Sander
Published 10 Sep 2012

Altogether, then, through a slow process of carefully honed analogy-making and analogy-judging, we eventually managed to recreate some of the high-sounding flavor of Abraham Lincoln’s immortal phrase, while sidestepping various superficially enticing traps along the way. Potential Progress in Machine Translation The preceding anecdote confirms the pervasive thesis of Warren Weaver’s book Alice in Many Tongues, which is that to translate well, the use of analogies is crucial. In order to come up with possible analogies and then to judge their appropriateness, one must carefully exploit one’s full inventory of mental resources, including one’s storehouse of life experiences. Could machine translation possibly do anything of the sort? Is it conceivable that one day, computer programs will be able to carry out translation at a high level of artistry?

Is it conceivable that one day, computer programs will be able to carry out translation at a high level of artistry? A couple of decades ago, some machine-translation researchers, spurred by the low quality of what had then been achieved in their field, began to question the methods on which the field had been built (mostly word-matching and grammatical rules), and started exploring other avenues. What emerged with considerable vigor was the idea of statistical translation, which today has become a very important strategy used in tackling the challenge of machine translation. This approach is based on the use of statistically-based educated guesswork, where the data base in which all guesses are rooted consists of an enormous storehouse of bilingual texts, all of which have been carefully translated by human experts.

(The French word “narguer”, found near the end and roughly meaning “to flout”, was apparently not in the engine’s on-line dictionary, so it was simply left in French.) This example gives a sense for the quality of machine translation in the fall of 2004. But now let us fast-forward to the spring of the year 2009. At that point, Google’s translation-engine developers had radically switched strategies in favor of the new idea of statistical machine translation, so their new engine had little in common, other than its name, with its former incarnation. Given the inadequacy of the old method, which we have just witnessed, this would seem like a wise decision.

Text Analytics With Python: A Practical Real-World Approach to Gaining Actionable Insights From Your Data
by Dipanjan Sarkar
Published 1 Dec 2016

Next, we will be talking about some of the main applications of NLP. Machine Translation Machine translation is perhaps one of the most coveted and sought-after applications for NLP. It is defined as the technique that helps in providing syntactic, grammatical, and semantically correct translation between any pair of languages. It was perhaps the first major area of research and development in NLP. On a simple level, machine translation is the translation of natural language carried out by a machine. By default, the basic building blocks for the machine translation process involve simple substitution of words from one language to another, but in that case we ignore things like grammar and phrasal structure consistency.
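A deliberately naive sketch of that word-for-word baseline shows exactly what goes wrong; the toy English-Spanish dictionary below is invented for illustration.

```python
# Toy dictionary for a naive word-for-word "translator".
en_to_es = {"the": "el", "cat": "gato", "black": "negro"}

def substitute(sentence):
    # Replace each word independently, keeping unknown words as-is.
    return " ".join(en_to_es.get(w, w) for w in sentence.lower().split())

print(substitute("The black cat"))  # -> "el negro gato"
# The word order is wrong: Spanish says "el gato negro". Pure
# substitution ignores grammar and phrasal structure, as the text notes.
```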

Hence, more sophisticated techniques have evolved over a period of time, including combining large resources of text corpora along with statistical and linguistic techniques. One of the most popular machine translation systems is Google Translate. Figure 1-19 shows a successful machine translation operation executed by Google Translate for the sentence What is the fare to the airport? from English to Italian. Figure 1-19. Machine translation performed by Google Translate Over time, machine translation systems are getting better, providing translations in real time as you speak or write into the application.

The Philosophy of Language Language Acquisition and Usage Linguistics Language Syntax and Structure Words Phrases Clauses Grammar Word Order Typology Language Semantics Lexical Semantic Relations Semantic Networks and Models Representation of Semantics Text Corpora Corpora Annotation and Utilities Popular Corpora Accessing Text Corpora Natural Language Processing Machine Translation Speech Recognition Systems Question Answering Systems Contextual Recognition and Resolution Text Summarization Text Categorization Text Analytics Summary Chapter 2: Python Refresher Getting to Know Python The Zen of Python Applications: When Should You Use Python? Drawbacks: When Should You Not Use Python?

pages: 252 words: 74,167

Thinking Machines: The Inside Story of Artificial Intelligence and Our Race to Build the Future
by Luke Dormehl
Published 10 Aug 2016

Holmes’s creator, Arthur Conan Doyle, a trained doctor who graduated from Edinburgh (today the location of one of the UK’s top AI schools), would likely have been just as dazzled by Modernising Medicine, an AI designed to diagnose diseases more effectively than many human physicians. Finally, the miraculous World’s Fair Machine Translator is most familiar to us today as Google Translate: a free service which offers impressively accurate probabilistic machine translation between some fifty-eight different languages – or 3,306 separate translation services in total. If the World’s Fair imagined instantaneous translation between Russian and English, Google Translate goes further still by also allowing translation between languages like Icelandic and Vietnamese, or Farsi and Yiddish, which have had historically limited previous translations.

Perhaps most impressive of all, however, was a computer that bridged the seemingly unassailable gap between the United States and Soviet Union by translating effortlessly (or what appeared to be effortlessly) between English and Russian. This miraculous technology was achieved thanks to a dedicated data connection between the World’s Fair’s IBM exhibit and a powerful IBM mainframe computer 114 miles away in Kingston, New York, carrying out the heavy lifting. Machine translation was a simple, but brilliant, summation of how computers’ clear-thinking vision would usher us towards utopia. The politicians may not have been able to end the Cold War, but they were only human – and with that came all the failings one might expect. Senators, generals and even presidents were severely lacking in what academics were just starting to call ‘machine intelligence’.

To make it the ‘Worldwide Headquarters’ they thought it should be, they kitted it out with a few tables, three chairs, a turquoise shag rug, a folding ping-pong table and a few other items. The garage door had to be left open for ventilation. It must have seemed innocuous at the time, but over the next two decades, Larry Page and Sergey Brin’s company would make some of the biggest advances in AI history. These spanned fields including machine translation, pattern recognition, computer vision, autonomous robots and far more, which AI researchers had struggled with for half a century. Virtually none of it was achieved using Good Old-Fashioned AI. The company’s name, of course, was Google. CHAPTER 2 Another Way to Build AI IT IS 2014 and, in the Google-owned London offices of an AI company called DeepMind, a computer whiles away the hours by playing an old Atari 2600 video game called Breakout.

pages: 408 words: 105,715

Kingdom of Characters: The Language Revolution That Made China Modern
by Jing Tsu
Published 18 Jan 2022

At the time, however, the Cold War had begun and the United States and the Soviet Union were racing to make advances in cryptography research and machine translation, the automated translation of human languages by machines, one of the first areas of research in artificial intelligence. Both superpowers saw clearly that whoever controlled the computer would control the future of information. After Mergenthaler bought the rights from Lin, the U.S. Air Force acquired the keyboard in an effort to study machine translation and disk storage for rapid access to large quantities of information. Chinese had been identified as one of the priority languages of study.

Chinese had been identified as one of the priority languages of study. The USAF handed Lin’s keyboard to an engineer named Gilbert W. King, the director of research at the IBM research center in upstate New York. King later moved to Itek, a defense contractor in Massachusetts, where he coauthored a seminal scientific paper on machine translation. He also unveiled the machine they built as a result of studying Lin’s keyboard—the Sinowriter, a device for converting Chinese-character texts into machine input codes for processing Chinese into English. Lin’s keyboard provided pivotal evidence for how Chinese can be used in a photographic system of storage and optical retrieval.

The character 路 has the phonetic pronunciation of “lu” and, because it can be divided into two vertical halves, has a zuo you (left-right) structure. Both features can be indicated in the extended code KZPKLZ. The more precise you can be about encoding the information of a character, the more useful that code can be. These extensions of Zhi’s system would be important for Chinese-language applications in machine translation and retrieving information from stored data. Zhi formally introduced his “On-Sight” encoding system in the Chinese science journal Nature Magazine in 1978. He described his system as topological—extrapolated from the geometry of parts. With four-letter codes using all twenty-six letters of the alphabet, there were enough combinations to generate 456,976 possible unique codes.
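A quick check of that count, 26 choices in each of the four letter positions:

```python
# 26 letters in each of 4 positions gives 26**4 distinct codes.
assert 26 ** 4 == 456_976
```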

pages: 370 words: 94,968

The Most Human Human: What Talking With Computers Teaches Us About What It Means to Be Alive
by Brian Christian
Published 1 Mar 2011

Roughly speaking, the semantic camp tries to program linguistic understanding, with the hope that the desired behavior will follow, and the empirical camp tries to directly program linguistic behavior, with the hope that “understanding” will either happen along the way or prove to be an unnecessary middleman. This divide also plays out in the history of computer translation. For many decades, machine translation projects attempted to understand language in a rule-based way, breaking down a sentence’s structure and getting down to the underlying, universal meaning, before re-encoding that meaning according to another language’s rules. In the 1990s, a statistical approach to machine translation—the approach that Google uses—came into its own, which left the question of meaning entirely out of it. Cleverbot, for instance, can know that “Scaramouche, Scaramouche” is best answered by “Will you do the fandango?”

A number of researchers feel that the attempt to break language down with thesauri and grammatical rules is simply not going to crack the translation problem. A new approach abandons these strategies, more or less entirely. For instance, the 2006 NIST machine translation competition was convincingly won by a team from Google, stunning a number of machine translation experts: not a single human on the Google team knew the languages (Arabic and Chinese) used in the competition. And, you might say, neither did the software itself, which didn’t give a whit about meaning or about grammar rules. It simply drew from a massive database of high-quality human translation17 (mostly from the United Nations minutes, which are proving to be the twenty-first century’s digital Rosetta stone), and patched phrases together according to what had been done in the past.

By “crowdsourcing” the task of writing a program’s responses to the users themselves, the program acquires an explosive growth in its behaviors, but these behaviors stop being internally consistent. Death of the Author; End of the Best Friend Do you need someone? Or do you need me? –SAY ANYTHING … Speaking of “writing a book”: this notion of style versus content, and of singularity and uniqueness of vision, is at the heart of recent debates about machine translation, especially of literature. Wolfram Alpha researcher and chatbot author Robert Lockhart describes the chatbot community as being split between two competing approaches, what he calls “pure semantics” and “pure empiricism.” Roughly speaking, the semantic camp tries to program linguistic understanding, with the hope that the desired behavior will follow, and the empirical camp tries to directly program linguistic behavior, with the hope that “understanding” will either happen along the way or prove to be an unnecessary middleman.

pages: 222 words: 53,317

Overcomplicated: Technology at the Limits of Comprehension
by Samuel Arbesman
Published 18 Jul 2016

They converted it into Russian, and then ran the resulting Russian translation back again through the machine into English. The result was something like “The whiskey is strong, but the meat is terrible.” Machine translation, as this computational task is more formally known, is not easy. Google Translate’s results can be imprecise, though interesting in their own way. But scientists have made great strides. What techniques are used by experts in machine translation? One early approach was to use the structured grammatical scaffolding of language I mentioned above. Linguists hard-coded the linguistic properties of language into a piece of software in order to translate from one language to another.

(Malden, MA: Wiley-Blackwell, 2008), 181. Peter Norvig, Google’s director of research: Peter Norvig, “On Chomsky and the Two Cultures of Statistical Learning,” accessed April 30, 2015, http://norvig.com/chomsky.html. great, though apocryphal, story: There seem to be many versions of this apocryphal machine translation tale. What techniques are used by experts: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, UK: Oxford University Press, 2014), 15. say, 99.9 percent of the time: I made these numbers up for effect, but if any linguist wants to chat, please reach out! “based on millions of specific features”: Alon Halevy et al., “The Unreasonable Effectiveness of Data,” IEEE Intelligent Systems 24, no. 2 (2009): 8–12.

(TV show), 142, 169 Jobs, Steve, 161 Jones, Benjamin, 90 July 8, 2015, system crashes on, 1, 4 Kant Generator, 74 Kasparov, Garry, 84 Katsuyama, Brad, 189 Kelly, Kevin, 83 Kelly, Sean Dorrance, 173 Kircher, Athanasius, 86 Kirk, Chris, 32–33 kluges: in biological systems, 119 definition of, 33 “good enough” in, 42 as inevitable in complex systems, 34–36, 62–66, 127, 128, 154, 173–74 in legal system, 33–34 and limits of human comprehension, 42 in software, 35 knowledge: burden of, 90, 212 explosion of, 86–88, 89–91, 142–43 generalists and, 142–49 limits to, 153–54 Renaissance man and, 86–89, 93, 144 specialization and, 85–86, 90–91 Knowledge, The, 78 Koopman, Philip, 10, 100, 201 kosmos, 139–40 language: cognitive processing of, 73–74 grammar in, 54, 57–58 hapax legomena in, 54–55, 206 machine translation and, 57–58, 207 power laws in, 55–56 recursion in, 71–72, 75 legacy code, legacy systems, 43, 223 accretion and, 39–40, 198–99 in biological systems, 118, 119–20 inducing new functions from, 126, 198 trauma of replacing, 39–42 legal system: accretion in, 40–41, 46 complexity in, 16, 85 edge cases in, 59–61 interaction in, 45–46 kluges in, 33–34 limits of comprehension and, 22 Leibniz, Gottfried, 89 Leidy, Joseph, 86 Lewis, Michael, 189 liberal arts, 145 Library of Congress, 90 limitative theorems, 175 Linus’s law, 102 logic, computer vs. human, 82–84 logistics, 84 London, cabdrivers in, 78 long-tailed distributions, 55–56, 206 “losing the bubble,” 70–71, 85 Lovecraft, H.

pages: 370 words: 112,809

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future
by Orly Lobel
Published 17 Oct 2022

Some surveys already show that nearly half of all general web searches are now done using voice.27 Crowdsourced projects and open-source products may be the single best way to achieve the level of diversity and inclusion that society needs and deserves. The Feminist Translator Machine translation is an extraordinary engine for development. It has also been a powerful case study in gendered language and how we can improve as a society. In a global market, trade is enabled by communication and trust. Language barriers have burdened developing countries striving to compete in global markets. Machine translators are now easily and freely available on the web, facilitating untold numbers of exchanges of knowledge, information, ideas, goods, and services. Nevertheless, machine translators have defaulted to a masculine gender for years.

Historically, men have been vastly more represented both as publishers and as the subjects of published works. So it makes perfect sense that machine translation has developed a male bias: the algorithms have learned from the data available to them. The quality of the output depends on the quality of the input, but when the input is biased, there are other ways to reach more equal outcomes. Instead of defaulting to the most commonly pervasive (male) pronouns, machine translators need to be programmed—taught—to identify more social cues and context. They could also default at equal rates to male and female when no context is provided.

I am often addressed as Mr. Orly Lobel in reply emails. When my research is quoted around the world, I am often attributed as male. But an algorithm can quite easily sort through existing databases of common names to discover that Orly is a common Hebrew female name meaning “my light.” When a machine translator is tasked to identify the entirety of the context throughout the text, its accuracy in identifying gender correctly will increase. Google Translate has already made some strides in this direction. In 2018, a product manager on the Google Translate team published an article explaining this new focus: “There’s been an effort across Google to promote fairness and reduce bias in machine learning.

pages: 400 words: 94,847

Reinventing Discovery: The New Era of Networked Science
by Michael Nielsen
Published 2 Oct 2011

They tried to do the translation using clever, relatively simple models based on the rules of grammar and other rules of language. This sounds like a good idea, but despite a lot of effort, it never worked very well. It turns out that human languages contain far too much complexity to be captured in such simple rules. In the 1990s researchers in machine translation began trying a new and radically different approach. They threw out the conventional rules of grammar and language, and instead started their work by gathering an enormous corpus of texts and translations—think, say, of all the documents from the United Nations. Their idea was to use data-driven intelligence to analyze those documents en masse, trying to infer a model of translation.

Once they had analyzed the corpus and built up their statistical model, they used that model to translate new texts. To translate a Spanish sentence, the idea was to find the English sentence that, according to the model, had the highest probability. That high-probability sentence would be output as the translation. Frankly, when I first heard about statistical machine translation I thought it didn’t sound very promising. I was so surprised by the idea that I thought I must be misunderstanding something. Not only do these models have no understanding of the meaning of “hola” or “hello,” they don’t even understand the most basic things about language, such as the distinction between nouns and verbs.

Not only do these models have no understanding of the meaning of “hola” or “hello,” they don’t even understand the most basic things about language, such as the distinction between nouns and verbs. And, it turns out, my skepticism is justified: the approach doesn’t work very well—if the starting corpus used to infer the model contains just a few million words. But if the corpus has billions of words, the approach starts to work very well indeed. Today, this is the way the best machine translation systems work. If you’ve ever done a Google search that returned a result in a foreign language, you’ll notice that Google offers to “translate this page.” These translations aren’t done by human beings, or by special algorithms handcrafted with a detailed knowledge of the languages involved.

pages: 72 words: 21,361

Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy
by Erik Brynjolfsson
Published 23 Jan 2012

GeoFluent takes words written in one language, such as an online chat message from a customer seeking help with a problem, and translates them accurately and immediately into another language, such as the one spoken by a customer service representative in a different country. GeoFluent is based on statistical machine translation software developed at IBM’s Thomas J. Watson Research Center. This software is improved by Lionbridge’s digital libraries of previous translations. This “translation memory” makes GeoFluent more accurate, particularly for the kinds of conversations large high-tech companies are likely to have with customers and other parties.

Bureau of Economic Analysis added “Information Technology” as a category of business investment in 1958, so let’s use that as our starting year. And let’s take the standard 18 months as the Moore’s Law doubling period. Thirty-two doublings then take us to 2006 and to the second half of the chessboard. Advances like the Google autonomous car, Watson the Jeopardy! champion supercomputer, and high-quality instantaneous machine translation, then, can be seen as the first examples of the kinds of digital innovations we’ll see as we move further into the second half—into the phase where exponential growth yields jaw-dropping results. Computing the Economy: The Economic Power of General Purpose Technologies These results will be felt across virtually every task, job, and industry.
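A back-of-the-envelope check of the chessboard arithmetic above, counting 18-month doublings from 1958 (the numbers come from the passage; the calculation is just for illustration):

```python
start_year, doublings, months_per_doubling = 1958, 32, 18

# 32 doublings of 18 months each lands on 2006, the start of
# "the second half of the chessboard".
print(start_year + doublings * months_per_doubling / 12)  # -> 2006.0
```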

CRM systems have been extended to smart phones so that salespeople can stay connected from the road, and tablet computers now provide much of the functionality of PCs. The innovations we’re starting to see in the second half of the chessboard will also be folded into this ongoing work of business invention. In fact, they already are. The GeoFluent offering from Lionbridge has brought instantaneous machine translation to customer service interactions. IBM is working with Columbia University Medical Center and the University of Maryland School of Medicine to adapt Watson to the work of medical diagnosis, announcing a partnership in that area with voice recognition software maker Nuance. And the Nevada state legislature directed its Department of Motor Vehicles to come up with regulations covering autonomous vehicles on the state’s roads.

pages: 416 words: 112,268

Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
Published 7 Oct 2019

Then a supervised learning algorithm processes the examples to produce a complex rule that takes any French sentence as input and produces an English translation. The current champion learning algorithm for machine translation is a form of so-called deep learning, and it produces a rule in the form of an artificial neural network with hundreds of layers and millions of parameters.D Other deep learning algorithms have turned out to be very good at classifying the objects in images and recognizing the words in a speech signal. Machine translation, speech recognition, and visual object recognition are three of the most important subfields in AI, which is why there has been so much excitement about the prospects for deep learning.

In the first decade or so after the Dartmouth meeting, AI had several major successes, including Alan Robinson’s algorithm for general-purpose logical reasoning2 and Arthur Samuel’s checker-playing program, which taught itself to beat its creator.3 The first AI bubble burst in the late 1960s, when early efforts at machine learning and machine translation failed to live up to expectations. A report commissioned by the UK government in 1973 concluded, “In no part of the field have the discoveries made so far produced the major impact that was then promised.”4 In other words, the machines just weren’t smart enough. My eleven-year-old self was, fortunately, unaware of this report.

The seeds of today’s progress were sown during that AI winter, including early work on large-scale probabilistic reasoning systems and what later became known as deep learning. Beginning around 2011, deep learning techniques began to produce dramatic advances in speech recognition, visual object recognition, and machine translation—three of the most important open problems in the field. By some measures, machines now match or exceed human capabilities in these areas. In 2016 and 2017, DeepMind’s AlphaGo defeated Lee Sedol, former world Go champion, and Ke Jie, the current champion—events that some experts predicted wouldn’t happen until 2097, if ever.6 Now AI generates front-page media coverage almost every day.

pages: 238 words: 77,730

Final Jeopardy: Man vs. Machine and the Quest to Know Everything
by Stephen Baker
Published 17 Feb 2011

But their approach registered a dramatic breakthrough in 2005, when the U.S. National Institute of Standards and Technology held one of its periodic competitions on machine translation. The government was ravenous for this translation technology. If machines could automatically monitor and translate Internet traffic, analysts might get a jump on trends in trade and technology and, even more important, terrorism. The competition that year focused on machine translation from Chinese and Arabic into English. And it drew the usual players, including a joint team from IBM and Carnegie Mellon and a handful of competitors from Europe.

And it drew the usual players, including a joint team from IBM and Carnegie Mellon and a handful of competitors from Europe. Many of these teams, with their blend of experts in linguistics, cognitive psychology, and computer science, had decades of experience working on translations. One new player showed up: Google. The search giant had been hiring experts in machine translation, but its team differed from the others in one aspect: No one was expert in Arabic or Chinese. Forget the nuances of language. They would do it with math. Instead of translating based on semantic and grammatical structure, the interplay of the verbs and objects and prepositional phrases, their computers were focusing purely on statistical relationships.

Without knowing what the words meant, their computers had learned to associate certain strings of words in Arabic and Chinese with their English equivalents. Since they had so very many examples to learn from, these statistical models caught nuances that had long confounded machines. Using statistics, Google’s computers won hands down. “Just like that, they bypassed thirty years of work on machine translation,” said Ed Lazowska, the chairman of the computer science department at the University of Washington. The statisticians trounced the experts. But the statistically trained machines they built, whether they were translating from Chinese or analyzing the ads that a Web surfer clicked, didn’t know anything.

pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy
by Sharon Bertsch McGrayne
Published 16 May 2011

After that paper, several of the leading machine translation systems incorporated Bayes’ rule. In 1993, lured by lucre and the challenge, Mercer and Brown moved from IBM and machine translation to RenTech, where they became vice presidents and co–portfolio managers for technical trading. So many members of IBM’s speech recognition group joined them that critics complain they set back the field of machine translation five years. After the 9/11 disaster and the start of the war in Iraq, the military and the intelligence communities poured money into machine translation. DARPA, the U.S. Air Force, and the intelligence services want to ease the burden on human translators working with such little-studied languages as Uzbek, Pashto, Dari, and Nepali.

It allows its users to assess uncertainties when hundreds or thousands of theoretical models are considered; combine imperfect evidence from multiple sources and make compromises between models and data; deal with computationally intensive data analysis and machine learning; and, as if by magic, find patterns or systematic structures deeply hidden within a welter of observations. It has spread far beyond the confines of mathematics and statistics into high finance, astronomy, physics, genetics, imaging and robotics, the military and antiterrorism, Internet communication and commerce, speech recognition, and machine translation. It has even become a guide to new theories about learning and a metaphor for the workings of the human brain. One of the surprises is that Bayes, as a buzzword, has become chic. Stanford University biologist Stephen H. Schneider wanted a customized cancer treatment, called his logic Bayesian, got his therapy, went into remission, and wrote a book about the experience.

Air Force, and the intelligence services want to ease the burden on human translators working with such little-studied languages as Uzbek, Pashto, Dari, and Nepali. Machine translation got still another boost when Google trawled the Internet for more Rosetta Stone texts: news stories and documents published in both English and another language. United Nations documents alone contributed 200 billion words. By this time the web was churning out enormous amounts of text, free for the asking. Combing English words on the web, Google counted all the times that, for example, a two-word sequence in English meant “of the.”
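A toy sketch of that counting step, using Python's Counter in place of Google's web-scale tallies:

```python
from collections import Counter

# A tiny corpus standing in for billions of words of web text.
words = "the cat sat on the mat and the dog sat on the log".split()

# Count every two-word sequence, as Google did for phrases like "of the".
bigrams = Counter(zip(words, words[1:]))

print(bigrams[("sat", "on")])   # -> 2
print(bigrams.most_common(2))   # the most frequent two-word sequences
```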

pages: 586 words: 186,548

Architects of Intelligence
by Martin Ford
Published 16 Nov 2018

The kind of structure that we put in corresponds to the architecture of the neural net, and to fairly broad assumptions about the world and the kind of task that we’re trying to solve. When we put in a special structure and architecture that allows the network to have an attention mechanism, it’s putting in a lot of prior knowledge. It turns out that this is central to the success of things like machine translation. You need that kind of tool in your toolbox in order to solve some of those problems, in the same way that if you deal with images, you need to have something like a convolutional neural network structure in order to do a good job. If you don’t put in that structure, then performance is much worse.

YOSHUA BENGIO: Of course, we didn’t expect that. We’ve had a series of important and surprising breakthroughs with deep learning. I mentioned earlier that speech recognition came around 2010, and then computer vision around 2012. A couple of years later, in 2014 and 2015, we had breakthroughs in machine translation that ended up being used in Google Translate in 2016. 2016 was also the year we saw the breakthroughs with AlphaGo. All of these things, among a number of others, were really not expected. I remember back in 2014 I looked at some of our results in caption generation, where the computer is trying to come up with a caption for an image, and I was amazed that we were able to do that.

GEOFFREY HINTON: In the past when AI has been overhyped—including backpropagation in the 1980s—people were expecting it to do great things, and it didn’t actually do things as great as they hoped. Today, it’s already done great things, so it can’t possibly all be just hype. It’s how your cell phone recognizes speech, it’s how a computer can recognize things in photos, and it’s how Google does machine translation. Hype means you’re making big promises, and you’re not going to live up to them, but if you’ve already achieved them, that’s clearly not hype. I occasionally see an advertisement on the web that says it’s going to be a 19.9 trillion-dollar industry. That seems like rather a big number, and that might be hype, but the idea that it’s a multi-billion-dollar industry clearly isn’t hype, because multiple people have put billions of dollars into it and it’s worked for them.

pages: 504 words: 89,238

Natural language processing with Python
by Steven Bird , Ewan Klein and Edward Loper
Published 15 Dec 2009

Elles ont été trouvées plus tard. (“They were found later,” referring to the paintings.) In all of these examples, working out the sense of a word, the subject of a verb, and the antecedent of a pronoun are steps in establishing the meaning of a sentence, things we would expect a language understanding system to be able to do. Machine Translation For a long time now, machine translation (MT) has been the holy grail of language understanding, ultimately seeking to provide high-quality, idiomatic translation between any pair of languages. Its roots go back to the early days of the Cold War, when the promise of automatic translation led to substantial government sponsorship, and with it, the genesis of NLP itself.

At the other extreme, NLP involves “understanding” complete human utterances, at least to the extent of being able to give useful responses to them. Technologies based on NLP are becoming increasingly widespread. For example, phones and handheld computers support predictive text and handwriting recognition; web search engines give access to information locked up in unstructured text; machine translation allows us to retrieve texts written in Chinese and read them in Spanish. By providing more natural human-machine interfaces, and more sophisticated access to stored information, language processing has come to play a central role in the multilingual information society. This book provides a highly accessible introduction to the field of NLP.

Computational techniques for tackling this problem include anaphora resolution—identifying what a pronoun or noun phrase refers to—and semantic role labeling—identifying how a noun phrase relates to the verb (as agent, patient, instrument, and so on). Generating Language Output If we can automatically solve such problems of language understanding, we will be able to move on to tasks that involve generating language output, such as question answering and machine translation. In the first case, a machine should be able to answer a user’s questions relating to a collection of texts: (5) a. Text: ... The thieves stole the paintings. They were subsequently sold. ... b. Human: Who or what was sold? c. Machine: The paintings. The machine’s answer demonstrates that it has correctly worked out that they refers to paintings and not to thieves.

pages: 484 words: 120,507

The Last Lingua Franca: English Until the Return of Babel
by Nicholas Ostler
Published 23 Nov 2010

Hakim Elnazarov, ably arranging the conference for the Foundation for Endangered Languages in Tajikistan in 2009, allowed me to see some of the modern reality in the valleys where Sogdian and Tajiki have long been spoken. Other stimuli for useful thinking, largely about the future of English, have come from John Timpane at the Philadelphia Inquirer, Elisabeth Eaves at Forbes.com, Harry Somers and Lluís Màrquez at the European Association for Machine Translation, and Richard Ishida at the Unicode Consortium. Koos du Toit and Hermann Gilliomee at Stellenbosch, Henry Thipa and S E Ngubane at Port Elizabeth, Wynn Chao at SOAS, Salikoko Mufwene and William Wimsatt at the University of Chicago, and John Alcorn at Trinity College Hartford have all generously provided invitations, and discussions with students and faculty, that have had an impact here.

Messages could be dictated in the sender’s language, written down in the Aramaic language and alphabet, and the letter would be presented for oral delivery by local heralds in whatever language the recipient understood. This process was known in Aramaic as paraš ‘declaration’, and in Persian as uzvārišn ‘understanding’. Aramaic was, in the precise sense now used in machine translation systems, an interlingua, a communication code familiar to all scribes and heralds of any language within the empire, but unnecessary for anyone else to recognize.11 Politically, after the fall of the Achaemenids to Alexander in 332 BC, the story is one of three successive empires. The Greek (Seleucid) kingdom founded after Alexander’s death persisted for two centuries; then, in the mid-second century BC, the dynasty of Aršak (known to the west as the Arsacids, or the Parthian Empire) spread from the northeast to overwhelm the Greeks; it successfully, and rather gloriously, resisted the depredations of the Roman Empire over the next four centuries, but in AD 224 succumbed to one of its vassals, Ardashir of Pārsa, the first of the Sassanians.

At present, none of the above aspects is a fully solved problem, even for languages familiar to the engineers doing the research and development; and the greater the complexity required of the system (and the exoticism of the language), the further it is from even being presentable as a usable system. The application of computer technology to machine translation (MT) owed its first surge of development to the competition between the USA and the Soviet Union of the 1950s and ’60s, taking in the early Cold War and the space race. It was then adopted and extended by other significant scientific and technical powers of those days, seen for a time (in the 1980s and early 1990s) as an important enabling technology by governments such as those of Japan and the European Union.

Beautiful Data: The Stories Behind Elegant Data Solutions
by Toby Segaran and Jeff Hammerbacher
Published 1 Jul 2009

Similar techniques can be used to read the language of life: the Human Genome Project used a technique called shotgun sequencing to reassemble shreds of DNA. So-called “next generation sequencing” shifts even more of the burden away from the wet lab to large-scale parallel reassembly algorithms. Machine Translation The Google n-gram corpus was created by researchers in the machine translation group. Translating from a foreign language (f) into English (e) is similar to correcting misspelled words. The best English translation is modeled as: best = argmax_e P(e | f) = argmax_e P(f | e) P(e), where P(e) is the language model for English, which is estimated by the word n-gram data, and P(f | e) is the translation model, which is learned from a bilingual corpus: a corpus where pairs of documents are marked as translations of each other.
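A toy version of that noisy-channel computation, with invented probability tables standing in for models estimated from n-gram counts and a bilingual corpus:

```python
# Invented toy tables: P(e) from a language model, P(f | e) from a
# translation model; real systems estimate these from huge corpora.
p_e = {"the house": 0.6, "house the": 0.1}
p_f_given_e = {("la casa", "the house"): 0.5,
               ("la casa", "house the"): 0.5}

def best_translation(f, candidates):
    # best = argmax_e P(f | e) * P(e)
    return max(candidates,
               key=lambda e: p_f_given_e.get((f, e), 0.0) * p_e.get(e, 0.0))

print(best_translation("la casa", ["the house", "house the"]))  # -> "the house"
```

Both candidates explain the foreign sentence equally well here, so the language model P(e) breaks the tie in favor of fluent English, which is exactly the division of labor the formula encodes.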

Using data collected from the API servers, user profiles, and activity data from the site itself, we were able to construct a model for scoring applications that allowed us to allocate invitations to the applications deemed most useful to users. The Unreasonable Effectiveness of Data In a recent paper, a trio of Google researchers distilled what they have learned from trying to solve some of machine learning’s most difficult challenges. When discussing the problems of speech recognition and machine translation, they state that, “invariably, simple models and a lot of data trump more elaborate models based on less data.” I don’t intend to debate their findings; certainly there are domains where elaborate models are successful. Yet based on their experiences, there does exist a wide class of problems for which more data and simple models are better.

N-gram counts have this property: we can easily harvest a trillion words of naturally occurring text from the Web. On the other hand, labeled spelling corrections do not occur naturally, and thus we found only 40,000 of them. It is not a coincidence that the two most successful applications of natural language—machine translation and speech recognition—enjoy large corpora of examples available in the wild. In contrast, the task of syntactic parsing of sentences remains largely unrealized, in part because there is no large corpus of naturally occurring parsed sentences. It should be mentioned that our probabilistic data-driven methodology—maximize the probability over all candidates—is a special case of the rational data-driven methodology— maximize expected utility over all candidates.

pages: 340 words: 97,723

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity
by Amy Webb
Published 5 Mar 2019

It seemed as though machine learning could provide a solution by way of a translation program. A collaboration between the Institute of Languages and Linguistics at Georgetown University and IBM produced a Russian-English machine translation system prototype that had a limited 250-word vocabulary and specialized only in organic chemistry. The successful public demonstration caused many people to leap to conclusions, and machine translation hit the front page of the New York Times—along with half a dozen other newspapers. Money was flowing—between government agencies, universities, and the big tech companies—and for a time, it didn’t look like anyone was monitoring the tap.

The National Academy of Sciences had established an advisory committee at the request of the National Science Foundation, the Department of Defense, and the Central Intelligence Agency. They found conflicting viewpoints on the viability of AI-powered foreign language translation and ultimately concluded that “there has been no machine translation of general scientific text, and none is in immediate prospect.”29 A subsequent report produced for the British Science Research Council asserted that the core researchers had exaggerated their progress on AI, and it offered a pessimistic prognosis for all of the core research areas in the field.

James Lighthill, a British applied mathematician at Cambridge, was the report’s lead author; his most damning criticism was that those early AI techniques—teaching a computer to play checkers, for example—would never scale up to solve bigger, real-world problems.30 In the wake of the reports, elected officials in the US and UK demanded answers to a new question: Why are we funding the wild ideas of theoretical scientists? The US government, including DARPA, pulled funding for machine translation projects. Companies shifted their priorities away from time-intensive basic research on general AI to more immediate programs that could solve problems. If the early years following the Dartmouth workshop were characterized by great expectations and optimism, the decades after those damning reports became known as the AI Winter.

Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps
by Valliappa Lakshmanan , Sara Robinson and Michael Munn
Published 31 Oct 2020

These offline models are small, around 40 to 50 megabytes, and come close in accuracy to the more complex online versions. Figure 5-12 shows a quality comparison of on-device and online translation models. Figure 5-12. A comparison of on-device phrase-based and (newer) neural-machine translation models and online neural machine translation (source: The Keyword). Another example of a standalone single-phase model is Google Bolo, a speech-based language learning app for children. The app works entirely offline and was developed with the intention of helping populations where reliable internet access is not always available.

You can use transfer learning for many prediction tasks in addition to image classification, so long as there is an existing pre-trained model that matches the task you’d like to perform on your dataset. For example, transfer learning is also frequently applied in image object detection, image style transfer, image generation, text classification, machine translation, and more. Note Transfer learning works because it lets us stand on the shoulders of giants, utilizing models that have already been trained on extremely large, labeled datasets. We’re able to use transfer learning thanks to years of research and work others have put into creating these datasets for us, which has advanced the state-of-the-art in transfer learning.

Pre-trained embeddings While we can load a pre-trained model on our own, we can also implement transfer learning by making use of the many pre-trained models available in TF Hub, a library of pre-trained models (called modules). These modules span a variety of data domains and use cases, including classification, object detection, machine translation, and more. In TensorFlow, you can load these modules as a layer, then add your own classification layer on top. To see how TF Hub works, let’s build a model that classifies movie reviews as either positive or negative. First, we’ll load a pre-trained embedding model trained on a large corpus of news articles.
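A minimal sketch of that pattern in TensorFlow, assuming one of the publicly listed TF Hub text-embedding modules (the gnews-swivel URL below is one example; any compatible module slots in the same way):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained text embedding trained on a news corpus, loaded as a layer.
embed = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1",
    input_shape=[], dtype=tf.string, trainable=True)

# Our own classification head on top: positive vs. negative review.
model = tf.keras.Sequential([
    embed,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

From here, calling model.fit on labeled review strings trains the small dense layers and, because trainable=True, fine-tunes the pre-trained embedding as well.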

pages: 301 words: 85,263

New Dark Age: Technology and the End of the Future
by James Bridle
Published 18 Jun 2018

Frederick Jelinek, the researcher who led IBM’s language efforts, famously stated that ‘every time I fire a linguist, the performance of the speech recogniser goes up’.21 The role of statistical inference was to remove understanding from the equation and replace it with data-driven correlation. In one sense, machine translation approaches the ideal described by Benjamin in his 1921 essay The Task of the Translator: that the most faithful translation ignores its original context to allow a deeper meaning to shine through. Benjamin insisted on the primacy of the word over the sentence, of the manner of meaning over its matter: ‘A real translation is transparent,’ he wrote.

Banks, Excession, London: Orbit Books, 1996. 28. Sanjeev Arora, Yuanzhi Li, Yingyu Liang, et al., ‘RAND-WALK: A Latent Variable Model Approach to Word Embeddings’, ARXIV, February 12, 2015, arxiv.org. 29. Alec Radford, Luke Metz, and Soumith Chintala, ‘Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks’, ARXIV, November 19, 2015, arxiv.org. 30. Robert Elliott Smith, ‘It’s Official: AIs are now re-writing history’, blog post, October 2014, robertelliottsmith.com. 31. Stephen Levy, ‘Inside Deep Dreams: How Google Made Its Computers Go Crazy’, Wired, November 12, 2015, wired.com. 32. Liat Clark, ‘Google’s Artificial Brain Learns to Find Cat Videos’, Wired, June 26, 2012, wired.com. 33. Melvin Johnson, Mike Schuster, Quoc V. Le, et al., ‘Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation’, ARXIV, November 14, 2016, arxiv.org. 34. Martín Abadi and David G. Andersen, ‘Learning to Protect Communications with Adversarial Neural Cryptography’, ARXIV, 2016, arxiv.org. 35. Isaac Asimov, I, Robot, New York: Doubleday, 1950. 36. Chris Baraniuk, ‘The cyborg chess players that can’t be beaten’, BBC Future, December 4, 2015, bbc.com. 7 Complicity 1. Nick Hopkins and Sandra Laville, ‘London 2012: MI5 expects wave of terrorism warnings before Olympics’, Guardian, June 2012, theguardian.com. 2. Jerome Taylor, ‘Drones to patrol the skies above Olympic Stadium’, Independent, November 25, 2011, independent.co.uk. 3. ‘£13,000 Merseyside Police drone lost as it crashes into River Mersey’, Liverpool Echo, October 31, 2011, liverpoolecho.co.uk. 4. FOI Request, ‘Use of UAVs by the MPS’, March 19, 2013, available at whatdotheyknow.com. 5. David Robarge, ‘The Glomar Explorer in Film and Print’, Studies in Intelligence 56:1 (March 2012), 28–9. 6. Quoted in the majority opinion penned by Circuit Judge J.

(von Neumann), 28 Capital in the Twenty-First Century (Piketty), 112 carbon dioxide, 75 Catch-22 (Heller), 187–8 ‘cautious regulator’ theory, 94–5 CCTV, 181–2 centaur chess, 159 Chanarin, Oliver, 143 chaotic storage, 115–6 Chargaff, Erwin, 96–7 Charlie Hebdo attacks, 212 chemtrails, 192–5, 206–8, 214 children’s television, 216–7 children’s YouTube, 219, 238 Cirrus homogenitus, 196, 197 Civil Aviation Authority (CAA), 161–2 clear-air turbulence, 68 climate carbon dioxide, 75 global warming, 73, 193, 214 permafrost, 47–9, 56–7 seed banks, 52–6 turbulence, 65–9 climate change patterns disrupted by, 72–3 resilience against, 59 climate crisis, 56 Clinton, Bill, 243 Clinton, Hillary, 207, 232–3 cloning, 86–8 closed-circuit television, 181–2 cloud(s), 6–7, 8, 17, 195–6 ‘The Cloud Begins with Coal-Big Data, Big Networks, Big Infrastructure, and Big Power’ report, 64 ‘The Cloud of Unknowing,’ 9 cloudy thinking, 9 coal deposits, discovery of, 52 coastal installations, 62 Cocks, Clifford, 167 code/spaces, 37–9 code words, 175 cognition about, 135–6 artificial intelligence (AI), 139 facial recognition, 141 image recognition, 139–40 machine translation, 147 ‘predictive policing’ systems, 144–6 collectivism, totalitarianism vs., 139 Commission on Government Secrecy, 169 complex systems about, 2–3 aggregation of, 40 high-frequency trading, 14, 106–7, 108, 122, 124 complicity computational logic, 184–5 Freedom of Information, 161–2, 165, 192 global mass surveillance, 179–80 Glomar response, 165, 186 public key cryptography, 167–8 computation calculating machines, 27 Electronic Numerical Integrator and Computer (ENIAC), 27, 27–30, 33 flight trackers, 35–6, 36 IBM Selective Sequence Electronic Calculator (SSEC), 30, 30–2, 31, 146 opaqueness of, 40 computational logic, 184–5 computational thinking about, 4 evolution of, 248 importance of, 44–5 Concorde, 69, 70, 71 conspiracy chemtrails, 192–5, 206–8, 214 conspiracy theories, 195, 198–9, 205 contrails, 196–8, 197, 214 global warming, 73, 193, 214 9/11 terrorist attacks, 203–4, 206 ‘Conspiracy as Governance’ (Assange), 183 contrails, 196–8, 197, 214 Copenhagen Climate Change Conference (COP15), 199 Cowen, Deborah, 132 Credit Suisse, 109 cryptocurrency, 63 Cumulus homogenitus, 195–6 cyborg chess, 159 D Dabiq (online magazine), 212 Dallaire, Roméo, 243 darkness, 11–2 “Darkness” (poem), 201–2 dark pools, 108–9 DARPA (Defense Advanced Research Projects Agency), 33 Darwin, Charles, 78 data abundance of, 83–4, 131 big, 84 importance of, 245–6 realistic accounting of, 247 thirst for, 246 data dredging, 90–1 Debord, Guy, 103 DEC (Digital Equipment Corporation), 33 Decyben SAS, 110 Deep Blue, 148–9, 157–60 DeepDream, 153, 154–5 DeepFace software, 140 defeat devices, 120 Defense Advanced Research Projects Agency (DARPA), 33 de Solla Price, Derek, 91–2, 93 Diffie-Hellman key exchange, 167 digital culture, 64–5 Digital Equipment Corporation (DEC), 33 digital networks, mapping, 104 digitisation, 108 ‘Discussion of the Possibility of Weather Control’ lecture, 26 diurnal temperature range (DTR), 204 DNA sequencing, 93 D-Notices, 179 domain name system, 79 doomsday vault, 52–3 Dow Jones Industrial Average, 121–2 drones, 161–2 drug discovery/research, 94–5 DTR (diurnal temperature range), 204 Duffy, Carol Ann, 201 Dunne, Carey, 194–5 E Elberling, Bo, 57 electromagnetic networks, 104 Electronic Computer Project, 27 Electronic Frontier Foundation, 177 Electronic Numerical Integrator and Computer (ENIAC), 27, 27–30, 33 Elements of Chemistry (Lavoisier), 208–9 Elkins, Caroline, 
183–4 Ellis, James, 167 encoded biases, 142 ‘End of Theory’ (Anderson), 83–4, 146 Engelbart, Douglas, 79 ENIAC (Electronic Numerical Integrator and Computer), 27, 27–30, 33 Enlightenment, 10 Environmental Protection Agency (EPA), 119–20 EPA (Environmental Protection Agency), 119–20 Epagogix, 130 epidemic type aftershock sequence (ETAS) model, 145–6 Epimetheus, 132–4 Equinix LD4, 104 Eroom’s law, 86, 93–6 ETAS (epidemic type aftershock sequence) model, 145–6 Euronext Data Center, 104, 105, 106 Evangelismos Hospital, 130–1 evolution, theory of, 78 exploitation, 229–30 Eyjafjallajökull, eruption of, 200–1, 202 F Facebook, 39–40, 156–7 facial recognition, 141 Fairchild Semiconductor, 80 Farage, Nigel, 194 Fat Man bomb, 25 Fermi, Enrico, 250 Ferranti Mark I, 78 fiat anima, 19–20 fiat lux, 19–20 Finger Family, 221–2, 224, 227 ‘Five Eyes,’ 174 Flash Boys (Lewis), 111–2 flash crash, 121–2, 130–1 FlightRadar24, 36, 189, 191 flight trackers, 35–6, 36 ‘Fourteen Eyes,’ 174 Fowler, R.H., 45 Frankenstein (Shelley), 201 fraud, 86–8, 91 Freedom of Information, 161–2, 165, 192 Friends’ Ambulance Unit, 20 Fuller, Buckminster, 71 Futurama exhibit, 30–1 ‘Future Uses of High Speed Computing in Meteorology’ lecture, 26 G Gail, William B., 72–3 Galton, Francis, 140 game developers, 130 Gates’s law, 83 GCHQ (Government Communications Headquarters), 167, 174, 176–9, 189 genocide, 243 ghost cars (Uber), 118–9 G-INFO, 190 global mass surveillance, 179–80 Global Positioning System (GPS), 36–7, 42–3 Global Seed Vault, 54 global warming, 73, 193, 214 Glomar response, 165, 186 Godard, Jean-Luc, 143 Google, 84, 139, 230, 242 Google Alerts, 190 Google Brain project, 139, 148, 149, 156 Google Earth, 35–6 Google Home, 128–9 Google Maps, 177 Google Translate, 147–8, 156 Government Communications Headquarters (GCHQ), 167, 174, 176–9, 189 GPS (Global Positioning System), 36–7, 42–3 Graves, Robert, 159 Gravity’s Rainbow (Pynchon), 128 gray zone, 212–4 Great Nōbi Earthquake, 145 Greenland, 57–8 Green Revolution, 53 Greyball programme, 119, 120 guardianship, 251–2 H Hankins, Thomas, 102 Haraway, Donna, 12 Harvard Mark I machine, 30 Hayek, Friedrich, 156–7 The Road to Serfdom, 139 The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology, 138–9 HealthyFoodHouse.com (website), 231–2 Heller, Joseph Catch-22, 187–8 Hermes, 134 Hersh, Seymour, 164 Hewlett-Packard, 143 hidden technological processes, 120 high-frequency trading, 14, 106–7, 108, 122, 124 high-throughput screening (HTS), 95–6 Hillingdon Hospital, 110–1, 111 Hippo programme, 32 Hofstadter, Douglas, 205–6 Hola Massacre, 170 homogenitus, 195, 196 Horn, Roni, 50, 201 How-Old.net facial recognition programme, 141 ‘How the World Wide Web Just Happened’ lecture, 78 HTS (high-throughput screening), 95–6 Hughes, Howard, 163 Hughes Glomar Explorer, 163–5 human genome project, 93 Human Interference Task Force, 251 human violence, 202 Humby, Clive, 245, 246 Hwang Woo-suk, 86–8 hyperobjects, 73, 75, 76, 194 hypertext, 79 I IBM Selective Sequence Electronic Calculator (SSEC), 30, 30–2, 31, 146 ICAO (International Civil Aviation Organisation), 68 ICARDA (International Center for Agricultural Research in the Dry Areas), 53–4, 55 ICT, 60–2 image recognition, 139–40 Infinite Fun Space, 149–50, 156 information networks, 62 information superhighway, 10 Infowars (Jones), 207 In Place of Fear (Bevan), 110 Institute of the Aeronautical Sciences, 26 integrated circuits, 79, 80 Intel, 80 International Center for Agricultural Research in the Dry Areas (ICARDA), 53–4, 55 
International Civil Aviation Organisation (ICAO), 68 International Cloud Atlas, 195 Internet Research Agency, 235, 237 Inuit Knowledge and Climate Change, 199 The Invisibles (Morrison), 196–7 Isaksen, Ketil, 54 ISIL, 212–3 J Jameson, Fredric, 205 Jelinek, Frederick, 146–7 Jones, Alex Infowars, 207 Joshi, Manoj, 68–9 journalism, automated, 123–4 just-in-time manufacturing, 117 K K-129, 162–3 Karma Police operation, 175 Kasparov, Garry, 148–9, 157–8 Keeling Curve, 74, 74 Kennedy, John F., 169–70 Kinder Eggs, 215–6 Kiva robots, 114 Klein, Mark, 176–7 Kodak, 143 Krakatoa, eruption of, 202 Kunuk, Zacharias, 199, 200 Kuznets curve, 113 L Large Hadron Collider, 93 Lavoisier, Antoine, 78 Elements of Chemistry, 208–9 Lawson, Robert, 175–6 LD4, 104, 105 Leave Campaign, 194 Leibniz, Gottfried Wilhelm, 78 Levy, David, 158, 159 Lewis, Michael Flash Boys, 111–2 LifeSphere, 125 literacy in systems, 3–4 Lockheed Ocean Systems, 163 Logan, Walt (pseudonym), 165 Lombroso, Cesare, 140 London Stock Exchange, 110–1 Lovecraft, H.P., 11, 249 ‘low-hanging fruit,’ 93–4 M Macedonia, 233–4 machine learning algorithms, 222 machine thought, 146 machine translation, 147 magnetism, 77 Malaysian Airlines, 66 manganese noodles, 163–4 Manhattan Project, 24–30, 248 Mara, Jane Muthoni, 170 Mark I Perceptron, 136–8, 137 Maslow’s hierarchy of needs, 128–9 Matthews, James Tilly, 208–10, 209 Mauro, Ian, 199 McCarthy, Joe, 205 McGovern, Thomas, 57–8 McKay Brothers, 107, 110 memex, 24 Mercer, Robert, 236 Merkel, Angela, 174 metalanguage, 3, 5 middens, 56 migrated archive, 170–1 Minds, 150 miniaturisation principle, 81 Mirai, 129 mobile phones, 126 The Modern Prometheus (Shelley), 201 monoculture, 55–6 Moore, Gordon, 80, 80, 83 Moore’s law, 80–3, 92–4 Mordvintsev, Alexander, 154 Morgellons, 211, 214 Morrison, Grant The Invisibles, 196–7 Morton, Timothy, 73, 194 Mount Tambora, eruption of, 201 Moynihan, Daniel Patrick, 169 Munch, Edvard The Scream, 202 Mutua, Ndiku, 170 N NarusInsight, 177 NASA Ames Advanced Concepts Flight Simulator, 42 Natanz Nuclear Facility, 129 National Centre for Atmospheric Science, 68–9 National Geospatial-Intelligence Agency, 243 National Health Service (NHS), 110 National Mining Association, 64 National Reconnaissance Office, 168, 243 National Security Agency (NSA), 167, 174, 177–8, 183, 242–3, 249–50 National Security Strategy, 59 natural gas, 48 neoliberalism, 138–9 network, 5, 9 networks, 249 Newton, Isaac, 78 NewYorkTimesPolitics.com, 221 New York World’s Fair, 30–1 NHS (National Health Service), 110 9/11 terrorist attacks, 203–4, 206 ‘Nine Eyes,’ 174 1984 (Orwell), 242 NORAD (North American Air Defense Command), 33 North American Air Defense Command (NORAD), 33 ‘The Nor’ project, 104 Not Aviation, 190–1 NSA (National Security Agency), 167, 174, 177–8, 183, 242–3, 249–50 nuclear fusion, 97–8, 100 nuclear warfare, 28 Numerical Prediction (Richardson), 45 Nyingi, Wambugu Wa, 170 Nzili, Paulo Muoka, 170 O Obama, Barack, 180, 206, 231 Official Secrets Act, 189 Omori, Fusakichi, 145 Omori’s Law, 145 Operation Castle, 97 Operation Legacy, 171–2 Optic Nerve programme, 174 Optometrist Algorithm, 99–101, 160 O’Reilly, James, 185–6 Orwell, George 1984, 242 ‘Outline of Weather Proposal’ (Zworykin), 25–6 P Paglen, Trevor, 144 ‘paranoid style,’ 205–6 Patriot Act, 178 Penrose, Roger, 20 Perceptron, 136–8, 137 permafrost, 47–9, 56–7 p-hacking, 89–91 Phillippi, Harriet Ann, 165 photophone, 19–20 Pichai, Sundar, 139 Piketty, Thomas Capital in the Twenty-First Century, 112 Pincher, Chapman, 175–6 Pitt, William, 208 
Plague-Cloud, 195, 202 Poitras, Laura, 175 Polaroid, 143 ‘predictive policing’ systems, 144–6 PredPol software, 144, 146 Priestley, Joseph, 78, 208, 209 prion diseases, 50, 50–1 PRISM operation, 173 product spam, 125–6 Project Echelon, 190 Prometheus, 132–4, 198 psychogeography, 103 public key cryptography, 167–8 pure language, 156 Putin, Vladimir, 235 Pynchon, Thomas Gravity’s Rainbow, 128 Q Qajaa, 56, 57 quality control failure of, 92–3 in science, 91 Quidsi, 113–4 R racial profiling, 143–4 racism, 143–4 ‘radiation cats,’ 251 raw computing, 82–3 Reagan, Ronald, 36–7 Reed, Harry, 29 refractive index of the atmosphere, 62 Regin malware, 175 replicability, 88–9 Reproducibility Project, 89 resistance, modes of, 120 Reuter, Paul, 107 Review Group on Intelligence and Communications Technologies, 181 Richardson, Lewis Fry, 20–1, 29, 68 Numerical Prediction, 45 Weather Prediction by Numerical Process, 21–3 Richardson number, 68 The Road to Serfdom (Hayek), 139 Robinson, Kim Stanley Aurora, 128 robots, workers vs., 116 ‘Rogeting,’ 88 Romney, Mitt, 206–7 Rosenblatt, Frank, 137 Roy, Arundhati, 250 Royal Aircraft Establishment, 188–9 Ruskin, John, 17–20, 195, 202 Rwanda, 243, 244, 245 S Sabetta, 48 SABRE (Semi-Automated Business Research Environment), 35, 38 SAGE (Semi-Automatic Ground Environment), 33, 34, 35 Samsung, 127 Scheele, Carl Wilhelm, 78 Schmidt, Eric, 241–5 The Scream (Munch), 202 Sedol, Lee, 149, 157–8 seed banks, 52–6 Seed Vault, 55 seismic sensors, 48 self-excitation, 145 ‘semantic analyser,’ 177 Semi-Automated Business Research Environment (SABRE), 35, 38 Semi-Automatic Ground Environment (SAGE), 33, 34, 35 semiconductors, 82 The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (Hayek), 138–9 Shelley, Mary Frankenstein, 201 The Modern Prometheus, 201 SIGINT Seniors Europe, 174 simulation, conflating approximation with, 34–5 Singapore Exchange, 122–3 smart products, 127–8, 131 Smith, Robert Elliott, 152 smoking gun, 183–4, 186 Snowden, Edward, 173–5, 178 software about, 82–3 AlphaGo, 149, 156–8 Assistant, 152 AutoAwesome, 152 DeepFace, 140 Greyball programme, 119, 120 Hippo programme, 32 How-Old.net facial recognition programme, 141 Optic Nerve programme, 174 PredPol, 144, 146 Translate, 146 Solnit, Rebecca, 11–2 solutionism, 4 space telescopes, 168–9 speed of light, 107 Spread Networks, 107 SSEC (IBM Selective Sequence Electronic Calculator), 30, 30–2, 31, 146 Stapel, Diederik, 87–8 Stapledon, Olaf, 20 steam engines, 77 Stellar Wind, 176 Stewart, Elizabeth ‘Betsy,’ 30–1, 31 Steyerl, Hito, 126 stock exchanges, 108 ‘The Storm-Cloud of the Nineteenth Century’ lecture series, 17–9 Stratus homogenitus, 195–6 studios, 130 Stuxnet, 129–30 surveillance about, 243–4 complicity in, 185 computational excesses of, 180–1 devices for, 104 Svalbard archipelago, 51–2, 54 Svalbard Global Seed Vault, 52–3 Svalbard Treaty (1920), 52 Swiss National Bank, 123 Syed, Omar, 158–9 systemic literacy, 5–6 T Taimyr Peninsula, 47–8 Targeted Individuals, 210–1 The Task of the Translator (Benjamin), 147, 155–6 TCP (Transmission Control Protocol), 79 technology acceleration of, 2 complex, 2–3 opacity of, 119 Teletubbies, 217 television, children’s, 216–7 Tesco Clubcard, 245 thalidomide, 95 Thatcher, Margaret, 177 theory of evolution, 78 thermal power plants, 196 Three Guineas (Woolf), 12 Three Laws of Robotics (Asimov), 157 Tillmans, Wolfgang, 71 tools, 13–4 To Photograph the Details of a Dark Horse in Low Light exhibition, 143 totalitarianism, collectivism vs., 139 Toy Freaks, 225–6 
transistors, 79, 80 Translate software, 146 translation algorithms, 84 Transmission Control Protocol (TCP), 79 Tri Alpha Energy, 98–101 Trinity test, 25 trolling, 231 Trump, Donald, 169–70, 194–5, 206, 207, 236 trust, science and, 91 trusted source, 220 Tuktoyaktuk Peninsula, 49 turbulence, 65–9 tyranny of techne, 132 U Uber, 117–9, 127 UberEats app, 120–1 unboxing videos, 216, 219 United Airlines, 66–7 Uniting and Strengthening America by Fulfilling Rights and Ending Eavesdropping, Dragnet-collection and Online Monitoring Act (USA FREEDOM Act), 178 USA FREEDOM Act (2015), 178 US Drug Efficacy Amendment (1962), 95 V van Helden, Albert, 102 Veles, objectification of, 235 Verizon, 173 VHF omnidirectional radio range (VOR) installations, 104 Vigilant Telecom, 110–1 Volkswagen, 119–20 von Neumann, John about, 25 ‘Can We Survive Technology?

pages: 666 words: 181,495

In the Plex: How Google Thinks, Works, and Shapes Our Lives
by Steven Levy
Published 12 Apr 2011

By the time Brin expressed his frustration with the email, Google had already identified a hiring target who would lead the company’s translation efforts—in a manner that solidified the artificial intelligence focus that Norvig saw early on at Google. Franz Och had focused on machine translation while earning his doctorate in computer science from the RWTH Aachen University in his native Germany and was continuing his work at the University of Southern California. After he gave a talk at Google in 2003, the company made him an offer. Och’s biggest worry was that Google was primarily a search company and its interest in machine translation was merely a flirtation. A conversation with Larry Page dissolved those worries. Google, Page told him, was committed to organizing all the information in the world, and translation was a necessary component.

“Now we have 506 language pairs, so it turned out it was worthwhile.” Earlier efforts at machine translation usually began with human experts who knew both languages that would be involved in the transformation. They would incorporate the rules and structure of each language so they could break down the original input and know how to recast it in the second tongue. “That’s very time-consuming and very hard, because natural language is so complex and diverse and there are so many nuances to it,” says Och. But in the late 1980s some IBM computer scientists devised a new approach, called statistical machine translation, which Och embraced. “The basic idea is to learn from data,” he explains.

One of the things high on Google’s to-do list was translation, rendering the billions of words appearing online into the native language of any user in the world. By 2001, Google.com was already available in twenty-six languages. Page and Brin believed that artificial barriers such as language should not stand in the way of people’s access to information. Their thoughts were along the lines of the pioneer of machine translation, Warren Weaver, who said, “When I look at an article in Russian, I say, ‘This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.’” Google, in their minds, would decode every language on the planet. There had been previous attempts at online translation, notably a service dubbed Babel Fish that first appeared in 1995.

pages: 584 words: 187,436

More Money Than God: Hedge Funds and the Making of a New Elite
by Sebastian Mallaby
Published 9 Jun 2010

Peter Brown, interview with the author, July 28, 2008. 24. An account of the reaction to the Brown-Mercer work is given in Andy Way, “A Critique of Statistical Machine Translation.” In W. Daelemans and V. Hoste (eds.), Journal of Translation and Interpreting Studies: Special Issue on Evaluation of Translation Technology, Linguistica Antverpiensia, 2009, pp. 17–41. 25. See, for example, Pius Ten Hacken, “Has There Been a Revolution in Machine Translation?” Machine Translation 16, no. 1 (March 2001): pp. 1–19. This source erroneously attributes the quote on firing linguists to Peter Brown. 26. The initial versions of the IBM program included no linguistic rules at all.

To the code crackers at the Institute for Defense Analyses, this method would not have seemed surprising.22 Indeed, Brown and Mercer used a tool called the “expectations maximization algorithm,” and they cited its inventor, Leonard Baum—this was the same Leonard Baum who had worked for IDA and then later for Simons.23 But although the idea of “statistical machine translation” seemed natural to the code breakers, it was greeted with outrage by traditional translation programmers. A reviewer of the Brown-Mercer paper scolded that “the crude force of computers is not science,” and when the paper was presented at a meeting of translation experts, a listener recalled, “We were all flabbergasted….
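The “expectations maximization” tool the passage mentions is easy to see in miniature. Below is a hedged toy version of EM word alignment in the spirit of IBM Model 1, with an invented three-sentence corpus; it illustrates the idea, not Brown and Mercer’s actual system:

```python
from collections import defaultdict

# Toy parallel corpus of (foreign, English) sentence pairs.
corpus = [
    ("la maison".split(), "the house".split()),
    ("la fleur".split(), "the flower".split()),
    ("une maison".split(), "a house".split()),
]

t = defaultdict(lambda: 1.0)  # translation probabilities t(f|e), flat start

for _ in range(20):  # EM iterations
    count = defaultdict(float)  # expected counts c(f, e)
    total = defaultdict(float)  # expected counts c(e)
    for f_sent, e_sent in corpus:
        for f in f_sent:
            # E-step: split one count for f across the English words,
            # in proportion to the current t(f|e).
            z = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                c = t[(f, e)] / z
                count[(f, e)] += c
                total[e] += c
    # M-step: renormalize the expected counts into new probabilities.
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

# The alignments sharpen on their own: t("maison"|"house") climbs toward 1.
print(round(t[("maison", "house")], 3))
```

No sentence pair says which word maps to which; the co-occurrence statistics alone pull the probabilities into place, which is exactly the property the code breakers would have recognized.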

Once the IBM team’s program had figured out the sample passages from the Canadian Hansard, it could translate other material too: If you presented it with an article in a French newspaper, it would zip through its database of parliamentary speeches, matching the article’s phrases with the decoded material. The results outclassed competing translation systems by a wide margin, and within a few years the advent of statistical machine translation was celebrated among computer scientists as something of an intellectual revolution.25 Canadian political rhetoric had proved more useful than suspected hitherto. And Brown and Mercer had reminded the world of a lesson about artificial intelligence. The lesson concerned the difference between human beings and computers.

Noam Chomsky: A Life of Dissent
by Robert F. Barsky
Published 2 Feb 1997

He tried to use the features of linguistic analysis for discourse analysis" (qtd. in R. A. Harris 83). From this project discourse analysis was born. Chomsky was in search of transformations "to model the linguistic knowledge in a native speaker's head," while Harris was interested in "such practical purposes as machine translation and automated information retrieval" (R. A. Harris 84). Their linguistic interests were irrevocably diverging. Chomsky's last communications with Harris were in the early 1960s, "when [Harris] asked me to [approach] contacts at the [National Science Foundation] for a research contract for him, which I did.

Chomsky, in his own words, "had no identifiable field or credentials in anything" (13 Dec. 1994), but MIT, "a scientific university which didn't care much about credentials," was willing to overlook his lack of certifiable "professional competence" (23 June 1994). Chomsky was made an assistant professor and assigned, ironically, to a machine translation project of the type he had often criticized. The project was directed by Victor Yngve and was being conducted at the MIT Research Laboratory of Electronics, which was subsidized by the U.S. military. While he was being interviewed by laboratory director Jerome Wiesner for the position, Chomsky stated that the project had "no intellectual interest and was also pointless."

The institute was a comfortable place for the twenty-seven-year-old Chomsky: “I also began to teach undergraduate philosophy courses there, and later was able to help establish what became a very distinguished philosophy department. The Massachusetts Institute of Technology has always been a pretty free and open place, open to experimentation and without rigid requirements. It was just perfect for someone of my idiosyncratic interests and work” (27 June 1995). This was a fruitful time for Chomsky. He writes that (machine translation project aside) “the Research Laboratory of Electronics ... provided a most stimulating interdisciplinary environment for research of the sort that I wanted to pursue” (Logical Structure 2). Here, his Aspects of the Theory of Syntax was hatched. In the acknowledgments of that work, he describes the facility as “an interdepartmental laboratory in which faculty members and graduate students from numerous academic departments conduct research.”

pages: 420 words: 100,811

We Are Data: Algorithms and the Making of Our Digital Selves
by John Cheney-Lippold
Published 1 May 2017

Translation and the Meaning of Everything (New York: Farrar, Straus and Giroux, 2012). 70. Franz Josef Och, “Statistical Machine Translation: Foundations and Recent Advances,” Google, September 12, 2005, www.mt-archive.info; Christian Boitet, Hervé Blanchon, Mark Seligman, and Valérie Bellynck, “Evolution of MT with the Web,” Proceedings of the Conference “Machine Translation 25 Years On,” 2009, 1–13. 71. Franz Josef Och, “Selection and Use of Nonstatistical Translation Components in a Statistical Machine Translation Framework,” Google, 2014, www.google.com. 72. Additionally, some languages also neocolonially move through their “closest” language to get to “English.”

These statistical associations, calculated at immense speed, close the once-vast foreign-language gap.

[Figure 3.3: This graph shows how Google Translate connects different words, in this case the numbers one through five in English and Spanish. Source: Tomas Mikolov, Quoc Le, and Ilya Sutskever, “Exploiting Similarities among Languages for Machine Translation,” technical report, arXiv:1309.4168, 2.]

But the mathematical relationship between “manos” and “nigga” didn’t come from UN or EU documents. There is no pre-2014 Googleable document that publicly connects these two words together. If linguistic translation becomes, like visual designer Doug Bowman’s critique of Google from chapter 1, exclusive to the logic of engineering—in which every word or phrase is turned into a contextless logic “problem”—then the translation of “contigo, manos e pais” into “you, niggas and parents” is treated the same as the translation of “two” into “dos” (figure 3.3).
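The cited Mikolov, Le, and Sutskever report learns a linear map between the two languages’ embedding spaces from a small seed dictionary, then translates by nearest neighbor. A toy sketch of that idea (the two-dimensional vectors are invented stand-ins for embeddings learned from monolingual corpora):

```python
import numpy as np

# Invented 2-d "embeddings"; real ones are learned from monolingual text.
en = {"one": [0.9, 0.1], "two": [0.8, 0.3], "three": [0.7, 0.5]}
es = {"uno": [0.2, 0.9], "dos": [0.3, 0.8], "tres": [0.5, 0.7]}
seed_pairs = [("one", "uno"), ("two", "dos"), ("three", "tres")]

X = np.array([en[e] for e, _ in seed_pairs])  # source-side vectors
Z = np.array([es[s] for _, s in seed_pairs])  # target-side vectors

# Fit a linear map W minimizing ||XW - Z||^2 (ordinary least squares).
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

def translate(word):
    v = np.array(en[word]) @ W  # project into the Spanish space
    # The nearest Spanish vector is the proposed translation.
    return min(es, key=lambda s: np.linalg.norm(np.array(es[s]) - v))

print(translate("two"))  # -> "dos" on this toy data
```

Nothing in the map understands either language; it only preserves geometric relationships between word vectors, which is what makes the statistical associations the passage describes both powerful and context-blind.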

pages: 477 words: 75,408

The Economic Singularity: Artificial Intelligence and the Death of Capitalism
by Calum Chace
Published 17 Jul 2016

The third argument is the rapid fall in computer prices, which Hanson says has yet to cause any detectable unemployment. “And then there is Ford's fourth reason: all the impressive computing demos he has seen lately.” Hanson is referring, of course, to Google's self-driving cars, real-time machine translation systems, IBM's Watson and so on. Hanson is less impressed by these demonstrations of rapidly improving AI: “We do expect automation to take most jobs eventually, so we should work to better track the situation. But for now, Ford's reading of the omens seems to me little better than fortune telling with entrails or tarot cards.”

We have seen before with the relative decline of seemingly invincible goliaths like IBM and Microsoft how fierce and fast-moving the competition is within the technology industry. This is one of the dynamics which is pushing AI forward so fast and so unstoppably. Image and speech recognition Deep learning has accelerated progress at tasks like image recognition, facial recognition, natural speech recognition and machine translation faster than anyone expected. In 2012, Google announced that an assembly of 16,000 processors looking at 10 million YouTube videos had identified – without being prompted – a particular class of objects. We call them cats.[lxxxiii] Two years later, Microsoft researchers announced that their system – called Adam – could distinguish between the two breeds of corgi dogs.

One application of this is to help blind people know what they are “looking” at.[xc] You can download a similar app called Aipoly for free at iTunes.[xci] Speech recognition systems that exceed human performance will be available in your smartphone soon.[xcii] Microsoft-owned Skype introduced real-time machine translation in March 2014: it is not yet perfect, but it is improving all the time. Microsoft CEO Satya Nadella revealed an intriguing discovery which he called transfer learning: “If you teach it English, it learns English,” he said. “Then you teach it Mandarin: it learns Mandarin, but it also becomes better at English, and quite frankly none of us know exactly why.”

pages: 161 words: 39,526

Applied Artificial Intelligence: A Handbook for Business Leaders
by Mariya Yao , Adelyn Zhou and Marlene Jia
Published 1 Jun 2018

Deep learning, in combination with reinforcement learning, enabled Google DeepMind’s AlphaGo to defeat human world champions of Go in 2016, a feat that many experts had considered to be computationally impossible. Much media attention has been focused on deep learning, and an increasing number of sophisticated technology companies have successfully implemented deep learning for enterprise-scale products. Google replaced previous statistical methods for machine translation with neural networks to achieve superior performance.(4) Microsoft announced in 2017 that they had achieved human parity in conversational speech recognition.(5) Promising computer vision startups like Clarifai employ deep learning to achieve state-of-the-art results in recognizing objects in images and video for Fortune 500 brands.(6) While deep learning models outperform older machine learning approaches to many problems, they are more difficult to develop because they require robust training of data sets and specialized expertise in optimization techniques.

Retrieved from https://goo.gl/RtDL5 (2) http://fivethirtyeight.com/features/the-real-story-of-2016/ (3) Symbolic Artificial Intelligence. (n.d.). In Wikipedia. Retrieved November 16, 2017, from http://en.wikipedia.org/wiki/Symbolic_artificial_intelligence (4) Le, Q.V., & Schuster, M. (2016, September 27). A Neural Network for Machine Translation, at Production Scale [blog post]. Retrieved from: https://research.googleblog.com/2016/09/a-neural-network-for-machine.html (5) Huang, X.D. (2017, August 20). Microsoft researchers achieve new conversational speech recognition milestone [blog post]. Retrieved from http://www.microsoft.com/en-us/research/blog/microsoft-researchers-achieve-new-conversational-speech-recognition-milestone/ (6) Customer Case Studies.

pages: 402 words: 110,972

Nerds on Wall Street: Math, Machines and Wired Markets
by David J. Leinweber
Published 31 Dec 2008

For the technically ambitious reader, Lucene (http://lucene.apache.org/), Lingpipe (http://alias-i.com/lingpipe/), and Lemur (www.lemurproject.org/) are popular open source language and information retrieval tools. 29. Anthony Oettinger, a pioneer in machine translation at Harvard going back to the 1950s, told a story of an early English-Russian-English system sponsored by U.S. intelligence agencies. The English “The spirit is willing but the flesh is weak” went in, was translated to Russian, which was then sent in again to be translated back into English. The result: “The vodka is ready but the meat is rotten.” Tony got out of the machine translation business. 30. This modern translator is found at www.systransoft.com. I tried Oettinger’s example again, 50 years later.

Some categories of news are much easier to interpret than others; think of earnings revisions compared to membership changes on the board of directors. A wide range of promising technologies are just being brought into play in this area. So far, English is the language for almost all of these systems. Machine translation, in general, has been difficult,29 but for literal, as opposed to artistic, content, as is found in most business and financial stories, it can do a passable job. Systran offers a translation system that you can experiment with online.30 The “as the world turns” time zone effect means that many stories will appear first in international sources in languages other than English.

See keep it simple, stupid strategy language model eAnalyst, 56–58, 214–215 predicting the market, 57–59 tag cloud, xl LeBaron, Blake, 48 Lewis, Kevin, xxi–xxiii Li, Feng, 218–219 Lichstein, Henry, 154, 155, 189 LISP, 152 language, xxviii, 159–160, 179 LISP based machines, xxvi–xxvii, 162–163 LISP based trading systems, xxvii–xxviii, 160–161 Macsyma, 159–160 LISP Machines, xxvi–xxvii, 153 Lo, Andrew, 82 maximizing predictability, 131 Optimal Control of Execution Costs, 74 on profits, 97 load duration curve, 329–330 London Stock Exchange, 7, 33, 72 long portfolio, 120–123 Long Term Capital Management, 197, 280, 323 LSE See London Stock Exchange machine translation, 55, 85 Macsyma, 159–160 Malkiel, Burton, 89, 109 Map of the Market, 46–47, 246 marked to market, 284, 301 market data graphics, 33, 34–35 market impact, 111, 129, 203 modeling, 74–76 market inefficiency, 124–128 “common factor” analysis, 127 earnings forecast, 126 earnings surprises, 126–127 insider trading, 127 mergers and acquisitions, 127 secondary equity offerings, 127 sector analysis, 127 stock buyback, 127 stock split, 124–126 market maker, 29, 67, 166, 237 and market manipulation, 255 and message activity, 56, 237–239 automated, 67–68, 101 See also ATD market manipulation bluffing, 258–259 cyber-manipulations, 261–270 elements of success, 260–261 message boards, 239, 254–255, 256–58, 261–269 painting the tape, 256 using communication technology, 259–260 market neutral investing, 120–124 Index market neutral portfolio, 120–124 market transparency, 61, 281–287 lack of, 298 NMS, 41,49 MarketMind data feeds and databases, 173 hardware, 161, 175 information flows and displays, 168–173 intelligent editor, assistant, 167 QuantEx, 175–176 rule language, 169, 176 top-level design, 164 virtual charting, 166–167 Marketocracy, 232–233 MarkeTrac, 45 maximizing predictability, 190–191 MBS.

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

Stuck in the knowledge-engineering mire, computational linguistics had a near-death experience in the late 1980s. Since then, learning-based methods have swept the field, to the point where it’s hard to find a paper devoid of learning in a computational linguistics conference. Statistical parsers analyze language with accuracy close to that of humans, where hand-coded ones lagged far behind. Machine translation, spelling correction, part-of-speech tagging, word sense disambiguation, question answering, dialogue, summarization: the best systems in these areas all use learning. Watson, the Jeopardy! computer champion, would not have been possible without it. To this Chomsky might reply that engineering successes are not proof of scientific validity.

The result is complete gibberish, of course, but if we let each letter depend on several previous letters instead of just one, it starts to sound more like the ramblings of a drunkard, locally coherent even if globally meaningless. Still not enough to pass the Turing test, but models like this are a key component of machine-translation systems, like Google Translate, which lets you see the whole web in English (or almost), regardless of the language the pages were originally written in. PageRank, the algorithm that gave rise to Google, is itself a Markov chain. Larry Page’s idea was that web pages with many incoming links are probably more important than pages with few, and links from important pages should themselves count for more.
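A letter-level Markov model of the kind described fits in a dozen lines: record which characters follow each short context, then generate by sampling. A minimal sketch, assuming a plain-text file corpus.txt as training data:

```python
import random
from collections import defaultdict

def train(text, order=3):
    """Map each length-`order` context to the letters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def babble(model, seed, length=300):
    """Emit letters one at a time, each conditioned on the previous few."""
    out = seed
    order = len(seed)
    for _ in range(length):
        context = out[-order:]
        if context not in model:  # dead end in the data: restart
            context = seed
        out += random.choice(model[context])
    return out

text = open("corpus.txt").read()  # any plain-text training corpus
model = train(text, order=3)
print(babble(model, seed=text[:3]))
```

Raise the order and the output drifts from drunken rambling toward locally plausible prose, exactly the progression the paragraph describes.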

“Relevance weighting of search terms,”* by Stephen Robertson and Karen Sparck Jones (Journal of the American Society for Information Science, 1976), explains the use of Naïve Bayes–like methods in information retrieval. “First links in the Markov chain,” by Brian Hayes (American Scientist, 2013), recounts Markov’s invention of the eponymous chains. “Large language models in machine translation,”* by Thorsten Brants et al. (Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007), explains how Google Translate works. “The PageRank citation ranking: Bringing order to the Web,”* by Larry Page, Sergey Brin, Rajeev Motwani, and Terry Winograd (Stanford University technical report, 1998), describes the PageRank algorithm and its interpretation as a random walk over the web.

pages: 366 words: 87,916

Fluent Forever: How to Learn Any Language Fast and Never Forget It
by Gabriel Wyner
Published 4 Aug 2014

Resources TRANSLATIONS (SPELLING): Appendix 5 is a list of 625 English words that show up frequently in every language: dog, car, city, and so on. You’ll want to find translations for all of these words in your target language. You could use Google Translate, but you’ll usually get a lot of weird, messed-up translations. Machine translation isn’t that good, especially when you’re translating lists of words, rather than sentences. If you use a standard dictionary, you may find too many results; you don’t need ten synonyms for house. Here’s your chance to use that pocket phrase book you bought. Phrase books are quick to peruse, and they’ll give you the most frequently used translations for each word.

Alternatively, just go to TinyURL.com/basicimage, and bookmark that page. You’ll see a wonderful page with twenty images and captions that look like this: Option 2 (Basic Version, Automatically Translated): These captions are great, but they’re all in your new language, and you don’t speak that language yet. What if all of those little captions were machine translated into English? You can stick this page into Google Translate. Now, instead of twenty captioned images in French, you’ll see this: The translations aren’t always great, but when you see twenty of them with pictures, you get a very clear sense of each word’s meaning. I can’t imagine a better resource than this for investigating words.

TinyURL.com/basicimage

Google Images Basic Version, Translated: The captions under each image in Google Images Basic Version will be in your target language, which you may not understand yet. Fortunately, if you configure your browser just right, you can see side-by-side translations for all of those captions. This makes those captions much easier to use when you’re just starting out. Fluent-Forever.com/chapter4

GOOGLE TRANSLATE: The best machine translator on the Internet. You can type in a sentence in any of its seventy-one languages, and it will translate it into any of its other languages. You can also type in a website address (say, a French monolingual dictionary), and it will translate that website. You can use Google Translate in a few ways: 1.

pages: 307 words: 88,180

AI Superpowers: China, Silicon Valley, and the New World Order
by Kai-Fu Lee
Published 14 Sep 2018

“With the help of iFlyTek, I’ve learned Chinese,” Obama intoned to the White House press corps. “I think my Chinese is better than Trump’s. What do all of you think?” iFlyTek might say the same to its own competitors. The Chinese company has racked up victories at a series of prestigious international AI competitions for speech recognition, speech synthesis, image recognition, and machine translation. Even in the company’s “second language” of English, iFlyTek often beats teams from Google, DeepMind, Facebook, and IBM Watson in natural-language processing—that is, the ability of AI to decipher overall meaning rather than just words. This success didn’t come overnight. Back in 1999, when I started Microsoft Research Asia, my top-choice recruit was a brilliant young Ph.D. named Liu Qingfeng.

In the years since the Oxford experts made their predictions, computer vision has now surpassed human capabilities and dramatically expanded real-world use-cases for the technology. Those amped-up capabilities extend far beyond computer vision. New algorithms constantly set and surpass records in fields like speech recognition, machine reading, and machine translation. While these strengthened capabilities don’t constitute fundamental breakthroughs in AI, they do open the eyes and spark the imaginations of entrepreneurs. Taken together, these technical advances and emerging uses cause me to land on the higher end of task-based estimates, namely, PwC’s prediction that 38 percent of U.S. jobs will be at high risk of automatability by the early 2030s.

See risk-of-replacement graphs; unemployment, mass Johansson, Scarlett, 199 K Kaixin001, 42–43 Kasparov, Garry, 4 Ke Jiao, 113 Ke Jie, 1–2, 3, 5–6 Kennedy’s man-on-the-moon speech, 98 King, Martin Luther, Jr., 207 Kübler-Ross, Elisabeth, 188 Kurzweil, Ray, 140–41 L labor unions, decline of, 150 The Lean Startup, 44 lean startup methodology, 44–45 LeCun, Yann, 86, 88, 90, 93 Lee, Kai-Fu birth of first child, 177–79 cancer diagnosis, 176–77, 181–83, 225 epitaphs of, 180–81, 194 family of, 175–76, 177–79, 184–87, 193–94, 195, 225 Master Hsing Yun and, 187–90, 195 regrets of, 185–87, 188 research on lymphoma, 190–92 venture capital industry and, ix, xi, 3, 52 will of, 183–85 work obsession, 175–80 Lee Sedol, 3 legal decisions by judges, 115–16 Lenovo, 89 Li, Robin, 37 lifelong learning, 204 life purpose, loss of, 21 Li Keqiang, 62–63 LinkedIn, 39 Liu Qingfeng, 105 liveness algorithm, 118 love AI as opportunity to refocus on, 176–77, 196, 210 centrality of, in human experience, 198, 199, 225, 231–32 Lee’s cancer and refocus on, 193–96 Master Hsing Yun’s wisdom about, 189–90, 195 new social contract and, 200–201 regrets about not sharing, 185, 186–87, 195 service-focused impact investing and, 217 Luddite fallacy, 147–48, 151 Lyft, 79, 137 lymphoma, 176, 183, 190–92, 194 M Ma, Jack, 34–37, 60–61, 66–67, 137 machine learning advances in, recent, 160–61 algorithms, 40. See also algorithms, AI chips and, 96 data and, 56 deep learning as part of, 6, 94 economy driven by, 25, 84, 91, 94–95 social investment stipend and, 221–22 machine reading, 161 machine translation, 104, 161 Manhattan Project, 85 Manpower, 47–48 market-driven startups, 26–27, 45 mass entrepreneurship and mass innovation, 54, 62–68, 99 McAfee, Andrew, 148–49, 150 McCarthy, John, 7 McKinsey Global Institute, 159–60 medical diagnosis, 113–15, 167, 195, 211. See also healthcare Meituan (Groupon clone), 23–24, 46–49, 72 Meituan Dianping, 49, 69, 70, 72 Mercer, Robert, 108 Messenger, 70 Mi AI speaker, 127 micro-finance, 112–13, 138 Microsoft AI chips and, 96 antitrust policy and, 28 China at time of founding of, 33 as dominant AI player, 83, 91 Face++ and, 90 Lee at, 28, 33, 105, 184 speech recognition, 93 Tencent and, 93 top researchers at, 93 Microsoft Research, 91 Microsoft Research Asia (formerly Microsoft Research China), 89–90, 105 Middle East, 137, 139, 169 mini-iPhones, 32 Minsky, Marvin, 7 mission-driven startups, 26, 45 MIT, 30 Mobike, 77–78 mobile payments, 16, 54–55, 60–61, 73–78, 79, 110 Momenta, 135 monopolies, 20, 96, 168–69, 170–71, 172, 229 Moravec, Hans, 166 Moravec’s Paradox, 166–67 Musical.ly, 109 Musk, Elon, 49, 131, 141 N Nanjing, China, 99 narrow AI, 10, 142 National Aeronautics and Space Administration (NASA), 3 natural-language processing, 105, 108, 115 Netherlands, 229 neural networks approach to AI, 7, 8–10, 89 new world order, 18–19, 20–21, 138–39 Ng, Andrew, 13, 44, 88, 93, 113–14, 144 99 Taxi, 137 Nixon, Richard, 207 North Africa, 138 Nuance, 105 Nuomi (group buying affiliate), 48–49 Nvidia, 96, 97, 135 O Obama, Barack, 97–98, 100, 104 object recognition, 9, 90, 94, 117.

pages: 347 words: 97,721

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines
by Thomas H. Davenport and Julia Kirby
Published 23 May 2016

This might involve translating words across languages, understanding questions posed by people in plain language, and answering in kind, or “reading” a text with sufficient understanding to summarize it—or create new passages in the same style. Machine translation has been around for a while, and like everything else digital, it gets better all the time. Written language translation has progressed much faster than spoken language, since no speech recognition is necessary, but both are becoming quite useful. Google Translate, for example, does a credible job of it using “statistical machine translation,” or looking at a variety of examples of translated work and determining which translation is most likely. IBM’s Watson is the first tool to be broadly capable of ingesting, analyzing, and “understanding” text to a sufficient degree to answer detailed questions on it.
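The “most likely” criterion in statistical machine translation is usually written as a noisy-channel argmax: choose the English candidate e that maximizes P(e) · P(f|e), a fluency score times a faithfulness score. A toy sketch with invented probabilities:

```python
import math

# Candidate English renderings of one French phrase, with invented
# language-model and channel-model probabilities.
candidates = {
    # candidate e: (P(e), P(f|e))
    "the house is small": (1e-4, 0.30),
    "the house is little": (2e-5, 0.35),
    "small is the house": (1e-6, 0.40),
}

def score(e):
    p_e, p_f_given_e = candidates[e]
    # Log space keeps tiny probabilities from underflowing.
    return math.log(p_e) + math.log(p_f_given_e)

print(max(candidates, key=score))  # -> "the house is small"
```

In a real system both probability tables are estimated from large corpora of example translations, which is what “looking at a variety of examples of translated work” amounts to in practice.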

Watson, for example, can be fed more and more documents as they become available over time; that’s what makes it well suited for keeping track of cancer research, for example. Other systems in this category get better at their cognitive task by having more data for training purposes. As more documents that have been translated from Urdu to Hindi become available to Google Translate, for example, it gets better with its machine translations across those languages. What differentiates cognitive technologies in this category is “context awareness.” Most of the systems we’ve described above don’t have this yet, primarily because they are designed to perform a single cognitive task. Watson, for example, may be able to ingest and digest thousands of documents about leukemia (as it is doing, for example, at MD Anderson Cancer Center in Houston), but as yet it can’t combine that information with a patient’s smoking status or family history of leukemia (though IBM and the Cleveland Clinic are working on this capability).

On Language: Chomsky's Classic Works Language and Responsibility and Reflections on Language in One Volume
by Noam Chomsky and Mitsou Ronat
Published 26 Jul 2011

N.C.: Morris Halle was already working on a generative phonology of Russian in the 1950s, and we also worked together on the generative phonology of English, at first jointly with Fred Lukoff. Together with Lees, Matthews, Klima, and Lukoff, I was, at least in principle, part of a research project on machine translation in the Research Laboratory of Electronics, headed by Victor Yngve. But the linguists, with the exception perhaps of Matthews, were not much interested in the applied problems of machine translation, as far as I can remember. At the end of the fifties, Matthews, who was a specialist in American Indian languages and had a good mathematical background as well, produced a very important grammar of Hidatsa.

But actually he received an engineering degree. Klima, who worked with us, received his Ph.D. degree at Harvard on historical syntax. He also published a very important and influential article on negation. When the graduate program began, Jerry Fodor and Jerry Katz were here, as was Paul Postal. John Viertel, who was also on the machine translation program, was beginning his work on Humboldt and related topics at that time, and M.-P. Schützenberger was visiting from France. After that, things went very fast . . . M.R.: That was the birth of the Standard Theory . . . N.C.: Yes, it was at that period that what was called the Standard Theory was formulated, with major contributions by Fodor, Katz, and Postal, and many students in the new graduate program, a large number of whom are now among the most productive scholars in the field, which has really changed quite dramatically since the period we have just been discussing.

See also language; linguistic structure; linguistic theory; psycholinguistics; psychology of language; sociolinguistics literary criticism, 56–57 literature, 194 Locke, John, 92 logic, 156, 166–7; of conversation, 72; and language theory, 167; and universal grammar, 183 logical form, 145, 165–8, 189; derivational process, 166 Logical Structure of Linguistic Theory, The (Chomsky), 106, 108–11, 113–14, 126, 138, 139, 140, 151n, 170, 175, 182, 183; reception of, 131–2; transformation theory, 122–5 Lukoff, Fred, 135 Luxemburg, Rosa, 74 machine translation, 135 markedness, 118 Markov source models, 125–6, 127, 129 Marshall, George, 29 Marx, Karl, 70, 91 Marxism, 18, 74 ; Bolshevism, 74, 90; Chomsky on, 74; political economy, 58 Massachusetts Institute of Technology (MIT), 25, 131, 132, 185, 186; linguistics department, 134; research climate, 132–3; Research Laboratory of Electronics, 132, 134, 135 mass media, see American mass media ; press materialism, 94 mathematical linguistics, 6, 127 mathematical theory of communication, 125, 127, 128 mathematics, 60, 67–68; compared to political science, 7; and linguistics, 124–9; non-demonstrative inference, 71–72 Matthews, G.

pages: 387 words: 105,250

The Caryatids
by Bruce Sterling
Published 24 Feb 2009

Sonja had killed off Lucky’s parasites, filtered his blood, changed his skin flora, flushed out his dusty lungs and the squalid contents of his guts … She had cut his hair, trimmed his nails … He was a desert warlord, and every pore, duct, and joint in him required civilizing. “Lucky dear,” she said, “what would you like more than anything in this whole world?” “Death in battle,” said Lucky, heavy-lidded with pleasure. Lucky always said things like that. “How about a trip to Mars?” Lucky stoutly replied—according to their machine translation: “Yes, the warrior souls are bound for Heaven! But men must be honest with Heaven and rise from the front line of battle! For if we want to go to the garden of Heaven, yet we have not followed in the caravan of jihad, then we are like the boat that wants to sail on the dry desert!” “Mars is a planet, not Heaven.

He was a dismal, bloodstained creature from what was surely one of the worst areas on Earth, yet he radiated confidence and a sure sense of manly grace. This was not another impulsive fling, though Sonja had never lacked for those. This time was one of those serious times. Maybe she had fallen, somehow, for their quirky machine translation, for Lucky’s native tongue was an obscure pidgin of Chinese, Turkic, and Mongolian dialect, a desert lingo created by the roaming few who still survived in the world’s biggest dust bowl. It was the trouble of reaching him, of touching him, that made their pang of communion so precious to her.

taunted Lucky, as they suffered the tedious hissing and clicking of the airlock’s insane security. “Your demon mother, she who dwells in Heaven? You talk so much, Sonja, yet you never talk about her!” “My mother is a state secret. So: Don’t talk about my mother. Especially with this state machine translation.” Lucky was unimpressed. The prospect of the state surveilling him bothered him no more than the omniscience of God. “I, too, never talk about my mother.” Sonja lifted her sour, aching head. “What about your mother, Lucky? Why don’t you talk about your mother?” “My mother sold oil! She committed many crimes against the sky.

pages: 533

Future Politics: Living Together in a World Transformed by Tech
by Jamie Susskind
Published 3 Sep 2018

2. I am grateful to Richard Susskind for his assistance in formulating this definition, although his preferred definition would be wider than mine (including manual and emotional tasks as well). 3. Yonghui Wu et al., ‘Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation’, arXiv, 8 October 2016 <https://arxiv.org/abs/1609.08144> (accessed 6 December 2017); Yaniv Taigman et al., ‘DeepFace: Closing the Gap to Human-Level Performance in Face Verification’, 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2014 <https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf> (accessed 11 December 2017); Aäron van den Oord et al., ‘WaveNet: A Generative Model for Raw Audio’, arXiv, 19 September 2016 <https://arxiv.org/abs/1609.03499> (accessed 6 December 2017). 4.

<http://wyss.harvard.edu/viewpage/457> (accessed 30 Nov. 2017). Wu, Tim. The Master Switch: The Rise and Fall of Information Empires. London: Atlantic, 2010. Wu, Yonghui, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, et al. ‘Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation’. arXiv, 8 Oct. 2016 <https://arxiv.org/abs/1609.08144> (accessed 6 Dec. 2017). Xiong, Wei, Jasha Droppo, Xuedong Huang, Frank Seide, Michael Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. ‘Achieving Human Parity in Conversational Speech Recognition’. arXiv, 17 Feb. 2017 <https://arxiv.org/abs/1610.05256> (accessed 28 Nov. 2017).

pages: 48 words: 12,437

Smarter Than Us: The Rise of Machine Intelligence
by Stuart Armstrong
Published 1 Feb 2014

In this test, a judge interacts via typed messages with a human being and a computer, and the judge has to determine which is which. The judge’s inability to do so indicates that the computer has reached a high threshold of intelligence: that of being indistinguishable from a human in conversation. As with machine translation, it is conceivable that some algorithm with access to huge databases (or the whole Internet) might be able to pass the Turing test without human-like common sense or understanding. And even if an AI possesses “common sense”—even if it knows what we mean and correctly interprets sentences like “Cure cancer!”

pages: 481 words: 125,946

What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence
by John Brockman
Published 5 Oct 2015

Have all the doublings so far gotten us closer to true intelligence? Or just to narrow agents that can give us movie times? In option (2), Big Data and better learning algorithms have so far got us only to innovations like machine translations, which provide fast but mediocre translations piggybacking onto the prior work of human translators, without any semblance of thinking. The machine translation engines available today cannot, for example, answer basic queries about what they just translated. Think of them more as idiot savants than fluent thinkers. My bet is on option (3). Evolution seems to have endowed us with a powerful set of priors (or what Noam Chomsky or Steven Pinker might call innate constraints) that allow us to make sense of the world based on limited data.

More flexibility means a greater ability to capture the patterns appearing in data but a greater risk of finding patterns that aren’t there. In artificial intelligence research, this tension between structure and flexibility manifests in different kinds of systems that can be used to solve challenging problems like speech recognition, computer vision, and machine translation. For decades, the systems that performed best on those problems came down on the side of structure: They were the result of careful planning, design, and tweaking by generations of engineers who thought about the characteristics of speech, images, and syntax and tried to build into the system their best guesses about how to interpret those particular kinds of data.

pages: 743 words: 201,651

Free Speech: Ten Principles for a Connected World
by Timothy Garton Ash
Published 23 May 2016

Even displaying many languages on the same website can be difficult, especially when, like Arabic, Farsi and Urdu, they run from right to left. On freespeechdebate.com, our web developers created a new open-source tool so that you can flip seamlessly between scripts. They called it Babble.80 Machine translation technologies such as Google Translate—alongside one called Babelfish—have made remarkable progress. We use Google Translate and it often enables you to get the broad drift of a comment left in another language, especially if it is translating between two Western languages, as well as producing hilarious gobbledygook.

We may be virtual neighbours, but what use is that if I literally can’t understand what my neighbour is saying? Figure 12. Wikinations: Top 10 languages on the internet Chinese includes simplified, traditional and Wu Chinese. Source: Adapted from Liao 2013. Attempts to advance down the road from Babel to Pentecost range from conventional, professional translation to the automated machine translation of Google Translate, which machine-analyses a word hoard larger than that contained in all the books in the Library of Congress.80 Between these extremes, there are promising experiments in volunteer translation. Thus, for example, talks on the TED website have been subtitled in several languages, in a well-organised Open Translation Project, along with transcripts.81 One of the most ingenious and ambitious schemes is Duolingo, which allows people to learn a language by translating content already online.

See also North Korea; South Korea Koselleck, Reinhart, 107 Kovach, Bill, 202 Kozakiewicz, Władysław, 123–24 Kubrick, Stanley, 17 Kundera, Milan, 244 Küng, Hans, 112 Kurds, 122, 145, 189 Kurzweil, Ray, 16 Lady Chatterley’s Lover (Lawrence), 245 Lagarde, Christine, 157 Landmesser, August, 374 Lane, Shannon, 60 languages, 210; biggest barrier on internet, 175 (175f); freedom to speak, 122–25; on freespeechdebate.com, 176; Serbo-Croat as four, 207; translation, 175–76. See also individual languages; machine translation Lanier, Jaron, 50, 178, 284, 356 lapatilla.com, 193 La Repubblica, 144 la Rue, Frank, 361 ‘Last Temptation of Christ, The,’ 47 Latin America, 45, 47 Lau, D. C., 100 Lawrence, D. H., 245 Lawson, Nigella, 298–99, 336 Layton, Jack, 290 leakers/leaking, 339–40 Le Carré, John, 317 legal paternalism, legal moralism, 86–87, 218, 247 Leibniz, Gottfried Wilhelm, 98, 349 Leiter, Brian, 93 Lenihan, Brian, 291 Lenin, Vladimir, 160, 292, 363 lèse-majesté, 55, 293 Lessig, Lawrence, 26, 31, 164–65, 171, 367 Lester, Anthony, 221, 270, 300, 301 Leveson, Brian, 186, 192 Leviathan (Hobbes), 1 Lewinsky, Monica, 61 Lewis, Anthony, 76, 300, 372 Lewis, Bernard, 3, 159 Lewis, Eric L., 368 Leys, Simon, 54, 99–100, 110 LGBTIQ, 207, 223, 225, 250 Libby, I.

pages: 294 words: 81,292

Our Final Invention: Artificial Intelligence and the End of the Human Era
by James Barrat
Published 30 Sep 2013

Through several well-funded projects, IBM pursues AGI, and DARPA seems to be backing every AGI project I look into. So, again, why not Google? When I asked Jason Freidenfelds, from Google PR, he wrote: … it’s much too early for us to speculate about topics this far down the road. We’re generally more focused on practical machine learning technologies like machine vision, speech recognition, and machine translation, which essentially is about building statistical models to match patterns—nothing close to the “thinking machine” vision of AGI. But I think Page’s quotation sheds more light on Google’s attitudes than Freidenfelds’s. And it helps explain Google’s evolution from the visionary, insurrectionist company of the 1990s, with the much touted slogan DON’T BE EVIL, to today’s opaque, Orwellian, personal-data-aggregating behemoth.

Marvin Minsky, one of the fathers of artificial intelligence, pointed out that “any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine.” Translated, that means a computer of average memory, while running a program with a halting problem, would take a very long time to fall into a pattern of repetition, which could then be detected by a diagnostic program. How long? Longer than the universe will exist, for some programs. So, for practical purposes, the halting problem means it is impossible to know whether any given program will finish.

pages: 266 words: 80,273

Covid-19: The Pandemic That Never Should Have Happened and How to Stop the Next One
by Debora MacKenzie
Published 13 Jul 2020

Second, and most urgently, given that it did—and we saw it happen—why didn’t we put it out before it spread? We’ll look at the first question later in the book. Let’s look at the second now. What happened to unleash a Covid-19 pandemic on the world? The first inkling I, like many others, had of the gathering storm that became Covid-19 was a post on the online forum, ProMED. The machine-translated report from Finance Sina, a Chinese online news site, read: On the evening of [30 Dec 2019], an ‘urgent notice on the treatment of pneumonia of unknown cause’ was issued, which was widely distributed on the Internet by the red-headed document of the Medical Administration and Medical Administration of Wuhan Municipal Health Committee.

When I ducked into my office that day, hoping it was early enough that my family wouldn’t notice, the giant Sina Corp’s financial bulletin was reporting people with severe, undiagnosed pneumonia in the central Chinese city of Wuhan, in Hubei province. Many had connections to a seafood market. There were already 27 cases. A red-topped bulletin—rendered red-headed by the machine translation—must be an emergency alert, I guessed. The reporter from Finance Sina had verified it by calling the official hotline of Wuhan’s Municipal Health Committee the next morning. It was true. The story went out. And it was worrying enough to make someone send it to ProMED. It wasn’t hard to see why.

pages: 245 words: 83,272

Artificial Unintelligence: How Computers Misunderstand the World
by Meredith Broussard
Published 19 Apr 2018

Data-driven decisions rarely fit with these complex sets of rules. The same unreasonable effectiveness of data appears in translation, voice-controlled smart home gadgets, and handwriting recognition. Words and word combinations are not understood by machines the way that humans understand them. Instead, statistical methods for speech recognition and machine translation rely on vast databases full of short word sequences, or n-grams, and probabilities. Google has been working on these problems for decades and has the best scientific minds on these topics, and they have more data than anyone has ever before assembled. The Google Books corpus, the New York Times corpus, the corpus of everything everyone has ever searched for using Google: it turns out that when you load all of this in and assemble a massive database of how often words occur near each other, it’s unreasonably effective.
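To make Broussard's point concrete, here is a minimal Python sketch of the kind of n-gram statistics she describes: counting how often words occur near each other and turning the counts into probabilities. The two-sentence corpus is invented; real systems are built from billions of words.

```python
# Toy n-gram statistics: count short word sequences in a corpus and turn the
# counts into conditional probabilities. The corpus is invented for illustration.
from collections import Counter

corpus = "the cat sat on the mat . the cat ate the fish .".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(word, nxt):
    """P(nxt | word): how often `nxt` follows `word` in the corpus."""
    return bigrams[(word, nxt)] / unigrams[word]

print(p_next("the", "cat"))   # 2 of the 4 "the"s are followed by "cat" -> 0.5
print(p_next("cat", "sat"))   # 1 of the 2 "cat"s is followed by "sat" -> 0.5
```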

Given enough training data, algorithms indeed will do a good job at a variety of mundane tasks, and human ingenuity usually fills in the blanks. With search, most of us by now have learned how to use increasingly complex or specific search terms (or at least synonyms) to find the specific web pages we’re looking for when using a search box. Machine translation between languages is better than ever. It’s still not as good as human translation, but human brains are magnificent at figuring out the meaning of garbled sentences. A stilted, awkward translation of a web page is usually all the casual web surfer needs. GPS systems that provide directions from point A to point B are terribly handy.

pages: 263 words: 81,527

The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind
by Nick Chater
Published 28 Mar 2018

By the 1970s, serious doubts began to set in; by the 1980s, the programme of mining and systematizing knowledge started to grind to a halt. Indeed, the project of modelling human intelligence has since been quietly abandoned, in favour of specialist projects in computer vision, speech-processing, machine translation, game-playing, robotics and self-driving vehicles. Artificial intelligence since the 1980s has been astonishingly successful in tackling these specialized problems. This success has come, though, from completely bypassing the extraction of human knowledge into common-sense theories. Instead, over recent decades, AI researchers have made advances by building machines that learn not from people but from direct confrontation with the problem to be solved: much of AI has mutated into a distinct but related field: machine-learning.

Computational intelligence has instead taken a very different tack: focusing on problems, like chess or arithmetic, that require no free interpretation at all, but which can be reduced to vast sequences of calculations, performed at lightning speed. In addition it has proved to be invaluable for things like speech recognition, machine translation and general knowledge tests, hoovering up solutions to almost unimaginably vast quantities of past problems to enable the machine to solve new problems, which are only a little different.6 Yet what is astonishing about human intelligence, and perhaps biological intelligence more broadly, is its spectacular flexibility.

pages: 295 words: 84,843

There's a War Going on but No One Can See It
by Huib Modderkolk
Published 1 Sep 2021

The Netherlands played its trump card. Just send your intercepts to the Netherlands, said the MIVD; we can translate them for you. And so it happened that engineers laid a secure transatlantic cable connection to Holland and for a time the MIVD enjoyed exclusive access to America’s powerful intelligence apparatus. Machines translate faster than interpreters, however, and the intelligence community has long been working to automate the process. When I share Robin’s story with intelligence expert Constant Hijzen, he also immediately suspects intelligence involvement. ‘Data processing is a problem for larger agencies. It’s logical they’d want to develop software to mine their data faster.’

Index Abdeslam, Salah, here ABN Amro, here, here, here Aboutaleb, Ahmed, here advanced persistent threats (APTs), here, here Afghanistan, here, here, here, here AIVD, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here agency culture, here and author’s sources, here, here bugging of Iranian ambassador, here and DigiNotar, here, here digital capabilities, here, here, here, here and Gemalto, here and Huawei, here and Iran nuclear plant, here, here Joint Sigint Cyber Unit, here, here, here and Kaspersky Labs, here, here, here number of employees, here relations with Americans, here, here, here, here and Ronald Prins, here and Russian cyber espionage, here, here, here, here, here, here, here AJA (Iranian army), here Akerboom, Erik, here, here, here, here, here Alexander, Keith, here, here Alfa Group, here Al-Qaeda, here al-Shabaab, here, here Amdocs, here Amsterdam Internet Exchange (AMS-IX), here, here, here, here Android OS, here Anonymous (hacker group), here Apeldoorn hospital attack, here APM Terminals, here, here Appen, here, here, here Arab Spring, here Armada Sweep, here ASML, here, here Assange, Julian, here, here backdoors, here, here Bais, Erik, here, here bakabt.com, here Belgacom, here, here, here, here, here, here Belgian Ministry of Foreign Affairs, here Bellens, Didier, here Bertholee, Rob, here, here, here, here, here Biden, Joe, here Biesbrouck, Ralph, here Bijleveld, Ank, here bin Laden, Osama, here Bindt, Pieter, here, here, here Bits of Freedom, here BlackBerry messages, here, here Blok, Eelco, here Bloomberg, here, here, here BND, here Bogachev, Evgeniy (‘Umbro’), here, here, here Boon, Floor, here Bosch van Rosenthal, Eelco, here, here, here, here, here, here Bouman, Gerard, here Boxer armoured fighting vehicles, here Bromet, Frans, here, here Brussels bombings, here, here Bryan, Inge, here Bureau of Investigative Journalism, here Bush, George W., here, here ‘Business Club’, here BVD, here CareerBuilder.com, here Catal separatist movement, here Cellebrite, here certificate authorities (CAs), here Chaos Computer Club, here Chernobyl site, here, here child pornography, here, here Chinese cyber espionage, here, here, here, here, here, here, here Chinese Unit 61398, here, here CIA, here, here, here, here, here, here, here, here, here, here, here, here, here, here and Kaspersky Labs, here, here, here, here Cisco, here, here, here Clapper, James, here Clearview AI, here Clinton, Bill, here, here Clinton, Hillary, here, here, here CNN, here, here, here, here, here Combasca, here, here, here Common, here computer network exploitation (CNE), here, here Comverse, here Concord, here Cools, Ivo, here Covid-19 vaccines, here Cozy Bear, here, here, here, here, here, here, here, here, here, here, here Cyber Intelligence, here de Boer, Hans, here de Bos, Tony, here, here De Croo, Alexander, here De Groene Amsterdammer, here de Jong, Erik, here, here De Standaard, here De Telegraaf, here de Volkskrant, here, here, here, here, here, here, here, here, here de Vries, René, here deep packet inspection, here DEF CON, here Delft University of Technology, here Deloitte, here, here Denk party (Netherlands), here Der Spiegel, here, here, here Derix, Steven, here, here Di Rupio, Elio, here, here Die Hard, here, here DigiNotar, here, here, here, here, here, here, here, here, here, here digital certificates, here DIGIWS146 workstation, here, here distributed denial-of-service (DDoS) attacks, here, here, here Donner, Piet Hein, here, here, here Dorfman, 
Zach, here DPG Media, here Drake, Thomas, here drone strikes, and civilian casualties, here ‘Droppy’, here DuckDuckGo, here Durbin, Richard, here Dutch Bar Association, here Dutch Criminal Intelligence Unit (CIE), here Dutch Cyber Security Council, here, here Dutch elections, here Dutch House of Representatives, here, here, here, here Dutch Independent Post and Telecommunications Authority, here Dutch Investigatory Powers Commission (TIB), here Dutch Ministry of Foreign Affairs, here, here Dutch Ministry of General Affairs, viii, here Dutch National High Tech Crime Unit, here, here, here, here, here, here, here Dutch National Coordinator for Security and Counterterrorism (NCTV), here, here Dutch National Crime Squad, here Dutch National Crisis Centre, here, here Dutch National Cyber Security Centre (NCSC), here, here, here, here Dutch National Forensic Research Agency, here Dutch National Police Internal Investigation Department, here Dutch National Police Services Agency, here Dutch Safety Board, here Dutch surveillance law and referendum, here, here, here Dutch Tax and Customs Administration, here, here, here ‘Dwaan’, here, here, here Eastern Scheldt Storm Surge Barrier, here Ecatel, here, here Effting, Maud, here El Chapo (Joaquín Guzmán Loera), here Elbit Systems, here Elderenbosch, Onno, here encryption, here, here Ericsson, here EU Copyright Directive, here European Commission, here, here European Medicines Agency (EMA), here European Parliament, here, here, here European Space Agency, here Europol, here EvoSwitch, here Evraz, here exploits, here, here, here Fabrika Trollei, here facial recognition, here Fancy Bear, here, here, here, here, here FBI, here, here, here, here, here, here, here, here, here, here, here, here, here and Mexican drug cartel, here and Russian cyber espionage, here, here, here firewalls, here Five Eyes, here Fox-IT, here, here, here, here, here, here, here, here, here, here and Belgacom, here, here and DigiNotar, here, here, here, here Fridman, Mikhail, here FSB, here, here, here, here, here, here, here Gaddafi, Muammar, here, here Gallagher, Ryan, here, here, here Gamma Group, here Gemalto, here German CERT, here Glimmerglass, here Godane, Ahmed Abdi, here, here Google Maps, here, here GovCERT, here, here, here, here, here Government Communications Headquarters (GCHQ), here, here, here and Belgacom, here, here, here and Gemalto, here number of employees, here Grapperhaus, Ferdinand, here Greenberg, Andy, here Greenwald, Glenn, here, here, here Groenewegen, Frank, here, here GRU, here, here, here Gubarev, Aleksej, here, here, here Gurey, Nuur Osman, here hacking tools, online sales of, here Harvard Belfer Center, here Hayden, Michael, here Hennis, Jeanine, here, here, here, here Hentenaar, Joris, here Hermitage Museum, here Hijzen, Constant, here HNLMS Rotterdam, here Hoekstra, Pete, here HostExploit, here Howard, Philip, here HP Data Protector, here, here, here Huawei, here, here Hurricane Sandy, here iColumbo, here ICQ, here IMEI numbers, here in ’t Veld, Sophie, here ING, here Intellect Service, here Intercept, The, here, here, here Interfax press agency, here International Atomic Energy Agency (IAEA), here iPhones, here Iran, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here Iranian Revolutionary Guard (IRG), here Islamic State, here, here Israel, here, here, here, here, here, here, here, here, here, here, here Jochem, Aart, here, here, here, here Joint Strige Fighter programme, here Juniper, here 
Kaspersky, Eugene, here Kapersky Labs, here, here, here, here, here, here KGB, here Khabarovsk conference, here Khan, Abdul Qadeer, here Kim Jong-un, here King Servers, here, here Klijnsma, Yonathan, here, here KPN, here, here, here, here, here, here, here, here, here, here, here Kreling, Tom, here, here, here, here, here, here Leaseweb, here, here, here, here, here, here Ledgett, Richard, here Lewinsky, Monica, here Lockheed Martin, here LulzSec, here McCord, Mary, here McFaul, Michael, here machine translation, here McLaughlin, Jenna, here Maersk, here, here Malaysian Airlines flight MH17, here, here, here, here Mali, here, here Mandiant, here, here, here Marriott hotels, here Mastercard, here Máxima, Queen, here M.E.Doc, here, here Meeus, Jan, here, here Merck, here Merkel, Angela, here MI5, here, here, here Mikhailov, Sergei, here, here, here Millennium bug, here MIND CTI, here MIVD, vii, here, here, here, here, here, here, here, here, here, here, here, here agency culture, here and Belgacom, here digital capabilities, here, here, here and Farsi translators, here Joint Sigint Cyber Unit, here, here relations with Americans, here, here, here, here and Russian cyber espionage, here, here Somali surveillance, here, here Mohammad Ali, Omar, here, here, here, here Mondelez, here Montenegro coup attempt, here Morozov, Evgeny, here Mossad, here, here, here, here, here MSD, here MV BBC China, here, here MyHeritage, here MySQL, here N., here, here, here Naomi, here NASA, here, here, here Natanz, here, here, here National Health Service (NHS), here, here National Security Agency (NSA), here, here, here, here, here, here, here, here, here, here, here and Belgacom, here, here, here and Farsi translators, here ‘fishing with dynamite’, here and Gemalto, here and ‘Greek Watergate’, here hardware modifications, here number of employees, here relations with Netherlands, here, here, here Snowden files, here, here, here, here, here, here, here and SolarWinds attack, here and Somalia surveillance, here, here Tailored Access Operations, here NATO, here, here, here NCC Group, here ndsmovies.com, here New York City truck attack, here New York Times, here, here, here NICE Systems, here Nijmegen Queen’s Day festival, here Nixon, Richard, here North Korea, here, here, here, here, here, here, here, here Northwave, here Norwegian University of Science and Technology (NTNU), here NotPetya virus, here, here, here Novaja Gazeta, here NRC Handelsblad, here, here, here, here, here, here, here, here, here NSO Group, here Obama, Barack, here, here, here, here Ocean’s Eleven, here, here OHM2013, here Operation Moonlight Maze, here Operation Olympic Games, here Opstelten, Ivo, here OPTA, here Organisation for the Prohibition of Chemical Weapons (OPCW), here Oxford Internet Institute, here P10 filtering system, here Paauw, Frank, here Pakistan, here, here, here, here, here Pals, Ludo, here Paris bombings, here Paulissen, Wilbert, here PayPal, here, here, here Penn, Sean, here Pentagon, here, here Pérez Dolset, Javier, here Perlroth, Nicole, here Petri, Hans, here ‘Phed’, here phishing emails, here Plasterk, Ronald, here, here, here, here, here Pluijmers, René, here Poitras, Laura, here Politico, here PornHub, here Poroshenko Petro, here Port of Rotterdam, here, here, here, here, here Premier League, here Preneel, Bart, here PricewaterhouseCoopers, here Prigozhin, Yevgeny, here, here Prins, Ronald, here, here, here, here, here, here, here, here, here privacy, here, here, here, here, here Project Victor, here Proximus, here 
Public Prosecution Service Rotterdam, here Putin, Vladimir, here, here, here, here, here, here Q-Park, here Raiu, Costin, here Rajoy, Mariano, here Regin virus, here, here, here, here Regional Internet Registry for Europe, here Remarque, Philippe, here Renault, here Replay, here Rheinmetall, here, here, here Rid, Thomas, here Robbe, Edwin, here, here Robbe, José, here, here, here, here Robbe, Ruud, here, here Robin, here, here, here Rosneft, here RSA Conference, here Russia Today, here Russian–Dutch relations, here Russian Internet Research Agency (IRA), here Russian Unit 74455, here, here, here Rutte, Mark, vii, here, here, here ‘Sabu’, here Salisbury poisonings, here satellite communications, here, here, here Saudi Arabia, here, here Schiphol Airport, here, here, here, here, here Schneier, Bruce, here Schook, Dick, here SCM, here September 11 attacks, here ShimRatReporter, here Shymkiv, Dmytro, here SIGINT, here SIGINT Development Conference, here, here Signal, here Sinaloa Cartel, here Singapore Press Holdings, here Skripal, Sergei, here Smith, Brad, here SNAP, here Snowden, Edward, here, here, here, here, here, here, here, here, here, here, here, here, here Sochi Winter Olympics, here, here SolarWinds, here, here Somalia, here Sony PlayStation network, here speech transcription software, here Spetsnaz, here SPÖ Social Democratic Party (Austria), here SSL certificates, here Standaert, Geert, here Starr, Barbara, here Startpage.com, here Stasi, here Steman, Jochem, here, here Stone, Chris, here Stoyanov, Ruslan, here STRAP system, here Stuxnet virus, here, here, here, here, here, here, here submarines, here surveillance software, Israeli, here SVR, here, here, here Symbolon project, here Tails OS, here, here Tamene, Yared, here telecommunications billing, here Thomas, Gordon, here Tiger telephones, here TNT Express, here Triton virus, here Trouw, here TrueCrypt, here Trump, Donald, here, here, here, here, here, here, here Turkey, here, here, here, here Turksema, Hans, here TV5 Monde, here Tweakers website, here U., Etienne, here UCN, here Uijlenbroek, Jaap, here UK Home Office, here UK National Crime Agency, here Ukraine, here, here, here, here, here, here, here, here, here, here and annexation of Crimea, here, here, here, here, here, here ultracentrifuges, here, here United Arab Emirates, here United Nations, here University of Tromsø, here University of Twente, here US Democratic Party, here, here, here, here, here, here US Department of Defense, here, here US Department of Homeland Security, here, here US Joint Chiefs of Staff, here, here US presidential elections, here, here, here, here, here, here US State Department, here, here Utrecht child support services, here V., Johan, here van Bergen, Jurre, here van de Beek, Martijn, here van der Heijden, A.

pages: 288 words: 86,995

Rule of the Robots: How Artificial Intelligence Will Transform Everything
by Martin Ford
Published 13 Sep 2021

This technology—also known as deep learning—has, over the past decade, revolutionized the field of artificial intelligence and produced advances that just a short time ago would have been considered science fiction. Tesla drivers routinely let their cars navigate highways autonomously. Google Translate instantly produces usable text, even in obscure languages that few of us have heard of, and companies like Microsoft have demonstrated real-time machine translation that renders spoken Chinese into English. Children are growing up in a world where it is routine to converse with Amazon’s Alexa, and parents are worrying about whether these interactions are healthy. All of these advances—and a multitude of others—are powered by deep neural networks. The basic idea underlying deep learning has been around for decades.

In the next chapter, we’ll take a broader view of some of the risks that are inherently coupled with AI and discuss which dangers need our immediate attention and which are more speculative concerns likely to arise only in the far future.

Footnote i: If you have any doubts about the power of deep neural networks when applied to language translation, compare these two introductory sections from China’s “New Generation Artificial Intelligence Development Plan.” One is a Google machine translation of the original Chinese government document. The other was professionally translated by a team of four linguists. The first paragraph from each document is below. Can you tell which is which? A. The rapid development of artificial intelligence will profoundly change human society and the world.

pages: 315 words: 92,151

Ten Billion Tomorrows: How Science Fiction Technology Became Reality and Shapes the Future
by Brian Clegg
Published 8 Dec 2015

Some words sound identical and have to be interpreted this way. This is particularly important if attempting automated translation between languages. If I am just using a computer to dictate text, it’s easy enough for me to keep an eye out for such errors, but in an automatic translator I wouldn’t know if a mistake was being made. Yet such a machine translator is something that we often see in science fiction, whether it’s the universal telepathic translation provided by the TARDIS in Doctor Who, or the more realistic simultaneous computerized translation in Star Trek. Translation should, surely, be a major role for computers with language skills.

for me and I can be reasonably confident that I’m getting what I ask for. The phrase might not be perfectly idiomatic, but I will be understood. However, should the conversation get significantly more complex, it would become dangerous to trust Google’s software. We certainly aren’t at the stage yet where the United Nations can rely on a machine translation while negotiating treaties. Although Alexander Graham Bell and others played with mechanical ways of breaking down speech (and, of course, Roger Bacon’s brass head and its mythical predecessors were supposed to be able to understand as well as speak themselves), it took computers to make this a reality.

pages: 474 words: 87,687

Stealth
by Peter Westwick
Published 22 Nov 2019

Mitzner put him off for several weeks, thinking the Soviet work would just rehash existing American theory, but Locus kept nagging him. Mitzner finally gave in and read the report. As he put it, “the whole world changed. My eyes opened: ‘oh, this is what we need.’” Ufimtsev’s article cited his longer 1962 report, so the Air Force, at Northrop’s request, ran it through a computer translator in 1971. The machine translation was spotty but serviceable—and the most important part, the equations, was in the universal language of mathematics.26 Ufimtsev’s nonuniform currents filled the crucial gap in existing theory about radar scattering. No American scientist had ever met him, but the Soviet physicist, known only as a name and theory, became a legendary figure in Stealth design rooms.

Pyotr Ufimtsev (Raleigh, NC: Tech Science Press, 2009), v–x. 23 Ufimtsev interview; Ufimtsev, “50-Year Anniversary of the PTD,” 20. 24 Ufimtsev interview. 25 Michael Gordin, Scientific Babel: How Science Was Done before and after Global English (Chicago: University of Chicago Press, 2015), 213–66. 26 Kenneth Mitzner interview, January 25, 2016; John Cashen interview, December 16, 2010; Ufimtsev’s report is cited as a machine translation in Richard D. Moore, “Translator’s Note,” in Ufimtsev, Theory of Edge Diffraction in Electromagnetics, xiii–xiv. 27 Mitzner, foreword to Ufimtsev, Theory of Edge Diffraction in Electromagnetics, v–x, on v. On Soviet contributions to US military technology in the Cold War, such as Stealth and the x-ray laser for SDI, see Peter Westwick, “The International History of the Strategic Defense Initiative: Economic Competition in the Late Cold War,” Centaurus, 52 (2010), 338–351, and Mihir Pandya, “Security, Information, Infrastructure,” talk at American Anthropological Association annual meeting, 2016. 28 Moore, “Translator’s Note.” 29 Richard Scherrer to Westwick, November 24, 2015; David C.

pages: 336 words: 91,806

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Published 20 Mar 2024

Armin had moved to hilly Pittsburgh from Berkeley in 2019, to take a job at a self-driving car company Argo.ai, a start-up funded by Ford and Volkswagen. He led the team that designed the user interface between human passengers and the autonomous vehicle. He spent hours with drivers inside cars, observing their behaviours, their gripes, their decision-making, and used that psychology to design the AI system’s responses. He was a human–machine translator. As he developed software for self-driving cars, Armin became aware that he was working on a two-tonne moving death-machine being tested on real roads, and a wrong line of code could literally kill someone. Just the previous year, in 2018, an Uber self-driving prototype had killed a pedestrian in Arizona in error, when the human co-pilot or back-up driver had been distracted, possibly streaming The Voice on their mobile phone.1 Code, unlike the physical joists of a bridge, is not neutral.

Like many breakthroughs in scientific discovery, the one that spurred this latest artificial intelligence advance came from a moment of serendipity. In early 2017, two Google research scientists, Ashish Vaswani and Jakob Uszkoreit, were in a hallway of the search giant’s Mountain View campus, discussing a new idea for how to improve machine translation, the AI technology behind Google Translate.1 The AI researchers had been working with another colleague, Illia Polosukhin, on a concept they called ‘self-attention’ that could radically speed up and augment how computers understand language. Polosukhin, a science fiction fan from Kharkiv in Ukraine, believed self-attention was a bit like the alien language in the film Arrival, which had just recently been released.
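For readers curious what "self-attention" actually computes, here is a minimal numpy sketch of the scaled dot-product form that the Transformer work growing out of this idea made standard. It is a single attention head with no masking, and the random matrices stand in for weights a real model would learn.

```python
# Minimal single-head scaled dot-product self-attention, sketched in numpy.
# W_q, W_k, W_v would be learned in a real Transformer; here they are random.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                          # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))          # stand-in token embeddings

W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)              # every token attends to every token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
output = weights @ V                             # each token: weighted mix of values

print(weights.round(2))                          # each row sums to 1
```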

pages: 625 words: 167,349

The Alignment Problem: Machine Learning and Human Values
by Brian Christian
Published 5 Oct 2020

In better coming to understand our own motivations and drives, we then, in turn, have a chance for complementary and reciprocal insights about how to build an artificial intelligence as flexible, resilient, and intellectually omnivorous as our own. Deepak Pathak looks at the success of deep learning and sees one glaring weakness: each system—be it for machine translation, or object recognition, or even game playing—is purpose-built. Training a huge neural network on a heap of manually labeled images was, as we have seen, the paradigm in which deep learning first truly showed its promise. Explicitly drilling a system to categorize images made a system that could categorize images.

As he put it in his 1784 essay “Idea for a Universal History with a Cosmopolitan Purpose” (“Idee zu einer allgemeinen Geschichte in weltbürgerlicher Absicht”), “Aus so krummem Holze, als woraus der Mensch gemacht ist, kann nichts ganz Gerades gezimmert werden” (“Out of the crooked timber of humanity, no straight thing was ever made”). The pithy English translation here is credited to Isaiah Berlin. 65. See, for instance, Mikolov, Le, and Sutskever, “Exploiting Similarities Among Languages for Machine Translation,” Le and Mikolov, “Distributed Representations of Sentences and Documents,” and Kiros et al., “Skip-Thought Vectors.” 66. There is substantial disagreement within the machine-learning community about how precisely these “analogies” should be computed, and within the cognitive science community about how closely they capture human notions of similarity.

In NIPS Deep Learning Workshop 2013, 2013. https://drive.google.com/file/d/0B7XkCwpI5KDYRWRnd1RzWXQ2TWc/edit. Mikolov, Tomáš, Kai Chen, Greg Corrado, and Jeffrey Dean. “Efficient Estimation of Word Representations in Vector Space.” arXiv Preprint arXiv:1301.3781, 2013. Mikolov, Tomáš, Quoc V. Le, and Ilya Sutskever. “Exploiting Similarities Among Languages for Machine Translation.” arXiv Preprint arXiv:1309.4168, 2013. Mikolov, Tomáš, Ilya Sutskever, and Quoc Le. “Learning the Meaning Behind Words.” Google Open Source Blog, August 14, 2013. https://opensource.googleblog.com/2013/08/learning-meaning-behind-words.html. Mikolov, Tomáš, Wen-tau Yih, and Geoffrey Zweig.

pages: 340 words: 101,675

A New History of the Future in 100 Objects: A Fiction
by Adrian Hon
Published 5 Oct 2020

Even with China’s rise, English remained the international language of trade, science, and politics. At the same time, the shift of every form of media from physical to digital was in full swing, with vast quantities of content indexed by search and semantic engines every day. This information became an ever-expanding corpus that was used to improve the performance of brute-force machine translation, where words and glyphs in unknown texts were correlated with those in human-translated text. Only a minute fraction of the content coming online had been translated by humans, though—mostly political statements, legal texts, news, and popular books, movies, TV shows, and games. That fraction still counted for a lot, but it wasn’t quite enough, leading companies such as Dragon and Babylon to partner with massively multiplayer online language education games to put players to work translating content in return for free access and virtual currency.

That fraction still counted for a lot, but it wasn’t quite enough, leading companies such as Dragon and Babylon to partner with massively multiplayer online language education games to put players to work translating content in return for free access and virtual currency. The process wasn’t perfect, but, combined with smarter forms of translation and speech recognition, it improved machine-translated speech such that it achieved over 99.8 percent accuracy within a single second; just about fast enough to be used in conversation if you had a bit of patience. Here’s what Alice Singh thinks: Babylon fascinated me. I remember being on holiday in Myanmar and just walking up to someone at a bus stop and talking to them.

pages: 305 words: 101,093

Who Owns This Sentence?: A History of Copyrights and Wrongs
by David Bellos and Alexandre Montagu
Published 23 Jan 2024

For substantially all ideas are second hand, consciously or unconsciously drawn from a million outside sources and daily use by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral calibre and his temperament, which is revealed in characteristics of phrasing.155 Psychologists have coined the term “cryptomnesia” for such occurrences of involuntary copying, but it is hardly a mental disorder.156 All of us learn languages in just that way, by internalising and then forgetting the source of vocabulary, turns of phrase, clichés, proverbs and so on. In fact, all the sentences we ever utter are composed of material we have got from somewhere else. Today’s web-based machine translation devices rely on the fact that sentences can be decomposed into elements that have all been translated before, because, to quote the biblical adage one more time, there is nothing new under the sun. Plagiarism arising from laziness, carelessness or an intention to deceive is often dealt with harshly, but not under the law of copyright: journalists lose their jobs; writers have their books withdrawn and pulped by the publishers; academics, politicians and America’s former first lady are pilloried and shamed.
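The remark above about web-based machine translation devices — that sentences decompose into elements that have all been translated before — can be illustrated with a toy phrase table in Python. The table and the greedy longest-match stitching below are invented for illustration; the output also hints at the limits of the approach, since grammar that spans fragments gets lost (formal French would insert "ne").

```python
# Toy phrase-table translation: greedily match the longest phrase already seen
# in translation and stitch the stored renderings together. Table is invented.
phrase_table = {
    ("there", "is"): "il y a",
    ("nothing", "new"): "rien de nouveau",
    ("under", "the", "sun"): "sous le soleil",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):       # try the longest phrase first
            if tuple(words[i:j]) in phrase_table:
                out.append(phrase_table[tuple(words[i:j])])
                i = j
                break
        else:
            out.append(words[i])                 # unknown word passes through
            i += 1
    return " ".join(out)

print(translate("there is nothing new under the sun".split()))
# -> "il y a rien de nouveau sous le soleil" (formal French would add "ne")
```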

But when it used material from those movies in promotions and other works, the owners of the M.O.V.A. software sued for infringement, on the grounds that the output of its program was covered by its copyright in the program itself. M.O.V.A. lost this case in a California district court,238 but the very fact that it thought it might win should be raising an alarm. For example, a translation generated entirely by a machine translation device would not normally be copyrightable, since the user of the M.T. software did not engage in the minimal degree of “creativity” or “originality” required by copyright law. But to whom does the copyright in the translation belong? Nature abhors a vacuum, and so does the law. In music, the situation is equally murky.

pages: 344 words: 104,077

Superminds: The Surprising Power of People and Computers Thinking Together
by Thomas W. Malone
Published 14 May 2018

The jury is still out on how far this approach will go toward creating general AI, but in the meantime, it is already being used for projects like helping doctors at Cleveland Clinic find patients for clinical studies who have certain combinations of characteristics, such as a history of “bacteria after a pericardial window.”17 Big Data In recent years, significant progress toward developing effective AI has sometimes come from having massive amounts of data available in a far more accessible form than ever before. For example, machine translation of human languages (like English and Spanish) has long been one of the holy grails of AI research. For decades, researchers were consistently disappointed by how slow progress toward this goal was. But language-translation programs have recently become much better, in part because of the availability of vast amounts of translated documents.
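As a toy illustration of how "vast amounts of translated documents" get used, the Python sketch below counts which words co-occur across aligned sentence pairs and normalizes the counts into crude word-translation scores — a much-simplified cousin of the classic alignment models. The three-pair corpus is invented.

```python
# Toy use of parallel text: count word co-occurrences across aligned sentence
# pairs and normalize into crude translation scores. The corpus is invented.
from collections import Counter

pairs = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("the green house", "la maison verte"),
]

cooc, src = Counter(), Counter()
for en, fr in pairs:
    for e in en.split():
        src[e] += 1
        for f in fr.split():
            cooc[(e, f)] += 1

def score(e, f):
    """Fraction of sentences containing `e` whose translation contains `f`."""
    return cooc[(e, f)] / src[e]

print(score("house", "maison"))    # 1.0: "maison" appears whenever "house" does
print(score("house", "voiture"))   # 0.0: they never co-occur
print(score("the", "la"))          # 1.0, but "the" co-occurs with everything
```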

Will Knight, “An AI with 30 Years’ Worth of Knowledge Finally Goes to Work,” MIT Technology Review, March 14, 2016, https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work. 18. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, et al., “Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation,” preprint, submitted November 14, 2016, https://arxiv.org/abs/1611.04558; Justin Bariso, “The Artificial Intelligence Behind Google Translate Recently Did Something Extraordinary,” Inc., November 28, 2016, https://www.inc.com/justin-bariso/the-ai-behind-google-translate-recently-did-something-extraordinary.html. 19.

pages: 321 words: 113,564

AI in Museums: Reflections, Perspectives and Applications
by Sonja Thiel and Johannes C. Bernhardt
Published 31 Dec 2023

Additionally, the Calamari OCR engine and model were integrated9 with the OCR-D framework (Neudecker/Baierer/Federbusch 2019). Since a few OCR errors may remain in a recognized text, the extent to which deep learning can be used for automated post-correction of OCR results was also explored. For this, the decision was made to follow the example of machine translation, which meant that a model10 was trained for the task of translating an OCR result containing errors into a perfectly correct text (Schaefer/Neudecker 2020). Finally, to complete the text recognition pipeline, a tool11 for Ground Truth-based quality evaluation of OCR results was implemented.
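As a rough illustration of the translation framing described above — mapping OCR output containing errors onto corrected text — the Python sketch below learns token-level fixes from aligned (OCR output, ground truth) line pairs. It is a deliberately crude stand-in: the project's actual post-correction model is a trained sequence-to-sequence network, and the data here is invented.

```python
# Crude stand-in for OCR post-correction as "translation": learn token-level
# fixes from aligned (OCR output, ground truth) line pairs. The real approach
# described above trains a sequence-to-sequence model; this data is invented.
from collections import Counter, defaultdict

pairs = [
    ("Tbe quick brown fox", "The quick brown fox"),
    ("tbe old bouse", "the old house"),
]

fixes = defaultdict(Counter)
for noisy, clean in pairs:
    for n_tok, c_tok in zip(noisy.split(), clean.split()):
        fixes[n_tok][c_tok] += 1      # how each OCR token maps to the truth

def correct(line):
    return " ".join(fixes[t].most_common(1)[0][0] if t in fixes else t
                    for t in line.split())

print(correct("tbe quick bouse"))     # -> "the quick house"
```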

Interestingly, the bottleneck in the software pipeline for the number of languages the system can comprehend is not the auto-transcription of spoken language using Whisper but DeepL. Language translation is, however, necessary to provide the English-language model of Stable Diffusion with a suitable prompt. The DeepL API is a machine translation service that uses artificial neural networks and deep learning techniques to provide high-quality translations between various languages. As of April 2023, DeepL is capable of translating around 30 languages, mostly European; these include only three languages spoken in South America and four Asian languages.4 Notably, DeepL does not offer translations for any languages from the Global South.
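For concreteness, here is a minimal sketch of calling DeepL from Python through its official client library (pip install deepl). The auth key is a placeholder, the German sample prompt is invented, and the exact output text will vary.

```python
# Minimal DeepL call via the official Python client (pip install deepl).
# The auth key is a placeholder; target_lang codes follow DeepL conventions.
import deepl

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key
result = translator.translate_text(
    "Ein Porträt einer Frau mit Perlenohrring, Ölgemälde",
    target_lang="EN-US",
)
print(result.text)                   # e.g. "A portrait of a woman with a pearl earring, oil painting"
print(result.detected_source_lang)   # e.g. "DE"
```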

pages: 137 words: 36,231

Information: A Very Short Introduction
by Luciano Floridi
Published 25 Feb 2010

It is likely that at least a number of these will prove sufficiently useful in certain applications to deserve further study and permanent recognition. It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field. (italics added) Indeed, Warren Weaver (1894–1978), one of the pioneers of machine translation and co-author with Shannon of The Mathematical Theory of Communication, supported a tripartite analysis of information in terms of 1) technical problems concerning the quantification of information and dealt with by Shannon's theory; 2) semantic problems relating to meaning and truth; and 3) what he called ‘influential’ problems concerning the impact and effectiveness of information on human behaviour, which he thought had to play an equally important role.

pages: 144 words: 43,356

Surviving AI: The Promise and Peril of Artificial Intelligence
by Calum Chace
Published 28 Jul 2015

In the small talk at the start and the end of the call she was able to enquire about their partners and children by name. It didn’t bother her at all that their ability to do the same was probably thanks at least in part to their own digital assistants. Several of the participants in the call did not speak English as their first language, so their words were translated by a real-time machine translation system. The words Julia heard them speak did not exactly match the movement of their mouths, but the system achieved a very believable representation of their vocal characteristics and their inflections. A couple of times during the call Hermione advised Julia to slow down, or get to the point faster, using the same psychological evaluation software which had helped to craft the sales message earlier, and also using body language and facial expression evaluation software.

pages: 510 words: 120,048

Who Owns the Future?
by Jaron Lanier
Published 6 May 2013

The act of cloud-based translation shrinks the economy by pretending the translators who provided the examples don’t exist. With each so-called automatic translation, the humans who were the sources of the data are inched away from the world of compensation and employment. At the end of the day, even the magic of machine translation is like Facebook, a way of taking free contributions from people and regurgitating them as bait for advertisers or others who hope to take advantage of being close to a top server. In a world of digital dignity, each individual will be the commercial owner of any data that can be measured from that person’s state or behavior.

M., 129–30, 261, 328 “Forum,” 214 Foucault, Michel, 308n 4chan, 335 4′33″ (Cage), 212 fractional reserve system, 33 Franco, Francisco, 159–60 freedom, 13–15, 32–33, 90–92, 277–78, 336 freelancing, 253–54 Free Print Shop, 228 “free rise,” 182–89, 355 free speech, 223, 225 free will, 166–68 “friction,” 179, 225, 230, 235, 354 Friendster, 180, 181 Fukuyama, Francis, 165, 189 fundamentalism, 131, 193–94 future: chaos in, 165–66, 273n, 331 economic analysis of, 1–3, 15, 22, 37, 38, 40–41, 42, 67, 122, 143, 148–52, 153, 155–56, 204, 208, 209, 236, 259, 274, 288, 298–99, 311, 362n, 363 humanistic economy for, 194, 209, 233–351 361–367 “humors” of, 124–40, 230 modern conception of, 123–40, 193–94, 255 natural basis of, 125, 127, 128–29 optimism about, 32–35, 45, 130, 138–40, 218, 230n, 295 politics of, 13–18, 22–25, 85, 122, 124–26, 128, 134–37, 199–234, 295–96, 342 technological trends in, 7–18, 21, 53–54, 60–61, 66–67, 85–86, 87, 97–98, 129–38, 157–58, 182, 188–90, 193–96, 217 utopian conception of, 13–18, 21, 30, 31, 37–38, 45–46, 96, 128, 130, 167, 205, 207, 265, 267, 270, 283, 290, 291, 308–9, 316 future-oriented money, 32–34, 35 Gadget, 186 Gallant, Jack, 111–12 games, 362, 363 Gates, Bill, 93 Gattaca, 130 Gawker, 118n Gelernter, David, 313 “general” machines, 158 General Motors, 56–57 general relativity theory, 167n Generation X, 346 genetic engineering, 130 genetics, 109–10, 130, 131, 146–47, 329, 366 genomics, 109–10, 146–47, 366 Germany, 45 Ghostery, 109 ghost suburbs, 296 Gibson, William, 137, 309 Gizmodo, 117–18 Global Business Network (GBN), 214–15 global climate change, 17, 32, 53, 132, 133, 134, 203, 266, 295, 296–97, 301–2, 331 global economy, 33n, 153–56, 173, 201, 214–15, 280 global village, 201 God, 29, 30–31, 139 Golden Goblet, 121, 121, 175, 328 golden rule, 335–36 gold standard, 34 Google, 14, 15, 19, 69, 74, 75–76, 90, 94, 106, 110, 120, 128, 153, 154, 170, 171, 174, 176, 180, 181–82, 188, 191, 192, 193, 199–200, 201, 209, 210, 217, 225, 227, 246, 249, 265, 267, 272, 278, 280, 286, 305n, 307, 309–10, 322, 325, 330, 344, 348, 352 Google Goggles, 309–10 Googleplex, 199–200 goops, 85–89, 99 Gore, Al, 80n Graeber, David, 30n granularity, 277 graph-shaped networks, 241, 242–43 Great Britain, 200 Great Depression, 69–70, 75, 135, 299 Great Recession, 31, 54, 60, 76–77, 204, 311, 336–37 Greece, 22–25, 45, 125 Grigorov, Mario, 267 guitars, 154 guns, 310–11 Gurdjieff, George, 215, 216 gurus, 211–13 hackers, 14, 82, 265, 306–7, 345–46 Hardin, Garrett, 66n Hartmann, Thom, 33n Hayek, Friedrich, 204 health care, 66–67, 95, 98–99, 100, 132–33, 153–54, 249, 253, 258, 337, 346 health insurance, 66–67, 95, 98–99, 100, 153–54 Hearts and Minds, 353n heart surgery, 11–13, 17, 18, 157–58 heat, 56 hedge funds, 69, 106, 137 Hephaestus, 22, 23 high-dimensional problems, 145 high-frequency trading, 56, 76–78, 154 highways, 79–80, 345 Hinduism, 214 Hippocrates, 124n Hiroshima bombing (1945), 127 Hollywood, 204, 206, 242 holographic radiation, 11 Homebrew Club, 228 homelessness, 151 homeopathy, 131–32 Homer, 23, 55 Honan, Mat, 82 housing market, 33, 46, 49–52, 61, 78, 95–96, 99, 193, 224, 227, 239, 245, 255, 274n, 289n, 296, 298, 300, 301 HTML, 227, 230 Huffington Post, 176, 180, 189 human agency, 8–21, 50–52, 85, 88, 91, 124–40, 144, 165–66, 175–78, 191–92, 193, 217, 253–64, 274–75, 283–85, 305–6, 328, 341–51, 358–60, 361, 362, 365–67 humanistic information economy, 194, 209, 233–351 361–367 human reproduction, 131 humors (tropes), 124–40, 157, 170, 230 hunter-gatherer societies, 131, 261–62 
hyperefficient markets, 39, 42–43 hypermedia, 224–30, 245 hyper-unemployment, 7–8 hypotheses, 113, 128, 151 IBM, 191 identity, 14–15, 82, 124, 173–74, 175, 248–51, 283–90, 305, 306, 307, 315–16, 319–21 identity theft, 82, 315–16 illusions, 55, 110n, 120–21, 135, 154–56, 195, 257 immigration, 91, 97, 346 immortality, 193, 218, 253, 263–64, 325–31, 367 imports, 70 income levels, 10, 46–47, 50–54, 152, 178, 270–71, 287–88, 291–94, 338–39, 365 incrementalism, 239–40 indentured servitude, 33n, 158 India, 54, 211–13 industrialization, 49, 83, 85–89, 123, 132, 154, 343 infant mortality rates, 17, 134 infinity, 55–56 inflation, 32, 33–34 information: age of, 15–17, 42, 166, 241 ambiguity of, 41, 53–54, 155–56 asymmetry of, 54–55, 61–66, 118, 188, 203, 246–48, 285–88, 291–92, 310 behavior influenced by, 32, 121, 131, 173–74, 286–87 collection of, 61–62, 108–9 context of, 143–44, 178, 188–89, 223–24, 225, 245–46, 247, 248–51, 338, 356–57, 360 correlations in, 75–76, 114–15, 192, 274–75 for decision-making, 63–64, 184, 266, 269–75, 284n digital networks for, see digital networks duplication of, 50–52, 61, 74, 78, 88, 223–30, 239–40, 253–64, 277, 317–24, 335, 349 economic impact of, 1–3, 8–9, 15–17, 18, 19–20, 21, 35, 60–61, 92–97, 118, 185, 188, 201, 207, 209, 241–43, 245–46, 246–48, 256–58, 263, 283–87, 291–303, 331, 361–67 in education, 92–97 encrypted, 14–15, 175, 239–40, 305–8, 345 false, 119–21, 186, 275n, 287–88, 299–300 filters for, 119–20, 200, 225, 356–57 free, 7–9, 15–16, 50–52, 61, 74, 78, 88, 214, 223–30, 239–40, 246, 253–64, 277, 317–24, 335, 349 history of, 29–31 human agency in, 22–25, 69–70, 120–21, 122, 190–91 interpretation of, 29n, 114–15, 116, 120–21, 129–32, 154, 158, 178, 183, 184, 188–89 investment, 59–60, 179–85 life cycle of, 175–76 patterns in, 178, 183, 184, 188–89 privacy of, see privacy provenance of, 245–46, 247, 338 sampling of, 71–72, 191, 221, 224–26, 259 shared, 50–52, 61, 74, 78, 88, 100, 223–30, 239–40, 253–64, 277, 317–24, 335, 349 signals in, 76–78, 148, 293–94 storage of, 29, 167n, 184–85; see also cloud processors and storage; servers superior, 61–66, 114, 128, 143, 171, 246–48 technology of, 7, 32–35, 49, 66n, 71–72, 109, 110, 116, 120, 125n, 126, 135, 136, 254, 312–16, 317 transparency of, 63–66, 74–78, 118, 190–91, 306–7 two-way links in, 1–2, 227, 245, 289 value of, 1–3, 15–16, 20, 210, 235–43, 257–58, 259, 261–63, 271–75, 321–24, 358–60 see also big data; data infrastructure, 79–80, 87, 179, 201, 290, 345 initial public offerings (IPOs), 103 ink, 87, 331 Inner Directeds, 215 Instagram, 2, 53 instant prices, 272, 275, 288, 320 insurance industry, 44, 56, 60, 66–67, 95, 98–99, 100, 153–54, 203, 306 intellectual property, 44, 47, 49, 60, 61, 96, 102, 183, 204, 205–10, 223, 224–26, 236, 239–40, 246, 253–64 intelligence agencies, 56, 61, 199–200, 291, 346 intelligence tests, 39, 40 interest rates, 81 Internet: advertising on, 14, 20, 24, 42, 66, 81, 107, 109, 114, 129, 154, 169–74, 177, 182, 207, 227, 242, 266–67, 275, 286, 291, 322–24, 347–48, 354, 355 anonymity of, 172, 248–51, 283–90 culture of, 13–15, 25 development of, 69, 74, 79–80, 89, 129–30, 159, 162, 190–96, 223, 228 economic impact of, 1–2, 18, 19–20, 24, 31, 43, 60–66, 79–82, 117, 136–37, 169–74, 181, 186 employment and, 2, 7–8, 56–57, 60, 71–74, 79, 117, 123, 135, 149, 178, 201, 257–58 file sharing on, 50–52, 61, 74, 78, 88, 100, 223–30, 239–40, 253–64, 277, 317–24, 335, 349 free products and services of, 7n, 10, 60–61, 73, 81, 82, 90, 94–96, 97, 128, 154, 176, 183, 187, 201, 205–10, 234, 246–48, 
253–64, 283–88, 289, 308–9, 317–24, 337–38, 348–50, 366 human contributions to, 19–21, 128, 129–30, 191–92, 253–64 identity in, 14–15, 82, 173–74, 175, 283–90, 315–16 investment in, 117–20, 181 legal issues in, 63, 79–82, 204, 206, 318–19 licensing agreements for, 79–82 as network, 2–3, 9, 11, 12, 14, 15, 16, 17, 19–21, 31, 49, 50–51, 53, 54–55, 56, 57, 75, 92, 129–30, 143–48, 228–29, 259, 286–87, 308–9 political aspect of, 13–15, 205–10 search engines for, 51, 60, 70, 81, 120, 191, 267, 289, 293; see also Google security of, 14–15, 175, 239–40, 305–8, 345 surveillance of, 1–2, 11, 14, 50–51, 64, 71–72, 99, 108–9, 114–15, 120–21, 152, 177n, 199–200, 201, 206–7, 234–35, 246, 272, 291, 305, 309–11, 315, 316, 317, 319–24 transparency of, 63–66, 176, 205–6, 278, 291, 308–9, 316, 336 websites on, 80, 170, 200, 201, 343 Internet2, 69 Internet service providers (ISPs), 171–72 Interstate Highway System, 79–80, 345 “In-valid,” 130 inventors, 117–20 investment, financial, 45, 50, 59–67, 74–80, 115, 116–20, 155, 179–85, 208, 218, 257, 258, 277–78, 298, 301, 348, 350 Invisible Hand humor, 126, 128 IP addresses, 248 iPads, 267 Iran, 199, 200 irony, 130 Islam, 184 Italy, 133 Jacquard programmable looms, 23n “jailbreaking,” 103–4 Japan, 85, 97, 98, 133 Jeopardy, 191 Jeremijenko, Natalie, 302 jingles, 267 jobs, see employment Jobs, Steve, 93, 166n, 192, 358 JOBS Act (2012), 117n journalism, 92, 94 Kapital, Das (Marx), 136 Keynesianism, 38, 151–52, 204, 209, 274, 288 Khan Academy, 94 Kickstarter, 117–20, 186–87, 343 Kindle, 352 Kinect, 89n, 265 “Kirk’s Wager,” 139 Klout, 365 Kodak, 2, 53 Kottke, Dan, 211 KPFA, 136 Kurzweil, Ray, 127, 325, 327 Kushner, Tony, 165, 189 LaBerge, Stephen, 162 labor, human, 85, 86, 87, 88, 99–100, 257–58, 292 labor unions, 44, 47–48, 49, 96, 239, 240 Laffer curve, 149–51, 150, 152 Las Vegas, Nev., 296, 298 lawyers, 98–99, 100, 136, 184, 318–19 leadership, 341–51 legacy prices, 272–75, 288 legal issues, 49, 63, 74–82, 98–99, 100, 104–5, 108, 136, 184, 204, 206, 318–19 Lehman Brothers, 188 lemonade stands, 79–82 “lemons,” 118–19 Lennon, John, 211, 213 levees, economic, 43–45, 46, 47, 48, 49–50, 52, 92, 94, 96, 98, 108, 171, 176n, 224–25, 239–43, 253–54, 263, 345 leveraged mortgages, 49–50, 61, 227, 245, 289n, 296 liberal arts, 97 liberalism, 135–36, 148, 152, 202, 204, 208, 235, 236, 251, 253, 256, 265, 293, 350 libertarianism, 14, 34, 80, 202, 208, 210, 262, 321 liberty, 13–15, 32–33, 90–92, 277–78, 336 licensing agreements, 79–82 “Lifestreams” (Gelernter), 313 Lights in the Tunnel, The (Ford), 56n Linux, 206, 253, 291, 344 litigation, 98–99, 100, 104–5, 108, 184 loans, 32–33, 42, 43, 74, 151–52, 306 local advantages, 64, 94–95, 143–44, 153–56, 173, 203, 280 Local/Global Flip, 153–56, 173, 280 locked-in software, 172–73, 182, 273–74 logical copies, 223 Long-Term Capital Management, 49, 74–75 looms, 22, 23n, 24 loopholes, tax, 77 lotteries, 338–39 lucid dreaming, 162 Luddites, 135, 136 lyres, 22, 23n, 24 machines, 19–20, 86, 92, 123, 129–30, 158, 261, 309–11, 328 see also computers “Machine Stops, The” (Forster), 129–30, 261, 328 machine translations, 19–20 machine vision, 309–11 McMillen, Keith, 117 magic, 110, 115, 151, 178, 216, 338 Malthus, Thomas, 132, 134 Malthusian humor, 125, 127, 132–33 management, 49 manufacturing sector, 49, 85–89, 99, 123, 154, 343 market economies, see economies, market marketing, 211–13, 266–67, 306, 346 “Markets for Lemons” problem, 118–19 Markoff, John, 213 marriage, 167–68, 274–75, 286 Marxism, 15, 22, 37–38, 48, 136–37, 262 as humor, 126 
mash-ups, 191, 221, 224–26, 259 Maslow, Abraham, 260, 315 Massachusetts Institute of Technology (MIT), 75, 93, 94, 96–97, 157–58, 184 mass media, 7, 66, 86, 109, 120, 135, 136, 185–86, 191, 216, 267 material extinction, 125 materialism, 125n, 195 mathematics, 11, 20, 40–41, 70, 71–72, 75–78, 116, 148, 155, 161, 189n, 273n see also statistics Matrix, The, 130, 137, 155 Maxwell, James Clerk, 55 Maxwell’s Demon, 55–56 mechanicals, 49, 51n Mechanical Turk, 177–78, 185, 187, 349 Medicaid, 99 medicine, 11–13, 17, 18, 54, 66–67, 97–106, 131, 132–33, 134, 150, 157–58, 325, 346, 363, 366–67 Meetings with Remarkable Men (Gurdjieff), 215 mega-dossiers, 60 memes, 124 Memex, 221n memories, 131, 312–13, 314 meta-analysis, 112 metaphysics, 12, 127, 139, 193–95 Metcalf’s Law, 169n, 350 Mexico City, 159–62 microfilm, 221n microorganisms, 162 micropayments, 20, 226, 274–75, 286–87, 317, 337–38, 365 Microsoft, 19, 89, 265 Middle Ages, 190 middle class, 2, 3, 9, 11, 16–17, 37–38, 40, 42–45, 47, 48, 49, 50, 51, 60, 74, 79, 91, 92, 95, 98, 171, 205, 208, 210, 224–25, 239–43, 246, 253–54, 259, 262, 263, 280, 291–94, 331, 341n, 344, 345, 347, 354 milling machines, 86 mind reading, 111 Minority Report, 130, 310 Minsky, Marvin, 94, 157–58, 217, 326, 330–31 mission statements, 154–55 Mixed (Augmented) Reality, 312–13, 314, 315 mobile phones, 34n, 39, 85, 87, 162, 172, 182n, 192, 229, 269n, 273, 314, 315, 331 models, economic, 40–41, 148–52, 153, 155–56 modernity, 123–40, 193–94, 255 molds, 86 monetization, 172, 176n, 185, 186, 207, 210, 241–43, 255–56, 258, 260–61, 263, 298, 331, 338, 344–45 money, 3, 21, 29–35, 86, 108, 124, 148, 152, 154, 155, 158, 172, 185, 241–43, 278–79, 284–85, 289, 364 monocultures, 94 monopolies, 60, 65–66, 169–74, 181–82, 187–88, 190, 202, 326, 350 Moondust, 362n Moore’s Law, 9–18, 20, 153, 274–75, 288 morality, 29–34, 35, 42, 50–52, 54, 71–74, 188, 194–95, 252–64, 335–36 Morlocks, 137 morning-after pill, 104 morphing, 162 mortality, 193, 218, 253, 263–64, 325–31, 367 mortgages, 33, 46, 49–52, 61, 78, 95–96, 99, 224, 227, 239, 245, 255, 274n, 289n, 296, 300 motivation, 7–18, 85–86, 97–98, 216 motivational speakers, 216 movies, 111–12, 130, 137, 165, 192, 193, 204, 206, 256, 261–62, 277–78, 310 Mozart, Wolfgang Amadeus, 23n MRI, 111n music industry, 11, 18, 22, 23–24, 42, 47–51, 54, 61, 66, 74, 78, 86, 88, 89, 92, 94, 95–96, 97, 129, 132, 134–35, 154, 157, 159–62, 186–87, 192, 206–7, 224, 227, 239, 253, 266–67, 281, 318, 347, 353, 354, 355, 357 Myspace, 180 Nancarrow, Conlon, 159–62 Nancarrow, Yoko, 161 nanopayments, 20, 226, 274–75, 286–87, 317, 337–38, 365 nanorobots, 11, 12, 17 nanotechnology, 11, 12, 17, 87, 162 Napster, 92 narcissism, 153–56, 188, 201 narratives, 165–66, 199 National Security Agency (NSA), 199–200 natural medicine, 131 Nelson, Ted, 128, 221, 228, 245, 349–50 Nelsonian systems, 221–30, 335 Nelson’s humor, 128 Netflix, 192, 223 “net neutrality,” 172 networked cameras, 309–11, 319 networks, see digital networks neutrinos, 110n New Age, 211–17 Newmark, Craig, 177n New Mexico, 159, 203 newspapers, 109, 135, 177n, 225, 284, 285n New York, N.Y., 75, 91, 266–67 New York Times, 109 Nobel Prize, 40, 118, 143n nodes, network, 156, 227, 230, 241–43, 350 “no free lunch” principle, 55–56, 59–60 nondeterministic music, 23n nonlinear solutions, 149–50 nonprofit share sites, 59n, 94–95 nostalgia, 129–32 NRO, 199–200 nuclear power, 133 nuclear weapons, 127, 296 nursing, 97–100, 123, 296n nursing homes, 97–100, 269 Obama, Barack, 79, 100 “Obamacare,” 100n obsolescence, 89, 95 oil 
resources, 43, 133 online stores, 171 Ono, Yoko, 212 ontologies, 124n, 196 open-source applications, 206, 207, 272, 310–11 optical illusions, 121 optimism, 32–35, 45, 130, 138–40, 218, 230n, 295 optimization, 144–47, 148, 153, 154–55, 167, 202, 203 Oracle, 265 Orbitz, 63, 64, 65 organ donors, 190, 191 ouroboros, 154 outcomes, economic, 40–41, 144–45 outsourcing, 177–78, 185 Owens, Buck, 256 packet switching, 228–29 Palmer, Amanda, 186–87 Pandora, 192 panopticons, 308 papacy, 190 paper money, 34n parallel computers, 147–48, 149, 151 paranoia, 309 Parrish, Maxfield, 214 particle interactions, 196 party machines, 202 Pascal, Blaise, 132, 139 Pascal’s Wager, 139 passwords, 307, 309 “past-oriented money,” 29–31, 35, 284–85 patterns, information, 178, 183, 184, 188–89 Paul, Ron, 33n Pauli exclusion principle, 181, 202 PayPal, 60, 93, 326 peasants, 565 pensions, 95, 99 Perestroika (Kushner), 165 “perfect investments,” 59–67, 77–78 performances, musical, 47–48, 51, 186–87, 253 perpetual motion, 55 Persian Gulf, 86 personal computers (PCs), 158, 182n, 214, 223, 229 personal information systems, 110, 312–16, 317 Pfizer, 265 pharmaceuticals industry, 66–67, 100–106, 123, 136, 203 philanthropy, 117 photography, 53, 89n, 92, 94, 309–11, 318, 319, 321 photo-sharing services, 53 physical trades, 292 physicians, 66–67 physics, 88, 153n, 167n Picasso, Pablo, 108 Pinterest, 180–81, 183 Pirate Party, 49, 199, 206, 226, 253, 284, 318 placebos, 112 placement fees, 184 player pianos, 160–61 plutocracy, 48, 291–94, 355 police, 246, 310, 311, 319–21, 335 politics, 13–18, 21, 22–25, 47–48, 85, 122, 124–26, 128, 134–37, 149–51, 155, 167, 199–234, 295–96, 342 see also conservatism; liberalism; libertarianism Ponzi schemes, 48 Popper, Karl, 189n popular culture, 111–12, 130, 137–38, 139, 159 “populating the stack,” 273 population, 17, 34n, 86, 97–100, 123, 125, 132, 133, 269, 296n, 325–26, 346 poverty, 37–38, 42, 44, 53–54, 93–94, 137, 148, 167, 190, 194, 253, 256, 263, 290, 291–92 power, personal, 13–15, 53, 60, 62–63, 86, 114, 116, 120, 122, 158, 166, 172–73, 175, 190, 199, 204, 207, 208, 278–79, 290, 291, 302–3, 308–9, 314, 319, 326, 344, 360 Presley, Elvis, 211 Priceline, 65 pricing strategies, 1–2, 43, 60–66, 72–74, 145, 147–48, 158, 169–74, 226, 261, 272–75, 289, 317–24, 331, 337–38 printers, 90, 99, 154, 162, 212, 269, 310–11, 316, 331, 347, 348, 349 privacy, 1–2, 11, 13–15, 25, 50–51, 64, 99, 108–9, 114–15, 120–21, 152, 177n, 199–200, 201, 204, 206–7, 234–35, 246, 272, 291, 305, 309–13, 314, 315–16, 317, 319–24 privacy rights, 13–15, 25, 204, 305, 312–13, 314, 315–16, 321–22 product design and development, 85–89, 117–20, 128, 136–37, 145, 154, 236 productivity, 7, 56–57, 134–35 profit margins, 59n, 71–72, 76–78, 94–95, 116, 177n, 178, 179, 207, 258, 274–75, 321–22 progress, 9–18, 20, 21, 37, 43, 48, 57, 88, 98, 123, 124–40, 130–37, 256–57, 267, 325–31, 341–42 promotions, 62 property values, 52 proprietary hardware, 172 provenance, 245–46, 247, 338 pseudo-asceticism, 211–12 public libraries, 293 public roads, 79–80 publishers, 62n, 92, 182, 277–78, 281, 347, 352–60 punishing vs. 
rewarding network effects, 169–74, 182, 183 quants, 75–76 quantum field theory, 167n, 195 QuNeo, 117, 118, 119 Rabois, Keith, 185 “race to the bottom,” 178 radiant risk, 61–63, 118–19, 120, 156, 183–84 Ragnarok, 30 railroads, 43, 172 Rand, Ayn, 167, 204 randomness, 143 rationality, 144 Reagan, Ronald, 149 real estate, 33, 46, 49–52, 61, 78, 95–96, 99, 193, 224, 227, 239, 245, 255, 274n, 289n, 296, 298, 300, 301 reality, 55–56, 59–60, 124n, 127–28, 154–56, 161, 165–68, 194–95, 203–4, 216–17, 295–303, 364–65 see also Virtual Reality (VR) reason, 195–96 recessions, economic, 31, 54, 60, 76–77, 79, 151–52, 167, 204, 311, 336–37 record labels, 347 recycling, 88, 89 Reddit, 118n, 186, 254 reductionism, 184 regulation, economic, 37–38, 44, 45–46, 49–50, 54, 56, 69–70, 77–78, 266n, 274, 299–300, 311, 321–22, 350–51 relativity theory, 167n religion, 124–25, 126, 131, 139, 190, 193–95, 211–17, 293, 300n, 326 remote computers, 11–12 rents, 144 Republican Party, 79, 202 research and development, 40–45, 85–89, 117–20, 128, 136–37, 145, 154, 215, 229–30, 236 retail sector, 69, 70–74, 95–96, 169–74, 272, 349–51, 355–56 retirement, 49, 150 revenue growth plans, 173n revenues, 149, 149, 150, 151, 173n, 225, 234–35, 242, 347–48 reversible computers, 143n revolutions, 199, 291, 331 rhythm, 159–62 Rich Dad, Poor Dad (Kiyosaki), 46 risk, 54, 55, 57, 59–63, 71–72, 85, 117, 118–19, 120, 156, 170–71, 179, 183–84, 188, 242, 277–81, 284, 337, 350 externalization of, 59n, 117, 277–81 risk aversion, 188 risk pools, 277–81, 284 risk radiation, 61–63, 118–19, 120, 156, 183–84 robo call centers, 177n robotic cars, 90–92 robotics, robots, 11, 12, 17, 23, 42, 55, 85–86, 90–92, 97–100, 111, 129, 135–36, 155, 157, 162, 260, 261, 269, 296n, 342, 359–60 Roman Empire, 24–25 root nodes, 241 Rousseau, Jean-Jacques, 129 Rousseau humor, 126, 129, 130–31 routers, 171–72 royalties, 47, 240, 254, 263–64, 323, 338 Rubin, Edgar, 121 rupture, 66–67 salaries, 10, 46–47, 50–54, 152, 178, 270–71, 287–88, 291–94, 338–39, 365 sampling, 71–72, 191, 221, 224–26, 259 San Francisco, University of, 190 satellites, 110 savings, 49, 72–74 scalable solutions, 47 scams, 119–21, 186, 275n, 287–88, 299–300 scanned books, 192, 193 SceneTap, 108n Schmidt, Eric, 305n, 352 Schwartz, Peter, 214 science fiction, 18, 126–27, 136, 137–38, 139, 193, 230n, 309, 356n search engines, 51, 60, 70, 81, 120, 191, 267, 289, 293 Second Life, 270, 343 Secret, The (Byrne), 216 securitization, 76–78, 99, 289n security, 14–15, 175, 239–40, 305–8, 345 self-actualization, 211–17 self-driving vehicles, 90–92, 98, 311, 343, 367 servants, 22 servers, 12n, 15, 31, 53–57, 71–72, 95–96, 143–44, 171, 180, 183, 206, 245, 358 see also Siren Servers “Sexy Sadie,” 213 Shakur, Tupac, 329 Shelley, Mary, 327 Short History of Progress, A (Wright), 132 “shrinking markets,” 66–67 shuttles, 22, 23n, 24 signal-processing algorithms, 76–78, 148 silicon chips, 10, 86–87 Silicon Valley, 12, 13, 14, 21, 34n, 56, 59, 60, 66–67, 70, 71, 75–76, 80, 93, 96–97, 100, 102, 108n, 125n, 132, 136, 154, 157, 162, 170, 179–89, 192, 193, 200, 207, 210, 211–18, 228, 230, 233, 258, 275n, 294, 299–300, 325–31, 345, 349, 352, 354–58 singularity, 22–25, 125, 215, 217, 327–28, 366, 367 Singularity University, 193, 325, 327–28 Sirenic Age, 66n, 354 Siren Servers, 53–57, 59, 61–64, 65, 66n, 69–78, 82, 91–99, 114–19, 143–48, 154–56, 166–89, 191, 200, 201, 203, 210n, 216, 235, 246–50, 258, 259, 269, 271, 272, 280, 285, 289, 293–94, 298, 301, 302–3, 307–10, 314–23, 326, 336–51, 354, 365, 366 Siri, 95 skilled labor, 99–100 
Skout, 280n Skype, 95, 129 slavery, 22, 23, 33n Sleeper, 130 small businesses, 173 smartphones, 34n, 39, 162, 172, 192, 269n, 273 Smith, Adam, 121, 126 Smolin, Lee, 148n social contract, 20, 49, 247, 284, 288, 335, 336 social engineering, 112–13, 190–91 socialism, 14, 128, 254, 257, 341n social mobility, 66, 97, 292–94 social networks, 18, 51, 56, 60, 70, 81, 89, 107–9, 113, 114, 129, 167–68, 172–73, 179, 180, 190, 199, 200–201, 202, 204, 227, 241, 242–43, 259, 267, 269n, 274–75, 280n, 286, 307–8, 317, 336, 337, 343, 349, 358, 365–66 see also Facebook social safety nets, 10, 44, 54, 202, 251, 293 Social Security, 251, 345 software, 7, 9, 11, 14, 17, 68, 86, 99, 100–101, 128, 129, 147, 154, 155, 165, 172–73, 177–78, 182, 192, 234, 236, 241–42, 258, 262, 273–74, 283, 331, 347, 357 software-mediated technology, 7, 11, 14, 86, 100–101, 165, 234, 236, 258, 347 South Korea, 133 Soviet Union, 70 “space elevator pitch,” 233, 342, 361 space travel, 233, 266 Spain, 159–60 spam, 178, 275n spending levels, 287–88 spirituality, 126, 211–17, 325–31, 364 spreadsheet programs, 230 “spy data tax,” 234–35 Square, 185 Stalin, Joseph, 125n Stanford Research Institute (SRI), 215 Stanford University, 60, 75, 90, 95, 97, 101, 102, 103, 162, 325 Starr, Ringo, 256 Star Trek, 138, 139, 230n startup companies, 39, 60, 69, 93–94, 108n, 124n, 136, 179–89, 265, 274n, 279–80, 309–10, 326, 341, 343–45, 348, 352, 355 starvation, 123 Star Wars, 137 star (winner-take-all) system, 38–43, 50, 54–55, 204, 243, 256–57, 263, 329–30 statistics, 11, 20, 71–72, 75–78, 90–91, 93, 110n, 114–15, 186, 192 “stickiness,” 170, 171 stimulus, economic, 151–52 stoplights, 90 Strangelove humor, 127 student debt, 92, 95 “Study 27,” 160 “Study 36,” 160 Sumer, 29 supergoop, 85–89 supernatural phenomena, 55, 124–25, 127, 132, 192, 194–95, 300 supply chain, 70–72, 174, 187 Supreme Court, U.S., 104–5 surgery, 11–13, 17, 18, 98, 157–58, 363 surveillance, 1–2, 11, 14, 50–51, 64, 71–72, 99, 108–9, 114–15, 120–21, 152, 177n, 199–200, 201, 206–7, 234–35, 246, 272, 291, 305, 309–11, 315, 316, 317, 319–24 Surviving Progress, 132 sustainable economies, 235–37, 285–87 Sutherland, Ivan, 221 swarms, 99, 109 synthesizers, 160 synthetic biology, 162 tablets, 85, 86, 87, 88, 113, 162, 229 Tahrir Square, 95 Tamagotchis, 98 target ads, 170 taxation, 44, 45, 49, 52, 60, 74–75, 77, 82, 149, 149, 150, 151, 202, 210, 234–35, 263, 273, 289–90 taxis, 44, 91–92, 239, 240, 266–67, 269, 273, 311 Teamsters, 91 TechCrunch, 189 tech fixes, 295–96 technical schools, 96–97 technologists (“techies”), 9–10, 15–16, 45, 47–48, 66–67, 88, 122, 124, 131–32, 134, 139–40, 157–62, 165–66, 178, 193–94, 295–98, 307, 309, 325–31, 341, 342, 356n technology: author’s experience in, 47–48, 62n, 69–72, 93–94, 114, 130, 131–32, 153, 158–62, 178, 206–7, 228, 265, 266–67, 309–10, 325, 328, 343, 352–53, 362n, 364, 365n, 366 bio-, 11–13, 17, 18, 109–10, 162, 330–31 chaos and, 165–66, 273n, 331 collusion in, 65–66, 72, 169–74, 255, 350–51 complexity of, 53–54 costs of, 8, 18, 72–74, 87n, 136–37, 170–71, 176–77, 184–85 creepiness of, 305–24 cultural impact of, 8–9, 21, 23–25, 53, 130, 135–40 development and emergence of, 7–18, 21, 53–54, 60–61, 66–67, 85–86, 87, 97–98, 129–38, 157–58, 182, 188–90, 193–96, 217 digital, 2–3, 7–8, 15–16, 18, 31, 40, 43, 50–51, 132, 208 economic impact of, 1–3, 15–18, 29–30, 37, 40, 53–54, 60–66, 71–74, 79–110, 124, 134–37, 161, 162, 169–77, 181–82, 183, 184–85, 218, 254, 277–78, 298, 335–39, 341–51, 357–58 educational, 92–97 efficiency of, 90, 118, 191 employment in, 
56–57, 60, 71–74, 79, 123, 135, 178 engineering for, 113–14, 123–24, 192, 194, 217, 218, 326 essential vs. worthless, 11–12 failure of, 188–89 fear of (technophobia), 129–32, 134–38 freedom as issue in, 32–33, 90–92, 277–78, 336 government influence in, 158, 199, 205–6, 234–35, 240, 246, 248–51, 307, 317, 341, 345–46, 350–51 human agency and, 8–21, 50–52, 85, 88, 91, 124–40, 144, 165–66, 175–78, 191–92, 193, 217, 253–64, 274–75, 283–85, 305–6, 328, 341–51, 358–60, 361, 362, 365–67 ideas for, 123, 124, 158, 188–89, 225, 245–46, 286–87, 299, 358–60 industrial, 49, 83, 85–89, 123, 132, 154, 343 information, 7, 32–35, 49, 66n, 71–72, 109, 110, 116, 120, 125n, 126, 135, 136, 254, 312–16, 317 investment in, 66, 181, 183, 184, 218, 277–78, 298, 348 limitations of, 157–62, 196, 222 monopolies for, 60, 65–66, 169–74, 181–82, 187–88, 190, 202, 326, 350 morality and, 50–51, 72, 73–74, 188, 194–95, 262, 335–36 motivation and, 7–18, 85–86, 97–98, 216 nano-, 11, 12, 17, 162 new vs. old, 20–21 obsolescence of, 89, 97 political impact of, 13–18, 22–25, 85, 122, 124–26, 128, 134–37, 199–234, 295–96, 342 progress in, 9–18, 20, 21, 37, 43, 48, 57, 88, 98, 123, 124–40, 130–37, 256–57, 267, 325–31, 341–42 resources for, 55–56, 157–58 rupture as concept in, 66–67 scams in, 119–21, 186, 275n, 287–88, 299–300 singularity of, 22–25, 125, 215, 217, 327–28, 366, 367 social impact of, 9–21, 124–40, 167n, 187, 280–81, 310–11 software-mediated, 7, 11, 14, 86, 100–101, 165, 234, 236, 258, 347 startup companies in, 39, 60, 69, 93–94, 108n, 124n, 136, 179–89, 265, 274n, 279–80, 309–10, 326, 341, 343–45, 348, 352, 355 utopian, 13–18, 21, 31, 37–38, 45–46, 96, 128, 130, 167, 205, 207, 265, 267, 270, 283, 290, 291, 308–9, 316 see also specific technologies technophobia, 129–32, 134–38 television, 86, 185–86, 191, 216, 267 temperature, 56, 145 Ten Commandments, 300n Terminator, The, 137 terrorism, 133, 200 Tesla, Nikola, 327 Texas, 203 text, 162, 352–60 textile industry, 22, 23n, 24, 135 theocracy, 194–95 Theocracy humor, 124–25 thermodynamics, 88, 143n Thiel, Peter, 60, 93, 326 thought experiments, 55, 139 thought schemas, 13 3D printers, 7, 85–89, 90, 99, 154, 162, 212, 269, 310–11, 316, 331, 347, 348, 349 Thrun, Sebastian, 94 Tibet, 214 Time Machine, The (Wells), 127, 137, 261, 331 topology, network, 241–43, 246 touchscreens, 86 tourism, 79 Toyota Prius, 302 tracking services, 109, 120–21, 122 trade, 29 traffic, 90–92, 314 “tragedy of the commons,” 66n Transformers, 98 translation services, 19–20, 182, 191, 195, 261, 262, 284, 338 transparency, 63–66, 74–78, 118, 176, 190–91, 205–6, 278, 291, 306–9, 316, 336 transportation, 79–80, 87, 90–92, 123, 258 travel agents, 64 Travelocity, 65 travel sites, 63, 64, 65, 181, 279–80 tree-shaped networks, 241–42, 243, 246 tribal dramas, 126 trickle-down effect, 148–49, 204 triumphalism, 128, 157–62 tropes (humors), 124–40, 157, 170, 230 trust, 32–34, 35, 42, 51–52 Turing, Alan, 127–28, 134 Turing’s humor, 127–28, 191–94 Turing Test, 330 Twitter, 128, 173n, 180, 182, 188, 199, 200n, 201, 204, 245, 258, 259, 349, 365n 2001: A Space Odyssey, 137 two-way links, 1–2, 227, 245, 289 underemployment, 257–58 unemployment, 7–8, 22, 79, 85–106, 117, 151–52, 234, 257–58, 321–22, 331, 343 “unintentional manipulation,” 144 United States, 25, 45, 54, 79–80, 86, 138, 199–204 universities, 92–97 upper class, 45, 48 used car market, 118–19 user interface, 362–63, 364 utopianism, 13–18, 21, 30, 31, 37–38, 45–46, 96, 128, 130, 167, 205, 207, 265, 267, 270, 283, 290, 291, 308–9, 316 value, economic, 21, 
33–35, 52, 61, 64–67, 73n, 108, 283–90, 299–300, 321–22, 364 value, information, 1–3, 15–16, 20, 210, 235–43, 257–58, 259, 261–63, 271–75, 321–24, 358–60 Values, Attitudes, and Lifestyles (VALS), 215 variables, 149–50 vendors, 71–74 venture capital, 66, 181, 218, 277–78, 298, 348 videos, 60, 100, 162, 185–86, 204, 223, 225, 226, 239, 240, 242, 245, 277, 287, 329, 335–36, 349, 354, 356 Vietnam War, 353n vinyl records, 89 viral videos, 185–86 Virtual Reality (VR), 12, 47–48, 127, 129, 132, 158, 162, 214, 283–85, 312–13, 314, 315, 325, 343, 356, 362n viruses, 132–33 visibility, 184, 185–86, 234, 355 visual cognition, 111–12 VitaBop, 100–106, 284n vitamins, 100–106 Voice, The, 185–86 “voodoo economics,” 149 voting, 122, 202–4, 249 Wachowski, Lana, 165 Wall Street, 49, 70, 76–77, 181, 184, 234, 317, 331, 350 Wal-Mart, 69, 70–74, 89, 174, 187, 201 Warhol, Andy, 108 War of the Worlds, The (Wells), 137 water supplies, 17, 18 Watts, Alan, 211–12 Wave, 189 wealth: aggregate or concentration of, 9, 42–43, 53, 60, 61, 74–75, 96, 97, 108, 115, 148, 157–58, 166, 175, 201, 202, 208, 234, 278–79, 298, 305, 335, 355, 360 creation of, 32, 33–34, 46–47, 50–51, 57, 62–63, 79, 92, 96, 120, 148–49, 210, 241–43, 270–75, 291–94, 338–39, 349 inequalities and redistribution of, 20, 37–45, 65–66, 92, 97, 144, 254, 256–57, 274–75, 286–87, 290–94, 298, 299–300 see also income levels weather forecasting, 110, 120, 150 weaving, 22, 23n, 24 webcams, 99, 245 websites, 80, 170, 200, 201, 343 Wells, H.

pages: 221 words: 46,396

The Left Case Against the EU
by Costas Lapavitsas
Published 17 Dec 2018

To find answers it is important to focus on the Greek ‘historical bloc’, to use Gramsci’s well-known term: in other words the alliance of dominant sections of the capitalist class with lower classes that plays a hegemonic role in the economy, politics, and culture of a country.32 It is not necessary here to engage in a sociological description of the Greek historical bloc during the decades following the country’s accession to the EEC in 1981. Suffice it to state that the dominant capitalist elements have included ship-owning, banking, construction, and manufacturing, which also have widespread ownership and control over the mass media. These class interests have had great influence over the state machine, translating into privileged access to public procurement and institutionalized tax avoidance. The Greek state in the post-war decades followed its own long historical tradition of deploying the forms of a democratic polity, while in practice treating society as an occupied territory for the purposes of tax and welfare provision.33 The resulting mechanisms of integration and social control revolved around party patronage, a characteristic feature of the Greek social formation since the middle of the nineteenth century.

pages: 565 words: 151,129

The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism
by Jeremy Rifkin
Published 31 Mar 2014

“Lionbridge Language Solution Provider Expands Opportunities with Translation Technology,” Microsoft Case Studies, July 9, 2013, http://www.microsoft.com/casestudies/Bing/Lionbridge/Language-Solution-Provider-Expands-Opportunities-with-Translation-Technology/710000001102 (accessed September 4, 2013). 42. Niko Papula, “Are Translators Losing Their Jobs Because of Machine Translation?,” Multilizer Translation Blog, April 13, 2011, http://translation-blog.multilizer.com/are-translators-losing-their-jobs-because-of-machine-translation/ (accessed September 6, 2013). Chapter 9 1. Harold Hotelling, “The General Welfare in Relation to Problems of Taxation and of Railway and Utility Rates,” Econometrica 6(3) (July, 1938): 242. 2. Ibid., 258. 3.

pages: 194 words: 57,434

The Age of AI: And Our Human Future
by Henry A Kissinger , Eric Schmidt and Daniel Huttenlocher
Published 2 Nov 2021

While their efforts had some success in laboratory settings, they failed to yield good results in the real world. The variability and subtlety of language did not reduce to simple rules. All this changed when, in 2015, developers began to apply deep neural networks to the problem. Suddenly, machine translation leaped forward. But its improvement did not just derive from the application of neural networks or machine-learning techniques. Rather, it sprang from new and creative applications of these approaches. These developments underscore a key point: from the basic building blocks of machine learning, developers have the capacity to continue innovating in brilliant ways, unlocking new AIs in the process.

pages: 523 words: 61,179

Human + Machine: Reimagining Work in the Age of AI
by Paul R. Daugherty and H. James Wilson
Published 15 Jan 2018

Neural networks that convert audio signals to text signals in a variety of languages. Applications include translation, voice command and control, audio transcription, and more. Natural language processing (NLP). A field in which computers process human (natural) languages. Applications include speech recognition, machine translation, and sentiment analysis. AI Applications Component Intelligent agents. Agents that interact with humans via natural language. They can be used to augment human workers working in customer service, human resources, training, and other areas of business to handle FAQ-type inquiries. Collaborative robotics (cobots).

pages: 561 words: 157,589

WTF?: What's the Future and Why It's Up to Us
by Tim O'Reilly
Published 9 Oct 2017

In their 2009 paper, “The Unreasonable Effectiveness of Data” (a homage in its title to Eugene Wigner’s classic 1960 talk, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”), Google machine learning researchers Alon Halevy, Peter Norvig, and Fernando Pereira explained the growing effectiveness of statistical methods in solving previously difficult problems such as speech recognition and machine translation. Much of the previous work had been grammar based. Could you construct what was in effect a vast piston engine that used its knowledge of grammar rules to understand human speech? Success had been limited. But that changed as more and more documents came online. A few decades ago, researchers relied on carefully curated corpora of human speech and writings that, at most, contained a few million words.
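
To make the statistical approach concrete: a minimal sketch, in Python, of the kind of grammar-free model the paper describes. The toy corpus and the numbers it produces are invented for illustration; web-scale systems do the same thing with billions of words.

    from collections import Counter

    # A toy corpus stands in for the web-scale text the authors describe.
    corpus = "the cat sat on the mat the cat ate the rat".split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    # Maximum-likelihood estimate of P(next | previous) from raw counts --
    # no grammar rules anywhere.
    def p(nxt, prev):
        return bigrams[(prev, nxt)] / unigrams[prev]

    print(p("cat", "the"))   # 2 of the 4 "the"s precede "cat" -> 0.5

Scaling the counts up, rather than refining the rules, is the point the paper’s title makes.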

CHAPTER 8: MANAGING A WORKFORCE OF DJINNS 155 breakthroughs and business processes: Steve Lohr, “The Origins of ‘Big Data’: An Etymological Detective Story,” New York Times, February 1, 2013, https://bits.blogs.nytimes.com/2013/02/01/the-origins-of-big-data-an-etymological-detective-story/. 155 speech recognition and machine translation: Alon Halevy, Peter Norvig, and Fernando Pereira, “The Unreasonable Effectiveness of Data,” IEEE Intelligent Systems, 1541–1672/09, retrieved March 31, 2017, https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf. 156 “the sexiest job of the 21st century”: Thomas Davenport and D.

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

Personal digital assistants, such as Apple’s Siri, respond to spoken commands and can answer simple questions and execute commands. Optical character recognition of handwritten and typewritten text is routinely used in applications such as mail sorting and digitization of old documents.66 Machine translation remains imperfect but is good enough for many applications. Early systems used the GOFAI approach of hand-coded grammars that had to be developed by skilled linguists from the ground up for each language. Newer systems use statistical machine learning techniques that automatically build statistical models from observed usage patterns.

13 K Kasparov, Garry 12 Kepler, Johannes 14 Knuth, Donald 14, 264 Kurzweil, Ray 2, 261, 269 L Lenat, Douglas 12, 263 Logic Theorist (system) 6 logicist paradigm, see Good Old-Fashioned Artificial Intelligence (GOFAI) Logistello 12 M machine intelligence; see also artificial intelligence human-level (HLMI) 4, 19–21, 27–35, 73–74, 207, 243, 264, 267 revolution, see intelligence explosion machine learning 8–18, 28, 121, 152, 188, 274, 290 machine translation 15 macro-structural development accelerator 233–235 malignant failure 123–126, 149, 196 Malthusian condition 163–165, 252 Manhattan Project 75, 80–87, 276 McCarthy, John 5–18 McCulloch–Pitts neuron 237 MegaEarth 56 memory capacity 7–9, 60, 71 memory sharing 61 Mill, John Stuart 210 mind crime 125–126, 153, 201–208, 213, 226, 297 Minsky, Marvin 18, 261, 262, 282 Monte Carlo method 9–13 Moore’s law 24–25, 73–77, 274, 286; see also computing power moral growth 214 moral permissibility (MP) 218–220, 297 moral rightness (MR) 217–220, 296, 297 moral status 125–126, 166–169, 173, 202–205, 268, 288, 296 Moravec, Hans 24, 265, 288 motivation selection 29, 127–129, 138–144, 147, 158, 168, 180–191, 222 definition 138 motivational scaffolding 191, 207 multipolar scenarios 90, 132, 159–184, 243–254, 301 mutational load 41 N nanotechnology 53, 94–98, 103, 113, 177, 231, 239, 276, 277, 299, 300 natural language 14 neural networks 5–9, 28, 46, 173, 237, 262, 274 neurocomputational modeling 25–30, 35, 61, 301; see also whole brain emulation (WBE) and neuromorphic AI neuromorphic AI 28, 34, 47, 237–245, 267, 300, 301 Newton, Isaac 56 Nilsson, Nils 18–20, 264 nootropics 36–44, 66–67, 201, 267 Norvig, Peter 19, 264, 282 O observation selection theory, see anthropics Oliphant, Mark 85 O’Neill, Gerard 101 ontological crisis 146, 197 optimality notions 10, 186, 194, 291–293 Bayesian agent 9–11 value learner (AI-VL) 194 observation-utility-maximizer (AI-OUM) 194 reinforcement learner (AI-RL) 194 optimization power 24, 62–75, 83, 92–96, 227, 274 definition 65 oracle AI 141–158, 222–226, 285, 286 definition 146 orthogonality thesis 105–109, 115, 279, 280 P paperclip AI 107–108, 123–125, 132–135, 153, 212, 243 Parfit, Derek 279 Pascal’s mugging 223, 298 Pascal’s wager 223 person-affecting perspective 228, 245–246, 301 perverse instantiation 120–124, 153, 190–196 poker 13 principal–agent problem 127–128, 184 Principle of Epistemic Deference 211, 221 Proverb (program) 12 Q qualia, see consciousness quality superintelligence 51–58, 72, 243, 272 definition 56 R race dynamic, see technology race rate of growth, see growth ratification 222–225 Rawls, John 150 Reagan, Ronald 86–87 reasons-based goal 220 recalcitrance 62–77, 92, 241, 274 definition 65 recursive self-improvement 29, 75, 96, 142, 259; see also seed AI reinforcement learning 12, 28, 188–189, 194–196, 207, 237, 277, 282, 290 resource acquisition 113–116, 123, 193 reward signal 71, 121–122, 188, 194, 207 Riemann hypothesis catastrophe 123, 141 robotics 9–19, 94–97, 117–118, 139, 238, 276, 290 Roosevelt, Franklin D. 85 RSA encryption scheme 80 Russell, Bertrand 6, 87, 139, 277 S Samuel, Arthur 12 Sandberg, Anders 265, 267, 272, 274 scanning, see whole brain emulation (WBE) Schaeffer, Jonathan 12 scheduling 15 Schelling point 147, 183, 296 Scrabble 13 second transition 176–178, 238, 243–245, 252 second-guessing (arguments) 238–239 seed AI 23–29, 36, 75, 83, 92–96, 107, 116–120, 142, 151, 189–198, 201–217, 224–225, 240–241, 266, 274, 275, 282 self-limiting goal 123 Shakey (robot) 6 SHRDLU (program) 6 Shulman, Carl
178–180, 265, 287, 300, 302, 304 simulation hypothesis 134–135, 143, 278, 288, 292 singleton 78–90, 95–104, 112–114, 115–126, 136, 159, 176–184, 242, 275, 276, 279, 281, 287, 299, 301, 303 definition 78, 100 singularity 1, 2, 49, 75, 261, 274; see also intelligence explosion social signaling 110 somatic gene therapy 42 sovereign AI 148–158, 187, 226, 285 speech recognition 15–16, 46 speed superintelligence 52–58, 75, 270, 271 definition 53 Strategic Defense Initiative (“Star Wars”) 86 strong AI 18 stunting 135–137, 143 sub-symbolic processing, see connectionism superintelligence; see also collective superintelligence, quality superintelligence and speed superintelligence definition 22, 52 forms 52, 59 paths to 22, 50 predicting the behavior of 108, 155, 302 superorganisms 178–180 superpowers 52–56, 80, 86–87, 91–104, 119, 133, 148, 277, 279, 296 types 94 surveillance 15, 49, 64, 82–85, 94, 117, 132, 181, 232, 253, 276, 294, 299 Szilárd, Leó 85 T TD-Gammon 12 Technological Completion Conjecture 112–113, 229 technology race 80–82, 86–90, 203–205, 231, 246–252, 302 teleological threads 110 Tesauro, Gerry 12 TextRunner (system) 71 theorem prover 15, 266 three laws of robotics 139, 284 Thrun, Sebastian 19 tool-AI 151–158 definition 151 treacherous turn 116–119, 128 Tribolium castaneum 154 tripwires 137–143 Truman, Harry 85 Turing, Alan 4, 23, 29, 44, 225, 265, 271, 272 U unemployment 65, 159–180, 287 United Nations 87–89, 252–253 universal accelerator 233 unmanned vehicle, see drone uploading, see whole brain emulation (WBE) utility function 10–11, 88, 100, 110, 119, 124–125, 133–134, 172, 185–187, 192–208, 290, 292, 293, 303 V value learning 191–198, 208, 293 value-accretion 189–190, 207 value-loading 185–208, 293, 294 veil of ignorance 150, 156, 253, 285 Vinge, Vernor 2, 49, 270 virtual reality 30, 31, 53, 113, 166, 171, 198, 204, 300 von Neumann probe 100–101, 113 von Neumann, John 44, 87, 114, 261, 277, 281 W wages 65, 69, 160–169 Watson (IBM) 13, 71 WBE, see whole brain emulation (WBE) Whitehead, Alfred N. 6 whole brain emulation (WBE) 28–36, 50, 60, 68–73, 77, 84–85, 108, 172, 198, 201–202, 236–245, 252, 266, 267, 274, 299, 300, 301 Wigner, Eugene 85 windfall clause 254, 303 Winston, Patrick 18 wire-heading 122–123, 133, 189, 194, 207, 282, 291 wise-singleton sustainability threshold 100–104, 279 world economy 2–3, 63, 74, 83, 159–184, 274, 277, 285 Y Yudkowsky, Eliezer 70, 92, 98, 106, 197, 211–216, 266, 273, 282, 286, 291, 299

pages: 855 words: 178,507

The Information: A History, a Theory, a Flood
by James Gleick
Published 1 Mar 2011

In 2008, Google created an early warning system for regional flu trends based on data no firmer than the incidence of Web searches for the word flu; the system apparently discovered outbreaks a week sooner than the Centers for Disease Control and Prevention. This was Google’s way: it approached classic hard problems of artificial intelligence—machine translation and voice recognition—not with human experts, not with dictionaries and linguists, but with its voracious data mining of trillions of words in more than three hundred languages. For that matter, its initial approach to searching the Internet relied on the harnessing of collective knowledge.

Journal of African Cultural Studies 16, no. 1 (2003): 107–17. Nagel, Ernest, and James R. Newman. Gödel’s Proof. New York: New York University Press, 1958. Napier, John. A Description of the Admirable Table of Logarithmes. Translated by Edward Wright. London: Nicholas Okes, 1616. Nemes, Tihamér. Cybernetic Machines. Translated by I. Földes. New York: Gordon & Breach, 1970. Neugebauer, Otto. The Exact Sciences in Antiquity. 2nd ed. Providence, R.I.: Brown University Press, 1957. ———. A History of Ancient Mathematical Astronomy. Studies in the History of Mathematics and Physical Sciences, vol. 1. New York: Springer-Verlag, 1975.

pages: 239 words: 70,206

Data-Ism: The Revolution Transforming Decision Making, Consumer Behavior, and Almost Everything Else
by Steve Lohr
Published 10 Mar 2015

By December of 2013, however, Krugman had become more impressed by advances in computing and he wrote an article, published on the Times’s Web site, explaining why he thinks Gordon is “probably wrong.” A decade ago, Krugman writes, “the field of artificial intelligence had marched from failure to failure. But something has happened—things that were widely regarded as jokes not long ago, like speech recognition, machine translation, self-driving cars, and so on, have suddenly become more or less working reality.” Data and software, Krugman observes, have forged the path to working artificial intelligence. “They’re using big data and correlations and so on,” he writes, “to implement algorithms—mindless algorithms, you might say.

pages: 245 words: 71,886

Spike: The Virus vs The People - The Inside Story
by Jeremy Farrar and Anjana Ahuja
Published 15 Jan 2021

The first signs of SARS were cases of undiagnosed pneumonia. Nobody knew it then but that single line marked the debut of a new disease, one that would come to be called Covid-19 and cause the biggest upheaval to the global order since the Second World War. The line clicks through to an imperfect machine translation of a story relating to ‘an urgent notice on treatment of pneumonia of unknown cause’ originally posted that evening by the Medical Administration of Wuhan Municipal Health Committee, concerning four patients with an unknown form of pneumonia. A report appended underneath that urgent notice adds worrying detail gleaned on 31 December: 27 people were in various hospitals in Wuhan with viral pneumonia or pulmonary (lung) infection.

pages: 253 words: 80,074

The Man Who Invented the Computer
by Jane Smiley
Published 18 Oct 2010

The machine was designed specifically to solve sets of linear simultaneous algebraic equations up to 30 × 30. All internal operations were carried on in binary arithmetic; the size of the numbers handled was up to 50 binary places (about 15 decimal places). Initial input of data was by means of standard IBM cards, with five 15-place numbers per card; the machine translated the numbers to binary numbers. The machine’s “memory” consisted of two rotating drums filled with small capacitors. The polarity of the charge on a given capacitor represented the binary digit standing in that position. A “clock” frequency of 60 cycles per second was used, the mechanical parts of the machine being driven with a synchronous motor.
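
A quick arithmetic check of those figures (a sketch, not from the book): 50 binary places are exactly what 15 decimal digits require, since each decimal digit carries log2(10) ≈ 3.32 bits.

    import math

    # 15 decimal digits need ceil(15 * log2(10)) binary places.
    print(math.ceil(15 * math.log2(10)))   # -> 50

    n = 999_999_999_999_999                # largest 15-digit decimal number
    print(len(bin(n)) - 2)                 # -> 50 bits, matching the drum's word length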

pages: 268 words: 75,850

The Formula: How Algorithms Solve All Our Problems-And Create More
by Luke Dormehl
Published 4 Nov 2014

Facial recognition, for instance, was once considered to be a trait performable only by a select few higher-performing animals—humans among them. Today algorithms employed by Facebook and Google regularly recognize individual faces out of the billions of personal images uploaded by users. Much the same is true of language and automated translation. “There is no immediate or predictable prospect of useful machine translation,” concluded a U.S. National Academy of Sciences committee in 1965. Leap forward half a century and Google Translate is used on a daily basis, offering two-way translation between 58 different languages: 3,306 separate translation services in all. “The service that Google provides appears to flatten and diversify inter-language relations beyond the wildest dreams of even the E.U.’s most enthusiastic language parity proponents,” writes David Bellos, author of Is That a Fish in Your Ear?

Mastering Structured Data on the Semantic Web: From HTML5 Microdata to Linked Open Data
by Leslie Sikos
Published 10 Jul 2015

Watson’s cognitive computing algorithms are used in health care, to provide insight and decision support, and give personalized and instant responses to any inquiry or service issue in customer service. Developers can use the IBM Watson Developers Cloud, a collection of REST APIs and SDKs for cognitive computing tasks, such as natural language classification, concept expansion, machine translation, relationship extraction, speech to text, and visual recognition. Among other technologies, Watson uses triplestores such as Sesame, ontologies, and inference. The information resources of DeepQA consist of unstructured, semistructured, and structured data, of which the last one plays an important role, mainly by implementing RDF in IBM’s DB2 database platform [4].
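
The RDF model the passage mentions reduces to a simple data structure: facts stored as subject-predicate-object triples. A minimal sketch in Python (the triples are invented examples, not IBM's actual data) of how such a store can be pattern-matched:

    # Facts as (subject, predicate, object) triples -- the RDF data model.
    triples = {
        ("Watson", "builtBy", "IBM"),
        ("Watson", "uses", "Sesame"),
        ("Sesame", "isA", "triplestore"),
    }

    # Tiny pattern query: what does Watson use?
    print([o for (s, p, o) in triples if s == "Watson" and p == "uses"])
    # -> ['Sesame']

Triplestores such as Sesame add indexing, SPARQL querying, and inference on top of this same triple structure.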

pages: 287 words: 80,180

Blue Ocean Strategy, Expanded Edition: How to Create Uncontested Market Space and Make the Competition Irrelevant
by W. Chan Kim and Renée A. Mauborgne
Published 20 Jan 2014

It eliminated interoperability with a host of operating systems, ranging from SCO UNIX to OS/3 to DOS, which were extraneous to these basic functions. The new PC server gave buyers twice a minicomputer’s file and print sharing capability and speed at one-third the price. As for Compaq, the dramatically simplified machines translated into much lower manufacturing costs. Compaq’s creation of the ProSignia, and three subsequent offerings in the PC server industry, not only fueled PC sales but also grew the PC server industry into a $3.8 billion industry in less than four years.22 Dell Computer In the mid-1990s, Dell Computer Corporation created another blue ocean in the computer industry.

Paper Knowledge: Toward a Media History of Documents
by Lisa Gitelman
Published 26 Mar 2014

Berkeley: University of California Press, 1981. DeJean, Joan. The Reinvention of Obscenity: Sex, Lies, and Tabloids in Early Modern France. Chicago: University of Chicago Press, 2002. Denning, Michael. The Cultural Front: The Laboring of American Culture in the Twentieth Century. London: Verso, 1996. Derrida, Jacques. Paper Machine. Translated by Rachel Bowlby. Stanford, CA: Stanford University Press, 2005. ———. The Postcard: From Socrates to Freud and Beyond. Translated by Alan Bass. Chicago: University of Chicago Press, 1987. Dessauer, John. My Years with Xerox: The Billions Nobody Wanted. Garden City, NY: Doubleday, 1971. DeVinne, Theodore Low.

pages: 296 words: 78,112

Devil's Bargain: Steve Bannon, Donald Trump, and the Storming of the Presidency
by Joshua Green
Published 17 Jul 2017

Outside IBM, their unorthodox approach to translation was greeted with hostility (“the crude force of computers is not science,” huffed one linguist at a professional conference who reviewed their work). But pattern-hunting worked. A computer could learn to recognize patterns without regard for the rules of grammar and still produce a successful translation. “Statistical machine translation,” as the process became known, soon outpaced the old method and went on to become the basis of modern speech-recognition software and tools such as Google Translate. At Renaissance, Mercer and Brown applied this approach broadly to the markets, feeding all kinds of abstruse data into their computers in a never-ending hunt for hidden correlations.
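
The pattern-hunting idea can be sketched in a few lines. Below is a stripped-down version, in Python, of the expectation-maximization procedure behind IBM-style word-translation probabilities (the three-sentence parallel corpus is invented for illustration, and the NULL word of the full model is omitted): it learns that "haus" goes with "house" purely from co-occurrence, with no grammar in sight.

    from collections import defaultdict

    # Toy parallel corpus of (foreign, English) sentence pairs.
    pairs = [("das haus", "the house"),
             ("das buch", "the book"),
             ("ein buch", "a book")]
    pairs = [(f.split(), e.split()) for f, e in pairs]

    f_vocab = {w for f, _ in pairs for w in f}
    e_vocab = {w for _, e in pairs for w in e}

    # t(f|e): word-translation probabilities, initialized uniformly.
    t = {(f, e): 1.0 / len(f_vocab) for f in f_vocab for e in e_vocab}

    for _ in range(10):                    # EM iterations
        count = defaultdict(float)         # expected counts c(f, e)
        total = defaultdict(float)         # expected counts c(e)
        for f_sent, e_sent in pairs:
            for f in f_sent:
                norm = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    delta = t[(f, e)] / norm
                    count[(f, e)] += delta
                    total[e] += delta
        for (f, e) in t:                   # re-estimate from expected counts
            if total[e] > 0:
                t[(f, e)] = count[(f, e)] / total[e]

    print(max(e_vocab, key=lambda e: t[("haus", e)]))   # -> house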

pages: 317 words: 84,400

Automate This: How Algorithms Came to Rule Our World
by Christopher Steiner
Published 29 Aug 2012

He met Kahler in San Diego, where he proved a quick learner of the theory Kahler, McGuire, and NASA had developed. 5. Sebastian Mallaby, More Money Than God: Hedge Funds and the Making of a New Elite (New York: Penguin Press, 2010). 6. Peter Brown, Robert Mercer, Stephen Della Pietra, and Vincent J. Della Pietra, “The Mathematics of Statistical Machine Translation: Parameter Estimation,” Computational Linguistics 19, no. 2 (1993): 263–311. 7. Ingfei Chen, “Scientist at Work: Nick Patterson,” New York Times, December 12, 2006. CHAPTER 8: WALL STREET VERSUS SILICON VALLEY 1. Rana Foroohar, “Wall Street: Aiding the Economic Recovery, or Strangling It?”

pages: 294 words: 82,438

Simple Rules: How to Thrive in a Complex World
by Donald Sull and Kathleen M. Eisenhardt
Published 20 Apr 2015

Kay, The Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology (New York: Oxford University Press, 1996); Raymond Fosdick, The Story of the Rockefeller Foundation (New York: Harper & Brothers, 1952); Linus Pauling, “How My Interest in Proteins Developed,” Protein Science 2 (1993): 1060–63; and Barbara Marianacci, Linus Pauling in His Own Words (New York: Simon & Schuster, 1995). [>] Over his three: Fosdick, The Story of the Rockefeller Foundation, 159. [>] Eighteen scientists: Weaver, Scene of Change, 73. [>] He wrote a seminal: Warren Weaver, “Translation” (unpublished memorandum, Rockefeller Foundation, July 15, 1949), available at Machine Translation Archive, http://www.mt-archive.info/Weaver-1949.pdf; and Matt Novak, “The Cold War Origins of Google Translate,” BBC Online, May 30, 2012, http://www.bbc.com/future/story/20120529-a-cold-war-google-translate. Weaver also coauthored, with Claude E. Shannon, The Mathematical Theory of Communication (Champaign: University of Illinois Press, 1949), which laid out the principles required to build modern telecommunications networks, including the Internet. [>] When India and: Justin Gillis, “Norman Borlaug, Plant Scientist Who Fought Famine, Dies at 95,” New York Times, September 13, 2009. [>] In his 1948 article: Weaver, “Science and Complexity,” 536–44.

pages: 244 words: 81,334

Picnic Comma Lightning: In Search of a New Reality
by Laurence Scott
Published 11 Jul 2018

Irby (London: Penguin Classics, 2000). 2 ‘I always think …’, see David Marchese, ‘In Conversation: Sarah Silverman’, Vulture.com, 10th October 2017; ‘I want to …’, see Claire Fallon, ‘Zadie Smith Thinks We Should “Retain The Right To Be Wrong”’, www.huffingtonpost.co.uk, 19th September 2017. 3 ‘All [these essays] …’, Zadie Smith, Feel Free (London: Hamish Hamilton, 2018). 4 ‘thankfully do not …’, Zadie Smith, ‘Revenge of the Real’, Guardian, 21st November 2009; ‘ideological inconsistency is …’, Zadie Smith, Changing My Mind (London: Hamish Hamilton, 2009). 5 ‘Blushing is the …’, Charles Darwin, The Expression of the Emotions in Man and Animals (Oxford: Oxford University Press, 1998 [1872]). 6 ‘a blush is …’, George Eliot, Daniel Deronda (London: Penguin Classics, 1995 [1876]). 7 ‘the novel task …’, Lotem Peled and Roi Reichart, ‘Sarcasm SIGN: Interpreting Sarcasm with Sentiment Based Monolingual Machine Translation’, eprint arXiv:1704.06836, April 2017. 8 ‘There are countless …’, for a valuation forecast of emotional data, see www.marketsandmarkets.com, ‘Emotion Detection and Recognition Market by Technology …’, December 2016. 9 ‘glue’, see ‘The Future of Digital – The Segment of One’, D&AD YouTube Channel, published 29th August 2014; ‘emotional stimuli’, John B.

pages: 328 words: 84,682

The Business of Platforms: Strategy in the Age of Digital Competition, Innovation, and Power
by Michael A. Cusumano , Annabelle Gawer and David B. Yoffie
Published 6 May 2019

The market was still like the Wild West—more chaos than order. Between 2017 and 2018, improvements in machine learning and deep learning were creating better voice experiences across all competitors. Google appeared to be the technical leader in AI, with many applications in search, advertisements, and machine translation, among others. Apple, which lagged behind in early benchmarks, was improving quickly, as were the benchmarks for Microsoft’s Cortana and Amazon’s Alexa.4 In 2018, Google had the advantage of hundreds of millions of devices (Android smartphones) that have Google’s voice capabilities embedded.

The Fractalist
by Benoit Mandelbrot
Published 30 Oct 2012

His manager was the physicist Michael Watanabe (1910–93), whose Ph.D. chairman in Paris had been Louis de Broglie. The manager above that was Nathaniel Rochester (1919–2001), a career IBM engineer credited with a near replica of von Neumann’s pioneering Princeton computer. Staff was needed for a machine translation project—very premature but well supported—and I was a rare warm body with good name recognition for my work in linguistics. I told them that a very nice job was waiting for me in Lille, and that my interests had shifted. Their answer was that they needed good people in every area, and they reluctantly changed their offer to regular summer visits, beginning in 1958.

pages: 292 words: 94,324

How Doctors Think
by Jerome Groopman
Published 15 Jan 2007

The shape of the aorta was not distorted, so the tear was not visualized in the absence of contrast material that would enter the torn part of the wall. "The old idea based on a static study—that an image is an image is an image—is obsolete now with technology that is dynamic, that can show us active changes in blood flow and other aspects of physiology," Kressel said. "How you use the machine translates into what you get to see." The problem with applying a methodical and rigorous approach to every image, Kressel said, is that with a CT scan or an MRI, "there is just so much data." There may be more than a thousand images per CT scan or per MRI, so that a single radiologist's entire day could well be occupied with looking at just one of these studies.

pages: 323 words: 90,868

The Wealth of Humans: Work, Power, and Status in the Twenty-First Century
by Ryan Avent
Published 20 Sep 2016

Processing power is not productivity growth, and cheap supercomputers in our pockets will not be economically transformative if we can’t come up with economically transformative things to do with them. But it would be surprising if exponential advance in computing didn’t generate dramatic economic change, given the general-purpose nature of the technology. Most of what humans do when they are working boils down to computing. Sceptics regarding the possibility of instant machine translation didn’t argue that it was impossible because language was about more than computing; they argued that it was impossible because it required really hard computing. But really hard computing is precisely where exponential advance in information processing comes in handy. If driverless vehicles were all the revolution managed to produce, the economic and social impact would be stunning.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

An obscure Soviet physicist had discovered an interesting property of radar waves shedding off flat planes. There being no obvious national security use for the finding, the incautious Soviets published his paper in a Russian language technical journal, one read by the machines of the National Security Agency.16 But machine translation using traditional symbolic AI usually made for extremely clunky transliterations that were sometimes barely comprehensible. Radical improvements to these problems arrived in the twenty first century with the reboot of connectionism. The modern AI revolution is still only a few years old, and it continues to amaze.

pages: 339 words: 94,769

Possible Minds: Twenty-Five Ways of Looking at AI
by John Brockman
Published 19 Feb 2019

One was based on understanding, at a formal level, symbolically, how the world works; and the other was based on doing statistics and probabilistic kinds of things. With regard to symbolic AI, one of the test cases was, Can we teach a computer to do something like integrals? Can we teach a computer to do calculus? There were tasks like machine translation, which people thought would be a good example of what computers could do. The bottom line is that by the early seventies, that approach had crashed. Then there was a trend toward devices called expert systems, which arose in the late seventies and early eighties. The idea was to have a machine learn the rules that an expert uses and thereby figure out what to do.

The Internet Trap: How the Digital Economy Builds Monopolies and Undermines Democracy
by Matthew Hindman
Published 24 Sep 2018

Google’s investments in deep learning have been massive and multifaceted, including (among other things) major corporate acquisitions and the development of the TensorFlow high-level programming toolkit.24 But one critical component has been the development of a custom computer chip built specially for machine learning. Google’s Tensor Processing Units (TPUs) offer up to eighty times more processing power per watt for tasks like image processing or machine translation, another advantage Google has over competitors.25 These investments in the data center would mean little, of course, without similarly massive investments tying them to the outside world. Google’s data centers are connected with custom-built high-radix switches that provide terabits per second of bandwidth.

pages: 332 words: 93,672

Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy
by George Gilder
Published 16 Jul 2018

It improves the efficiency and liquidity of markets at the cost of creating Siren Servers that lure the unwary into sterile fields of algorithmic finance. At IBM, by contrast, Mercer and his colleagues under Jelinek achieved a permanent advance in computer science, information theory, and speech recognition. Their discoveries are behind the Siri system in your iPhone, hands-free calling in your car, and the growing success of machine translation. They enabled the ever-improving responsiveness of voice interfaces to the cloud computing technologies in the new generation of Internet progress. In the process, Mercer and his team pioneered the field of big data, which dominates the current computer paradigm. Competing with Kurzweil and other pioneers of AI-based systems attempting to duplicate human experts—from chess to translation—the IBM team faced the possibility of refutation and failure.

pages: 826 words: 231,966

GCHQ
by Richard Aldrich
Published 10 Jun 2010

Brian Stewart, Secretary of the JIC, created a joint team on Automatic Data Processing which also comprised MI5, SIS, the Defence Intelligence Staff and the Foreign Office. Teddy Poulden from GCHQ was given the job of chairing it.47 GCHQ and NSA had just completed a shared computer project to standardise the spelling of geographical locations in Russian.48 What GCHQ really thirsted for was progress on machine translation that would do some of the jobs currently undertaken by linguists, but so far this had failed on grounds of high costs and complexity.49 The Defence Intelligence Staff had looked at storing more of its material on computer, but had been horrified by the sheer labour required to keep such databases current.

As a result of the growth of satellite collection, GCHQ was working more closely with NSA than ever before, notably by processing about 15 per cent of the ‘overhead’ material in its highly secret J-Ops section. Yet, paradoxically, GCHQ was also being left behind, and the underlying feeling was one of growing ‘unspecialness’. Indeed, with the possibility of machine translation beckoning, there was even a danger that the Americans might eventually view GCHQ as expendable. Therefore, in the 1980s both GCHQ and NSA were reconsidering their intelligence alliances, not only with each other, but also with the long-established ‘second party’ members of the original UKUSA agreements such as New Zealand, and even with the ‘third party’ sigint services in Western Europe, such as Sweden’s FRA and Germany’s BND.

pages: 385 words: 99,985

Pattern Recognition
by William Gibson
Published 2 Jan 2003

In any case, et unpleasant cetera, I took the opportunity to exit F:F:F, made additionally unbearable by the pomo bellowings of fat cow A., and get together netwise with Darryl, to do further work on the result of some kanji-cruising we did while I was in California. Darryl, AKA Musashi, is a California footagehead fluent in Japanese. The Japanese footage sites, resisting machine translation, are an area that fascinates Parkaboy. With Musashi as translator, Parkaboy has made several forays already, posting the results of his research on F:F:F. Cayce has looked at these sites, but, aside from being incomprehensible, the text, which comes up on non-kanji screens as a frantic-looking slaw of Romanic symbols, reminds her too much of the archaic cartoon convention for swearing; it looks like fizzing, apoplectic rage.

pages: 319 words: 95,854

You Are What You Speak: Grammar Grouches, Language Laws, and the Politics of Identity
by Robert Lane Greene
Published 8 Mar 2011

Liberman explains the genesis of the LDC, going into more detail than a casual visitor needs but showing the pride of its director. DARPA is famous for funding blue-sky research into technologies so far ahead of the curve that its projects either fail spectacularly or create technological great leaps forward that the market would never deliver. DARPA’s interest in linguistics is obvious; machine translation and “defined-item recognition” (such as finding the name “Osama bin Laden” on blogs and television broadcasts and in wiretaps) are clearly interesting to the Pentagon. But the center does no classified research; by the terms of its grant, the LDC must share its work. Over the course of lunch, I find Liberman open-minded about everything we discuss, though he is plenty opinionated.

pages: 326 words: 103,170

The Seventh Sense: Power, Fortune, and Survival in the Age of Networks
by Joshua Cooper Ramo
Published 16 May 2016

To suddenly switch the world’s airline pilots, bond traders, and computer programmers to Chinese or Spanish would hardly be worth the immense cost. But it’s here where that Seventh Sense axiom, that connection changes the nature of an object, comes into play. For the first time, as a result of constant connectivity, a once-unimagined possibility exists: real-time machine translation. Fast, ubiquitous networks mean that the central role of English will be boiled away someday not by another language but by an intelligent translation computer, available anytime, anywhere. You’ll say, “Good morning,” as you climb into a taxi in Barcelona and your taxi driver will hear Buenos días.

Language and Mind
by Noam Chomsky
Published 1 Jan 1968

The interdisciplinary conferences on speech analysis of the early 1950s make interesting reading today. There were few so benighted as to question the possibility, in fact the immediacy, of a final solution to the problem of converting speech into writing by available engineering technique. And just a few years later, it was jubilantly discovered that machine translation and automatic abstracting were also just around the corner. For those who sought a more mathematical formulation of the basic processes, there was the newly developed mathematical theory of communication, which, it was widely believed in the early 1950s, had provided a fundamental concept – the concept of “information” – that would unify the social and behavioral sciences and permit the development of a solid and satisfactory mathematical theory of human behavior on a probabilistic base.

pages: 337 words: 103,522

The Creativity Code: How AI Is Learning to Write, Paint and Think
by Marcus Du Sautoy
Published 7 Mar 2019

His point was that provided the things had the relationship expressed by the axioms, then the deductions would make as much sense for chairs and beer mugs as for geometric lines and planes. This allows the computer to follow rules and create mathematical deductions without really knowing what the rules are about. This will be relevant when we come later to the idea of the Chinese room experiment devised by John Searle. This thought experiment explores the idea of machine translation and tries to illustrate why following rules doesn’t show intelligence or understanding. Nevertheless, follow the rules of the mathematical game and you get mathematical theorems. But where did this urge to establish proof in mathematics come from? A little bit of experimenting will reveal that every number can be written as prime numbers multiplied together and there always seems to be only one way to break down the number.

pages: 346 words: 97,330

Ghost Work: How to Stop Silicon Valley From Building a New Global Underclass
by Mary L. Gray and Siddharth Suri
Published 6 May 2019

The types of micro-tasks available to workers on UHRS are not surprising if you think about the products that Microsoft sells. Workers review voice recordings, rating the sound quality of the recorded clip. They check written text to ensure it’s not peppered with adult content. Another popular task is translation. Microsoft’s strength in speech recognition and machine translation comes from the ghost work of people training algorithms with accurate data sets. They create them by listening to short audio recordings of one sentence in one language, typically English, and entering the translation of the sentence in their mother tongue in an Excel file. Other common types of work on UHRS are market surveys—often restricted by demographics like age, gender, and location—and a task called “sentiment analysis.”

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

Venture capital firms are throwing money at anyone who can say “deep learning” with a straight face. Here Rosenblatt deserves credit because many of his ambitious predictions have come true. The algorithms and basic architecture behind modern AI—machines that mimic human intelligence—are pretty much the same as he envisioned. Facial recognition technology, virtual assistants, machine translation systems, and stock-trading bots are all built upon perceptron-like algorithms. Most of the recent breakthroughs in machine learning—a subdiscipline of AI that studies algorithms designed to learn from data—can be ascribed to enormous leaps in the amount of data available and the processing power to deal with it, rather than to a fundamentally different approach.*1 Indeed, machine learning and artificial intelligence live and die by the data they employ.
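
Rosenblatt's learning rule itself fits in a dozen lines. A minimal sketch in Python (the toy data is invented for illustration) of the mistake-driven update at the heart of those perceptron-like algorithms:

    # Toy linearly separable data: label is the sign of x1 + x2 - 1.
    data = [((0.2, 0.1), -1), ((0.9, 0.8), 1),
            ((0.4, 0.3), -1), ((1.0, 0.5), 1)]

    w = [0.0, 0.0]   # weights
    b = 0.0          # bias

    for _ in range(20):                    # training epochs
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:              # mistake: nudge the boundary
                w[0] += label * x1
                w[1] += label * x2
                b += label

    print(w, b)   # a separating line, learned from mistakes alone

Modern deep networks stack many such units and train them on far more data with far more compute, which is the leap the authors describe.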

pages: 345 words: 104,404

Pandora's Brain
by Calum Chace
Published 4 Feb 2014

It’s ridiculous to say that AI has made no progress. Self-driving cars are legal on public roads in parts of the US, and they will be legal over here soon too. Computers can recognise faces as well as you and I can: a lot of people said that would be in the ‘too-hard’ box for decades. Real-time machine translation is getting seriously impressive. This is all driven by the hugely increased processing power at researchers’ disposal, so they are going back to their original goal of developing a human-level intelligence which will pass a robust version of the Turing Test. A conscious machine.’ Carl wrinkled his nose and shook his head dismissively.

pages: 371 words: 108,317

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future
by Kevin Kelly
Published 6 Jun 2016

Google’s translation AI turns a phone into a personal translator. Speak English into the microphone and it immediately repeats what you said in understandable Chinese, or Russian, or Arabic, or dozens of other languages. Point the phone to the recipient and the app will instantly translate their reply. The machine translator does Turkish to Hindi, or French to Korean, etc. It can of course translate any text. High-level diplomatic translators won’t lose their jobs for a while, but day-to-day translating chores in business will all be better done by machines. In fact, any job dealing with reams of paperwork will be taken over by bots, including much of medicine.

pages: 331 words: 104,366

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins
by Garry Kasparov
Published 1 May 2017

Using Google again you can find some obscure government papers in which ЧЯТ is indeed used as an acronym for чувствительных ядерных технологий, or “sensitive nuclear technologies.” This is unlikely to cause a panic because the humans seeing it likely have enough common sense of their own to know something strange is going on and to blame the machine translation instead of raising the nuclear alert level to DEFCON 2. But what if military AI algorithms are making that decision, not humans? What about the security agencies that rely on computer acquisition and analysis of terrorist “chatter”? They aren’t going to show each tweet to a human to double-check; that would be too slow to be useful.

pages: 392 words: 104,760

Babel No More: The Search for the World's Most Extraordinary Language Learners
by Michael Erard
Published 10 Jan 2012

I say this not to glibly dismiss the issue but to point out the full scope of the problem. Also, exciting new technologies for translating speech and text between languages don’t eliminate the need for people to learn languages. But they might enable multilingual transactions—for instance, by using free machine translation tools, I can get a rough gist of a web page in a language I don’t know. The fragmentary, improvised, simultaneous use of several languages all at once that I witnessed in the noodle kitchen doesn’t occur only in New York City, London (named in 1999 as the most multilingual city in the world), Mumbai, Rio de Janeiro, and other major world cities.

pages: 416 words: 106,582

This Will Make You Smarter: 150 New Scientific Concepts to Improve Your Thinking
by John Brockman
Published 14 Feb 2012

She might discover some small variation, but mostly the information will appear to be confirmed, and she will find an apparent verification of a falsity. Another delightful pastime is overtransforming an information artifact through digital algorithms—useful, if used sparingly—until it turns into something quite strange. For instance, you can use one of the online machine-translation services to translate a phrase through a ring of languages back to the original and see what you get. The sentence “The edge of knowledge motivates intriguing online discussions” transforms into “Online discussions in order to stimulate an attractive national knowledge” in four steps on Google’s current translator (English→German→Hebrew→Simplified Chinese→English).
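
The round-trip game is easy to script. A sketch in Python, where translate() is a hypothetical stand-in for whichever online translation service you use (it is not a real library call, and the language codes are illustrative):

    def translate(text, src, dst):
        """Hypothetical stand-in for an online machine-translation API."""
        raise NotImplementedError  # wire up your translation service here

    # English -> German -> Hebrew -> Simplified Chinese -> English
    ring = ["en", "de", "he", "zh-CN", "en"]
    phrase = "The edge of knowledge motivates intriguing online discussions"

    for src, dst in zip(ring, ring[1:]):
        phrase = translate(phrase, src, dst)

    print(phrase)   # compare the mutated sentence with the original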

pages: 484 words: 104,873

Rise of the Robots: Technology and the Threat of a Jobless Future
by Martin Ford
Published 4 May 2015

The sheer number of documents used to train the system dwarfed anything that had come before. Franz Och, the computer scientist who led the effort, noted that the team had built “very, very large language models, much larger than anyone has ever built in the history of mankind.”8 In 2005, Google entered its system in the annual machine translation competition held by the National Institute of Standards and Technology, an agency within the US Commerce department that publishes measurement standards. Google’s machine learning algorithms were able to easily outperform the competition—which typically employed language and linguistic experts who attempted to actively program their translation systems to wade through the mire of conflicting and inconsistent grammatical rules that characterize languages.

pages: 413 words: 106,479

Because Internet: Understanding the New Rules of Language
by Gretchen McCulloch
Published 22 Jul 2019

So this glorious variety masks a digital divide: people who switch between languages or who speak a less written linguistic variety run into difficulties with many of the automated linguistic tools that internet residents rely on, such as search, voice recognition, automatic language detection, and machine translation. These tools are trained on large corpora, often from formal sources like books, newspapers, and radio, which are biased towards the forms of language that are already well documented. One method of bridging this gap uses public social media writing itself as training input—a promising avenue, considering that the quantity of informal writing produced on the internet exceeds the volume of formal writing many times over.

pages: 419 words: 109,241

A World Without Work: Technology, Automation, and How We Should Respond
by Daniel Susskind
Published 14 Jan 2020

That’s what brought AI out of its winter—what I call the pragmatist revolution. In the decades since Deep Blue’s victory, a generation of machines has been built in this pragmatist spirit: crafted to function very differently from human beings, judged not by how they perform a task but how well they perform it. Advances in machine translation, for instance, have come not from developing a machine that mimics a talented translator, but from having computers scan millions of human-translated pieces of text to figure out interlingual correspondences and patterns on their own. Likewise, machines have learned to classify images, not by mimicking human vision but by reviewing millions of previously labeled pictures and hunting for similarities between those and the particular photo in question.
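
That similarity-hunting strategy can be caricatured in a few lines: a nearest-neighbour sketch (feature vectors and labels invented for illustration) that classifies a new example by finding the closest previously labeled one, rather than by mimicking human vision.

    # Previously labeled examples as (feature_vector, label) pairs; toy data.
    labeled = [((0.9, 0.1), "cat"), ((0.8, 0.3), "cat"),
               ((0.1, 0.9), "dog"), ((0.2, 0.7), "dog")]

    def classify(features):
        # squared Euclidean distance to each labeled example
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(labeled, key=lambda ex: dist(ex[0], features))[1]

    print(classify((0.7, 0.2)))   # -> "cat": the nearest labeled example wins

Real systems learn far richer features, but they keep the same judged-by-results spirit.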

pages: 374 words: 111,284

The AI Economy: Work, Wealth and Welfare in the Robot Age
by Roger Bootle
Published 4 Sep 2019

But the rate of improvement has been immense, even though they do not function at a level that many people find acceptable. Soon, though, at the click of a mouse, it should be possible to translate anything to any language at a very high level of competence, indeed beyond the capability of most human translators. In March 2014, Skype introduced real-time machine translation. In June 2013, Hugo Barra, Google’s top Android executive, said that within several years he expects a workable “universal translator” that could be used either in person or over the phone.26 Mind you, even if these improvements continue and routine translation work is all done by machine, there will still be people who will make their careers out of being language experts.

pages: 298 words: 43,745

Understanding Sponsored Search: Core Elements of Keyword Advertising
by Jim Jansen
Published 25 Jul 2011

For sponsored search, we are primarily interested in this visual input, in that we want the potential customer to see our ads. These ads appear due to the searcher entering a query into the search engine. So, instead of saying a word, the searcher types it into a search engine or speaks and a machine translates the word into text. As such, we are more interested in the actual behaviors of the searchers, especially in their development of the query and the terms they select for the query. This is not to say the cognitive processes are not important (they certainly are) but by necessity we focus on the effect that these cognitive processes have on actual behavior.

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

If you put the initials of the title together (IOOI), and transform them to binary numbers (1001), you get the decimal result 9 (whatever that means …)! 25Shannon, C. E., and Weaver W. (1948), The Mathematical Theory of Communication. Champaign: University of Illinois Press. Shannon co-wrote the book with Warren Weaver, a pioneer in machine translation. 26I am rephrasing here an example given by Katherine Hayles in her 1999 book, How We Became Posthuman. 27The number of cells in our body is estimated between 5 billion and 200 billion. See: Bianconi, E., et al. (2013), ‘An estimation of the number of cells in the human body’, in: Annals of Human Biology, Nov–Dec 2013, Vol. 50, No. 6, pp. 463–71. 10 Peering into the mind 1Miller, G.

pages: 523 words: 112,185

Doing Data Science: Straight Talk From the Frontline
by Cathy O'Neil and Rachel Schutt
Published 8 Oct 2013

format=json&query=nytd_section_facet: [%s]&fields=url,title,body&rank=newest&offset=%s &api-key=Your_Key_Here"

# packages needed below (inferred from the functions used):
library(rjson)      # fromJSON
library(plyr)       # ldply
library(RTextTools) # create_matrix, create_container

# create an empty list to hold 3 result sets
resultsSports <- vector("list", 3)
## loop through 0, 1 and 2 to call the API for each value
for(i in 0:2) {
  # first build the query string, replacing the first %s with Sports
  # and the second %s with the current value of i
  tempCall <- sprintf(theCall, "Sports", i)
  # make the query and get the json response
  tempJson <- fromJSON(file=tempCall)
  # convert the json into a 10x3 data.frame and save it to the list
  resultsSports[[i + 1]] <- ldply(tempJson$results, as.data.frame)
}
# convert the list into a data.frame
resultsDFSports <- ldply(resultsSports)
# make a new column indicating this comes from Sports
resultsDFSports$Section <- "Sports"

## repeat that whole business for arts
## ideally you would do this in a more eloquent manner, but this is just for illustration
resultsArts <- vector("list", 3)
for(i in 0:2) {
  tempCall <- sprintf(theCall, "Arts", i)
  tempJson <- fromJSON(file=tempCall)
  resultsArts[[i + 1]] <- ldply(tempJson$results, as.data.frame)
}
resultsDFArts <- ldply(resultsArts)
resultsDFArts$Section <- "Arts"

# combine them both into one data.frame
resultBig <- rbind(resultsDFArts, resultsDFSports)
dim(resultBig)
View(resultBig)

## now time for tokenizing
# create the document-term matrix in english, removing numbers and stop words and stemming words
doc_matrix <- create_matrix(resultBig$body, language="english", removeNumbers=TRUE, removeStopwords=TRUE, stemWords=TRUE)
doc_matrix
View(as.matrix(doc_matrix))

# create a training and testing set
theOrder <- sample(60)
container <- create_container(matrix=doc_matrix, labels=resultBig$Section, trainSize=theOrder[1:40], testSize=theOrder[41:60], virgin=FALSE)

Historical Context: Natural Language Processing

The example in this chapter where the raw data is text is just the tip of the iceberg of a whole field of research in computer science called natural language processing (NLP). The types of problems that can be solved with NLP include machine translation, where given text in one language, the algorithm can translate the text to another language; semantic analysis; part-of-speech tagging; and document classification (of which spam filtering is an example). Research in these areas dates back to the 1950s.

Chapter 5. Logistic Regression

The contributor for this chapter is Brian Dalessandro.

pages: 424 words: 114,905

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
by Eric Topol
Published 1 Jan 2019

FIGURE 4.7: Over time, deep learning AI has exceeded human performance for voice recognition. Source: Adapted from Y. Shoham et al., “Artificial Intelligence Index 2017 Annual Report,” CDN AI Index (2017): http://cdn.aiindex.org/2017-report.pdf. One of the most striking areas of progress in AI is machine translation. Fernando Pereira, Google’s VP and head of Translate, characterized the jump as “something I never thought I’d see in my working life. We’d been making steady progress. This is not steady progress. This is radical.”51 Akin to the AlphaGo Zero algorithm being deployed for many games besides Go, Google, in 2017, published a single translation system capable of transfer learning, a step toward “universal interlingua.”

pages: 407 words: 119,073

Sparks: China's Underground Historians and Their Battle for the Future
by Ian Johnson
Published 26 Sep 2023

Still, it would be amiss to omit them from this list as they are hugely ambitious and influential works. 3. Articles Almost all the articles cited in this book are in Chinese. For a start, I recommend the articles by Jiang Xue listed in the bibliography. I use the Internet Archive addresses to avoid dead links. Machine translating the pieces in your browser (for example: Google Translate has a browser extension that allows this) is far from ideal, but it will give a flavor of the works. Acknowledgments This book was the result of more than twenty years of thinking and writing about China, making it impossible to do justice to everyone who contributed to it.

pages: 481 words: 121,669

The Invisible Web: Uncovering Information Sources Search Engines Can't See
by Gary Price , Chris Sherman and Danny Sullivan
Published 2 Jan 2003

Two popular engines were the World Wide Web Worm, created by Oliver McBryan at the University of Colorado, and WWW JumpStation, by Jonathon Fletcher at the University of Stirling in the U.K. Neither lasted long: Idealab purchased WWWWorm and transformed it into the first version of the GoTo search engine. JumpStation simply faded out of favor as two other search services launched in 1994 gained popularity: Lycos and Yahoo!. Michael Mauldin and his team at the Center for Machine Translation at Carnegie Mellon University created Lycos (named for the wolf spider, Lycosidae lycosa, which catches its prey by pursuit, rather than in a web). Lycos quickly gained acclaim and prominence in the Web community, for the sheer number of pages it included in its index (1.5 million documents by January 1995) and the quality of its search results.

Why Things Bite Back: Technology and the Revenge of Unintended Consequences
by Edward Tenner
Published 1 Sep 1997

Extrapolation doesn't work, because neither nature nor human society is guaranteed to act reasonably. Some things like computer processor power and data storage get better and cheaper more quickly than the optimists expected; on the other hand, the tasks that they are supposed to perform, like machine translation, turn out to be more difficult than most people had thought. What is almost a constant, though, is that the real benefits usually are not the ones that we expected, and the real perils are not those we feared. What prevail are sets of loosely calculable factors and ranges of outcomes, with no accepted procedure for choosing among them.

pages: 451 words: 125,201

What We Owe the Future: A Million-Year View
by William MacAskill
Published 31 Aug 2022

There have also been breakthroughs in generating and recognising speech, images, art, and music; in real-time strategy games like StarCraft; and in a wide variety of tasks associated with understanding and generating humanlike text.37 You probably use artificial intelligence every day, for example in a Google search.38 AI has also driven significant improvements in voice recognition, email text completion, and machine translation.39 The ultimate achievement of AI research would be to create artificial general intelligence, or AGI: a single system, or collection of systems working together, that is capable of learning as wide an array of tasks as human beings can and performing them to at least the same level as human beings.40 Once we develop AGI, we will have created artificial agents—beings (not necessarily conscious) that are capable of forming plans and executing on them in just the way that human beings can.

pages: 523 words: 129,580

Eternity
by Greg Bear
Published 2 Jan 1988

If Earth had to survive a crisis among its saviors, it would have to be weaned … Karen spoke Chinese, English, French, Russian and Spanish, having brushed up on her Russian with Hexamon devices, and having learned Spanish the same way. That was enough to communicate directly with most of the delegates. Those few whose language she didn’t speak—including three whose dialects had arisen since the Death—could usually communicate with others in the group through a shared second language. No outside human or machine translators diluted this early stage of their interaction; they were being taught to rely on each other. Before the week was out, they would all speak each other’s languages—having acquired them within the third chamber’s city memory—and many more, besides. For the first time in years, Karen felt on the edge of fulfillment.

pages: 742 words: 137,937

The Future of the Professions: How Technology Will Transform the Work of Human Experts
by Richard Susskind and Daniel Susskind
Published 24 Aug 2015

It can do this not because it understands the context of the usage of these words as human beings do, but because it can determine, statistically, that ‘to’ is much more likely immediately to precede ‘the office’ than ‘two’ or ‘too’. And this probability is established, effectively, by very fast searching and sorting across a huge database of documents. This was an early example of Big Data, and a similar approach was taken in developing machine translation (now commonly used in the form of Google Translate). Likewise, in many other areas of AI, brute-force processing and massive storage capacity, rather than simulation of human thought processes, are enabling machines to perform tasks that we would traditionally have expected to require some form of intelligence.
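
A minimal sketch of the disambiguation just described, with invented bigram counts standing in for the statistics a system would gather from its huge database of documents:

    from collections import Counter

    # Invented counts of how often each spelling precedes "the" in a big corpus.
    bigram_counts = Counter({
        ("to", "the"): 9_500_000,
        ("two", "the"): 1_200,
        ("too", "the"): 85_000,
    })

    def pick_homophone(candidates, next_word):
        """Choose the spelling most often seen immediately before next_word."""
        return max(candidates, key=lambda w: bigram_counts[(w, next_word)])

    # "I am going ___ the office": the counts overwhelmingly favor "to".
    print(pick_homophone(["to", "two", "too"], "the"))

The program never learns what an office is; it simply exploits the fact that “to the” vastly outnumbers “two the” and “too the” in observed text.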

pages: 492 words: 141,544

Red Moon
by Kim Stanley Robinson
Published 22 Oct 2018

“Fang Fei,” the old one said, and the words appeared in writing on Fred’s glasses. “Fred Fredericks,” Fred replied. They nodded in a similar way, possibly acknowledging the coincidence of their FF initials. Qi said something to Fang Fei in Chinese. Fred’s glasses scrolled the red words I am afraid to be water. Fred concluded that the machine translation of the glasses was imperfect, but this was always true. Now he just had to do his best to interpret what he read. Fang Fei said, or was imputed to say, Water is life. Qi shrugged. Why is it here? What are you doing? When young I was three withouts. Sanwu. Fred heard this word and remembered Qi defining it during one of their talks in the apartment: it referred to people without residence permit, job, and something else.

pages: 496 words: 131,938

The Future Is Asian
by Parag Khanna
Published 5 Feb 2019

In India, Google Maps powers a $20 billion location services and marketing industry and is also aiding the country’s goals of improving public sanitation by mapping the locations of public toilets. Google’s Tez (“fast”) leverages India’s Unified Payments Interface (UPI) platform to allow the use of audio commands to make mobile payments, while its machine translation and speech-to-text apps will accelerate business communication in polyglot India and ASEAN. Google is also training 100,000 Indonesian software developers and translating Udacity courses into Bahasa. The United States’ Internet and social media giants have long been focused on the greater Asia because of their lack of access to mainland China.

pages: 611 words: 130,419

Narrative Economics: How Stories Go Viral and Drive Major Economic Events
by Robert J. Shiller
Published 14 Oct 2019

Literary scholars run the risk of focusing on details of the stories that are common just because the events are familiar in everyday life. They also face the difficulty of accounting for changes through time in the list of stories. Fortunately, research in semantic information and semiotics is advancing. For example, machine translation allows a computer to select the meaning of a word by looking at context, at adjacent words. The user asks, “What is the longest river in South Africa?” and Siri provides a direct verbal answer (“The longest river in South Africa is the Orange River”). Such search is now becoming well established around the world.
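
One classic way to “select the meaning of a word by looking at context, at adjacent words” is to score each candidate sense by its overlap with the surrounding words, a Lesk-style heuristic. The mini sense inventory below is invented for illustration; real systems learn such associations from data.

    # Invented sense signatures for the ambiguous word "bank".
    senses = {
        "riverbank": {"river", "water", "shore", "fishing"},
        "financial": {"money", "loan", "account", "deposit"},
    }

    def choose_sense(context_words):
        """Pick the sense whose signature overlaps the context the most."""
        context = set(context_words)
        return max(senses, key=lambda s: len(senses[s] & context))

    print(choose_sense("she opened an account at the bank".split()))       # financial
    print(choose_sense("they fished from the bank of the river".split()))  # riverbank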

pages: 496 words: 154,363

I'm Feeling Lucky: The Confessions of Google Employee Number 59
by Douglas Edwards
Published 11 Jul 2011

If Urs was Google's architect, Jeff and Sanjay were the master carpenters who raised the roof beams and pounded the nails that held together the load-bearing walls. Wherever problems needed to be solved, "JeffnSanjay" were there*—from devising the Google file system to developing advertising technology, from accelerating machine translation to building breakthrough tools like MapReduce.† Jeff pumped out elegant code like a champagne fountain at a wedding. It seemed to pour from him effortlessly in endless streams that flowed together to form sparkling programs that did remarkable things. He once wrote a two-hundred-thousand-line application to help the Centers for Disease Control manage specialized statistics for epidemiologists.

pages: 489 words: 148,885

Accelerando
by Stross, Charles
Published 22 Jan 2005

"Stick that on your wrist, sign the three top copies, put them in the envelope, and let's get going. We've got a flight to catch, slave." Sadeq is eating his dinner when the first lawsuit in Jupiter orbit rolls in. Alone in the cramped humming void of his station, he considers the plea. The language is awkward, showing all the hallmarks of a crude machine translation: The supplicant is American, a woman, and – oddly – claims to be a Christian. This is surprising enough, but the nature of her claim is, at face value, preposterous. He forces himself to finish his bread, then bag the waste and clean the platter, before he gives it his full consideration. Is it a tasteless joke?

pages: 519 words: 142,646

Track Changes
by Matthew G. Kirschenbaum
Published 1 May 2016

“Computers also amplify creative imagination,” we read just after the urgent prognostications about civil liberties.58 Computers, they assert, will be used by writers and artists, by journalists and doctors, by account executives and their secretaries, and by architects and government clerks. We find early anticipations of assistive and adaptive applications for the disabled, such as screen readers and self-driving cars. Computers can be used to control the environment in one’s house, and even (with “appropriate sensors”) can be used for pest control. Machine translation, electronic voting, and sensors to detect spoiled food or contaminated water are also all on offer. Nonetheless, Herbert and Barnard are careful to emphasize, computers are only tools. More precisely they are assemblies of switches, on or off, wired in sequence. And without us they are nothing.

pages: 688 words: 147,571

Robot Rules: Regulating Artificial Intelligence
by Jacob Turner
Published 29 Oct 2018

Selbst and Julia Powles, “Meaningful Information and the Right to Explanation”, International Data Privacy Law, Vol. 7, No. 4 (1 November 2017), 233–242, https://doi.org/10.1093/idpl/ipx022, accessed 1 June 2018.
22 “DARPA Website”, https://www.darpa.mil/, accessed 1 June 2018.
23 David Gunning, “Explainable Artificial Intelligence (XAI)”, DARPA Website, https://www.darpa.mil/program/explainable-artificial-intelligence, accessed 1 June 2018.
24 David Gunning, DARPA XAI Presentation, DARPA, https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf, accessed 1 June 2018.
25 Will Knight, “The Dark Secret at the Heart of AI”, MIT Technology Review, 11 April 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/, accessed 1 June 2018.
26 Bryce Goodman and Seth Flaxman, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, arXiv:1606.08813v3 [stat.ML], 31 August 2016, https://arxiv.org/pdf/1606.08813.pdf, accessed 1 June 2018.
27 Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms”, Big Data & Society (January–June 2016), 1–12 (2).
28 Hui Cheng et al., “Multimedia Event Detection and Recounting”, SRI-Sarnoff AURORA at TRECVID 2014 (2014), http://www-nlpir.nist.gov/projects/tvpubs/tv14.papers/sri_aurora.pdf, accessed 1 June 2018.
29 Upol Ehsan, Brent Harrison, Larry Chan, and Mark Riedl, “Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations”, arXiv:1702.07826v2 [cs.AI], 19 Dec 2, https://arxiv.org/pdf/1702.07826.pdf, accessed 1 June 2018.
30 Daniel Whitenack, “Hold Your Machine Learning and AI Models Accountable”, Medium, 23 November 2017, https://medium.com/pachyderm-data/hold-your-machine-learning-and-ai-models-accountable-de887177174c, accessed 1 June 2018.
31 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016], OJ L119/1 (GDPR).
32 See, for example, “Overview of the General Data Protection Regulation (GDPR)” (Information Commissioner’s Office 2016), 1.1, https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/individuals-rights/rights-related-to-automated-decision-making-and-profiling/, accessed 1 June 2018; House of Commons Science and Technology Committee, “Robotics and Artificial Intelligence” (House of Commons 2016), HC 145, http://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf, accessed 1 June 2018.
33 GDPR, art. 83.
34 Ibid., art. 3.
35 Equivalent wording is found in art. 14(2)(g) and art. 15(1)(h).
36 “Profiling” is defined at art. 4(4) as “automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.

pages: 471 words: 147,210

Children of Ruin
by Adrian Tchaikovsky
Published 13 May 2019

Her death is instant, explosive. For the projectile itself, her presence does not affect its course – she is incidental to its path, that takes it into and out of the Lightfoot in the flash of an eye. For Meshner, the moment is experienced only in retrospect. Bianca had been at her post, stamping out orders that the machines translated only sluggishly for the Human crew – Kern was concentrating too much on their defence to spend too much of herself on the niceties of interspecies communication. Then Bianca was . . . all around them, without any transitional state, the fluid-filled sack of her body burst asunder. Everyone is engaged in the fight, all the Lightfoot’s small crew.

pages: 665 words: 159,350

Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else
by Jordan Ellenberg
Published 14 May 2021

“Colorless green ideas sleep furiously” gets a much higher sentenciness score than “Furiously sleep ideas green colorless,” even without a formal model of grammar, even though neither of those two sentences, if you train on data gathered pre-Chomsky, is ever encountered in the observed data. Even the component pieces, like “Colorless green,” are seen rarely if at all. Norvig observes that, when it comes to real-world machine translation or autocompletion, statistical methods like this decisively outperform all attempts to reverse-engineer the underlying mechanisms of human language production.* Chomsky retorts that, be that as it may, methods like Google’s provide no insight into what language really is; they’re like Galileo observing the parabolic arc of a projectile before Newton stepped in to lay down the laws.
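
A toy version of the smoothed scoring at work: the corpus below is deliberately rigged to contain a few of the relevant word pairs, standing in for the word-class smoothing over huge corpora that real models use, but the mechanism, smoothed bigram log-probabilities, is the same, and it gives the grammatical ordering a far higher score.

    import math
    from collections import Counter

    # Tiny rigged corpus; real models use billions of words plus class-based
    # smoothing, so relevant patterns are observed without being planted.
    corpus = (
        "<s> colorless gases rise </s> "
        "<s> green ideas spread </s> "
        "<s> ideas sleep in books </s> "
        "<s> dogs sleep furiously </s>"
    ).split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    V = len(unigrams)

    def logprob(sentence, alpha=0.1):
        """Add-alpha smoothed bigram log-probability of a sentence."""
        words = ["<s>"] + sentence.split() + ["</s>"]
        return sum(
            math.log((bigrams[(p, w)] + alpha) / (unigrams[p] + alpha * V))
            for p, w in zip(words, words[1:])
        )

    print(logprob("colorless green ideas sleep furiously"))   # much higher
    print(logprob("furiously sleep ideas green colorless"))   # much lower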

The Singularity Is Nearer: When We Merge with AI
by Ray Kurzweil
Published 25 Jun 2024

BACK TO NOTE REFERENCE 172 Tech at Meta, “Imagining a New Interface: Hands-Free Communication Without Saying a Word,” Facebook Reality Labs, March 30, 2020, https://tech.fb.com/imagining-a-new-interface-hands-free-communication-without-saying-a-word; Tech at Meta, “BCI Milestone: New Research from UCSF with Support from Facebook Shows the Potential of Brain-Computer Interfaces for Restoring Speech Communication,” Facebook Reality Labs, July 14, 2021, https://tech.fb.com/ar-vr/2021/07/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication; Joseph G. Makin et al., “Machine Translation of Cortical Activity to Text with an Encoder–Decoder Framework,” Nature Neuroscience 23, no. 4 (March 30, 2020): 575–82, https://doi.org/10.1038/s41593-020-0608-8. BACK TO NOTE REFERENCE 173 Antonio Regalado, “Facebook Is Ditching Plans to Make an Interface that Reads the Brain,” MIT Technology Review, July 14, 2021, https://www.technologyreview.com/2021/07/14/1028447/facebook-brain-reading-interface-stops-funding.

pages: 893 words: 199,542

Structure and interpretation of computer programs
by Harold Abelson , Gerald Jay Sussman and Julie Sussman
Published 25 Jul 1996

The interpreter traverses this data structure, analyzing the source program. As it does so, it simulates the intended behavior of the source program by calling appropriate primitive subroutines from the library. In this section, we explore the alternative strategy of compilation. A compiler for a given source language and machine translates a source program into an equivalent program (called the object program) written in the machine's native language. The compiler that we implement in this section translates programs written in Scheme into sequences of instructions to be executed using the explicit-control evaluator machine's data paths.34 Compared with interpretation, compilation can provide a great increase in the efficiency of program execution, as we will explain below in the overview of the compiler.
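
A miniature of the source-to-object-program idea, though nothing like the Scheme compiler the book goes on to build: the sketch below compiles arithmetic expressions into an equivalent instruction sequence for a toy stack machine, which then executes them. All names here are illustrative.

    import ast

    def compile_expr(node):
        """Compile a Python arithmetic AST into toy stack-machine instructions."""
        if isinstance(node, ast.Constant):
            return [("PUSH", node.value)]
        if isinstance(node, ast.BinOp):
            op = {ast.Add: "ADD", ast.Mult: "MUL", ast.Sub: "SUB"}[type(node.op)]
            return compile_expr(node.left) + compile_expr(node.right) + [(op, None)]
        raise ValueError("unsupported expression")

    def run(program):
        """The 'machine': execute the compiled object program."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append({"ADD": a + b, "MUL": a * b, "SUB": a - b}[op])
        return stack.pop()

    source = "(1 + 2) * 4"
    object_program = compile_expr(ast.parse(source, mode="eval").body)
    print(object_program)        # the equivalent object program
    print(run(object_program))   # 12

The translation happens once, ahead of time; execution then runs straight through the instruction list with no further analysis of the source, which is where compilation gets its speed advantage over interpretation.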

pages: 677 words: 206,548

Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It
by Marc Goodman
Published 24 Feb 2015

The device is inherently of no value to us (internal memo at Western Union, 1878). Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains, and commercial interest is growing.

pages: 1,387 words: 202,295

Structure and Interpretation of Computer Programs, Second Edition
by Harold Abelson , Gerald Jay Sussman and Julie Sussman
Published 1 Jan 1984

The interpreter traverses this data structure, analyzing the source program. As it does so, it simulates the intended behavior of the source program by calling appropriate primitive subroutines from the library. In this section, we explore the alternative strategy of compilation. A compiler for a given source language and machine translates a source program into an equivalent program (called the object program) written in the machine’s native language. The compiler that we implement in this section translates programs written in Scheme into sequences of instructions to be executed using the explicit-control evaluator machine’s data paths.319 Compared with interpretation, compilation can provide a great increase in the efficiency of program execution, as we will explain below in the overview of the compiler.

pages: 706 words: 202,591

Facebook: The Inside Story
by Steven Levy
Published 25 Feb 2020

After that first step, Facebook might open up the process of translating the terminology to everyone. Sometimes it would prompt people for hints: a native speaker might get a pop-up saying, “Hey do you speak this language, can you help us with these ads?” Sometimes the crowdsourcing would be used to refine a machine translation. The hurdle was getting good translations. In order to be put into production, a translated version of Facebook needed verification that the translation was accurate and nuanced, not to mention avoiding any ugly American faux pas. And often even translating simple words could be difficult in certain languages.

pages: 703 words: 196,052

Cage of Souls
by Adrian Tchaikovsky
Published 4 Apr 2019

It could even tell, in some mysterious way, who had recorded each transaction: frauds could be tracked down. Perhaps it read their hands as they touched its mirror. It should be obvious that, glowing frozen giant or not, the Coming Man was not what the Temple was about. The machine was all. Sergei had a name in his own tongue for the Temple machine, translating as “something which reckons” – a poor phrase for such a marvel. According to Sergei they had these machines where he came from, albeit of a complexity several orders of magnitude less. Sergei had a lot to say about where he was from. There, he claimed, everyone was as tall as he, and many were as pale (although none so thin, I think).

pages: 789 words: 213,716

The uplift war
by David Brin
Published 1 Jun 1987

As he panted on the edge of the next level, trying to see in all directions at once, Fiben slowly became aware that a public address system was muttering over the noise of the crowd, repeating over and over again, in clipped, mechanical tones. . . . more enlightened approach to Uplift . . . appropriate to the background of the client race . . . offering opportunity to all . . . unbiased by warped human standards . . . Up in its box, the invader chirped into a small microphone. Its machine-translated words boomed out over the music and the excited jabber of the crowd. Fiben doubted one in ten of the chims below were even aware of the E.T.’s monologue in the state they were in. But that probably didn’t matter. They were being conditioned! No wonder he had never heard of Sylvie’s dance-mound striptease before, nor this crazy obstacle course.

pages: 496 words: 174,084

Masterminds of Programming: Conversations With the Creators of Major Programming Languages
by Federico Biancuzzi and Shane Warden
Published 21 Mar 2009

It may be that consciousness arises in the communication between these, not in the operation of any one of them. Legacy software is an unappreciated but serious problem. It will only get worse—not only in banking but in aerospace and other technical industries. The problem is the millions of lines of code. Those could be recoded, say in thousands of lines of Forth. There’s no point in machine translation, which would only make the code bigger. But there’s no way that code could be validated. The cost and risk would be horrendous. Legacy code may be the downfall of our civilization. It sounds like you’re betting that in the next 10 to 20 years we’ll see more and more software arise from the loose joining of many small parts.

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
by Martin Kleppmann
Published 17 Apr 2017

For example, if you are building machine learning and recommendation systems, or full-text search indexes with relevance ranking models, or performing image analysis, you most likely need a more general model of data processing. These kinds of processing are often very specific to a particular application (e.g., feature engineering for machine learning, natural language models for machine translation, risk estimation functions for fraud prediction), so they inevitably require writing code, not just queries. MapReduce gave engineers the ability to easily run their own code over large datasets. If you have HDFS and MapReduce, you can build a SQL query execution engine on top of it, and indeed this is what the Hive project did [31].
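
A minimal sketch of the programming model itself, not of Hadoop's API: the user supplies a map function and a reduce function, and the framework handles the shuffle stage that groups intermediate pairs by key (and, in a real cluster, distributes the work across machines).

    from itertools import groupby

    def map_fn(document):
        """User code: emit (key, value) pairs, here one (word, 1) per word."""
        for word in document.split():
            yield word, 1

    def reduce_fn(key, values):
        """User code: combine all values emitted for one key."""
        return key, sum(values)

    def mapreduce(documents, map_fn, reduce_fn):
        # Map phase (run in parallel across machines in a real cluster).
        pairs = [kv for doc in documents for kv in map_fn(doc)]
        # Shuffle: group intermediate pairs by key.
        pairs.sort(key=lambda kv: kv[0])
        grouped = groupby(pairs, key=lambda kv: kv[0])
        # Reduce phase.
        return [reduce_fn(k, (v for _, v in group)) for k, group in grouped]

    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(mapreduce(docs, map_fn, reduce_fn))  # word counts across all docs

Only map_fn and reduce_fn are application-specific; everything else is reusable machinery, which is exactly why arbitrary code, not just queries, could be run this way over large datasets.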

pages: 1,237 words: 227,370

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
by Martin Kleppmann
Published 16 Mar 2017

For example, if you are building machine learning and recommendation systems, or full-text search indexes with relevance ranking models, or performing image analysis, you most likely need a more general model of data processing. These kinds of processing are often very specific to a particular application (e.g., feature engineering for machine learning, natural language models for machine translation, risk estimation functions for fraud prediction), so they inevitably require writing code, not just queries. MapReduce gave engineers the ability to easily run their own code over large datasets. If you have HDFS and MapReduce, you can build a SQL query execution engine on top of it, and indeed this is what the Hive project did [31].

pages: 1,799 words: 532,462

The Codebreakers: The Comprehensive History of Secret Communication From Ancient Times to the Internet
by David Kahn
Published 1 Feb 1963

Barthel, “The ‘Talking Boards’ of Easter Island,” Scientific American, CXCVIII (June, 1958), 61-68.
917 Phaistos Disk: Diringer, 78; Gelb, 155-157; Aalto, 6. A very full bibliography in E. Grumach, Bibliographie der kretisch-mykenischen Epigraphik (Munich and Berlin, 1963).
917 Maya: E. V. Yevreinov, Yu. G. Kosarev, and V. A. Ustinov, three 1961 articles from different Russian sources translated and published as Foreign Developments in Machine Translation and Information Processing, No. 40, by the United States, Department of Commerce, Office of Technical Services, Joint Publications Research Service, No. 10508; and criticism by Yu. V. Knorozov, Ibid., No. 102, same publisher, No. 14318; Felix Shirokov, “Computer Deciphers Maya Hieroglyphics,” The UNESCO Courier, XV (March, 1962), 26-32.
917 Linear B: Unless otherwise specified, all decipherment details are from John Chadwick, The Decipherment of Linear B (Cambridge: University Press, 1958), and Michael Ventris and John Chadwick, Documents in Mycenaean Greek (Cambridge: University Press, 1956), 11-25.

The principles of his statistical-combinatory method, as applied to human languages, may be found in his paper “Algorithms for the Statistical-Combinatory Modeling of Syntax, Word-Formation and Semantics,” published in Materialy po matematicheskoy lingvistika i mashinnomu perevodu, II (Leningrad University, 1963), which has been translated as Foreign Developments in Machine Translation and Information Processing, No. 161, United States, Department of Commerce, Office of Technical Services, Joint Publications Research Service, No. 26209. Dr. Andreyev has proposed an intermediary computer language for intercommunication on earth, and an intermediary language of the second order for communication with extraterrestrial beings, in his “Linguistic Aspects of Translation,” Proceedings of the Ninth International Congress of Linguists, Cambridge, Massachusetts, August 27-31, 1962, ed.

The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal
by M. Mitchell Waldrop
Published 14 Apr 2001

However, says Miller, he can definitely remember when and how the computational idea at last began to sink in. It was sometime in the summer of 1956, he recalls, and a former colleague, Walter Rosenblith, was insisting that there was this wonderfully bright guy over in MIT's linguistics group whom he should know about—one of the people Jerry Wiesner had just brought in for an RLE project on machine translation. His name was Noam Chomsky. Never heard of him. "So I kept saying, 'Sure, sure,' and putting it off," Miller says with a laugh. But Rosenblith was persistent. So eventually, remembers Miller, he and his Harvard colleagues invited Chomsky over to give a talk. They didn't regret it. Chomsky, it turned out, had been staging his own revolt against behaviorism, and doing so with all the intellectual ferocity that he would later make famous.