combinatorial explosion

52 results

pages: 304 words: 84,396

Bounce: Mozart, Federer, Picasso, Beckham, and the Science of Success
by Matthew Syed
Published 19 Apr 2010

But relating the entirety of the information is impossible because the cues being processed by experts—in sport or elsewhere—are so subtle and relate to each other in such complex ways that it would take forever to codify them in their mind-boggling totality. This is known as combinatorial explosion, a concept that will help to nail down many of the insights of this chapter. The best way to get a sense of the strange power of combinatorial explosion is to imagine folding a piece of paper in two, making the paper twice as thick. Now repeat the process a hundred times. How thick is the paper now? Most people tend to guess in the range of a few inches to a few yards.
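Syed's guess-defying arithmetic is easy to check. A minimal sketch in Python, assuming a starting sheet thickness of 0.1 mm (the figure is an assumption for illustration; the book does not specify one):

```python
# Thickness of a sheet of paper after repeated folding: each fold doubles it,
# so after n folds the stack is t0 * 2**n thick.

START_THICKNESS_MM = 0.1  # assumed thickness of a single sheet (0.1 mm)

def thickness_after_folds(n_folds: int, t0_mm: float = START_THICKNESS_MM) -> float:
    """Return the stack thickness in millimetres after n_folds doublings."""
    return t0_mm * 2 ** n_folds

for n in (10, 50, 100):
    km = thickness_after_folds(n) / 1e6  # 1 km = 1e6 mm
    print(f"after {n:3d} folds: {km:.3e} km")

# After 100 folds the stack is on the order of 1.3e23 km, which is more than
# ten billion light-years, not "a few inches to a few yards".
```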

All of which helps to explain a qualification that was made earlier in the chapter: you will remember that the ten-thousand-hour rule was said to apply to any complex task. What is meant by complexity? In effect, it describes those tasks characterized by combinatorial explosion; tasks where success is determined, first and foremost, by superiority in software (pattern recognition and sophisticated motor programs) rather than hardware (simple speed or strength). Most sports are characterized by combinatorial explosion: tennis, table tennis, soccer, hockey, and so on. Just try to imagine, for a moment, designing a robot capable of solving the real-time spatial, motor, and perceptual challenges necessary to defeat Roger Federer on a tennis court.

His retrieval structure is rooted within the fabric of the game. This is the most powerful type of knowledge, and is precisely the kind possessed by firefighters, top athletes, and other experts. By now it should be obvious why Deep Blue’s gigantic advantage in processing speed was not sufficient to win—combinatorial explosion. Even in a game as simple as chess, the variables rapidly escalate beyond the capacity of any machine to compute. There are around thirty ways to move at the beginning of a game, and around thirty ways in which to respond; since each side then moves again, that amounts to roughly 30^4, or around 800,000, possible positions after two moves each.
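The figure follows from simple multiplication: roughly thirty options at every half-move means the position count is multiplied by thirty again and again. A quick sketch, using the branching factor of 30 taken from the excerpt:

```python
# Rough chess position count, assuming ~30 legal options at every half-move.
BRANCHING_FACTOR = 30

def positions_after(half_moves: int, branching: int = BRANCHING_FACTOR) -> int:
    """Number of distinct move sequences after the given number of half-moves."""
    return branching ** half_moves

print(positions_after(2))   # 900      -- one move each
print(positions_after(4))   # 810000   -- two moves each: Syed's "around 800,000"
print(positions_after(20))  # ~3.5e29  -- ten moves each: hopeless to enumerate
```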

pages: 346 words: 97,890

The Road to Conscious Machines
by Michael Wooldridge
Published 2 Nov 2018

Ludicrously, unimaginably fast. This problem is called combinatorial explosion, and it is the single most important practical problem in AI, because search is such a ubiquitous requirement in AI problems.12 If you could find a foolproof recipe for solving search problems quickly – to get the same result as exhaustive search, without all that effort – then you would make yourself very famous, and many problems that are currently very difficult for AI would suddenly become easy. But you won’t, I’m afraid. We can’t get around combinatorial explosion: we need to work with it. Combinatorial explosion was recognized as a fundamental problem from the earliest days of AI – it was identified by McCarthy as one of the topics to be studied at his 1956 summer school on AI.

Waiting for the clever people at Intel to deliver a faster chip won’t help you with this problem: no conceivable improvement in conventional computer technology will yield a machine that could check all these alternatives in any reasonable amount of time. The phenomenon we are witnessing here is another example of combinatorial explosion. We were introduced to combinatorial explosion when looking at search trees, where each successive layer in the search tree multiplies the size of the tree. Combinatorial explosion occurs in situations where you must make a series of successive choices: each choice multiplies the total number of possible outcomes. In our team-building case, the choices we must make are whether to include an individual in the team: just adding one person to our pool of candidates can double the number of potential teams that we have to consider.
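The team-building case described above is the power-set explosion: each candidate is either in or out, so n candidates yield 2^n possible teams. A small sketch, with hypothetical candidate names:

```python
from itertools import combinations

# Every subset of the candidate pool is a potential team, so n candidates
# give 2**n teams to consider.
candidates = ["ann", "bob", "cho", "dev"]  # hypothetical names

all_teams = [team
             for size in range(len(candidates) + 1)
             for team in combinations(candidates, size)]

print(len(all_teams))               # 16 == 2**4
print(2 ** (len(candidates) + 1))   # 32: adding one candidate doubles the count
```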

We could use heuristics, but they aren’t guaranteed to work. Is there a smarter way that is guaranteed to find an appropriate team, which doesn’t involve exhaustively checking all the alternatives? No! You might find a technique that improves things marginally, but, ultimately, you won’t be able to get around that combinatorial explosion. Any recipe you find that is guaranteed to solve this problem is not going to be feasible for most cases of interest. The reason for this is that our problem is an example of what is called an NP-complete problem. I’m afraid the acronym is not helpful for the uninitiated: ‘NP’ stands for ‘non-deterministic polynomial time’, and the technical meaning is rather complex.

Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps
by Valliappa Lakshmanan, Sara Robinson and Michael Munn
Published 31 Oct 2020

PCA, Solution, Autoencoders PDE, Problem-Solution, Data-driven discretizations, Unbounded domains PDF, Why It Works, Capturing uncertainty, Precision of predictions physics-based model, Problem pipeline, Solution pixel values, Images as pixel values post hoc explainability method, Solution posterior probability distribution, Why It Works precision, Choosing an evaluation metric prediction, Data and Feature Engineering, Problem, Problem(see also batch prediction, inference, online prediction) predictive modeling, Predictive Analytics principal components analysis (see PCA) probability density function (see PDF) problematic bias, Problem-Solution productionizing models, Deployment progressive fine-tuning, Fine-tuning versus feature extraction proxy bias, Problem Pusher component, Solution PyTorch, Solution, Problem, Synchronous training, Fully managed hyperparameter tuning Q quantile regression, Capturing uncertainty, Other ways of capturing uncertainty quantization, Problem, Phase 1: Building the offline model, Phase 1: Building the offline model quantization aware training, Standalone single-phase model R random forest, Decreased model interpretability, Grid search and combinatorial explosion random search, Grid search and combinatorial explosion, Why It Works random seed, Problem-Solution RandomForestRegressor, Grid search and combinatorial explosion RandomizedSearchCV, Grid search and combinatorial explosion ratings, representation of, Tabular data multiple ways ray-tracing model, Solution Rebalancing design pattern, Problem, Internal consistency, Design Pattern 10: Rebalancing -Importance of explainability, Before training, Pattern Interactions recall, Choosing an evaluation metric recommendation systemsreframing as regression, Changing the objective uses for, Solution, Label bias, Recommendation Systems Redis, Solution, Alternative implementations reducible error, Problem reframing , Solution, Reframing and Cascade-Reframing and Cascade Reframing design pattern, Problem Representation Design Patterns-Multitask learning, Model baseline, Pattern Interactions-Pattern Interactions regression models, Models and Frameworks, Solution regularization, Dropout as bagging, Design Pattern 11: Useful Overfitting, Monte Carlo methods, Overfitting a batch, Regularization-Two splits, Augmented data relative frequency, Array of categorical variables repeatability, Reproducibility Repeatable Splitting design pattern, Reproducibility Design Patterns, Design Pattern 22: Repeatable Splitting-Unstructured data, Responsible AI, Pattern Interactions, Development reporting bias, Problem reproducibility, Reproducibility-Reproducibility research scientists, Roles ResNet, Image embeddings responsible AI, Responsible AI, Fairness versus explainability, Development REST API, for model serving, Prediction library retraining trigger, Triggers for retraining roles, Roles-Roles roles, impact of team size, Roles Runge-Kutta methods, Data-driven discretizations runs, definition of, Building the TFX pipeline S SageMaker, Fully managed, Running the pipeline on Cloud AI Platform, Model versioning with a managed service salt, Cryptographic hash Sampled Shapley, Explanations from deployed models SavedModel, Model export, Online prediction, Lambda architecture(see also saved_model_cli) saved_model_cli, Inference in Python scaling, Scale, Why scaling is desirable-Why scaling is desirable SchemaGen, Solution scikit-learn, Reproducibility, Why scaling is desirable, Text data multiple ways, Increased training and design time, 
Choosing a model architecture, Grid search and combinatorial explosion, Grid search and combinatorial explosion sentence embeddings, Embeddings of words versus sentences Sequential API, Images as pixel values, Images as tiled structures serverless, Data and Model Tooling, Trade-Offs and Alternatives serverless triggers, Triggers for retraining serving, definition of, The Machine Learning Process SGD, Stochastic Gradient Descent-Keras Training Loop, Synchronous training SHAP, Importance of explainability, SHAP-Explanations from deployed models Shapley Value, Solution, SHAP sigmoid, Sigmoid output for models with two classes, One versus rest-One versus rest(see also sigmoid activation) sigmoid activation, Solution-Parsing sigmoid results Six-Foot Balcony pattern, What Are Design Patterns?

Gaussian process, Bayesian optimization genetic algorithms, Trade-Offs and Alternatives, Genetic algorithms-Genetic algorithms GitHub Actions, Integrating CI/CD with pipelines GitLab Triggers, Integrating CI/CD with pipelines GKE, Solution, Running the pipeline on Cloud AI Platform GLoVE, Context language models Google App Engine, Create web endpoint Google Bolo, Standalone single-phase model Google Cloud Functions, Create web endpoint Google Cloud Public Datasets, Data and Model Tooling Google Container Registry, Running the pipeline on Cloud AI Platform Google Kubernetes Engine (see GKE) Google Translate, Standalone single-phase model GPU, Problem, Problem-Synchronous training, ASICs for better performance at lower cost, Minimizing I/O waits, Problem, Running the pipeline on Cloud AI Platform, Transformational phase: Fully automated processes Gradient Boosting Machines, Boosting gradient descent (see SGD) graphics processing unit (see GPU) grid search, Grid search and combinatorial explosion-Grid search and combinatorial explosion, Why It Works Grid-SearchCV, Grid search and combinatorial explosion ground truth label, Data and Feature Engineering, Data Quality, Capturing ground truth-Why It Works H hash bucketscollisions, Bucket collision empty, Empty hash buckets heuristic to choose numbers, Out-of-vocabulary input Hashed Feature design pattern, Data Representation Design Patterns, Design Pattern 1: Hashed Feature-Empty hash buckets, Pattern Interactions Helm, Richard, What Are Design Patterns?

, Data and Model Tooling, Concept, Saving predictions Cloud AI Platform Pipelines, Solution, Running the pipeline on Cloud AI Platform Cloud AI Platform Predictions, Lambda architecture Cloud AI Platform Training, Solution, Running the pipeline on Cloud AI Platform Cloud Build, Integrating CI/CD with pipelines Cloud Composer/Apache Airflow, Scheduled retraining Cloud Dataflow, Lambda architecture Cloud Functions, Triggers for retraining, Integrating CI/CD with pipelines Cloud Run, Create web endpoint, Other serverless versioning tools Cloud Spanner, Cached results of batch serving clustering, Models and Frameworks clustering models, Models and Frameworks CNN, Images as tiled structures, Why It Works-Why It Works cold start, Problem, Cold start combinatorial explosion, Grid search and combinatorial explosion completeness, Data Quality components, definition of, Solution computer vision, Computer Vision concept drift, Problem, Estimating retraining interval confidence, Inputs with overlapping labels, When human experts disagree, Saving predictions confusion matrix, Problem, Evaluating model performance consistency, Data Quality-Data Quality containers, Design Pattern 25: Workflow Pipeline, Solution, Why It Works context language models, Context language models-Context language models(see also BERT, Word2Vec) Continued Model Evaluation design pattern, Design Patterns for Resilient Serving, Design Pattern 18: Continued Model Evaluation-Estimating retraining interval, Model versioning with a managed service, Responsible AI, Automating data evaluation, Pattern Interactions Continuous Bag of Words (see CBOW) continuous evaluation, Continuous evaluation-Continuous evaluation continuous integration and continuous delivery (see CI/CD) convolutional neural network (see CNN) Coral Edge TPU, Phase 1: Building the offline model counterfactual analysis, Counterfactual analysis and example-based explanations-Counterfactual analysis and example-based explanations counterfactual reasoning, Capturing ground truth cryptographic algorithms, Cryptographic hash custom serving function, Custom serving function D DAG, Why It Works, Apache Airflow and Kubeflow Pipelines Darwin, Charles, Genetic algorithms data accuracy, Data Quality data analysts, Roles data augmentation, Data augmentation data collection bias, Before training, Before training data distribution bias, Problem data drift, Data Drift-Data Drift, Problem, Estimating retraining interval, Continuous evaluation for offline models, Problem data engineers, Roles, Scale, Solution data parallelism, Solution-Solution, Synchronous training, Why It Works, Model parallelism data preprocessing, Data and Feature Engineering(see also data transformation, feature engineering) data representation, Data Representation Design Patterns-Data Representation Design Patterns data representation bias, Before training data scientistsrole of, Roles, Multiple Objectives-Multiple Objectives, Why It Works, Problem tasks of, Problem, Problem, Solution data transformation, Data and Feature Engineering data validation, Data and Feature Engineering, Data validation with TFX data warehouses, Embeddings in a data warehouse-Embeddings in a data warehouse dataset-level transformations, Efficient transformations with tf.transform datasets, definition of, Data and Feature Engineering Datastore, Cached results of batch serving decision trees, Models and Frameworks, Data Representation Design Patterns-Data Representation Design Patterns, Decreased model interpretability, Choosing a model 
architecture, Typical Training Loop, Solution Deep Galerkin Method, Data-driven discretizations-Unbounded domains deep learning, Models and Frameworks-Models and Frameworks, Multimodal feature representations and model interpretability deep neural network (see DNN model) default, definition of, Model versioning with a managed service Dense layers, Solution, Using images with metadata design patterns, definition of, What Are Design Patterns?

pages: 398 words: 86,855

Bad Data Handbook
by Q. Ethan McCallum
Published 14 Nov 2012

–Guessing Text Encoding, Normalizing Text–Normalizing Text, Problem: Application-Specific Characters Leaking into Plain Text–Problem: Application-Specific Characters Leaking into Plain Text, Problem: Application-Specific Characters Leaking into Plain Text–Problem: Application-Specific Characters Leaking into Plain Text, Getting Reviews–Sentiment Classification, Sentiment Classification, Polarized Language–Polarized Language, Corpus Creation–Corpus Creation, Training a Classifier–Lessons Learned, Moving On to the Professional World, Government Data Is Very Real, Lessons Learned and Looking Ahead, File Formats–File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, File Formats, A Relational Cost Allocations Model–A Relational Cost Allocations Model, The Delicate Sound of a Combinatorial Explosion…–The Delicate Sound of a Combinatorial Explosion…, The Hidden Network Emerges–Finding Value in Network Properties Apache Thrift, File Formats columnar, Understand the Data Structure–Understand the Data Structure complexity of, increasing, The Delicate Sound of a Combinatorial Explosion…–The Delicate Sound of a Combinatorial Explosion… CSV, Is It Just Me, or Does This Data Smell Funny?, Understand the Data Structure–Understand the Data Structure, Keyword PPC Example–Keyword PPC Example, Problem: Application-Specific Characters Leaking into Plain Text–Problem: Application-Specific Characters Leaking into Plain Text, File Formats ER model, A Relational Cost Allocations Model–A Relational Cost Allocations Model Google Protocol Buffers, File Formats graph model, The Hidden Network Emerges–Finding Value in Network Properties human-readable format, Data Intended for Human Consumption, Not Machine Consumption–Data Spread Across Multiple Files, The Arrangement of Data–The Arrangement of Data, Reading Data from an Awkward Format–Reading Data Spread Across Several Files limiting analysis, The Arrangement of Data–The Arrangement of Data reading with software, Reading Data from an Awkward Format–Reading Data Spread Across Several Files JSON, Is It Just Me, or Does This Data Smell Funny?

We’ll also need queries for all the cases in between if we continue with this design to cover assets allocated to departments and products allocated to cost centers. Each new level of the query adds to a combinatorial explosion and begs many questions about the design. What happens if we change the allocation rules? What if a product can be allocated directly to a cost center instead of passing through a department? Are the queries efficient as the amount of data in the system increases? Is the system testable? To make matters worse, a real-world allocation model would also contain many more entities and associative entities. The Delicate Sound of a Combinatorial Explosion… We’ve introduced the problem and sketched out a rudimentary solution in just a few pages, but imagine how a real system like this might evolve over an extended period of months or even years with a team of people involved.

(see JSON) jellyfish library, Python, Text Processing with Python JSON (JavaScript Object Notation), Is It Just Me, or Does This Data Smell Funny?, Understand the Data Structure–Understand the Data Structure, File Formats, File Formats, File Formats K Koch snowflake, The Delicate Sound of a Combinatorial Explosion… L Laiacano, Adam (author), (Re)Organizing the Web’s Data–Conclusion Levy, Josh (author), Bad Data Lurking in Plain Text–Exercises Logistic Regression classifier, Polarized Language longitudinal datasets, Imputation Bias: General Issues, Other Sources of Bias M machine-learning experts, outsourcing, How to Feed and Care for Your Machine-Learning Experts–Conclusion (see also data scientist) manufacturing data example, Example 1: Defect Reduction in Manufacturing–Example 1: Defect Reduction in Manufacturing Maximum Entropy classifier, Polarized Language McCallum, Q.

pages: 396 words: 117,149

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
by Pedro Domingos
Published 21 Sep 2015

In turn, the number of possible concepts is an exponential function of the number of possible instances: since a concept labels each instance as positive or negative, adding an instance doubles the number of possible concepts. As a result, the number of concepts is an exponential function of an exponential function of the number of attributes! In other words, machine learning is a combinatorial explosion of combinatorial explosions. Perhaps we should just give up and not waste our time on such a hopeless problem? Fortunately, something happens in learning that kills off one of the exponentials, leaving only an “ordinary” singly exponential intractable problem. Suppose you have a bag full of concept definitions, each written on a piece of paper, and you take out a random one and see how well it matches the data.
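Domingos’s “explosion of explosions” can be made concrete: d Boolean attributes give 2^d possible instances, and a concept labels each instance positive or negative, so there are 2^(2^d) possible concepts. A worked check (the choice of d = 10 is arbitrary):

```python
import math

d = 10                      # number of Boolean attributes (arbitrary choice)
instances = 2 ** d          # each attribute doubles the number of instances
log10_concepts = instances * math.log10(2)  # concepts = 2 ** instances

print(instances)                                            # 1024 possible instances
print(f"about 10**{log10_concepts:.0f} possible concepts")  # about 10**308
```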

An E. coli bacterium can divide into two roughly every fifteen minutes; given enough nutrients it can grow into a mass of bacteria the size of Earth in about a day. When the number of things an algorithm needs to do grows exponentially with the size of its input, computer scientists call it a combinatorial explosion and run for cover. In machine learning, the number of possible instances of a concept is an exponential function of the number of attributes: if the attributes are Boolean, each new attribute doubles the number of possible instances by taking each previous instance and extending it with a yes or no for that attribute.
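The bacterial illustration is the same doubling arithmetic: one division every fifteen minutes is 96 doublings per day. A quick check of the cell count alone (the comparison to the size of Earth is the author’s):

```python
# Doublings of an E. coli population over one day, one division every 15 minutes.
MINUTES_PER_DAY = 24 * 60
DOUBLING_TIME_MIN = 15

doublings = MINUTES_PER_DAY // DOUBLING_TIME_MIN  # 96 doublings per day
cells = 2 ** doublings                            # starting from a single cell

print(doublings)       # 96
print(f"{cells:.2e}")  # ~7.9e28 cells descended from one bacterium in a day
```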

In Chapter 6, we’ll also see how to orient learning toward the goal while steering clear of the things we don’t know and don’t need to know. More immediately, we know we can use inverse deduction to infer the structure of the cell’s networks from data and previous knowledge, but there’s a combinatorial explosion of ways to apply it, and we need a strategy. Since metabolic networks were designed by evolution, perhaps simulating it in our learning algorithms is the way to go. In the next chapter, we’ll see how to do just that. Deeper into the brain When backprop first hit the streets, connectionists had visions of quickly learning larger and larger networks until, hardware permitting, they amounted to artificial brains.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

This is an easy-to-describe problem, one that people would have been solving even in Llull’s time. Imagine that a medieval merchant (or modern salesman) has a map of n cities, places he must pitch his tent (or deliver his pitch). He wants to find the shortest route to accomplish that goal. It’s easy to show that the number of possible routes is combinatorially explosive as a function of the number of cities n. As n gets bigger, the number of alternative routes explodes to n × (n−1) × (n−2) × … × 1, which mathematicians call n!, or ‘n factorial’. A factorial grows faster than any polynomial (geometric figure) of n: it expands faster than any square, any cube, any four-dimensional cube, and so on, all the way up to massive geometric figures that we can’t even imagine.
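That n! outgrows any polynomial is easy to see numerically. A short comparison of the route count n! against a cubic, using the factorial count from the excerpt:

```python
import math

# Number of city orderings (n!) versus a cubic polynomial, for growing n.
for n in (5, 10, 15, 20, 25):
    print(f"n={n:2d}: n! = {math.factorial(n):.3e}   n**3 = {n ** 3}")

# By n = 25 there are already ~1.6e25 orderings; no polynomial keeps pace,
# which is the combinatorial explosion behind the travelling salesman problem.
```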

The TSP can be described in a single sentence, but it is provably amongst the hardest computational problems possible, regardless of computational power, because there is no algorithm for simplifying the TSP such that a combinatoric explosion can be averted. There is no way to find the best route that is, in practice, better than looking through all the (combinatorially explosive number of) possible routes. This sort of hard fact isn’t unique to the TSP; there are scores of other problems that have this characteristic. The TSP is one of a massive number of simple problems that occur every day, all around us, but are as hard as anything. This difficulty comes down to how rapidly superexponentials get big.

Your current position is calculated (using signals from satellites), and it is also found in the graph. The satnav reasons quantitatively over distances, by exploring the roads spanning out from your location, to find the ‘best’ (shortest) means to the end (your desired destination). This is why means–ends analysis is often called reasoning as search. While not as combinatorially explosive as the TSP, the search for the best route is, in general, terrifically hard, particularly in terms of the usually large number of roads and intermediate locations between you and your destination. So rather than take until the sun burns out to find you a route, means–ends analysis in your satnav exploits heuristics to find you a route in reasonable time.
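The satnav example is heuristic graph search. Below is a minimal sketch of that idea, using A* on a toy road graph; the graph, the straight-line estimates, and the choice of A* are illustrative assumptions, not details taken from the book:

```python
import heapq

# Toy road network: node -> [(neighbour, road distance)], plus a heuristic
# estimate (straight-line distance to the goal) that guides the search.
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
straight_line_to_goal = {"A": 6, "B": 4, "C": 5, "D": 0}

def a_star(start: str, goal: str) -> float:
    """Length of the shortest route found by A* search over the toy graph."""
    frontier = [(straight_line_to_goal[start], 0, start)]  # (estimate, cost, node)
    best_cost = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        for neighbour, dist in roads[node]:
            new_cost = cost + dist
            if new_cost < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_cost
                estimate = new_cost + straight_line_to_goal[neighbour]
                heapq.heappush(frontier, (estimate, new_cost, neighbour))
    return float("inf")

print(a_star("A", "D"))  # 8, via A -> C -> B -> D
```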

pages: 574 words: 164,509

Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Published 3 Jun 2014

One such early system, the Logic Theorist, was able to prove most of the theorems in the second chapter of Whitehead and Russell’s Principia Mathematica, and even came up with one proof that was much more elegant than the original, thereby debunking the notion that machines could “only think numerically” and showing that machines were also able to do deduction and to invent logical proofs.13 A follow-up program, the General Problem Solver, could in principle solve a wide range of formally specified problems.14 Programs that could solve calculus problems typical of first-year college courses, visual analogy problems of the type that appear in some IQ tests, and simple verbal algebra problems were also written.15 The Shakey robot (so named because of its tendency to tremble during operation) demonstrated how logical reasoning could be integrated with perception and used to plan and control physical activity.16 The ELIZA program showed how a computer could impersonate a Rogerian psychotherapist.17 In the mid-seventies, the program SHRDLU showed how a simulated robotic arm in a simulated world of geometric blocks could follow instructions and answer questions in English that were typed in by a user.18 In later decades, systems would be created that demonstrated that machines could compose music in the style of various classical composers, outperform junior doctors in certain clinical diagnostic tasks, drive cars autonomously, and make patentable inventions.19 There has even been an AI that cracked original jokes.20 (Not that its level of humor was high—“What do you get when you cross an optic with a mental object? An eye-dea”—but children reportedly found its puns consistently entertaining.) The methods that produced successes in the early demonstration systems often proved difficult to extend to a wider variety of problems or to harder problem instances. One reason for this is the “combinatorial explosion” of possibilities that must be explored by methods that rely on something like exhaustive search. Such methods work well for simple instances of a problem, but fail when things get a bit more complicated. For instance, to prove a theorem that has a 5-line long proof in a deduction system with one inference rule and 5 axioms, one could simply enumerate the 3,125 possible combinations and check each one to see if it delivers the intended conclusion.

Proving a theorem with a 50-line proof does not take ten times longer than proving a theorem that has a 5-line proof: rather, if one uses exhaustive search, it requires combing through 5^50 ≈ 8.9 × 10^34 possible sequences—which is computationally infeasible even with the fastest supercomputers. To overcome the combinatorial explosion, one needs algorithms that exploit structure in the target domain and take advantage of prior knowledge by using heuristic search, planning, and flexible abstract representations—capabilities that were poorly developed in the early AI systems. The performance of these early systems also suffered because of poor methods for handling uncertainty, reliance on brittle and ungrounded symbolic representations, data scarcity, and severe hardware limitations on memory capacity and processor speed.
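Bostrom’s figures are straightforward to verify: five candidate steps per proof line give 5^5 sequences for a 5-line proof and 5^50 for a 50-line proof. A quick check:

```python
# Exhaustive proof search: with 5 candidate steps per proof line, a proof of
# length n has 5**n line sequences to check.
def candidate_sequences(proof_length: int, choices_per_line: int = 5) -> int:
    return choices_per_line ** proof_length

print(candidate_sequences(5))            # 3125 (the 3,125 in the excerpt)
print(f"{candidate_sequences(50):.1e}")  # ~8.9e+34, infeasible to enumerate
```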

Without an efficient way to encode candidate solutions (a genetic language that matches latent structure in the target domain), evolutionary search tends to meander endlessly in a vast search space or get stuck at a local optimum. Even if a good representational format is found, evolution is computationally demanding and is often defeated by the combinatorial explosion. Neural networks and genetic algorithms are examples of methods that stimulated excitement in the 1990s by appearing to offer alternatives to the stagnating GOFAI paradigm. But the intention here is not to sing the praises of these two methods or to elevate them above the many other techniques in machine learning.

When Computers Can Think: The Artificial Intelligence Singularity
by Anthony Berglas, William Black, Samantha Thalind, Max Scratchmann and Michelle Estes
Published 28 Feb 2015

The result of this is that chess programs that can just look ahead a few moves can play a passable game. A chess program that looked ahead 20 half-moves would be unbeatable. But combinatorial explosion makes that impossible. The problem is known as having exponential complexity: the number of cases grows as a power of the problem size. In this case it is 10^n, where n is the number of half-moves to look ahead. Many other problems are like that, and several very promising early results in artificial intelligence failed to scale to more realistic problems due to the resulting combinatorial explosion. This does not mean that problems cannot be solved. It just means that the naive brute-force application of simplistic algorithms cannot easily solve the world’s problems.
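Berglas’s 10^n growth becomes vivid once a machine speed is attached. A minimal sketch, assuming (purely for illustration) a machine that evaluates a billion positions per second:

```python
# Brute-force look-ahead cost with ~10 plausible moves per half-move.
POSITIONS_PER_SECOND = 1e9       # assumed machine speed, for illustration only
SECONDS_PER_YEAR = 3600 * 24 * 365

for depth in (4, 10, 20):
    positions = 10 ** depth
    years = positions / POSITIONS_PER_SECOND / SECONDS_PER_YEAR
    print(f"depth {depth:2d}: {positions:.0e} positions, ~{years:.1e} years")

# A 20-half-move look-ahead means 1e20 positions: about 3,000 years of work
# even at a billion positions per second.
```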

Human intelligence now minimal for AGI 8. Definitions of singularity 4. Hollywood and HAL 2001 1. Anthropomorphic zap gun vs. virus 2. The two HAL's 3. HAL dialog 5. The Case Against Machine Intelligence 1. Turing halting problem 2. Gödel's incompleteness theorem 3. Incompleteness argument against general AGI 4. Combinatorial explosion 5. Chinese room 6. Simulated vs. real intelligence 7. Emperors new mind 8. Intentionality 9. Brain in a vat 10. Understanding the brain 11. Consciousness and the soul 12. Only what was programmed 13. What computers can't do 14. Over-hyped technologies 15. Nonlinear difficulty, chimpanzees 16.

Indeed, in 1950 Turing wrote a landmark paper, “Computing Machinery and Intelligence”, in which he discussed the proposition that computers will be able to really think. In the paper he addressed nine objections to the proposition, and specifically addressed the irrelevance of the halting problem and the incompleteness theorem to this question.

Combinatorial explosion

Many problems in artificial intelligence involve searching for a solution out of a large number of possibilities. For example, suppose a chess program considers ten plausible moves that it might make, then for each of those moves it considers ten moves its opponent might make. That would make a total of 100 combinations of moves it needs to consider.

pages: 339 words: 92,785

I, Warbot: The Dawn of Artificially Intelligent Conflict
by Kenneth Payne
Published 16 Jun 2021

Wittgenstein was wrong—just because you couldn’t speak of something, it didn’t mean it didn’t matter. The ‘frame problem’, as it’s known, was all about knowledge representation. How could you capture reality in a simple way that presented all the salient information a computer might need to solve whatever conundrum you had in mind? Related to this was the problem of the ‘combinatorial explosion’. Many problems AI researchers worked with involved searching ahead through possible scenarios. Which move should you make in chess, or Go? What would the world look like if you moved this block here? What was the optimal route between two points on a map? The basic difficulty is that the number of possible moves increases rapidly the further into the future you look.

That’s because these simplified worlds strip out much of the rich complexity of the real world, making decisions much easier to compute—even if the calculations required are often still far from trivial. The most complex video games have a welter of possible moves. Complex sequences of actions are needed to succeed against adversaries. Even games with far fewer possible legal moves, like chess or Go, are fiendishly complicated because of the ‘combinatorial explosion’ that occurs when simple moves are combined. The fastest, most powerful computers of today certainly can’t ‘solve’ Go or chess—there are simply far too many combinations of possible moves. But it turns out that doesn’t matter. All that counts is that the machines can outperform human opponents, who have very different ways of working out what to do next.

A-10 Warthog abacuses Abbottabad, Pakistan Able Archer (1983) acoustic decoys acoustic torpedoes Adams, Douglas Aegis combat system Aerostatic Corps affective empathy Affecto Afghanistan agency aircraft see also dogfighting; drones aircraft carriers algorithms algorithm creation Alpha biases choreography deep fakes DeepMind, see DeepMind emotion recognition F-117 Nighthawk facial recognition genetic selection imagery analysis meta-learning natural language processing object recognition predictive policing alien hand syndrome Aliens (1986 film) Alpha AlphaGo Altered Carbon (television series) Amazon Amnesty International amygdala Andropov, Yuri Anduril Ghost anti-personnel mines ants Apple Aristotle armour arms races Army Research Lab Army Signal Corps Arnalds, Ólafur ARPA Art of War, The (Sun Tzu) art Artificial Intelligence agency and architecture autonomy and as ‘brittle’ connectionism definition of decision-making technology expert systems and feedback loops fuzzy logic innateness intelligence analysis meta-learning as ‘narrow’ needle-in-a-haystack problems neural networks reinforcement learning ‘strong AI’ symbolic logic and unsupervised learning ‘winters’ artificial neural networks Ashby, William Ross Asimov, Isaac Asperger syndrome Astute class boats Atari Breakout (1976) Montezuma’s Revenge (1984) Space Invaders (1978) Athens ATLAS robots augmented intelligence Austin Powers (1997 film) Australia authoritarianism autonomous vehicles see also drones autonomy B-21 Raider B-52 Stratofortress B2 Spirit Baby X BAE Systems Baghdad, Iraq Baidu balloons ban, campaigns for Banks, Iain Battle of Britain (1940) Battle of Fleurus (1794) Battle of Midway (1942) Battle of Sedan (1940) batwing design BBN Beautiful Mind, A (2001 film) beetles Bell Laboratories Bengio, Yoshua Berlin Crisis (1961) biases big data Bin Laden, Osama binary code biological weapons biotechnology bipolarity bits Black Lives Matter Black Mirror (television series) Blade Runner (1982 film) Blade Runner 2049 (2017 film) Bletchley Park, Buckinghamshire blindness Blunt, Emily board games, see under games boats Boden, Margaret bodies Boeing MQ-25 Stingray Orca submarines Boolean logic Boston Dynamics Bostrom, Nick Boyd, John brain amygdala bodies and chunking dopamine emotion and genetic engineering and language and mind merge and morality and plasticity prediction and subroutines umwelts and Breakout (1976 game) breathing control brittleness brute force Buck Rogers (television series) Campaign against Killer Robots Carlsen, Magnus Carnegie Mellon University Casino Royale (2006 film) Castro, Fidel cat detector centaur combination Central Intelligence Agency (CIA) centre of gravity chaff Challenger Space Shuttle disaster (1986) Chauvet cave, France chemical weapons Chernobyl nuclear disaster (1986) chess centaur teams combinatorial explosion and creativity in Deep Blue game theory and MuZero as toy universe chicken (game) chimeras chimpanzees China aircraft carriers Baidu COVID-19 pandemic (2019–21) D-21 in genetic engineering in GJ-11 Sharp Sword nuclear weapons surveillance in Thucydides trap and US Navy drone seizure (2016) China Lake, California Chomsky, Noam choreography chunking Cicero civilians Clarke, Arthur Charles von Clausewitz, Carl on character on culmination on defence on genius on grammar of war on materiel on nature on poker on willpower on wrestling codebreaking cognitive empathy Cold War (1947–9) arms race Berlin Crisis (1961) Cuban Missile Crisis (1962) F-117 Nighthawk Iran-Iraq War (1980–88) joint action 
Korean War (1950–53) nuclear weapons research and SR-71 Blackbird U2 incident (1960) Vienna Summit (1961) Vietnam War (1955–75) VRYAN Cole, August combinatorial creativity combinatorial explosion combined arms common sense computers creativity cyber security games graphics processing unit (GPU) mice Moore’s Law symbolic logic viruses VRYAN confirmation bias connectionism consequentialism conservatism Convention on Conventional Weapons ConvNets copying Cormorant cortical interfaces cost-benefit analysis counterfactual regret minimization counterinsurgency doctrine courageous restraint COVID-19 pandemic (2019–21) creativity combinatorial exploratory genetic engineering and mental disorders and transformational criminal law CRISPR, crows Cruise, Thomas Cuban Missile Crisis (1962) culmination Culture novels (Banks) cyber security cybernetics cyborgs Cyc cystic fibrosis D-21 drones Damasio, Antonio dance DARPA autonomous vehicle research battlespace manager codebreaking research cortical interface research cyborg beetle Deep Green expert system programme funding game theory research LongShot programme Mayhem Ng’s helicopter Shakey understanding and reason research unmanned aerial combat research Dartmouth workshop (1956) Dassault data DDoS (distributed denial-of-service) dead hand system decision-making technology Deep Blue deep fakes Deep Green DeepMind AlphaGo Atari playing meta-learning research MuZero object recognition research Quake III competition (2019) deep networks defence industrial complex Defence Innovation Unit Defence Science and Technology Laboratory defence delayed gratification demons deontological approach depth charges Dionysus DNA (deoxyribonucleic acid) dodos dogfighting Alpha domains dot-matrix tongue Dota II (2013 game) double effect drones Cormorant D-21 GJ-11 Sharp Sword Global Hawk Gorgon Stare kamikaze loitering munitions nEUROn operators Predator Reaper reconnaissance RQ-170 Sentinel S-70 Okhotnik surveillance swarms Taranis wingman role X-37 X-47b dual use technology Eagleman, David early warning systems Echelon economics Edge of Tomorrow (2014 film) Eisenhower, Dwight Ellsberg, Daniel embodied cognition emotion empathy encryption entropy environmental niches epilepsy epistemic community escalation ethics Asimov’s rules brain and consequentialism deep brain stimulation and deontological approach facial recognition and genetic engineering and golden rule honour hunter-gatherer bands and identity just war post-conflict reciprocity regulation surveillance and European Union (EU) Ex Machina (2014 film) expert systems exploratory creativity extra limbs Eye in the Sky (2015 film) F-105 Thunderchief F-117 Nighthawk F-16 Fighting Falcon F-22 Raptor F-35 Lightning F/A-18 Hornet Facebook facial recognition feedback loops fighting power fire and forget firmware 5G cellular networks flow fog of war Ford forever wars FOXP2 gene Frahm, Nils frame problem France Fukushima nuclear disaster (2011) Future of Life Institute fuzzy logic gait recognition game theory games Breakout (1976) chess, see chess chicken Dota II (2013) Go, see Go Montezuma’s Revenge (1984) poker Quake III (1999) Space Invaders (1978) StarCraft II (2010) toy universes zero sum games gannets ‘garbage in, garbage out’ Garland, Alexander Gates, William ‘Bill’ Gattaca (1997 film) Gavotti, Giulio Geertz, Clifford generalised intelligence measure Generative Adversarial Networks genetic engineering genetic selection algorithms genetically modified crops genius Germany Berlin Crisis (1961) Nuremburg Trials (1945–6) 
Russian hacking operation (2015) World War I (1914–18) World War II (1939–45) Ghost in the Shell (comic book) GJ-11 Sharp Sword Gladwell, Malcolm Global Hawk drone global positioning system (GPS) global workspace Go (game) AlphaGo Gödel, Kurt von Goethe, Johann golden rule golf Good Judgment Project Google BERT Brain codebreaking research DeepMind, see DeepMind Project Maven (2017–) Gordievsky, Oleg Gorgon Stare GPT series grammar of war Grand Challenge aerial combat autonomous vehicles codebreaking graphics processing unit (GPU) Greece, ancient grooming standard Groundhog Day (1993 film) groupthink guerilla warfare Gulf War First (1990–91) Second (2003–11) hacking hallucinogenic drugs handwriting recognition haptic vest hardware Harpy Hawke, Ethan Hawking, Stephen heat-seeking missiles Hebrew Testament helicopters Hellfire missiles Her (2013 film) Hero-30 loitering munitions Heron Systems Hinton, Geoffrey Hitchhiker’s Guide to the Galaxy, The (Adams) HIV (human immunodeficiency viruses) Hoffman, Frank ‘Holeshot’ (Cole) Hollywood homeostasis Homer homosexuality Hongdu GJ-11 Sharp Sword honour Hughes human in the loop human resources human-machine teaming art cyborgs emotion games King Midas problem prediction strategy hunter-gatherer bands Huntingdon’s disease Hurricane fighter aircraft hydraulics hypersonic engines I Robot (Asimov) IARPA IBM identity Iliad (Homer) image analysis image recognition cat detector imagination Improbotics nformation dominance information warfare innateness intelligence analysts International Atomic Energy Agency International Criminal Court international humanitarian law internet of things Internet IQ (intelligence quotient) Iran Aegis attack (1988) Iraq War (1980–88) nuclear weapons Stuxnet attack (2010) Iraq Gulf War I (1990–91) Gulf War II (2003–11) Iran War (1980–88) Iron Dome Israel Italo-Turkish War (1911–12) Jaguar Land Rover Japan jazz JDAM (joint directed attack munition) Jeopardy Jobs, Steven Johansson, Scarlett Johnson, Lyndon Joint Artificial Intelligence Center (JAIC) de Jomini, Antoine jus ad bellum jus in bello jus post bellum just war Kalibr cruise missiles kamikaze drones Kasparov, Garry Kellogg Briand Pact (1928) Kennedy, John Fitzgerald KGB (Komitet Gosudarstvennoy Bezopasnosti) Khrushchev, Nikita kill chain King Midas problem Kissinger, Henry Kittyhawk Knight Rider (television series) know your enemy know yourself Korean War (1950–53) Kratos XQ-58 Valkyrie Kubrick, Stanley Kumar, Vijay Kuwait language connectionism and genetic engineering and natural language processing pattern recognition and semantic webs translation universal grammar Law, Jude LeCun, Yann Lenat, Douglas Les, Jason Libratus lip reading Litvinenko, Alexander locked-in patients Lockheed dogfighting trials F-117 Nighthawk F-22 Raptor F-35 Lightning SR-71 Blackbird logic loitering munitions LongShot programme Lord of the Rings (2001–3 film trilogy) LSD (lysergic acid diethylamide) Luftwaffe madman theory Main Battle Tanks malum in se Manhattan Project (1942–6) Marcus, Gary Maslow, Abraham Massachusetts Institute of Technology (MIT) Matrix, The (1999 film) Mayhem McCulloch, Warren McGregor, Wayne McNamara, Robert McNaughton, John Me109 fighter aircraft medical field memory Merkel, Angela Microsoft military industrial complex Mill, John Stuart Milrem mimicry mind merge mind-shifting minimax regret strategy Minority Report (2002 film) Minsky, Marvin Miramar air base, San Diego missiles Aegis combat system agency and anti-missile gunnery heat-seeking Hellfire missiles 
intercontinental Kalibr cruise missiles nuclear warheads Patriot missile interceptor Pershing II missiles Scud missiles Tomahawk cruise missiles V1 rockets V2 rockets mission command mixed strategy Montezuma’s Revenge (1984 game) Moore’s Law mosaic warfare Mueller inquiry (2017–19) music Musk, Elon Mutually Assured Destruction (MAD) MuZero Nagel, Thomas Napoleon I, Emperor of the French Napoleonic France (1804–15) narrowness Nash equilibrium Nash, John National Aeronautics and Space Administration (NASA) National Security Agency (NSA) National War College natural language processing natural selection Nature navigation computers Nazi Germany (1933–45) needle-in-a-haystack problems Netflix network enabled warfare von Neumann, John neural networks neurodiversity nEUROn drone neuroplasticity Ng, Andrew Nixon, Richard normal accident theory North Atlantic Treaty Organization (NATO) North Korea nuclear weapons Cuban Missile Crisis (1962) dead hand system early warning systems F-105 Thunderchief and game theory and Hiroshima and Nagasaki bombings (1945) Manhattan Project (1942–6) missiles Mutually Assured Destruction (MAD) second strike capability submarines and VRYAN and in WarGames (1983 film) Nuremburg Trials (1945–6) Obama, Barack object recognition Observe Orient Decide and Act (OODA) offence-defence balance Office for Naval Research Olympic Games On War (Clausewitz), see Clausewitz, Carl OpenAI optogenetics Orca submarines Ottoman Empire (1299–1922) pain Pakistan Palantir Palmer, Arnold Pandemonium Panoramic Research Papert, Seymour Parkinson’s disease Patriot missile interceptors pattern recognition Pearl Harbor attack (1941) Peloponnesian War (431–404 BCE) Pentagon autonomous vehicle research codebreaking research computer mouse development Deep Green Defence Innovation Unit Ellsberg leaks (1971) expert system programme funding ‘garbage in, garbage out’ story intelligence analysts Project Maven (2017–) Shakey unmanned aerial combat research Vietnam War (1955–75) perceptrons Perdix Pershing II missiles Petrov, Stanislav Phalanx system phrenology pilot’s associate Pitts, Walter platform neutrality Pluribus poker policing polygeneity Portsmouth, Hampshire Portuguese Man o’ War post-traumatic stress disorder (PTSD) Predator drones prediction centaur teams ‘garbage in, garbage out’ story policing toy universes VRYAN Prescience principles of war prisoners Project Improbable Project Maven (2017–) prosthetic arms proximity fuses Prussia (1701–1918) psychology psychopathy punishment Putin, Vladimir Pyeongchang Olympics (2018) Qinetiq Quake III (1999 game) radar Rafael RAND Corporation rational actor model Rawls, John Re:member (Arnalds) Ready Player One (Cline) Reagan, Ronald Reaper drones reciprocal punishment reciprocity reconnaissance regulation ban, campaigns for defection self-regulation reinforcement learning remotely piloted air vehicles (RPAVs) revenge porn revolution in military affairs Rid, Thomas Robinson, William Heath Robocop (1987 film) Robotics Challenge robots Asimov’s rules ATLAS Boston Dynamics homeostatic Shakey symbolic logic and Rome Air Defense Center Rome, ancient Rosenblatt, Frank Royal Air Force (RAF) Royal Navy RQ-170 Sentinel Russell, Stuart Russian Federation German hacking operation (2015) Litvinenko murder (2006) S-70 Okhotnik Skripal poisoning (2018) Ukraine War (2014–) US election interference (2016) S-70 Okhotnik SAGE Said and Done’ (Frahm) satellite navigation satellites Saudi Arabia Schelling, Thomas schizophrenia Schwartz, Jack Sea Hunter security dilemma Sedol, 
Lee self-actualisation self-awareness self-driving cars Selfridge, Oliver semantic webs Shakey Shanahan, Murray Shannon, Claude Shogi Silicon Valley Simon, Herbert Single Integrated Operations Plan (SIOP) singularity Siri situational awareness situationalist intelligence Skripal, Sergei and Yulia Slaughterbots (2017 video) Slovic, Paul smartphones Smith, Willard social environments software Sophia Sorcerer’s Apprentice, The (Goethe) South China Sea Soviet Union (1922–91) aircraft Berlin Crisis (1961) Chernobyl nuclear disaster (1986) Cold War (1947–9), see Cold War collapse (1991) Cuban Missile Crisis (1962) early warning systems Iran-Iraq War (1980–88) Korean War (1950–53) nuclear weapons radar technology U2 incident (1960) Vienna Summit (1961) Vietnam War (1955–75) VRYAN World War II (1939–45) Space Invaders (1978 game) SpaceX Sparta Spike Firefly loitering munitions Spitfire fighter aircraft Spotify Stanford University Stanley Star Trek (television series) StarCraft II (2010 game) stealth strategic bombing strategic computing programme strategic culture Strategy Robot strategy Strava Stuxnet sub-units submarines acoustic decoys nuclear Orca South China Sea incident (2016) subroutines Sukhoi Sun Tzu superforecasting surveillance swarms symbolic logic synaesthesia synthetic operation environment Syria Taliban tanks Taranis drone technological determinism Tempest Terminator franchise Tesla Tetlock, Philip theory of mind Threshold Logic Unit Thucydides TikTok Tomahawk cruise missiles tongue Top Gun (1986 film) Top Gun: Maverick (2021 film) torpedoes toy universes trade-offs transformational creativity translation Trivers, Robert Trump, Donald tumours Turing, Alan Twitter 2001: A Space Odyssey (1968 film) Type-X Robotic Combat Vehicle U2 incident (1960) Uber Uexküll, Jacob Ukraine ultraviolet light spectrum umwelts uncanny valley unidentified flying objects (UFOs) United Kingdom AI weapons policy armed force, size of Battle of Britain (1940) Bletchley Park codebreaking Blitz (1940–41) Cold War (1947–9) COVID-19 pandemic (2019–21) DeepMind, see DeepMind F-35 programme fighting power human rights legislation in Litvinenko murder (2006) nuclear weapons principles of war Project Improbable Qinetiq radar technology Royal Air Force Royal Navy Skripal poisoning (2018) swarm research wingman concept World War I (1914–18) United Nations United States Afghanistan War (2001–14) Air Force Army Research Lab Army Signal Corps Battle of Midway (1942) Berlin Crisis (1961) Bin Laden assassination (2011) Black Lives Matter protests (2020) centaur team research Central Intelligence Agency (CIA) Challenger Space Shuttle disaster (1986) Cold War (1947–9), see Cold War COVID-19 pandemic (2019–21) Cuban Missile Crisis (1962) culture cyber security DARPA, see DARPA Defense Department drones early warning systems F-35 programme Gulf War I (1990–91) Gulf War II (2003–11) IARPA Iran Air shoot-down (1988) Korean War (1950–53) Manhattan Project (1942–6) Marines Mueller inquiry (2017–19) National Security Agency National War College Navy nuclear weapons Office for Naval Research Patriot missile interceptor Pearl Harbor attack (1941) Pentagon, see Pentagon Project Maven (2017–) Rome Air Defense Center Silicon Valley strategic computing programme U2 incident (1960) Vienna Summit (1961) Vietnam War (1955–75) universal grammar Universal Schelling Machine (USM) unmanned aerial vehicles (UAVs), see drones unsupervised learning utilitarianism UVision V1 rockets V2 rockets Vacanti mouse Valkyries Van Gogh, Vincent Vietnam War 
(1955–75) Vigen, Tyler Vincennes, USS voice assistants VRYAN Wall-e (2008 film) WannaCry ransomware War College, see National War College WarGames (1983 film) warrior ethos Watson weapon systems WhatsApp Wiener, Norbert Wikipedia wingman role Wittgenstein, Ludwig World War I (1914–18) World War II (1939–45) Battle of Britain (1940) Battle of Midway (1942) Battle of Sedan (1940) Bletchley Park codebreaking Blitz (1940–41) Hiroshima and Nagasaki bombings (1945) Pearl Harbor attack (1941) radar technology V1 rockets V2 rockets VRYAN and Wrangham, Richard Wright brothers WS-43 loitering munitions Wuhan, China X-37 drone X-drone X-rays YouTube zero sum games

pages: 509 words: 92,141

The Pragmatic Programmer
by Andrew Hunt and Dave Thomas
Published 19 Oct 1999

Rather than digging through a hierarchy yourself, just ask for what you need directly. We added a method to Selection to get the time zone on our behalf: the plotting routine doesn't care whether the time zone comes from the Recorder directly, from some contained object within Recorder, or whether Selection makes up a different time zone entirely. The selection routine, in turn, should probably just ask the recorder for its time zone, leaving it up to the recorder to get it from its contained Location object. Traversing relationships between objects directly can quickly lead to a combinatorial explosion[1] of dependency relationships. You can see symptoms of this phenomenon in a number of ways:

- Large C or C++ projects where the command to link a unit test is longer than the test program itself
- "Simple" changes to one module that propagate through unrelated modules in the system
- Developers who are afraid to change code because they aren't sure what might be affected

Systems with many unnecessary dependencies are very hard (and expensive) to maintain, and tend to be highly unstable.

[1] If n objects all know about each other, then a change to just one object can result in the other n – 1 objects needing changes.
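The Selection/Recorder/Location example can be sketched as follows. This is a minimal Python illustration of "ask for what you need directly", not the book's own code, and the method names are assumptions:

```python
class Location:
    def __init__(self, time_zone: str):
        self.time_zone = time_zone

class Recorder:
    def __init__(self, location: Location):
        self._location = location

    def get_time_zone(self) -> str:
        # The recorder hides where the time zone actually comes from.
        return self._location.time_zone

class Selection:
    def __init__(self, recorder: Recorder):
        self._recorder = recorder

    def get_time_zone(self) -> str:
        # Delegation: callers ask the selection, not the objects behind it.
        return self._recorder.get_time_zone()

def plot(selection: Selection) -> None:
    # The plotting routine never reaches through Recorder or Location, so
    # changes to their internals cannot ripple out to it.
    print("plotting in", selection.get_time_zone())

plot(Selection(Recorder(Location("UTC"))))  # plotting in UTC
```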

A big advantage of systems such as these is that you have a single, consistent interface to the blackboard. When building a conventional distributed application, you can spend a great deal of time crafting unique API calls for every distributed transaction and interaction in the system. With the combinatorial explosion of interfaces and interactions, the project can quickly become a nightmare. Organizing Your Blackboard When the detectives work on large cases, the blackboard may become cluttered, and it may become difficult to locate data on the board. The solution is to partition the blackboard and start to organize the data on the blackboard somehow.
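A toy version of that single, consistent blackboard interface might look like the following sketch; the put/get API shown is an assumption for illustration, not taken from the book:

```python
class Blackboard:
    """Minimal shared store: every participant uses the same put/get interface."""

    def __init__(self):
        self._facts = {}

    def put(self, key, value):
        self._facts[key] = value

    def get(self, key, default=None):
        return self._facts.get(key, default)

# Independent 'detectives' coordinate through the board rather than through
# pairwise APIs, so adding a participant does not multiply the interfaces.
board = Blackboard()
board.put("fingerprint_match", True)

if board.get("fingerprint_match"):
    board.put("suspect", "Mr. X")

print(board.get("suspect"))  # Mr. X
```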

Index A Accessor function, 31 ACM, see Association for Computing Machinery Active code generator, 104 Activity diagram, 150 Advanced C++ Programming Styles and Idioms, 265 Advanced Programming in the Unix Environment, 264 Aegis transaction-based configuration management, 246, 271 Agent, 76, 117, 297 Algorithm binary chop, 180 choosing, 182 combinatoric, 180 divide-and-conquer, 180 estimating, 177, 178 linear, 177 O() notation, 178, 181 quicksort, 180 runtime, 181 sublinear, 177 Allocations, nesting, 131 Analysis Patterns, 264 Anonymity, 258 AOP, see Aspect-Oriented Programming Architecture deployment, 156 flexibility, 46 prototyping, 55 temporal decoupling, 152 Art of Computer Programming, 183 Artificial intelligence, marauding, 26 Aspect-Oriented Programming (AOP), 39, 273 Assertion, 113, 122, 175 side effects, 124 turning off, 123 Association for Computing Machinery (ACM), 262 Communications of the ACM, 263 SIGPLAN, 263 Assumptions, testing, 175 “at” command, 231 Audience, 21 needs, 19 auto_ptr, 134 Automation, 230 approval procedures, 235 build, 88, 233 compiling, 232 cron, 231 documentation, 251 scripts, 234 team, 229 testing, 29, 238 Web site generation, 235 awk, 99 B Backus-Naur Form (BNF), 59n Base class, 112 bash shell, 80, 82n Bean, see Enterprise Java Beans (EJB) Beck, Kent, 194, 258 Beowulf project, 268 “Big O” notation, 177 “Big picture”, 8 Binary chop, 97, 180 Binary format, 73 problems parsing, 75 bison, 59, 269 BIST, see Built-In Self Test Blackboard system, 165 partitioning, 168 workflow, 169 Blender example contract for, 119, 289 regression test jig, 305 workflow, 151 BNF, see Backus-Naur Form (BNF) Boiled frog, 8, 175, 225 Boundary condition, 173, 243 Brain, Marshall, 265 Branding, 226 Brant, John, 268 “Broken Window Theory”, 5 vs. 
stone soup, 9 Brooks, Fred, 264 Browser, class, 187 Browser, refactoring, 187, 268 Bug, 90 failed contract as, 111 see also Debugging; Error Build automation, 88, 233 dependencies, 233 final, 234 nightly, 231 refactoring, 187 Built-In Self Test (BIST), 189 Business logic, 146 Business policy, 203 C C language assertions, 122 DBC, 114 duplication, 29 error handling, 121 error messages, 115 macros, 121 Object Pascal interface, 101 C++ language, 46 assertions, 122 auto_ptr, 134 books, 265 DBC, 114 decoupling, 142 DOC++, 251, 269 duplication, 29 error messages, 115 exceptions, 132 unit tests, 193 Caching, 31 Call, routine, 115, 173 Cascading Style Sheets (CSS), 253 Cat blaming, 3 herding, 224 Schrödinger’s, 47 Catalyzing change, 8 Cathedrals, xx Cetus links, 265 Change, catalyzing, 8 Christiansen, Tom, 81 Class assertions, 113 base, 112 coupling, 139, 142 coupling ratios, 242 encapsulating resource, 132 invariant, 110, 113 number of states, 245 resource allocation, 132 subclass, 112 wrapper, 132, 133, 135, 141 Class browser, 187 ClearCase, 271 Cockburn, Alistair, xxiii, 205, 264, 272 Code generator, 28, 102 active, 104 makefiles, 232 parsers, 105 passive, 103 Code profiler, 182 Code reviews, 33, 236 Coding algorithm speed, 177 comments, 29, 249 coupled, 130 coverage analysis, 245 database schema, 104 defensive, 107 and documentation, 29, 248 estimating, 68 exceptions, 125 implementation, 173 iterative, 69 “lazy”, 111 metrics, 242 modules, 138 multiple representations, 28 orthogonality, 34, 36, 40 ownership, 258 prototypes, 55 server code, 196 “shy”, 40, 138 specifications, 219 tracer bullets, 49–51 unit testing, 190, 192 see also Coupled code; Decoupled code; Metadata; Source code control system (SCCS) Cohesion, 35 COM, see Component Object Model Combinatorial explosion, 140, 167 Combinatoric algorithm, 180 Command shell, 77 bash, 80 Cygwin, 80 vs. GUI, 78 UWIN, 81 Windows, 80 Comment, 29, 249 avoiding duplication, 29 DBC, 113 parameters, 250 types of, 249 unnecessary, 250 see also Documentation Common Object Request Broker (CORBA), 29, 39, 46 Event Service, 160 Communicating, 18 audience, 19, 21 duplication, 32 e-mail, 22 and formal methods, 221 presentation, 20 style, 20 teams, 225 users, 256 writing, 18 Communications of the ACM, 263 Comp.object FAQ, 272 Compiling, 232 compilers, 267 DBC, 113 warnings and debugging, 92 Component Object Model (COM), 55 Component-based systems, see Modular system Concurrency, 150 design, 154 interfaces, 155 and Programming by Coincidence, 154 requirements analysis of, 150 workflow, 150 Concurrent Version System (CVS), 271 Configuration cooperative, 148 dynamic, 144 metadata, 147 Configuration management, 86, 271 Constantine, Larry L., 35 Constraint management, 213 Constructor, 132 initialization, 155 Contact, authors’ e-mail, xxiii Context, use instead of globals, 40 Contract, 109, 174 see also Design by contract (DBC) Controller (MVC), 162 Coplien, Jim, 265 CORBA, see Common Object Request Broker Coupled code, 130 coupling ratios, 242 minimizing, 138, 158 performance, 142 temporal coupling, 150 see also Decoupled code Coverage analysis, 245 Cox, Brad J., 189n Crash, 120 Critical thinking, 16 cron, 231 CSS, see Cascading Style Sheets CVS, see Concurrent Version System Cygwin, 80, 270 D Data blackboard system, 169 caching, 31 dictionary, 144 dynamic data structures, 135 global, 40 language, 60 normalizing, 30 readable vs. 
understandable, 75 test, 100, 243 views, 160 visualizing, 93 see also Metadata Data Display Debugger (DDD), 93, 268 Database active code generator, 104 schema, 105f, 141, 144 schema maintenance, 100 DBC, see Design by contract DDD, see Data Display Debugger Deadline, 6, 246 Deadlock, 131 Debugging, 90 assertions, 123 binary search, 97 bug location, 96 bug reproduction, 93 checklist, 98 compiler warnings and, 92 corrupt variables, 95 “Heisenbug”, 124 rubber ducking, 95 and source code branching, 87 surprise bug, 97 and testing, 92, 195 time bomb, 192 tracing, 94 view, 164 visualizing data, 93 Decision making, 46 Decoupled code, 38, 40 architecture, 152 blackboard system, 166 Law of Demeter, 140 metadata, 145 minimizing coupling, 138 modular testing, 244 physical decoupling, 142 temporal coupling, 150 workflow, 150 see also Coupled code Defensive coding, 107 Delegation, 304 Delphi, 55 see also Object Pascal Demeter project, 274 Demeter, Law of, 140 Dependency, reducing, see Modular system; Orthogonality Deployment, 156 Deployment descriptor, 148 Design accessor functions, 31 concurrency, 154 context, 174 deployment, 156 design/methodology testing, 242 metadata, 145 orthogonality, 34, 37 physical, 142 refactoring, 186 using services, 154 Design by contract (DBC), 109, 155 and agents, 117 assertions, 113 class invariant, 110 as comments, 113 dynamic contracts, 117 iContract, 268 language support, 114 list insertion example, 110 pre- and postcondition, 110, 113, 114 predicates, 110 unit testing, 190 Design Patterns, 264 observer, 158 singleton, 41 strategy, 41 Destructor, 132 Detectives, 165 Development tree, 87 Development, iterative, 69 Divide-and-conquer algorithm, 180 DOC++ documentation generator, 251, 269 DocBook, 254 Documentation automatic updating, 251 and code, 29, 248 comments, 29, 113, 249, 251 executable, 251 formats, 253 HTML, 101 hypertext, 210 internal/external, 248 invariant, 117 mark-up languages, 254 orthogonality, 42 outline, 18 requirements, 204 technical writers, 252 word processors, 252, 254 writing specifications, 218 see also Comment; Web documentation Dodo, 148 Domain, problem, 58, 66 Don’t repeat yourself, see DRY principle Downloading source code, see Example code Dr.

pages: 313 words: 91,098

The Knowledge Illusion
by Steven Sloman
Published 10 Feb 2017

And to fully appreciate the answer to each of these questions would require understanding the answer to a number of other questions. Fully understanding who buys hairpins would require an analysis of hairstyles, which in turn would require understanding fashion and its underlying social structure. Computer scientists refer to this problem of ever-growing information needs as combinatorial explosion. To achieve complete understanding necessitates understanding increasingly more and more, and the combination of everything you need to understand to achieve complete understanding quickly becomes more than you can bear without, well, exploding. Chaos theory is another mathematical tool that shows that the complexity of the world is too much to handle.
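
A minimal sketch of the growth rate being described here (the branching factor of three follow-up questions per question is an assumption for illustration, not a figure from the book):

# Each thing you try to understand raises further questions (assumed: 3 each),
# so after d levels of follow-up there are 3**d open questions.
for depth in (1, 5, 10, 20):
    print(depth, 3 ** depth)
# 1 3
# 5 243
# 10 59049
# 20 3486784401

With a branching factor of k, d levels of "fully understanding" generate k^d questions, which is the explosion the passage describes.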

See also artificial intelligence (AI); thought body-brain cooperation in cognitive processing, 101–05 collective mind, 5–6 CRT (Cognitive Reflection Test), 80–84 division of cognitive labor, 14, 109–11, 120–21, 128–29 illusion of understanding, 8, 15 individual limitations, 4–5, 15 Landauer, Thomas, 24–26 the mind compared to a computer, 24–27 moving text window example, 93–95 origins of, 4 Turing, Alan, 25 collaboration, 14, 109–11, 115–18, 121–22, 149–50, 226 collective intelligence hypothesis, 209–10 combinatorial explosion, 34 “The Coming Technological Singularity” (Vinge), 132 communal learning Brown, Ann, 228–30 Fostering Communities of Learners program, 228–30 group/regroup strategy, 228–30 jigsaw method, 229–30 communication language, 113–14 non-verbal, 114–15, 117 community development of, 112–13 intelligence, 259 responsibility, 259–61 community of knowledge, 80, 200, 206–14, 221, 223–27, 241–42 compatibility of different group members’ knowledge, 126 complexity airplane example, 28 beehive example, 107–08, 113–14 car example, 28 chaos theory, 34–35 class reunion example, 31 combinatorial explosion, 34 fractals, 33–34 hairpin example, 34 of the human brain, 29–30 military strategy, 32–33 in the natural world, 29–31 of politics, 16 recognizing, 35 reducing, 250 of technology, 134–35 weather prediction, 30–31 comprehension illusion of, 217–18 inverted text example, 217 Pledge of Allegiance example, 217–18 “Purple Haze” example, 218 computer checkers example of testing intelligence of a team, 210–11 “Computing Machinery and Intelligence” (Turing), 25 consequences of tiny changes.

pages: 174 words: 56,405

Machine Translation
by Thierry Poibeau
Published 14 Sep 2017

This was particularly the case in March 2016, when Google Deepmind’s system AlphaGo—based on deep learning—beat the world champion in the game of Go. This approach is especially efficient in complex environments such as Go, where it is impossible to systematically explore all the possible combinations due to combinatorial explosion (i.e., there are very quickly too many possibilities to be able to explore all of them systematically). The complexity of human languages is somewhat different: the overall meaning of a sentence or of a text is based on ambiguous words, with no clear-cut boundaries between word senses, and all in relation to one another.

See Mobile phone Centre d’Études sur la Traduction Automatique (CETA), 67–68, 84 Centre National de la Recherche Scientifique (CNRS), 67 Chandioux, John, 87 Child language acquisition, 255 China, 67, 86 Chinese, 56, 88, 163–165, 192, 209, 215, 228, 232, 250 Chomsky, Noam, 63, 65 Church, Kenneth, 105 Co-construction of meaning, 20 Cognate, 11, 107–108, 261 Cognitive plausibility, 20, 23, 178, 181–184, 187, 251–256 Cognitive sciences, 2. See also Cognitive plausibility Cold War, 49, 60 Colmerauer, Alain, 84 Combinatorial explosion, 182 Communication network, 249. See also Social network Compendium of translation software, 229 Complexity (linguistic), 18, 23, 182, 195, 255 Compound words, 15, 23, 33, 46, 164–165, 214, 261 Comprehension evaluation. See Evaluation measure and test Computational linguistics, 15, 36, 37, 68, 82–84 Computation time, 54, 149, 155, 170, Computer documentation, 119 Confidential data 230–231.

pages: 696 words: 143,736

The Age of Spiritual Machines: When Computers Exceed Human Intelligence
by Ray Kurzweil
Published 31 Dec 1998

AND CONCLUDED—EUREKA—SPACE IS CURVED! CHAPTER FIVE CONTEXT AND KNOWLEDGE PUTTING IT ALL TOGETHER So how well have we done? Many apparently difficult problems do yield to the application of a few simple formulas. The recursive formula is a master at analyzing problems that display inherent combinatorial explosion, ranging from the playing of board games to proving mathematical theorems. Neural nets and related self-organizing paradigms emulate our pattern-recognition faculties, and do a fine job of discerning such diverse phenomena as human speech, letter shapes, visual objects, faces, fingerprints, and land terrain images.

Then these little machines could build replicas of themselves, achieving the field’s key objective. The reason that self-replication is important is that it is too expensive to build these tiny machines one at a time. To be effective, nanometer-sized machines need to come in the trillions. The only way to achieve this economically is through combinatorial explosion: let the machines build themselves. Drexler, Merkle (a coinventor of public key encryption, the primary method of encrypting messages), and others have convincingly described how such a self-replicating nanorobot—nanobot—could be constructed. The trick is to provide the nanobot with sufficiently flexible manipulators—arms and hands—so that it is capable of building a copy of itself.

Raj Reddy, Carnegie Mellon University’s AI guru, cites studies of chess as playing the same role in artificial intelligence that studies of E. coli play in biology: an ideal laboratory for studying fundamental questions.5 Computers use their extreme speed to analyze the vast combinations created by the combinatorial explosion of moves and countermoves. While chess programs may use a few other tricks (such as storing the openings of all master chess games in this century and precomputing endgames), they essentially rely on their combination of speed and precision. In comparison, humans, even chess masters, are extremely slow and imprecise.
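
As a rough check on the arithmetic quoted above (taking the passage's branching factor of about thirty moves per half-move, and "two moves each" as four half-moves):

# ~30 legal moves per half-move; two full moves means four half-moves.
branching = 30
print(branching ** 4)   # 810000, i.e. roughly the 800,000 positions quoted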

Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurelien Geron
Published 14 Aug 2019

For example, if there were two features a and b, PolynomialFeatures with degree=3 would not only add the features a^2, a^3, b^2, and b^3, but also the combinations ab, a^2b, and ab^2. Warning PolynomialFeatures(degree=d) transforms an array containing n features into an array containing (n + d)! / (d! n!) features, where n! is the factorial of n, equal to 1 × 2 × 3 × ⋯ × n. Beware of the combinatorial explosion of the number of features! Learning Curves If you perform high-degree Polynomial Regression, you will likely fit the training data much better than with plain Linear Regression. For example, Figure 4-14 applies a 300-degree polynomial model to the preceding training data, and compares the result with a pure linear model and a quadratic model (2nd-degree polynomial).
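
A minimal sketch of that warning in action, assuming scikit-learn is available; the feature count matches (n + d)! / (d! n!) because PolynomialFeatures includes the bias column by default:

from math import comb
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.random.rand(5, 10)              # 5 samples, n = 10 features
poly = PolynomialFeatures(degree=3)    # d = 3
X_poly = poly.fit_transform(X)
print(X_poly.shape)                    # (5, 286)
print(comb(10 + 3, 3))                 # 286 = (n + d)! / (d! n!)

Doubling n to 20 already yields 1,771 output features, which is the combinatorial explosion the warning refers to.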

Fortunately, when using SVMs you can apply an almost miraculous mathematical technique called the kernel trick (it is explained in a moment). It makes it possible to get the same result as if you added many polynomial features, even with very high-degree polynomials, without actually having to add them. So there is no combinatorial explosion of the number of features since you don’t actually add any features. This trick is implemented by the SVC class. Let’s test it on the moons dataset: from sklearn.svm import SVC poly_kernel_svm_clf = Pipeline([ ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5)) ]) poly_kernel_svm_clf.fit(X, y) This code trains an SVM classifier using a 3rd-degree polynomial kernel.

pages: 721 words: 197,134

Data Mining: Concepts, Models, Methods, and Algorithms
by Mehmed Kantardzić
Published 2 Jan 2003

In that case, a sample with the missing value may be extended to the set of artificial samples, where, for each new sample, the missing value is replaced with one of the possible feature values of a given domain. Although this interpretation may look more natural, the problem with this approach is the combinatorial explosion of artificial samples. For example, if one 3-D sample X is given as X = {1, ?, 3}, where the second feature’s value is missing, the process will generate five artificial samples for the feature domain [0, 1, 2, 3, 4]. Finally, the data miner can generate a predictive model to predict each of the missing values.
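
A minimal sketch of that expansion, using the book's own example of X = {1, ?, 3} over the feature domain [0, 1, 2, 3, 4] (the function name is made up for illustration):

def expand_missing(sample, domain):
    # Replace each missing (None) value with every value in the feature domain,
    # producing one artificial sample per candidate value.
    expanded = [sample]
    for i, value in enumerate(sample):
        if value is None:
            expanded = [s[:i] + [v] + s[i + 1:] for s in expanded for v in domain]
    return expanded

print(expand_missing([1, None, 3], [0, 1, 2, 3, 4]))
# [[1, 0, 3], [1, 1, 3], [1, 2, 3], [1, 3, 3], [1, 4, 3]] -- five artificial samples

With several missing features the lists multiply, which is exactly the combinatorial explosion the passage warns about.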

Since we are looking for a compact decision tree that is consistent with the training set, why not explore all possible trees and select the simplest? Unfortunately, the problem of finding the smallest decision tree consistent with a training data set is NP-complete. Enumeration and analysis of all possible trees will cause a combinatorial explosion for any real-world problem. For example, for a small database with five attributes and only 20 training examples, the possible number of decision trees is greater than 106, depending on the number of different values for every attribute. Therefore, most decision tree-construction methods are non-backtracking, greedy algorithms.

The increasing prominence of data streams arising in a wide range of advanced applications such as fraud detection and trend learning has led to the study of online mining of frequent itemsets. Unlike mining static databases, mining data streams poses many new challenges. In addition to the one-scan nature, the unbounded memory requirement and the high data arrival rate of data streams, the combinatorial explosion of itemsets exacerbates the mining task. The high complexity of the frequent itemset mining problem hinders the application of the stream-mining techniques. We recognize that a critical review of existing techniques is needed in order to design and develop efficient mining algorithms and data structures that are able to match the processing rate of the mining with the high arrival rate of data streams.

pages: 72 words: 21,361

Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy
by Erik Brynjolfsson
Published 23 Jan 2012

Here’s a simple proof: suppose the people in a small company write down their work tasks— one task per card. If there were only 52 tasks in the company, as many as in a standard deck of cards, then there would be 52! different ways to arrange these tasks.8 This is far more than the number of grains of rice on the second 32 squares of a chessboard or even a second or third full chessboard. Combinatorial explosion is one of the few mathematical functions that outgrows an exponential trend. And that means that combinatorial innovation is the best way for human ingenuity to stay in the race with Moore’s Law. Most of the combinations may be no better than what we already have, but some surely will be, and a few will be “home runs” that are vast improvements.
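
The comparison is easy to check directly (a quick sketch; the chessboard figures are the standard grains-of-rice totals):

import math

task_orderings = math.factorial(52)                # 52! ~ 8.07e67
second_half = sum(2 ** i for i in range(32, 64))   # second 32 squares: ~1.84e19
three_boards = 3 * (2 ** 64 - 1)                   # even three full boards: ~5.53e19
print(task_orderings > three_boards)               # True, by almost 50 orders of magnitude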

pages: 405 words: 117,219

In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence
by George Zarkadakis
Published 7 Mar 2016

Early models of general-solving were built, but could not scale up. Systems could solve one general problem but not any general problem.6 Algorithms that searched data in order to make general inferences failed quickly because of something called ‘combinatorial explosion’: there were simply too many interrelated parameters and variables to calculate after a number of steps. An approach called ‘heuristics’ tried to solve the combinatorial explosion problem by ‘pruning’ branches off the tree of the search executed by any given algorithm; but even this was shown to be of limited value. In the end, AI researchers came to realise that problems such as the recognition of faces or objects required ‘common sense’ reasoning, which was fiendishly difficult to code.

pages: 196 words: 58,122

AngularJS
by Brad Green and Shyam Seshadri
Published 15 Mar 2013

End-to-End/Integration Tests As applications grow (and they tend to, really fast, before you even realize it), testing whether they work as intended manually just doesn’t cut it anymore. After all, every time you add a new feature, you have to not only verify that the new feature works, but also that your old features still work, and that there are no bugs or regressions. If you start adding multiple browsers, you can easily see how this can become a combinatorial explosion! AngularJS tries to ease that by providing a Scenario Runner that simulates user interactions with your application. The Scenario Runner allows you to describe your application in a Jasmine-like syntax. Just as with the unit tests before, we will have a series of describes (for the feature), and individual its (to describe each individual functionality of the feature).

pages: 222 words: 53,317

Overcomplicated: Technology at the Limits of Comprehension
by Samuel Arbesman
Published 18 Jul 2016

Unfortunately, understanding individual modules—or building them to begin with—doesn’t always yield the kinds of expected behaviors we might hope for. If each module has multiple inputs and multiple outputs, when they are connected the resulting behavior can still be difficult to comprehend or to predict. We often end up getting a combinatorial explosion of interactions: so many different potential interactions that the number of combinations balloons beyond our ability to handle them all. For example, if each module in a system has a total of six distinct inputs and outputs, and we have only ten modules, there are more ways of connecting all these modules together than there are stars in the universe.

pages: 913 words: 265,787

How the Mind Works
by Steven Pinker
Published 1 Jan 1997

There would be a baby-eats-slug node and a slug-eats-baby node. The brain contains a massive number of neurons, one might think, so why not do it that way? One reason not to is that there is massive and then there is really massive. The number of combinations grows exponentially with their allowable size, setting off a combinatorial explosion whose numbers surpass even our most generous guess of the brain’s capacity. According to legend, the vizier Sissa Ben Dahir claimed a humble reward from King Shirham of India for inventing the game of chess. All he asked for was a grain of wheat to be placed on the first square of a chessboard, two grains of wheat on the second, four on the third, and so on.

The limitation is all too clear to microcomputer owners deciding whether to invest in more RAM. Of course the brain, unlike a computer, comes with vast amounts of parallel hardware for storage. Sometimes theorists infer that the brain can store all contingencies in advance and that thought can be reduced to one-step pattern recognition. But the mathematics of a combinatorial explosion bring to mind the old slogan of MTV: Too much is never enough. Simple calculations show that the number of humanly graspable sentences, sentence meanings, chess games, melodies, seeable objects, and so on can exceed the number of particles in the universe. For example, there are thirty to thirty-five possible moves at each point in a chess game, each of which can be followed by thirty to thirty-five responses, defining about a thousand complete turns.

One is that memory cannot hold all the events that bombard our senses; by storing only their categories, we cut down on the load. But the brain, with its trillion synapses, hardly seems short of storage space. It’s reasonable to say that entities cannot fit in memory when the entities are combinatorial—English sentences, chess games, all shapes in all colors and sizes at all locations—because the numbers from combinatorial explosions can exceed the number of particles in the universe and overwhelm even the most generous reckoning of the brain’s capacity. But people live for a paltry two billion seconds, and there is no known reason why the brain could not record every object and event we experience if it had to. Also, we often remember both a category and its members, such as months, family members, continents, and baseball teams, so the category adds to the memory load.

pages: 202 words: 62,901

The People's Republic of Walmart: How the World's Biggest Corporations Are Laying the Foundation for Socialism
by Leigh Phillips and Michal Rozworski
Published 5 Mar 2019

The unexpected development requires a revision of the projections and schedule of the factory that produces the viscose machines, and this in turn forces an alteration of the projections and schedule of all the factories that produce the parts that make the machine, and in turn the raw materials that make those parts. Waves of impact ripple out across the entire economy in what one reviewer called a “nightmare combinatorial explosion.” And the episode is only there to illustrate what occurs, moment to moment, as a result of what happens to every single one of billions of commodities throughout the economy. Everything affects everything. How is it possible to gather all of these variables? And then, even if it were somehow possible to track all of this, using thousands of the most modern supercomputers with our early twenty-first-century processing speeds, how could we calculate all of that, and constantly reassess it on a daily or even moment to moment basis?

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

For example, if there were two features a and b, PolynomialFeatures with degree=3 would not only add the features a^2, a^3, b^2, and b^3, but also the combinations ab, a^2b, and ab^2. Warning PolynomialFeatures(degree=d) transforms an array containing n features into an array containing (n + d)! / (d! n!) features, where n! is the factorial of n, equal to 1 × 2 × 3 × ⋯ × n. Beware of the combinatorial explosion of the number of features! Learning Curves If you perform high-degree Polynomial Regression, you will likely fit the training data much better than with plain Linear Regression. For example, Figure 4-14 applies a 300-degree polynomial model to the preceding training data, and compares the result with a pure linear model and a quadratic model (2nd-degree polynomial).

Fortunately, when using SVMs you can apply an almost miraculous mathematical technique called the kernel trick (it is explained in a moment). It makes it possible to get the same result as if you added many polynomial features, even with very high-degree polynomials, without actually having to add them. So there is no combinatorial explosion of the number of features since you don’t actually add any features. This trick is implemented by the SVC class. Let’s test it on the moons dataset: from sklearn.svm import SVC poly_kernel_svm_clf = Pipeline(( ("scaler", StandardScaler()), ("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5)) )) poly_kernel_svm_clf.fit(X, y) This code trains an SVM classifier using a 3rd-degree polynomial kernel.

pages: 237 words: 64,411

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence
by Jerry Kaplan
Published 3 Aug 2015

But the early AI researchers quickly ran into a problem: the computers didn’t seem to be powerful enough to do very many interesting tasks. Formalists who studied the arcane field of theory of computation understood that building faster computers could not address this problem. No matter how speedy the computer, it could never tame what was called the “combinatorial explosion.” Solving real-world problems through step-wise analysis had this nasty habit of running out of steam the same way pressure in a city’s water supply drops when vast new tracts of land are filled with housing developments. Imagine finding the quickest driving route from San Francisco to New York by measuring each and every way you could possibly go; your trip would never get started.

pages: 296 words: 66,815

The AI-First Company
by Ash Fontana
Published 4 May 2021

Researchers at MIT, the US Defense Department’s Defense Advanced Research Projects Agency (DARPA), and International Business Machines (IBM) studied how neurons move when sensing and reacting to stimuli in the environment, and they wrote computer programs to understand natural language, vision, and reasoning. Around this time, computers learned to play chess—an enduring fascination for the rest of the century. Government funding for the field was cut in the seventies because developing a general purpose AI was deemed to be an intractable problem, prone to combinatorial explosion. This was more of a political than technological problem: the field overpromised and under-delivered. Corporations picked up the slack, developing the first robots, speech recognition systems, and language translators. Corporations anticipated AI intersecting with another trend: process automation.

pages: 719 words: 181,090

Site Reliability Engineering: How Google Runs Production Systems
by Betsy Beyer , Chris Jones , Jennifer Petoff and Niall Richard Murphy
Published 15 Apr 2016

Backend A has exactly the same options for the request it received from the Frontend, and proceeds accordingly.
Figure 21-2. A stack of dependencies
The key point is that a failed request from the DB Frontend should only be retried by Backend B, the layer immediately above it. If multiple layers retried, we’d have a combinatorial explosion.
Load from Connections
The load associated with connections is one last factor worth mentioning. We sometimes only take into account load at the backends that is caused directly by the requests they receive (which is one of the problems with approaches that model load based upon queries per second).
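
A minimal sketch of the retry amplification being avoided (the figures are illustrative, not from the book): if every one of L layers retries a failed call up to the same number of attempts, the attempts seen at the bottom layer multiply per layer.

def worst_case_attempts(attempts_per_layer, layers):
    # Each layer turns one incoming request into up to attempts_per_layer
    # downstream calls, so the bottom layer can see attempts_per_layer**layers.
    return attempts_per_layer ** layers

print(worst_case_attempts(3, 1))   # 3  -- only the layer immediately above retries
print(worst_case_attempts(3, 3))   # 27 -- every layer retries: the combinatorial blow-up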

Releasing a new version becomes much easier if we don’t need to maintain parallel release tracks for a version with the new functionality versus without the functionality. This holds particularly true if we’re not dealing with a single piece of new functionality, but a set of independent features that might be released on different schedules, which would necessitate maintaining a combinatorial explosion of different versions. Having this sort of dormant functionality also makes aborting launches easier when adverse effects are discovered during a rollout. In such cases, we can simply switch the feature off, iterate, and release an updated version of the app. Without this type of client configuration, we would have to provide a new version of the app without the feature, and update the app on all users’ phones.

pages: 301 words: 85,263

New Dark Age: Technology and the End of the Future
by James Bridle
Published 18 Jun 2018

They can also use their stronger pieces to push and pull weaker pieces towards a series of trap squares, removing them from the board and clearing the way for the rabbits. The combination of many different initial setups, the ability of pieces to move other pieces, and the possibility of making up to four moves per turn results in combinatorial explosion: a vast increase in possibilities that rapidly becomes too great for a computer programme to handle – the Alterman Wall taken to exponential extremes. Or so it was hoped. The first computer Arimaa tournament was held in 2004, with the most successful programme winning the right to challenge a group of top human players for a cash prize.

pages: 301 words: 85,126

AIQ: How People and Machines Are Smarter Together
by Nick Polson and James Scott
Published 14 May 2018

Most subscribers haven’t watched most films, so most of those trillion-plus entries in the ratings matrix are missing. Moreover, as in the case of the World War II bombers, that missingness pattern is informative. If you haven’t watched Fight Club, maybe you just haven’t gotten around to it—but then again, maybe films about nihilism just do nothing for you. The final issue is combinatorial explosion. Or, if you’d rather stick with Fight Club and philosophy over mathematics: each Netflix subscriber is a beautiful and unique phenomenological snowflake. In a database with only two films, millions of users will share identical like/dislike experiences, since only four such experiences are possible: liked both, liked neither, or liked one but not the other.

pages: 339 words: 88,732

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
by Erik Brynjolfsson and Andrew McAfee
Published 20 Jan 2014

Three years later, they created a computer program modestly called the “General Problem Solver,” which was designed to solve, in principle, any logic problem that could be described by a set of formal rules. It worked well on simple problems like Tic-Tac-Toe or the slightly harder Tower of Hanoi puzzle, although it didn’t scale up to most real-world problems because of the combinatorial explosion of possible options to consider. Cheered by their early successes and those of other artificial intelligence pioneers like Marvin Minsky, John McCarthy, and Claude Shannon, Simon and Newell were quite optimistic about how rapidly machines would master human skills, predicting in 1958 that a digital computer would be the world chess champion by 1968.29 In 1965, Simon went so far as to predict, “machines will be capable, within twenty years, of doing any work a man can do.”30 Simon won the Nobel Prize in Economics in 1978, but he was wrong about chess, not to mention all the other tasks that humans can do.
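
A hint of why even the "slightly harder" Tower of Hanoi already strains step-wise search: the minimum solution takes 2^n - 1 moves, so every extra disc doubles the work (a small illustrative sketch, not the General Problem Solver's own representation):

def hanoi_moves(n):
    # Minimum moves for n discs: solve n-1, move the largest disc, solve n-1 again.
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

for n in (3, 10, 20, 64):
    print(n, hanoi_moves(n))   # 7, 1023, 1048575, 18446744073709551615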

pages: 315 words: 92,151

Ten Billion Tomorrows: How Science Fiction Technology Became Reality and Shapes the Future
by Brian Clegg
Published 8 Dec 2015

These memories are inserted into the doll’s brain using a chair with some kind of remote electromagnetic stimulus, transferring information stored on what appear to be computer hard drives (referred to in the show as “wedges”). The Dollhouse approach, which bears a resemblance to a whole-brain version of the learning process in The Matrix, seems to underestimate the complexity of what’s going on inside a human skull. The number of potential connections of all the neurons in the brain provides a combinatorial explosion that would require every atom in the universe if we were to try to map out every possible combination. Of course, if the brain can store the data, so can an electronic device, but even in the actual connections in any particular brain, we are talking far more storage than is feasible in a compact device at the moment.

The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise
by Martin L. Abbott and Michael T. Fisher
Published 1 Dec 2009

you might ask. The answer, unfortunately, is going to depend on your particular needs, the rate of growth and unavailability and causes of unavailability in your system, customer expectation with respect to availability, contractual availability commitments, and a whole host of things that result in a combinatorial explosion, which make it impossible for us to describe for you what you need to do in your environment. That said, there are some simple rules to apply to increase your scalability and availability. We present some of the most useful here to help you in your fault isolation endeavors. Approach 1: Swim Lane the Money-Maker Whatever you do, always make sure that the thing that is most closely related to making money is appropriately isolated from the failures and demand limitations of other systems.

Often, these systems are out-of-the-box third-party or open source solutions that you install on systems to monitor resource utilization. Some application monitors might also be employed. The data collected by these systems help inform other processes such as our capacity planning process and problem resolution process. Care must be taken to avoid a combinatorial explosion of data, as that data is costly and the value of immense amounts of old data is very low. Finally, we move to answer the question of “What is the problem?” This very often requires us to rely heavily on our architectural principal Design to Be Monitored. Here, we are monitoring individual components, and often these are proprietary applications for which we are responsible.

pages: 311 words: 94,732

The Rapture of the Nerds
by Cory Doctorow and Charles Stross
Published 3 Sep 2012

* * * When the limit is reached, it jars Huw’s self-sense like a long fall to a hard floor, every virtual bone and joint buckling and bending, spine compressing, jaws clacking together. It has been going so well, the end in sight, the time running fast but Huw and father-thing and ambassador running faster, and now— “I’m stuck,” Huw says. “Not a problem. We could play this game forever—the number of variables gives rise to such a huge combinatorial explosion that there isn’t enough mass in this universe to explore all the possible states. The objective of the exercise was to procure a representative sample of moves, played by a proficient emissary, and we’ve now delivered that.” “Hey, wait a minute! ...” Huw’s stomach does a backflip, followed by a triple somersault, and is preparing to unicycle across a tightrope across the Niagara Falls while carrying a drunken hippo on his back: “You mean that was it?”

pages: 375 words: 102,166

The Genetic Lottery: Why DNA Matters for Social Equality
by Kathryn Paige Harden
Published 20 Sep 2021

A baby girl is born with about 2 million immature eggs nestled in her tiny ovaries, and over her life about 400 of those will mature and be released during ovulation. Boys don’t begin to produce sperm until puberty, and then they churn out, on average, 525 billion over their lifetime.7 For each sperm or egg cell, the meiotic remixing of DNA begins anew. The resulting combinatorial explosion of potential child genotypes from any two parents is mind-boggling: each pair of parents could produce over 70 trillion genetically unique offspring.8 And that’s before you take into account the possibility of de novo genetic mutations: brand-new genetic changes that arise in the production of gametes.
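
The 70-trillion figure can be reproduced from independent assortment alone: with 23 chromosome pairs, each parent can produce 2^23 chromosomally distinct gametes, and that is before crossing-over and the de novo mutations the passage goes on to mention (a quick sketch):

gametes_per_parent = 2 ** 23          # 8,388,608 possible gametes from assortment alone
offspring = gametes_per_parent ** 2   # one gamete from each parent
print(offspring)                      # 70,368,744,177,664 -- about 70 trillion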

pages: 390 words: 113,737

Someone comes to town, someone leaves town
by Cory Doctorow
Published 1 Jul 2005

It was easy enough to understand why the arbiters of the system subdivided Motorized Land Vehicles (629.2) into several categories, but here in the 629.22s, where the books on automobiles were, you could see the planners' deficiencies. Automobiles divided into dozens of major subcategories (taxis and limousines, buses, light trucks, vans, lorries, tractor trailers, campers, motorcycles, racing cars, and so on), then ramified into a combinatorial explosion of sub-sub-sub categories. There were Dewey numbers on some of the automotive book spines that had twenty digits or more after the decimal, an entire Dewey Decimal system hidden between 629.2 and 629.3. To the librarian, this shelf-reading looked like your garden-variety screwing around, but what really made her nervous were Alan's excursions through the card catalogue, which required constant tending to replace the cards that errant patrons made unauthorized reorderings of.

pages: 463 words: 118,936

Darwin Among the Machines
by George Dyson
Published 28 Mar 2012

The success of such a network may be evaluated by examining the number of congressmen surviving an attack and comparing such number to the number of congressmen able to communicate with one another and vote via the communications network. Such an example is, of course, farfetched but not completely without utility.”51 The more alternative connection paths there are between the nodes of a communications net, the more resistant it is to damage from within or without. But there is a combinatorial explosion working the other way: the more you increase the connectivity, the more intelligence and memory is required to route messages efficiently through the net. In a conventional circuit-switched communications network, such as the telephone system, a central switching authority establishes an unbroken connection for every communication, mediating possible conflicts with other connections being made at the same time.

pages: 352 words: 120,202

Tools for Thought: The History and Future of Mind-Expanding Technology
by Howard Rheingold
Published 14 May 2000

Shannon pointed out that the way most people would design a machine to play chess -- to mechanically examine each alternative move and evaluate it, the so-called brute-force method -- would be virtually impossible, even on the fastest imaginable computer. He estimated that a typical chess game has about 10^120 possible moves, so "A machine calculating one variation each millionth of a second would require over 10^95 years to decide on its first move!" This "combinatorial explosion" -- the rapid and overwhelming buildup of alternatives in any system in which each level leads to two or more deeper levels -- was another one of those secrets of nature that Claude Shannon was in the habit of turning up. The explosive expansion of the number of alternative decisions is a barrier that confronts any attempt to exhaustively examine a branching structure, and continues to confront programmers who seek to emulate cognitive functions by performing searches through problem spaces.
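
Shannon's time estimate follows from the figures in the passage (a rough check):

variations = 10 ** 120
per_second = 10 ** 6                      # one variation per millionth of a second
seconds_per_year = 60 * 60 * 24 * 365
years = variations / (per_second * seconds_per_year)
print(f"{years:.3g} years")               # 3.17e+106 years -- comfortably "over 10^95 years"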

pages: 489 words: 117,470

Programming in Lua, Fourth Edition
by Roberto Ierusalimschy
Published 14 Jul 2016

Whenever we want to ask for a value from Lua (such as the value of a global variable), we call Lua, which pushes the required value onto the stack. Whenever we want to pass a value to Lua, we first push the value onto the stack, and then we call Lua (which will pop the value). We still need a different function to push each C type onto the stack and a different function to get each C type from the stack, but we avoid combinatorial explosion. Moreover, because this stack is part of the Lua state, the garbage collector knows which values C is using. Nearly all functions in the API use the stack. As we saw in our first example, luaL_loadstring leaves its result on the stack (either the compiled chunk or an error message); lua_pcall gets the function to be called from the stack and leaves any error message there too.

pages: 410 words: 119,823

Radical Technologies: The Design of Everyday Life
by Adam Greenfield
Published 29 May 2017

As near as I can tell, a few want just exactly what some have always wanted from other human beings: a cheap, reliable, docile labor force. Others, though, are seeking something less tangible: sense, meaning, order, a ward against uncertainty. They’re looking for something that might help them master the combinatorial explosion of possibility on a planet where nine billion people are continually knitting their own world-lines; for just a little reassurance, in a world populated by so many conscious actors that it often feels like it’s spinning out of anyone’s control. These are impulses I think most of us can relate to, and intuitively react to with some sympathy.

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

Moving on: This is not quite a bottleneck, but I would say that if the Novamente system is going to fail to achieve AGI, which I think is quite unlikely, then it would be because of a failure in the aspect of the design wherein the different parts of the system all interact with each other dynamically, to stop each other from coming to horrible combinatorial explosions. A difficult thing is that AI is all about emergence and synergy, so that in order to really test your system, you have to test all the parts, put them together in combination, and look at the emergence effects. And that’s actually hard. The most basic bottleneck is that you are building an emergent system that has to be understood and tested as a whole, rather than a system that can be implemented and tested piece by piece.

Advanced Software Testing—Vol. 3, 2nd Edition
by Jamie L. Mitchell and Rex Black
Published 15 Feb 2015

Typically, we are going to create our testing framework in these languages and then use data or keywords in flat files, XML files, or databases to drive the tests. This supports maintainability. We will discuss this further when we talk about architectures later in this chapter. Every tester has run up against the impossibility of testing every possible combination of inputs. This combinatorial explosion cannot be solved via automation, but we are likely to be able to run more combinations with automation than manually. Some tools also provide the ability to go directly to an application’s application programming interface (API). For example, some test tools can talk directly to the web server at the HTTP and HTTPS interfaces rather than pushing test input through the browser.

pages: 528 words: 146,459

Computer: A History of the Information Machine
by Martin Campbell-Kelly and Nathan Ensmenger
Published 29 Jul 2013

Because the number of software packages IBM offered to its customers was constantly increasing, the proliferation of computer models created a nasty gearing effect: given m different computer models, each requiring n different software packages, a total of m × n programs had to be developed and supported. This was a combinatorial explosion that threatened to overwhelm IBM at some point in the not-too-distant future. Just as great a problem was that of the software written by IBM’s customers. Because computers were so narrowly targeted at a specific market niche, it was not possible for a company to expand its computer system in size by more than a factor of about two without changing to a different computer model.

pages: 673 words: 164,804

Peer-to-Peer
by Andy Oram
Published 26 Feb 2001

However, even once that has been worked out, there is still the issue of implementation specifics. There are a couple of ways that this could be approached. The first is to make each gateway between networks X and Y a hybrid X-Y node, speaking the protocols of both networks. This is undesirable because it leads to a combinatorial explosion of customized nodes, each of which has to be updated if the protocol changes for one of the networks. A preferable solution would be to define a simple and universal interface that one node can use to query another for a file. Then, a gateway would consist merely of a cluster of nodes running on different networks and speaking different protocols, but talking to each other via a common interface mechanism.
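
A small worked comparison of the two designs sketched above (the counts follow from the reasoning in the text rather than from figures given there): pairwise hybrid gateways grow quadratically with the number of networks, while adapters to one common interface grow linearly.

def pairwise_gateways(n):
    # One custom hybrid X-Y node per pair of networks.
    return n * (n - 1) // 2

def common_interface_adapters(n):
    # One adapter per network, all speaking the same query interface.
    return n

for n in (3, 5, 10, 20):
    print(n, pairwise_gateways(n), common_interface_adapters(n))
# 3 3 3 / 5 10 5 / 10 45 10 / 20 190 20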

pages: 2,466 words: 668,761

Artificial Intelligence: A Modern Approach
by Stuart Russell and Peter Norvig
Published 14 Jul 2019

Early experiments in machine evolution (now called genetic programming) (Friedberg, 1958; Friedberg et al., 1959) were based on the undoubtedly correct belief that by making an appropriate series of small mutations to a machine-code program, one can generate a program with good performance for any particular task. The idea, then, was to try random mutations with a selection process to preserve mutations that seemed useful. Despite thousands of hours of CPU time, almost no progress was demonstrated. Failure to come to grips with the “combinatorial explosion” was one of the main criticisms of AI contained in the Lighthill report (Lighthill, 1973), which formed the basis for the decision by the British government to end support for AI research in all but two universities. (Oral tradition paints a somewhat different and more colorful picture, with political ambitions and personal animosities whose description is beside the point.)

The cross-fertilization of ideas from the two areas has allowed planners to scale up from toy problems where the number of actions and states was limited to around a dozen, to real-world industrial applications with millions of states and thousands of actions. Planning is foremost an exercise in controlling combinatorial explosion. If there are n propositions in a domain, then there are 2^n states. Against such pessimism, the identification of independent subproblems can be a powerful weapon. In the best case—full decomposability of the problem—we get an exponential speedup. Decomposability is destroyed, however, by negative interactions between actions.

G., 360, 1091 COLBERT(robot control language), 987 collaboration, 968, 971 Collin, Z., 191, 1091 Collins, M., 736, 896, 904, 905, 1085, 1087, 1118 collision checker, 953 collusion, 624 Colmerauer, A., 296, 329, 1091 Colombano, S.P., 161, 1104 color, 995 Colossal Clean Crawled Corpus (C4), 928, 930 Colossus, 32 Colton, S., 222, 1089 column player, 596 combinatorial auction, 628 combinatorial explosion, 39 commitment epistemological, 273, 295, 404 ontological, 272, 295, 404 Common Crawl, 903, 922 common goal, 590 common sense, 426 common value, 624 communication, 595, 874 commutativity (in search problems), 175 Compagna, L., 268, 1085 COMPAS (expert system), 1044, 1060 competitive environment, 192 competitive ratio, 153 complementary literals, 244 complete-state formulation, 129 complete assignment, 165 complete data, 775 completeness of a proof procedure, 234, 240, 264 of resolution, 244, 321–324 of a search algorithm, 93, 123 completing the square, 499 completion (of a data base), 315 complexity, 1074–1076 analysis, 1075 sample, 692 space, 93, 123 time, 93, 123 complex sentence, 235, 278 component (of mixture distribution), 790 composite decision process, 126 composite object, 336 compositionality, 269 compositional semantics, 894 computability, 27 computational learning theory, 690, 691 computational linguistics, 34, 904 computation graph, 805 computed torque control, 961 computer engineering, 32–33 computer vision, 30, 38, 186, 188, 989–1026 concession, 634 conclusion (of an implication), 235 concurrency, 591 concurrent action constraint, 594 condensation, 516 condition-action rule, 549 conditional distribution, 409 conditional distributions, 438–442 conditional effect, 386 conditional Gaussian, 441 conditional independence, 419, 424, 426, 433, 436–445, 472, 487 conditional plan, 128, 140, 141, 580 conditional probability, 407, 414, 417, 425, 434 conditional probability table (CPT), 431 conditioning, 414 conditioning case, 432 Condon, J.H., 222, 1091 Condorcet’s Paradox, 629 configuration space, 946 confirmation theory, 25, 427 conflict-directed backjumping, 180, 187 conflict clause learning, 252 conflict deal, 631 conflict set, 179 conformant planning, 383, 385–388, 397, 400 confusion matrix, 728 Congdon, C.

pages: 834 words: 180,700

The Architecture of Open Source Applications
by Amy Brown and Greg Wilson
Published 24 May 2011

For example, there is a JavascriptExecutor interface that provides the ability to execute arbitrary chunks of Javascript in the context of the current page. A successful cast of a WebDriver instance to that interface indicates that you can expect the methods on it to work.
Figure 16.1: Accountant and Stockist Depend on Shop
Figure 16.2: Shop Implements HasBalance and Stockable
16.4.2. Dealing with the Combinatorial Explosion
One of the first things that is apparent from a moment's thought about the wide range of browsers and languages that WebDriver supports is that unless care is taken it would quickly face an escalating cost of maintenance. With X browsers and Y languages, it would be very easy to fall into the trap of maintaining X×Y implementations.

pages: 612 words: 187,431

The Art of UNIX Programming
by Eric S. Raymond
Published 22 Sep 2003

One of the objectives of the OSD is to ensure that people in the distribution chain of OSD-conforming software do not need to consult with intellectual-property lawyers to know what their rights are. OSD forbids complicated restrictions against persons, groups, and occupations partly so that people dealing with collections of software will not face a combinatorial explosion of slightly differing (and perhaps conflicting) restrictions on what they can do with it. This concern is not hypothetical, either. One important part of the open-source distribution chain is CD-ROM distributors who aggregate it in useful collections ranging from simple anthology CDs up to bootable operating systems.

pages: 647 words: 43,757

Types and Programming Languages
by Benjamin C. Pierce
Published 4 Jan 2002

The basic steps in these algorithms will involve "running F backwards": to check membership for an element x, we need to ask how x could have been generated by F. The advantage of an invertible F is that there is at most one way to generate a given x. For a non-invertible F, elements can be generated in multiple ways, leading to a combinatorial explosion in the number of paths that the algorithm must explore. From now on, we restrict our attention to invertible generating functions.
21.5.3 Definition An element x is F-supported if support_F(x)↓; otherwise, x is F-unsupported. An F-supported element is called F-ground if support_F(x) = ∅.

pages: 933 words: 205,691

Hadoop: The Definitive Guide
by Tom White
Published 29 May 2009

One area that the specification does not rule on, however, is APIs: implementations have complete latitude in the API they expose for working with Avro data, since each one is necessarily language-specific. The fact that there is only one binary format is significant, since it means the barrier for implementing a new language binding is lower, and avoids the problem of a combinatorial explosion of languages and formats, which would harm interoperability. Avro has rich schema resolution capabilities. Within certain carefully defined constraints, the schema used to read data need not be identical to the schema that was used to write the data. This is the mechanism by which Avro supports schema evolution.

pages: 746 words: 221,583

The Children of the Sky
by Vernor Vinge
Published 11 Oct 2011

Similarly, Amdi had probably said that “someone” had betrayed “something”—but the software had generated the particular nouns from a long list of suspects. It was amazing that Jefri had even made it onto that list, much less coming out at the top. So what logic had put him there? She drilled down through the program’s reasoning, into depths she had never visited. As suspected, the “why I chose ‘this’ over ‘that’” led to a combinatorial explosion. She could spend centuries studying this—and get nowhere. Ravna leaned back in her chair, turning her head this way and that, trying to get the stress out of her neck. What am I missing? Of course, the program could simply be broken. Oobii’s emergency automation was specially designed to run in the Slow Zone, but the surveillance program was a bit of purely Beyonder software, not on the ship’s Usables manifest.

pages: 556 words: 46,885

The World's First Railway System: Enterprise, Competition, and Regulation on the Railway Network in Victorian Britain
by Mark Casson
Published 14 Jul 2009

Network optimization cannot be effected by linear programming, as Fogel mistakenly suggests. Making a connection between two locations involves a binary decision: the two locations are either connected or they are not. Network optimization is therefore an integer programming problem, and problems of this kind encounter combinatorial explosion: the number of possible network structures increases at an accelerating rate as the number of locations to be served rises. An additional complexity arises from the fact that the optimal location for a railway junction may be in the middle of the countryside rather than at a town. Constraining all junctions to be at towns may reduce the performance of a network quite considerably.
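
A small illustration of that accelerating growth, counting only which pairs of locations get a direct line and ignoring junction placement (so the real problem is even larger):

def candidate_networks(n):
    # Each of the n*(n-1)//2 possible links is either built or not built.
    return 2 ** (n * (n - 1) // 2)

for n in (4, 6, 8, 10):
    print(n, candidate_networks(n))
# 4 64
# 6 32768
# 8 268435456
# 10 35184372088832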

pages: 846 words: 232,630

Darwin's Dangerous Idea: Evolution and the Meanings of Life
by Daniel C. Dennett
Published 15 Jan 1995

So, even though there has been no dearth of utilitarian (and Kantian, and contractarian, etc.) arguments in favor of particular policies, institutions, practices, and acts, these have all been heavily hedged with ceteris paribus clauses and plausibility claims about their idealizing assumptions. These hedges are designed to overcome the combinatorial explosion of calculation that threatens if one actually attempts — as theory says one must — to consider all things. And as arguments — not derivations — they have all been controversial (which is not to say that none of them could be sound in the last analysis). To get a better sense of the difficulties that contribute to actual moral reasoning, let us give ourselves a smallish moral problem and see what we do with it.

pages: 798 words: 240,182

The Transhumanist Reader
by Max More and Natasha Vita-More
Published 4 Mar 2013

Finally, there are limiting factors to fast growth, such as economic returns (if very few can afford a new technology it will be very expensive and not as profitable as a mass market technology), constraints on development speed (even advanced manufacturing processes need time for reconfiguration, development, and testing), human adaptability, and especially the need for knowledge. As the amount of knowledge grows, it becomes harder and harder to keep up and to get an overview, necessitating specialization. Even if information technologies can help somewhat, the basic problem remains, with the combinatorial explosion of possible combinations of different fields. This means that a development project might need specialists in many areas, which in turn means that there is a smaller size of group able to do the development. In turn, this means that it is very hard for a small group to get far ahead of everybody else in all areas, simply because it will not have the necessary know-how in all necessary areas.

pages: 923 words: 516,602

The C++ Programming Language
by Bjarne Stroustrup
Published 2 Jan 1986

For example, in some cases the conversion can impose overheads, and in other cases, a simpler algorithm can be used for specific argument types. Where such issues are not significant, relying on conversions and providing only the most general variant of a function – plus possibly a few critical variants – contains the combinatorial explosion of variants that can arise from mixed-mode arithmetic. Where several variants of a function or an operator exist, the compiler must pick ‘‘the right’’ variant based on the argument types and the available (standard and user-defined) conversions. Unless a best match exists, an expression is ambiguous and is an error (see §7.4).

Engineering Security
by Peter Gutmann

The second option is to use the approach suggested earlier by Michael Ströder and try and field-qualify every version of every application that you plan to use for processing certificates to see how it’ll react to every certificate extension and feature that you care about. Obviously this is impossible to do in general because of the combinatorial explosion of certificate extensions and features (one survey that covered only SSL/TLS server certificates found 219 different combinations of just the keyUsage and basicConstraints.cA flags, including many that were completely illogical [504]), but if you’re using a very small number of features and only one or two applications and can control which versions get deployed and in which configurations they get used then it may be feasible.