algorithmic feed

13 results

pages: 321 words: 105,480

Filterworld: How Algorithms Flattened Culture
by Kyle Chayka
Published 15 Jan 2024

But suddenly, a post from two days ago appeared at the top of your feed. Early 2016 was also when Twitter became less chronological, briefly making the algorithmic feed the default when users first got on the app—a problem for a site that many people used as a real-time news ticker. (The chronological option was called “Twitter Classic,” as if it were a beloved junk-food flavor.) Later, the app would automatically switch users back to the algorithmic feed after a while, forcing them to opt out of it again. Although Netflix’s content recommendations had long been algorithmic, 2016 was also when the streaming service began changing its home-page interface, prioritizing recommendations and individualizing the page for each user.

That ongoing engagement is sustained by automated recommendations, delivering the next provocative news headline or hypnotic entertainment release. Today, it is difficult to think of creating a piece of culture that is separate from algorithmic feeds, because those feeds control how it will be exposed to billions of consumers in the international digital audience. Without the feeds, there is no audience—the creation would exist only for its creator and their direct connections. And it is even more difficult to think of consuming something outside of algorithmic feeds, because their recommendations inevitably influence what is shown on television, played on the radio, and published in books, even if those experiences are not contained within feeds.

But as I kept listening, pulled along by some undefinable quality within the music, I came to realize that the album’s abstraction was the point, its elusiveness a portrait of modern alienation and the need to keep living despite it. Of course, Blonde was a major popular masterpiece of the early twenty-first century, a bestseller. But the album and the musician alike didn’t play by the rules of algorithmic feeds. If taste indeed must be deeply felt, requires time to engage with, and benefits from the surprise that comes from the unfamiliar, then it seems that technology could not possibly replicate it, because algorithmic feeds run counter to these fundamental qualities. When recommendation algorithms are based only on data about what you and other platform users already like, they are less capable of providing the kind of surprise Montesquieu described, the kind that might not be immediately pleasurable.

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

[Back-of-book index excerpt, flattened in extraction; the matching entry is “Feeding Data to the Training Algorithm,” under TensorFlow.]

Rumelhart et al. published a groundbreaking article8 introducing the backpropagation training algorithm.9 Today we would describe it as Gradient Descent using reverse-mode autodiff (Gradient Descent was introduced in Chapter 4, and autodiff was discussed in Chapter 9). For each training instance, the algorithm feeds it to the network and computes the output of every neuron in each consecutive layer (this is the forward pass, just like when making predictions). Then it measures the network’s output error (i.e., the difference between the desired output and the actual output of the network), and it computes how much each neuron in the last hidden layer contributed to each output neuron’s error.
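
To make that description concrete, here is a minimal NumPy sketch of a single backpropagation step on a tiny two-layer network. It is an illustration under assumed choices (sigmoid activations, squared-error loss, made-up layer sizes), not the book’s own code, which builds networks in TensorFlow.

    # Minimal sketch of one backpropagation step; all shapes and
    # hyperparameters here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)

    # Tiny network: 3 inputs -> 4 hidden units -> 2 outputs.
    W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = rng.normal(size=3)          # one training instance
    y = np.array([1.0, 0.0])        # its desired output

    # Forward pass: compute every neuron's output, layer by layer,
    # just as when making predictions.
    h = sigmoid(x @ W1 + b1)        # hidden-layer activations
    y_hat = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: measure the output error, then compute how much
    # each hidden neuron contributed to each output neuron's error.
    err_out = (y_hat - y) * y_hat * (1 - y_hat)   # output-layer error signal
    err_hid = (err_out @ W2.T) * h * (1 - h)      # hidden-layer contributions

    # One Gradient Descent step on the weights and biases.
    lr = 0.1
    W2 -= lr * np.outer(h, err_out); b2 -= lr * err_out
    W1 -= lr * np.outer(x, err_hid); b1 -= lr * err_hid

Note how the backward pass reuses the activations stored during the forward pass; that reuse is what makes reverse-mode autodiff efficient.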

[Back-of-book index excerpt, flattened in extraction; the matching entry is “Feeding Data to the Training Algorithm,” under Gradient Descent.]

pages: 151 words: 39,757

Ten Arguments for Deleting Your Social Media Accounts Right Now
by Jaron Lanier
Published 28 May 2018

This is sometimes because of so-called dark ads, which show up in a person’s newsfeed even though they aren’t technically published as news.4 Many extremist political dark ads on Facebook only came to light as a result of forensic investigations of what happened in the 2016 elections.5 They were blatant and poisonous, and Facebook has announced plans to reduce their harm, though that policy is in flux as I write. While no one outside Facebook—or maybe even inside Facebook—knows how common or effective dark ads and similar messages have been,6 the most common form of online myopia is that most people can only make time to see what’s placed in front of them by algorithmic feeds. I fear the subtle algorithmic tuning of feeds more than I fear blatant dark ads. It used to be impossible to send customized messages to millions of people instantly. It used to be impossible to test and design multitudes of customized messages, based on detailed observation and feedback from unknowing people who are kept under constant surveillance.

pages: 444 words: 130,646

Twitter and Tear Gas: The Power and Fragility of Networked Protest
by Zeynep Tufekci
Published 14 May 2017

On Facebook’s algorithmically controlled news feed, however, it was as if nothing had happened.33 I wondered whether it was me: were my Facebook friends just not talking about it? I tried to override Facebook’s default options to get a straight chronological feed. Some of my friends were indeed talking about Ferguson protests, but the algorithm was not showing the story to me. It was difficult to assess fully, as Facebook keeps switching people back to an algorithmic feed, even if they choose a chronological one. As I inquired more broadly, it appeared that Facebook’s algorithm—the opaque, proprietary formula that changes every week, and that can cause huge shifts in news traffic, making or breaking the success and promulgation of individual stories or even affecting whole media outlets—may have decided that the Ferguson stories were lower priority to show to many users than other, more algorithm-friendly ones.

It’s worth pondering whether, without Twitter’s reverse-chronological stream, which allowed its users to amplify content as they chose, unmediated by an algorithmic gatekeeper, the news of unrest and protests might never have made it onto the national agenda.43 The proprietary, opaque, and personalized nature of algorithmic control on the web also makes it difficult even to understand what drives visibility on platforms, what is seen by how many people, and how and why they see it. Broadcast television can be monitored by anyone to see what is being covered and what is not, but the individualized algorithmic feed or search results are visible only to their individual users. This creates a double challenge: if the content a social movement is trying to disseminate is not being shared widely, the creators do not know whether the algorithm is burying it or whether their message is simply not resonating.

pages: 218 words: 65,422

Better Living Through Criticism: How to Think About Art, Pleasure, Beauty, and Truth
by A. O. Scott
Published 9 Feb 2016

I would suggest, rather, that the tiresome theater of contending certainties—of boasting and gotcha-calling and loud accusations of bad faith—is enabled by a relativism that refuses to distinguish between truth claims and assertions of opinion, or between skepticism and invention. We find it so easy to be wrong, and to recover from the shame of error, because we have trouble crediting the real difficulty of being right. The desire for a shortcut—whether in the form of an unshakable worldview or a set of nifty algorithms—feeds the suspicion that every assertion is a scam, and is therefore vulnerable to simple debunking. The essential modesty and rigor of the scientific method are widely and cheaply travestied and willfully misunderstood. The work of scientists consists to some degree of trying, over and over, to prove themselves wrong.

pages: 297 words: 83,651

The Twittering Machine
by Richard Seymour
Published 20 Aug 2019

Even if it didn’t reduce bullying, it shifted the conversation and it mitigated Twitter’s long-term problem with user engagement. Intriguingly, this solution paid no attention to user demand. The social industry bosses don’t trust that users know what they want. As Facebook’s former chief technical officer, Bret Taylor, explained: ‘Algorithmic feed was always the thing people said they didn’t want, but demonstrated they did via every conceivable metric.’41 The metrics in question are those of user engagement, which fuel the mobile advertising business. Even if users complain bitterly about being trolled and abused, as long as we stay hooked, then the metrics will say we love the system.

pages: 315 words: 93,522

How Music Got Free: The End of an Industry, the Turn of the Century, and the Patient Zero of Piracy
by Stephen Witt
Published 15 Jun 2015

So, going back to the cymbal crash, you could also assign fewer bits to the first few milliseconds before the beat. Relying on decades of empirical auditory research, Brandenburg told the bits where to go. But this was just the first step. Brandenburg’s real achievement was figuring out that you could run this process iteratively. In other words, you could take the output of his bit-assignment algorithm, feed it back into the algorithm, and run it again. And you could do this as many times as you wished, each time reducing the number of bits you were spending, making the audio file as small as you liked. There was degradation of course: like a copy of a copy or a fourth-generation cassette dub, with each successive pass of the algorithm, audio quality got worse.
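
As a toy illustration of the iterative principle Witt describes, the sketch below simply re-quantizes a signal more coarsely on each pass and reports how the error grows; every detail here is assumed for illustration, and a real MP3 encoder instead allocates bits across frequency bands using a psychoacoustic model.

    # Toy stand-in for iterative lossy re-encoding: each pass spends
    # fewer bits per sample, shrinking the representation while the
    # degradation accumulates like a copy of a copy.
    import numpy as np

    def quantize(signal, bits):
        """Round each sample to a grid with 2**bits levels spanning [-1, 1]."""
        half_levels = 2 ** (bits - 1)
        return np.round(signal * half_levels) / half_levels

    rng = np.random.default_rng(0)
    audio = np.sin(np.linspace(0, 40 * np.pi, 1000)) + 0.05 * rng.normal(size=1000)

    out = audio
    for bits in (12, 8, 6, 4):      # each pass assigns fewer bits
        out = quantize(out, bits)
        rms_error = np.sqrt(np.mean((out - audio) ** 2))
        print(f"{bits:2d} bits/sample, RMS error vs. original: {rms_error:.4f}")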

pages: 324 words: 96,491

Messing With the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News
by Clint Watts
Published 28 May 2018

Pariser recognized that filter bubbles would create “the impression that our narrow self-interest is all that exists.”1 The internet brought people together, but social media preferences have now driven people apart through the creation of preference bubbles—the next extension of Pariser’s filter bubbles. Preference bubbles result not only from social media algorithms feeding people more of what they want, but also people choosing more of what they like in the virtual world, leading to physical changes in the real world. In sum, our social media tails in the virtual world wag our dog in the real world. Preference bubbles arise subtly from three converging biases that collectively and powerfully herd like-minded people and harden their views as hundreds and thousands of retweets, likes, and clicks aggregate an audience’s preferences.

pages: 519 words: 102,669

Programming Collective Intelligence
by Toby Segaran
Published 17 Dec 2008

[Back-of-book index excerpt, flattened in extraction; matching entries include “feedfilter.py, Filtering Blog Feeds,” “feedforward algorithm, Feeding Forward,” and “feedparser, Downloading Sources.”]

pages: 382 words: 105,819

Zucked: Waking Up to the Facebook Catastrophe
by Roger McNamee
Published 1 Jan 2019

Partisan TV channels like Fox News and MSNBC maintain powerful filter bubbles, but they cannot match the impact of Facebook and Google because television is a one-way, broadcast medium. It does not allow for personalization, interactivity, sharing, or groups. In the context of Facebook, filter bubbles have several elements. In the endless pursuit of engagement, Facebook’s AI and algorithms feed each of us a steady diet of content similar to what has engaged us most in the past. Usually that is content we “like.” Every click, share, and comment helps Facebook refine its AI just a little bit. With 2.2 billion people clicking, sharing, and commenting every month—1.47 billion every day—Facebook’s AI knows more about users than they can imagine.

pages: 484 words: 114,613

No Filter: The Inside Story of Instagram
by Sarah Frier
Published 13 Apr 2020

The biggest threat to that brand, which Ariana Grande and Miley Cyrus had raised in years past, was the fact that on an anonymous network, it’s easier for people to say hateful things about one another. Finally, Systrom decided, it was time to take on bullying. But in Instagram style, the plan started out as a reaction to a celebrity, this time Taylor Swift, in a crisis on the site. (Instagram might have resolved to prioritize its regular users in product builds, with changes like the algorithmic feed, but it was still listening intently to celebrities about their needs, reasoning that doing so was good for the brand, as the celebs’ problems also affected their millions of followers.) The pop star, who knew Systrom through close friends, the investor Joshua Kushner and his supermodel girlfriend Karlie Kloss, started having a major problem that summer before the election.

pages: 1,172 words: 114,305

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale
Published 14 May 2020

Facebook has responded sluggishly to fake or misleading viral content.25 When it takes Jack Dorsey of Twitter six years to pull the plug on a Sandy Hook truther like Alex Jones (who aggressively backed conspiracists who tormented the parents of slaughtered children, claiming that the parents had made the whole massacre up), something is deeply wrong online. The core problem is a blind faith in the AI behind algorithmic feeds and a resulting evasion of responsibility by the tech leaders who could bring bad actors to heel. Tech behemoths can no longer credibly describe themselves as mere platforms for others’ content, especially when they are profiting from micro-targeted ads premised on bringing that content to the people it will affect most.26 They must take editorial responsibility, and that means bringing in far more journalists and fact checkers to confront the problems that algorithms have failed to address.27 Apologists for big tech firms claim that this type of responsibility is impossible (or unwise) for only a few firms to take on.

pages: 706 words: 202,591

Facebook: The Inside Story
by Steven Levy
Published 25 Feb 2020

[Back-of-book index excerpt, flattened in extraction; the matching entry is “News Feed of FB: algorithms feeding and filtering, 127–28, 142, 163, 172, 260–61, 385, 391.”]