Understanding Social Media Algorithms using Explainability and Interpretability
by Ashok Balasubramanian Updated: May 3, 2022
Should you have a say in your social feed algorithms? In the recent past, Elon Musk has been very vocal in suggesting that Twitter's algorithms should be made public. He even proposed that the algorithms be made open source and driven by the community, like Linux or Signal. This could set a new precedent for social media and tech platforms. We will have to wait and see if, and how, he drives this change at Twitter.

Interestingly, Koo has shared the workings of its algorithms at https://info.kooapp.com/algorithms-at-koo/. The page gives a high-level view of how the feed is influenced by who you follow, trending content, reactions, the type of media, and so on; it does not, however, share the weights. It also explains how trending topics are surfaced from keywords and hashtags, how creators are recommended, and how followers receive relevant notifications. It is a step in the right direction, but still a long way from the Web3 ideals of decentralization and fairness controlled by users.

While we all navigate this shift, today's technology offers Explainability and Interpretability to better understand and control model behavior. You could use these to let users understand and tweak the algorithms in your applications, and so contribute to this transformation.

Explainability is the extent to which a system's behavior can be traced back to its underlying causes, such as the parameters used by the algorithm and the data used during training. It shows how much each of these parameters and nodes contributes to the final decision, which helps debug and improve model performance and understand the model's behavior. Interpretability is the extent to which cause and effect can be observed within a system, i.e., the extent to which you can predict what will happen given a change in input or algorithmic parameters.
For example, if you have a model that predicts whether someone will buy your product based on their age and income, interpretability would tell you how much each of these factors actually contributes to the prediction (e.g., that age contributes only 25%). Explainability, in contrast, could reveal why age matters: perhaps there is an interaction between age and income, meaning that people with high incomes are more likely to buy, with the effect varying by age. Together, they help you understand how the model arrived at a decision and how each step contributed to it. Here's a list of open-source libraries that can help you experiment with Explainability and Interpretability.
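Before diving into the libraries, the age/income example above can be made concrete with a toy sketch. All weights and inputs below are made up for illustration; for a linear model, each feature's contribution to the score, and its share of the total, can be read off directly:

```python
# Toy linear "will they buy?" model: score = w_age*age + w_income*income + bias.
# Weights and the example person are illustrative, not from any real dataset.
W = {"age": 0.02, "income": 0.00006}
BIAS = -1.5

def contributions(x):
    """Per-feature contribution to the linear score (interpretability)."""
    return {f: W[f] * x[f] for f in W}

def contribution_shares(x):
    """Each feature's share of the total absolute contribution."""
    c = contributions(x)
    total = sum(abs(v) for v in c.values())
    return {f: abs(v) / total for f, v in c.items()}

person = {"age": 35, "income": 50000}
shares = contribution_shares(person)
# age contributes 0.02*35 = 0.7 and income 0.00006*50000 = 3.0 to the score,
# so age's share is 0.7/3.7 (about 19%) and income's is 3.0/3.7 (about 81%).
```

For a model with interactions or nonlinearities this simple decomposition no longer applies, which is where the attribution methods in the libraries below come in.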
shap: A game theoretic approach to explain the output of any machine learning model.
Jupyter Notebook 18846 Version:v0.41.0 License: Permissive (MIT)
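The "game theoretic approach" here refers to Shapley values. As a minimal pure-Python sketch (not the library's API), a feature's Shapley value is its marginal contribution to the prediction, averaged over all orders in which features could be "revealed"; the toy model and baseline below are assumptions for the example:

```python
from itertools import permutations

def model(age, income):
    # Toy model with an age-income interaction; numbers are illustrative.
    return 0.01 * age + 0.00005 * income + 0.0000002 * age * income

def shapley_values(x, baseline):
    """Exact Shapley values: average marginal contribution over all feature orderings."""
    features = list(x)
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        current = dict(baseline)
        prev = model(**current)
        for f in order:
            current[f] = x[f]       # reveal this feature's true value
            new = model(**current)
            phi[f] += new - prev    # marginal contribution in this ordering
            prev = new
    return {f: v / len(orders) for f, v in phi.items()}

x = {"age": 40, "income": 60000}
baseline = {"age": 0, "income": 0}
phi = shapley_values(x, baseline)
# Efficiency property: the attributions sum to model(x) - model(baseline).
assert abs(sum(phi.values()) - (model(**x) - model(**baseline))) < 1e-9
```

Note that the interaction term's effect is split evenly between age and income, which is exactly the fair-division idea from game theory; the real library approximates this efficiently for models with many features.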
Lime: Explaining the predictions of any machine learning classifier
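LIME's core idea, fitting a simple surrogate model to a complex model's behavior near one input, can be sketched in plain Python for a single feature. This is an illustration of the technique under assumed settings, not the lime package's API:

```python
import math
import random

def black_box(x):
    # Stand-in for a complex model whose internals we can't inspect (illustrative).
    return math.tanh(x) + 0.1 * x * x

def local_linear_surrogate(x0, n_samples=500, width=0.5, seed=0):
    """Fit a weighted least-squares line to the black box near x0 (LIME-style)."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]   # perturb the input
    ys = [black_box(x) for x in xs]                             # query the model
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]  # proximity kernel
    # Weighted least squares for y ≈ a + b*(x - x0)
    sw = sum(ws)
    sx = sum(w * (x - x0) for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * (x - x0) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - x0) * y for w, x, y in zip(ws, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b  # local intercept and slope: the "explanation" at x0

a, b = local_linear_surrogate(1.0)
```

The slope `b` is the local explanation: how the prediction responds to this feature in the neighborhood of `x0`. The real library does the same with many features, interpretable representations, and sparse regression.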
interpret: Fit interpretable models. Explain blackbox machine learning.
C++ 5378 Version:v0.3.2 License: Permissive (MIT)
lucid: A collection of infrastructure and tools for research in neural network interpretability.
Jupyter Notebook 4512 Version:v0.3.10 License: Permissive (Apache-2.0)
🔅 Shapash makes Machine Learning models transparent and understandable by everyone
Jupyter Notebook 2136 Version:v2.3.0 License: Permissive (Apache-2.0)
explainerdashboard: Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models.
Python 1578 Version:v0.4.2 License: Permissive (MIT)
DrWhy is the collection of tools for eXplainable AI (XAI). It's based on shared principles and simple grammar for exploration, explanation and visualisation of predictive models.
R 590 Version:Current License: No License