Detecting Attacks in Recommender Systems


Mirza Elahi, Zhengyi Le, Sheng Zhang


This project extends research on collaborative recommendation systems, which base recommendations for an individual on the preferences expressed by other people, by investigating the problem of malicious manipulation of these systems. An attacker can attempt to influence the behavior of the system for other users by injecting biased (fake) rating profiles to artificially promote or demote a target item. This project investigates and aims to improve the technology and knowledge needed for trustworthy recommendation systems in the following ways. (1) It will evaluate vulnerabilities in relatively unstudied model-based recommendation systems, in which recommendations are based on a model that relates ratings on one item to ratings on other items. This facet of the work also aims to identify any previously unknown attack methods that might be particularly effective against model-based recommendation systems. (2) It will advance the state of the art in recognizing and defeating recommendation attacks. This project is working on a more complete set of attack detection methods, including a probabilistic approach, outlier detection, and semi-supervised learning. (3) The project is developing tools of use to the research community. The evaluations and comparisons of recommendation attacks and recommendation algorithms described above will take place on a new unified platform designed to simplify this process. A tool will also be created to provide an automated solution for identifying and flagging potential recommendation attacks.

Preliminary Results

Our preliminary work [2] has explored the effect of shilling attacks on an SVD-based algorithm and shown that it is more resistant to attacks than memory-based algorithms. We also proposed an attack detection approach directly based on the SVD-based algorithm that is effective in detecting random attacks (though not average attacks). On another track of our research work in building trustworthy recommendation systems, we are investigating privacy-preserving techniques in collaborative filtering to help users obtain accurate recommendations while still keeping their rating profiles private (to some extent) [4, 6].

1. Effect of Attacks on an SVD-based Algorithm

We note that all previous work considered only memory-based CF algorithms when exploring vulnerabilities of recommendation systems; no empirical study had been conducted on model-based algorithms. Since model-based algorithms work differently and generally outperform memory-based algorithms in scalability and recommendation accuracy, it is important to know to what extent the attacks that are effective against memory-based algorithms also affect model-based algorithms. In [2], we picked an SVD-based CF algorithm and performed a series of experiments to quantitatively evaluate the effect of random attacks and average attacks on it. Two popular measures, Prediction Shift and ExpTopN, were used to evaluate how effective an attack is in accomplishing its goal. Prediction shift is defined as the average change in prediction toward some target value (the maximum or the minimum allowed rating for a push or nuke attack, respectively), over all users and target items. ExpTopN is defined as the expected number of occurrences of all target items in a top-N recommendation list, measured over all users, assuming that the displayed ordering of items tied at any particular rank is random. The following table lists the results obtained when 100 shilling bots are injected into the publicly available MovieLens ratings matrix (6040 users by 3706 items). Our results suggest that the SVD-based algorithm is much more resistant to random and average attacks than representative memory-based algorithms (details in [2]).
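As a concrete illustration of the first measure, the sketch below computes a signed prediction shift. Counting movement toward the attacker's target value as positive is our reading of the definition above, not necessarily the exact formula used in [2].

```python
import numpy as np

def prediction_shift(pred_before, pred_after, target_value):
    """Average change in prediction toward target_value over all
    (user, target item) pairs.

    target_value is the maximum allowed rating for a push attack or the
    minimum for a nuke attack.  The sign term, which makes movement
    toward the target count as positive, is an assumption for
    illustration rather than the definitive formula from [2].
    """
    pred_before = np.asarray(pred_before, dtype=float)
    pred_after = np.asarray(pred_after, dtype=float)
    sign = np.sign(target_value - pred_before)
    return float(np.mean(sign * (pred_after - pred_before)))

# Toy push attack toward the maximum rating 5: every prediction drifts up.
print(prediction_shift([3.0, 3.5, 2.8], [3.6, 4.1, 3.5], target_value=5.0))
```

With this convention, a nuke attack that lowers predictions also produces a positive shift, so one number summarizes both attack goals.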

2. A Probabilistic Approach to Detecting Attacks

In [2], we also proposed a probabilistic approach to detect recommendation attacks. In the following figure, we illustrate the effectiveness of this approach in MovieLens when a random attack model is used.

The figure plots the belief divergence D(Ai||Xi) for each rating profile. The first 6040 rating profiles (to the left of the vertical line) are real rating profiles and the last 100 profiles (to the right) are inserted bots. All inserted bots push or nuke one target item and give random ratings, drawn from N(3.6, 1.12) (the distribution of the original data set), to the other items. The thresholds for D(Ai||Xi) when Ca=2 and Ca=2.5 are shown as the dashed line and the dotted line, respectively. The figure shows how sharply the belief divergence separates random attacks from real rating profiles: almost all inserted attack profiles yield values of D(Ai||Xi) greater than the Ca=2 threshold, while very few of the real rating profiles do. Our results in [2] demonstrated that across a broad range of settings in random attacks (i.e., the number of filler items, the number of target items, and the variance of the random ratings), this method achieves high detection rates and low false alarm rates.
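The exact belief divergence D(Ai||Xi) is defined in [2]; as an illustrative stand-in, the sketch below flags profiles whose rating histogram diverges, in KL divergence, from the global rating histogram by more than Ca standard deviations above the mean divergence. The function names and this particular thresholding rule are assumptions for illustration, not the published method.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """D(p || q) between two rating histograms (normalized internally)."""
    p = np.asarray(p_counts, dtype=float) + eps
    q = np.asarray(q_counts, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def flag_profiles(profiles, ca=2.0, n_levels=5):
    """Flag profiles whose rating histogram diverges from the global
    histogram by more than ca standard deviations above the mean
    divergence (an illustrative stand-in for D(Ai||Xi) from [2])."""
    profiles = [np.asarray(p) for p in profiles]
    global_hist = np.bincount(np.concatenate(profiles) - 1, minlength=n_levels)
    divs = np.array([
        kl_divergence(np.bincount(p - 1, minlength=n_levels), global_hist)
        for p in profiles
    ])
    return divs > divs.mean() + ca * divs.std()

# 50 genuine profiles with varied ratings, plus 3 bots that rate everything 5.
rng = np.random.default_rng(0)
profiles = [rng.integers(1, 6, 20) for _ in range(50)] + [np.full(20, 5)] * 3
flagged = flag_profiles(profiles, ca=2.0)
print(flagged.nonzero()[0])  # indices of suspected attack profiles
```

The bots' concentrated histograms stand far from the global one, so their divergences exceed the Ca=2 threshold while genuine profiles stay below it.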

3. A Temporal Approach to Detecting Attacks

A considerable complication in detecting shilling attacks is that it is difficult to precisely and completely define the set of shilling attack patterns. New attacks will continue to arise over time, so an attack detection approach should avoid being restricted to any predefined set of attacks. In [3], we proposed a temporal approach that is able to detect a diverse and general set of recommendation attacks.

If we assume that attack profiles are injected into the system within a relatively short period of time, then despite their diversity, most shilling attack models share a common characteristic: over the attack period they change the rating distributions of target items (and possibly other items). For example, a push attack, regardless of its attack model, will cause the rating distribution of a target item to be concentrated on high ratings for its duration. The following table summarizes the rating distribution changes of target items (and possibly other items) for the five attack models surveyed.
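A toy simulation (not data from [3]) makes this concentration effect concrete: injecting a burst of maximum ratings visibly shifts the target item's rating histogram toward the top level.

```python
import numpy as np

def rating_histogram(ratings, n_levels=5):
    """Normalized histogram over the rating levels 1..n_levels."""
    ratings = np.asarray(ratings)
    return np.bincount(ratings - 1, minlength=n_levels) / len(ratings)

rng = np.random.default_rng(42)
genuine = rng.integers(1, 6, size=200)      # ratings before the attack
push = np.full(50, 5)                       # push burst of maximum ratings
observed = np.concatenate([genuine, push])  # ratings during the attack

print("before attack:", rating_histogram(genuine))
print("during attack:", rating_histogram(observed))
```

During the attack period the mass on the top rating level increases sharply, which is exactly the signal the temporal detector looks for.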

The rating distribution is a high-dimensional object and can therefore be difficult to work with directly. However, in most cases very useful information can be extracted from two of its properties: its degree of dispersal or concentration, and its degree of likability. We suggest that sample average and sample entropy effectively capture likability and concentration, respectively. To construct time series for an item, we group every k consecutive ratings given to the item (ordered by the time they were given) into disjoint windows; sample average and sample entropy are then computed for each window.
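A minimal sketch of this windowing step, assuming ratings on a 1-to-5 scale and natural-log entropy:

```python
import numpy as np

def window_features(ratings, k, n_levels=5):
    """Split an item's time-ordered ratings into disjoint windows of k
    consecutive ratings; return the sample average and sample entropy
    (natural log) of each window."""
    averages, entropies = [], []
    for start in range(0, len(ratings) - k + 1, k):
        window = np.asarray(ratings[start:start + k])
        averages.append(float(window.mean()))
        # Empirical distribution over the rating levels 1..n_levels.
        probs = np.bincount(window - 1, minlength=n_levels) / k
        probs = probs[probs > 0]  # treat 0 * log(0) as 0
        entropies.append(float(-np.sum(probs * np.log(probs))))
    return np.array(averages), np.array(entropies)

# A window of all-5 ratings (a push burst) has maximal average, zero entropy.
print(window_features([1, 2, 3, 4, 5] * 10 + [5] * 50, k=50))
```

A diverse window yields a moderate average and high entropy; a push-attack window yields the maximum average and near-zero entropy, which is what makes the two measures useful together.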

It can be shown that both sample average and sample entropy are asymptotically normal as k grows, so we can decide whether a window is an anomaly by computing its z-score for each measure. We illustrate the effectiveness of this approach with the example in the following figure.
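The detection rule can be sketched as follows. Note one simplification: [3] derives the null mean and variance analytically, whereas this sketch simply estimates them from the observed windows, which is reasonable only when attack windows are rare.

```python
import numpy as np

def anomalous_windows(values, c=2.0):
    """Flag windows whose statistic (sample average or sample entropy)
    lies more than c standard deviations from the mean.

    Estimating the null mean and variance from the observed windows is a
    simplification of the analytical derivation in [3]; it works when
    attack windows are a small fraction of all windows."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.abs(z) > c

# Window averages for an item; the fifth window was hit by a push attack.
window_avgs = [3.4, 3.6, 3.5, 3.3, 5.0, 3.5, 3.4]
print(anomalous_windows(window_avgs, c=2.0))  # only index 4 is flagged
```

The same rule applies to the sample-entropy series, where an attack window shows up as an unusually low value.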

The figure plots sample average and sample entropy of a MovieLens item around the time of the push attack event. The window size is 50 and the window containing attacks is marked with a circle. Both plots show that the attack event stands out clearly, while sample average and sample entropy of those windows containing normal ratings vary within a small range.

During the attack event, the sample average of the window containing attacks rises sharply and its sample entropy drops sharply, consistent with the rating distribution concentrating on the highest rating.

In [3], we gave a theoretical analysis to quantify the changes in sample average and sample entropy in time series when attack profiles are injected. Assuming that the number of attack profiles is known, an optimal window size can be derived to maximally amplify changes caused by attacks, which helps to enhance the performance of attack detection. For practical applications where this assumption does not hold, we proposed a heuristic algorithm to estimate the number of attack profiles and adaptively adjust the window size.


References

  1. Fillia Makedon, Sheng Zhang, Zhengyi Le, James Ford, and Euripides Loukis, “Providing Recommendations in an Open Collaboration System”, Proceedings of the 11th Panhellenic Conference on Informatics (PCI'07), volume A, pp. 239-248, Patras, Greece, May 18-20, 2007.
  2. Sheng Zhang, Yi Ouyang, James Ford, and Fillia Makedon, “Analysis of a Low-dimensional Linear Model under Recommendation Attacks”, in the 29th ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
  3. Sheng Zhang, Amit Chakrabarti, James Ford, and Fillia Makedon, “Attack Detection in Time Series for Recommendation Systems”, in the 12th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2006.
  4. Sheng Zhang, James Ford, and Fillia Makedon, “Deriving Private Information from Randomly Perturbed Ratings”, in the 2006 SIAM Conference on Data Mining (SDM06), Bethesda, MD, April 20-22, 2006.
  5. Sheng Zhang, Weihong Wang, James Ford, and Fillia Makedon, “Learning from Incomplete Ratings Using Non-negative Matrix Factorization”, in the 2006 SIAM Conference on Data Mining (SDM06), Bethesda, MD, April 20-22, 2006.
  6. Sheng Zhang, James Ford, and Fillia Makedon, “A Privacy-Preserving Collaborative Filtering Scheme with Two-way Communication”, ACM Conference on Electronic Commerce (EC06), Ann Arbor, MI, June 11-15, 2006.
  7. Sheng Zhang, Weihong Wang, James Ford, and Fillia Makedon, “Using Singular Value Decomposition Approximation for Collaborative Filtering”, in the 7th IEEE Conference on E-commerce (CEC), 2005.