Revision #1 Authors: Eli Ben-Sasson, Eden Saig

Accepted on: 19th November 2017 17:29


A model is suggested for abstracting the interaction between a guru and her followers. A guru has mental powers of insight that surpass those of her followers, but the followers (i) might not believe in her powers and (ii) have limited attention span, so they may terminate the interaction before reaching full ``enlightenment''. The main question that interests us is when gurus are likely to retain their flock. This question affects not only spiritual leaders, but also automated processes competing to retain users in a congested information world.

Our model assumes followers interact with the guru while keeping in mind a single \emph{``retention parameter''} that measures the strength of their belief in her predictive power, and the guru's objective is to reinforce and maximize this parameter through ``informative'' and ``correct'' predictions. We make three contributions.

First, we define a natural class of \emph{retentive scoring rules} to model the way followers update their retention parameter and thus evaluate gurus they interact with. These rules are shown to be tightly connected to truth-eliciting ``proper scoring rules'' studied in Decision Theory since the 1950s [McCarthy, PNAS 1956].
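As a concrete illustration (not taken from the paper), the logarithmic scoring rule is perhaps the best-known proper scoring rule: a forecaster who reports distribution $p$ receives reward $\log p_i$ when outcome $i$ occurs, and by Gibbs' inequality her expected reward is maximized exactly when she reports the true distribution. The sketch below checks this numerically; all function names are illustrative.

```python
import math

def log_score(p, outcome):
    """Logarithmic scoring rule: reward log p[outcome] for reporting
    distribution p when `outcome` occurs."""
    return math.log(p[outcome])

def expected_score(report, truth):
    """Expected score of reporting `report` when outcomes are drawn
    from `truth`."""
    return sum(q * log_score(report, i) for i, q in enumerate(truth))

truth = [0.7, 0.3]
honest = expected_score(truth, truth)

# Properness: every misreport earns strictly less in expectation.
for misreport in ([0.5, 0.5], [0.9, 0.1], [0.6, 0.4]):
    assert expected_score(misreport, truth) < honest
```

This strict properness is what makes such rules truth-eliciting: a follower scoring a guru this way rewards informative and correct predictions over hedged or dishonest ones.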

Second, we focus on the intrinsic properties of phenomena that are amenable to collaborative discovery with a guru. Assuming follower types (or ``identities'') are sampled from a distribution $D$, the \emph{retention complexity} of $D$ is the minimal initial retention value (or ``strength of faith'') that a follower must have on approaching the guru in order for the guru to retain that follower throughout the collaborative discovery process, during which the follower ``discovers'' his true ``identity''.

Third and last, we take a modest first step towards relating retention complexity to other established computational complexity measures, namely, \emph{dual distance} and \emph{query complexity}, when $D$ is a uniform distribution over a linear space. We show that (i) the retention complexity of Walsh-Hadamard codes is \emph{constant} and (ii) that of random Low Density Parity Check (LDPC) codes is, with high probability, linear in the code's blocklength; intriguingly, for these two interesting families of linear codes, retention complexity is roughly equal to \emph{query complexity} as locally testable codes.
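For readers unfamiliar with the codes mentioned above, the following sketch (illustrative, not from the paper) encodes a message under the Walsh-Hadamard code, whose codewords are the truth tables of linear functions $x \mapsto \langle m, x\rangle$ over $\mathbb{F}_2^k$, and runs the classical 3-query BLR linearity test that underlies its constant query complexity as a locally testable code. All function names are ours.

```python
from itertools import product

def walsh_hadamard_encode(m):
    """Walsh-Hadamard encoding over F_2: the codeword lists the inner
    product <m, x> mod 2 for every x in {0,1}^k (blocklength 2^k)."""
    return [sum(mi * xi for mi, xi in zip(m, x)) % 2
            for x in product([0, 1], repeat=len(m))]

def blr_test(word, x, y):
    """BLR linearity test: 3 queries checking word(x) + word(y) = word(x+y).
    A true codeword (a linear function) passes for every pair (x, y)."""
    idx = lambda v: int(''.join(map(str, v)), 2)  # tuple -> position
    x_plus_y = tuple((a + b) % 2 for a, b in zip(x, y))
    return (word[idx(x)] + word[idx(y)]) % 2 == word[idx(x_plus_y)]

codeword = walsh_hadamard_encode((1, 0, 1))  # k = 3, blocklength 8
points = list(product([0, 1], repeat=3))
assert all(blr_test(codeword, x, y) for x in points for y in points)
```

The 3 queries made by `blr_test` are independent of the blocklength, matching the constant query complexity (and, per result (i) above, the constant retention complexity) of this code family.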

Introduction was restructured. Minor corrections in the technical sections.

TR17-160 Authors: Eli Ben-Sasson, Eden Saig

Publication: 29th October 2017 22:44


Gurus are individuals who claim to possess mental powers of insight and prediction that far surpass those of the average person; they compete over followers, offering them insight in return for continued devotion. Followers wish to harness a (true) Guru’s predictive power but (i) have limited attention span and (ii) doubt the Guru’s predictive advantage over them. This dynamic raises the question of follower retention: how do Gurus retain the faith of their flock in the face of limited attention and competition? This problem is not merely a spiritual one but one that also affects automated interactive processes competing for limited user attention in today’s congested Information World.

The phenomenon that a Guru wishes to instruct her followers about is modeled here by a distribution over a sequence of (possibly correlated) events. We define a natural class of retentive scoring rules to model the way followers evaluate Gurus they interact with. We show that these rules are tightly connected to the classical notion of truth-eliciting “proper scoring rules” studied in Decision Theory since the 1950s [McCarthy, PNAS 1956].

Next, we move our attention from the dynamics of interaction between Guru and follower to the study of the intrinsic properties of distributions that render them appropriate for instruction by a Guru. More to the point, we define the retention complexity of a distribution as the minimal initial level of “faith” that a follower should have before approaching the Guru, in order for the Guru to retain that follower throughout the full collaborative discovery process.

Finally, we initiate the study of the retention complexity of linear spaces over finite fields. We show (i) the retention complexity of Walsh-Hadamard codes is constant and (ii) that of random Low Density Parity Check (LDPC) codes is, with high probability, linear in the code’s blocklength; intriguingly, for these two interesting families of codes, retention complexity is roughly equal to query complexity as locally testable codes.