Trust-based systems have been proposed as a means to fight malicious agents in peer-to-peer networks, volunteer computing systems, and grid computing systems, among others. However, several issues remain that have been largely overlooked in the literature. One of them is the question of whether punishing disconnecting agents is effective. In this paper, we investigate this question for the initial cases where prior direct and reputational evidence is unavailable, which is referred to in the literature as trust bootstrapping.
First, we demonstrate that there is no universally optimal penalty for disconnection and that the effectiveness of this punishment depends markedly on the uptime and downtime session lengths. Second, to mitigate the effects of an improper choice of disconnection penalty, we propose incorporating predictions into the trust bootstrapping process. These predictions, based on the agents' current activity, shorten the trust bootstrapping time when direct and reputational information is lacking.
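To illustrate the idea at a high level, the following is a minimal sketch (not the paper's algorithm) of how a fixed disconnection penalty could be tempered by an activability prediction derived from an agent's current session activity; all names, the exponential availability model, and the parameter values are hypothetical assumptions introduced purely for illustration.

```python
"""
Minimal sketch (assumed, not the paper's method): bootstrapping a trust value
for an agent with no direct or reputational evidence, where a disconnection
penalty is scaled down by a prediction of availability based on the agent's
current session activity.
"""
from dataclasses import dataclass
import math


@dataclass
class SessionStats:
    mean_uptime: float      # average observed uptime session length (seconds)
    mean_downtime: float    # average observed downtime session length (seconds)
    current_uptime: float   # time the agent has been online in its current session


def predicted_availability(s: SessionStats) -> float:
    """Crude availability prediction (assumed model): a long current uptime
    relative to the mean uptime suggests the agent is likely to stay online."""
    if s.mean_uptime <= 0:
        return 0.5  # no information: neutral prediction
    ratio = s.current_uptime / s.mean_uptime
    return 1.0 - math.exp(-ratio)  # saturates toward 1 as current uptime grows


def bootstrap_trust(s: SessionStats,
                    prior: float = 0.5,
                    disconnection_penalty: float = 0.2) -> float:
    """Initial trust for an agent lacking direct/reputational evidence.

    The fixed disconnection penalty is attenuated when the activity-based
    prediction indicates the agent will remain connected, which limits the
    damage of a poorly chosen penalty value.
    """
    availability = predicted_availability(s)
    effective_penalty = disconnection_penalty * (1.0 - availability)
    return max(0.0, min(1.0, prior - effective_penalty))


if __name__ == "__main__":
    newcomer = SessionStats(mean_uptime=3600, mean_downtime=1800, current_uptime=5400)
    print(f"bootstrap trust: {bootstrap_trust(newcomer):.3f}")
```

Under this sketch, an agent whose current uptime already exceeds its mean session length receives a smaller effective penalty, so the bootstrap trust value it starts with is less sensitive to the particular penalty chosen by the system designer.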