Sarcasm detection is challenging in natural language processing because of sarcasm's peculiar linguistic expression. Thanks in part to the considerable annotated resources available for some datasets, current supervised learning approaches achieve promising performance on sarcasm detection. In real-world scenarios, however, labeling data for such a peculiar linguistic phenomenon is costly and labor-intensive. Several studies have therefore explored unsupervised sarcasm detection, aiming to reduce the labor cost of annotation. In this paper, leveraging the abundant unlabeled data on social platforms, we propose a novel prompt-based unsupervised sarcasm detection method. Specifically, we first crawl about 3 million texts from Twitter via hashtag keyword search and divide them into sarcastic and non-sarcastic groups according to their hashtags. Then, we continue masked language model training of pre-trained BERT on the crawled texts, yielding SarcasmBERT, which better captures sarcastic cues. Finally, we design prompts for the unlabeled data to perform sarcasm detection in an unsupervised way. Experimental results on six benchmark datasets show that our method outperforms state-of-the-art unsupervised baselines. Furthermore, SarcasmBERT can be directly incorporated into existing BERT-based sarcasm detection methods to improve their performance.