
A Storm topology that computes trending tweets from hashtags.


# Twitter Trending topics with Apache Storm on HDInsight

A Storm topology using Trident that calculates trending topics (hashtags) on Twitter. It is heavily based on the trident-storm example by Juan Alonso.

Trident is a high-level abstraction that provides tools such as joins, aggregations, grouping, functions and filters. Additionally, Trident adds primitives for doing stateful, incremental processing. This project demonstrates how you can build a topology using a custom spout, function, and several built-in functions provided by trident.

## Download the project

Use `git clone` to download this repository.

## What it does

The Trident code that implements the topology is as follows:

	topology.newStream("spout", spout)
	.each(new Fields("tweet"), new HashtagExtractor(), new Fields("hashtag"))
	.groupBy(new Fields("hashtag"))
	.persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"))
	.newValuesStream()
	.applyAssembly(new FirstN(10, "count"))
	.each(new Fields("hashtag", "count"), new Debug());

This does the following:

  1. Create a new stream from the spout. The spout retrieves tweets from Twitter, filtered for specific keywords - love, music, and coffee in this example.

  2. HashtagExtractor, a custom function, is used to extract hashtags from each tweet. These are emitted to the stream.

  3. The stream is then grouped by hashtag and passed to an aggregator. This aggregator creates a count of how many times each hashtag has occurred, which is persisted in memory. Finally, a new stream is emitted that contains the hashtag and the count.

  4. Since we are only interested in the most popular hashtags for a given batch of tweets, the FirstN assembly is applied to return only the top 10 values based on the count field.
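Conceptually, the `groupBy`/`persistentAggregate`/`FirstN` pipeline computes a top-N count over the hashtags in a batch. A rough plain-Java sketch of that per-batch computation, without Trident (class and method names here are hypothetical, for illustration only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopHashtags {
    // Count occurrences of each hashtag and return the n most frequent,
    // mimicking groupBy + Count + FirstN from the Trident topology.
    public static List<Map.Entry<String, Long>> topN(List<String> hashtags, int n) {
        Map<String, Long> counts = hashtags.stream()
            .collect(Collectors.groupingBy(h -> h, Collectors.counting()));
        return counts.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(n)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tags = Arrays.asList("GRAMMYs", "rock", "GRAMMYs", "punk", "GRAMMYs", "rock");
        for (Map.Entry<String, Long> e : topN(tags, 2)) {
            // Same shape as the Debug output the topology prints.
            System.out.println("DEBUG: [" + e.getKey() + ", " + e.getValue() + "]");
        }
        // prints DEBUG: [GRAMMYs, 3] then DEBUG: [rock, 2]
    }
}
```

The difference in the real topology is that the counts are held in Trident state (MemoryMapState here) and updated incrementally per batch, rather than recomputed from scratch.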

Other than the spout and HashtagExtractor, we are using built-in Trident functionality.

For information on built-in operations, see storm.trident.operation.builtin.

For Trident state implementations other than MemoryMapState, see the `storm.trident.state` package.

### The spout

The spout, TwitterSpout, uses Twitter4j to retrieve tweets from Twitter. A filter is created (love, music, and coffee), and incoming tweets (Status objects) that match the filter are stored in a LinkedBlockingQueue. Finally, items are pulled off the queue and emitted into the topology.
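The hand-off between the Twitter4j listener thread and the spout's `nextTuple` loop relies on a LinkedBlockingQueue. A minimal sketch of that producer/consumer pattern in plain Java (no Storm or Twitter4j dependencies; class and method names are hypothetical):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoff {
    // The listener thread offers incoming tweets; nextTuple polls them off.
    private final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);

    // In the real spout this would run inside the Twitter4j StatusListener callback.
    public void onStatus(String tweetText) {
        queue.offer(tweetText); // non-blocking: drop the tweet if the queue is full
    }

    // In the real spout this would run inside nextTuple(); null means nothing ready.
    public String nextTweet() {
        return queue.poll();
    }

    public static void main(String[] args) {
        QueueHandoff spout = new QueueHandoff();
        spout.onStatus("I love #coffee");
        System.out.println(spout.nextTweet()); // prints the queued tweet text
    }
}
```

Using a bounded queue with non-blocking `offer`/`poll` keeps the listener thread and the spout thread decoupled, at the cost of dropping tweets under load.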

### The HashtagExtractor

To extract hashtags, getHashtagEntities is used to retrieve all hashtags contained in the tweet. These are then emitted to the stream.
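Twitter4j returns hashtags as pre-parsed entities via `getHashtagEntities`, so no text parsing is needed in the real function. For illustration only, a regex-based approximation of the same extraction in plain Java (this is not how Twitter4j parses tweets, just a rough stand-in):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HashtagDemo {
    // Rough stand-in for getHashtagEntities(): pull #tags out of raw tweet text.
    private static final Pattern HASHTAG = Pattern.compile("#(\\w+)");

    public static List<String> extractHashtags(String tweet) {
        List<String> tags = new ArrayList<>();
        Matcher m = HASHTAG.matcher(tweet);
        while (m.find()) {
            tags.add(m.group(1)); // emit the tag without the leading '#'
        }
        return tags;
    }

    public static void main(String[] args) {
        System.out.println(extractHashtags("I love #coffee and #punkrock"));
        // prints [coffee, punkrock]
    }
}
```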

## Get a Twitter account

Use the following steps to register a new Twitter Application and obtain the consumer and access token information needed to read from Twitter.

  1. Go to https://apps.twitter.com/ and use the Create new app button. When filling in the form, leave Callback URL empty.

  2. Once the app has been created, select the Keys and Access Tokens tab.

  3. Copy the Consumer Key and Consumer Secret information.

  4. At the bottom of the page, select Create my access token if no tokens exist. Once the tokens have been created, copy the Access Token and Access Token Secret information.

  5. In the TwitterTrending project you previously cloned, open the resources/twitter4j.properties file, add the information gathered in the previous steps, and then save the file.
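Twitter4j reads its credentials from standard OAuth property names in twitter4j.properties; with your values filled in, the file should look roughly like the following (placeholder values shown):

```
oauth.consumerKey=YOUR_CONSUMER_KEY
oauth.consumerSecret=YOUR_CONSUMER_SECRET
oauth.accessToken=YOUR_ACCESS_TOKEN
oauth.accessTokenSecret=YOUR_ACCESS_TOKEN_SECRET
```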

## Build the topology

Use the following commands to build the project.

	cd TwitterTrending
	mvn compile

## Test the topology

Use the following command to test the topology locally.

	mvn compile exec:java -Dstorm.topology=com.microsoft.example.TwitterTrendingTopology

After the topology starts, you should see debug information containing the hashtags and counts emitted by the topology. The output should appear similar to the following.

	DEBUG: [Quicktellervalentine, 7]
	DEBUG: [GRAMMYs, 7]
	DEBUG: [AskSam, 7]
	DEBUG: [poppunk, 1]
	DEBUG: [rock, 1]
	DEBUG: [punkrock, 1]
	DEBUG: [band, 1]
	DEBUG: [punk, 1]
	DEBUG: [indonesiapunkrock, 1]