Twitter Emote Robot Using PIC32



Social media outlets like Twitter and Facebook have become dominant forces in human interaction. Indeed, many interactions are now mediated by digital technology. We believe the loss of the physical component of interaction has had negative effects on human relationships overall. Research suggests that people become more isolated through their use of social media.

Our project aims to explore potential solutions to the lack of physical feedback in the world of social media. We built an emotionally expressive robot that physically reacts to tweets in a live setting. Users can tweet at the robot's Twitter account and receive near-instant feedback as the robot plays a sound, moves on its surface, displays the tweet text, shows a facial expression, and lights up with different colors and intensities to convey its feelings about the tweet.

A server application running on a laptop computer monitors the robot's Twitter account in real time. When a tweet is received, the server processes the content and maps the text to an emotion. The Twitter user ID, tweet content, and emotion are then sent to the robot over a wireless Bluetooth serial connection, whereupon the robot displays the information and maps the emotion to a predetermined set of outputs.

High Level Design

When considering options for our final project, we wanted to design and build something that could have social impact. Because concern over social media use and its impact on the behavior of young people has grown in recent years, we decided to build a robot to facilitate social media interaction in the real world. Consequently, we designed a Twitter robot that analyzed the emotions of, and reacted to, tweets from users in real time.

At a high level, our system consisted of three main parts: a Python script running on a PC, a primary PIC32, and a secondary PIC32. On execution, the Python script connected to a Bluetooth module in the robot and then began monitoring Twitter for any mentions of our robot's Twitter handle. Once a tweet came in, the script analyzed its emotion and sent the handle, tweet, and emotion over Bluetooth to the primary PIC32. The primary PIC32 parsed the information it received, displayed the tweet on a TFT display, and sent the emotion over UART to the secondary PIC32. This PIC32, upon receiving an emotion, displayed the appropriate face on a second TFT display, played a sound indicating a tweet was received, changed the color of the lights mounted on the robot, and actuated the robot's servos until it received a yield command from the primary PIC32.

Initially, we planned to use WiFi to broker communication between the PC and the primary PIC32. However, previous projects found that using WiFi led to problems with TCP connections, parsing, and sending successive commands too quickly. Consequently, we decided to use Bluetooth instead, which, while reducing the robot's range, afforded greater flexibility in parsing and sending commands. We also had to compromise on some of the robot's idle behavior, such as blinking, due to the CPU requirements of the method getMachineBuffer, which returns data from the Rx line of the UART. To avoid missing data, the thread containing getMachineBuffer needed to be running whenever a receive could occur. Consequently, scheduling a thread that handled blinking caused the thread running getMachineBuffer to non-deterministically miss transmissions. We therefore carefully implemented our thread scheduling to wait for a transmission from the UART before signaling other threads to run.
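The blocking-receive constraint can be sketched on the host side. This is an illustrative Python analogy using a hypothetical receiver and blink task, not the protothreads code itself: lower-priority work is gated on an event that is only set once a complete transmission has been consumed.

```python
import threading
import queue

rx_queue = queue.Queue()     # stands in for the UART receive line
rx_done = threading.Event()  # set once a full transmission has been consumed
events = []                  # records the order things actually ran

def receiver():
    msg = rx_queue.get()     # blocks, like yielding in getMachineBuffer
    events.append(("received", msg))
    rx_done.set()            # only now may lower-priority work proceed

def idle_behavior():
    rx_done.wait()           # blinking is deferred until the receive completes
    events.append(("blink", None))

blink = threading.Thread(target=idle_behavior)
rx = threading.Thread(target=receiver)
blink.start(); rx.start()
rx_queue.put("hello\r")      # a transmission arrives
rx.join(); blink.join()
print(events)
```

The point of the sketch is the ordering guarantee: the blink task can never run in the window where a transmission might be missed.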

We also had to choose between multi-color LEDs and many single-color LEDs. The main tradeoff was the higher cost but better aesthetics of multi-color LEDs. Furthermore, single-color LEDs were less technically challenging, as they only required a PWM signal to control them, whereas multi-color LEDs would have required an additional communication channel for their driver chips.

Design Details

Hardware Design


At the center of the hardware design were two PIC32 microcontrollers, each connected to a small board. We used two PICs because we needed ample I/O ports for all of our connected peripherals. Additionally, we used the small boards so that they would fit nicely into the portable form factor of our device.

The function of the first PIC was to handle the tweet information. Connected to it were a TFT display and a Bluetooth module. From the computer, the script outlined above sent the tweet and emotion data to the Bluetooth module. The PIC then read this data and performed two tasks. First, it displayed the tweet and its sender on the TFT display. Second, it transmitted the analyzed emotion via UART to the second PIC.

The function of the second board was to perform all of the reactions to the tweet based on the emotion. First, it played the tweeting sound using DMA to signal that a tweet had been received and that it was going to react. We used a DAC for this output and chose piezoelectric buzzers because they produce a louder sound than electromagnetic speakers. Next, the PIC changed the face on its TFT display to reflect the emotion. It then varied the duty cycles of the PWM signals driving the LED circuit to turn the three different colors of lights (yellow, red, green) on or off, or to fade them in and out; different patterns of lights were used depending on the emotion. Finally, the PIC sent two more PWM signals to the two servos, allowing them to spin the robot in a circle.

To power the system, we used a 9 V battery, connected directly to each of the small boards as well as the LED circuit. To power the servos, we built a simple voltage regulator to drop the 9 V down to 5 V, since the maximum voltage the servos could run on was 5 V. For the rest of the devices (the Bluetooth module, the DAC, and the two TFTs), we used the 3.3 V pins on the small boards as a power source. For the exact connections between hardware components, refer to Appendix C for the complete schematic.
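A quick back-of-the-envelope check on this choice: a linear regulator dropping 9 V to 5 V dissipates the difference as heat. The load current below is an assumed figure for illustration, not a measured value from our robot:

```python
# Linear regulator dissipation: P = (Vin - Vout) * I_load
v_in, v_out = 9.0, 5.0
i_load = 0.5  # amps; assumed combined draw of the two servos (illustrative)
p_dissipated = (v_in - v_out) * i_load
print(p_dissipated)  # watts the regulator turns into heat
```

At half an amp the regulator would shed 2 W, which is why the drop from 9 V to 5 V, rather than the servo current alone, dominates the thermal budget.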

Mechanical

The robot itself is built mostly from plywood. The design is a two-box design in the spirit of classic robot design where the body is simply a box and the head a smaller box sitting on top of the body box. The boxes were constructed by laser-cutting plywood with teeth-like notches such that the panels could fit into each other well to form a box. The body box is 6”x5.5”x4”, and the head box is 4”x2.5”x2”.

The head has a rectangular hole cut out in the front into which we placed the Adafruit TFT display to represent the robot's face. Four screw holes were made to hold the display in place. Holes were drilled into the sides of the head, one on each side, to hold the piezo speakers; the placement of the speakers is intended to give the appearance of ears on the robot. Another rectangular hole was cut in the bottom panel of the head so that wires can be fed from the body into the head. The HC-05 Bluetooth module was also placed inside the head. The head panels were hot-glued together to form the final head.

The body was constructed much the same as the head. A rectangular hole was made for the other Adafruit TFT display in the approximate center of the front panel of the body. The display was mounted here with screws and serves as a place to show the tweet content received by the robot. Above the display, also on the front panel, is a rectangular hole through which the robot's expressive lights are visible. The hole is covered from the inside by a thin layer of frosted-texture plastic cut from a plastic shoe-box; the plastic gives the LEDs a more dispersed lighting effect and obscures the view of the robot's inner circuitry. Behind the plastic was mounted the LED circuit board. The board is actually two small solder boards glued together (used because one large solder board did not fit as intended) and contains a total of 12 LEDs (4 green, 4 yellow, and 4 red). The LEDs are oriented on the board such that the four LEDs of each color are approximately equidistant, spanning the length of the rectangular viewing hole. The board is secured by screws to the front wooden panel and offset from the panel using spacers.

On either side of the body box are holes cut to fit the continuous-rotation servos, which are secured to the panels with screws. The servos are intentionally placed toward the front of the robot so that, with wheels attached, the robot rests on a table with its points of contact being the two wheels in the front and a small piece of wood at the back. The piece of wood is secured to the bottom panel near the rear of the body using screws.

On the back body panel, a rectangular door was cut to allow interior access to the body. The door was placed on a hinge, and the hinge was connected to the rest of the body with hot glue. A small plastic knob was glued to the door as a handle with which to open and close the access door.
A hole was cut into the top piece of the body to allow wires to run from the body to the head. All of the body panels were glued together to form the final body box. The head was simply glued onto the top of the body box, completing the body design.

Inside the body box, in addition to the components already mentioned, are two ECE 4760 small boards Velcroed to the inside of the two side panels. The 9 Volt battery used to power the system is placed unsecured inside the body connected by wire to a small solder board used for power distribution. Power is delivered to the two PIC32s via DC barrel plugs from the power distribution board. The power distribution board is not secured down but is held in place by the force of the wires plugged into it.


Software Design

Python Script
To stream tweets from Twitter, we implemented a script in Python which analyzed the emotions of tweets containing the mention “@BotCornell” and sent the handle, tweet content, and emotion over Bluetooth serial to the robot. Our script used a Python library called tweepy to interface with the Twitter API, as well as a library called paralleldots which determines an emotion from text. The script contained two classes: StreamListener and TwitterRobotPushManager.

The StreamListener class inherits from the tweepy library and contains callback methods which execute when a tweet is streamed by our script. When a tweet is received, this class appends the tweet to a global thread-safe queue containing any outstanding tweets.

The TwitterRobotPushManager class contains the main method, which executes in a new thread when the script is run. On start-up, the start method connects to the Twitter API, the paralleldots API, and the Bluetooth serial port, and starts a thread which filters all tweets with the mention “@BotCornell” in real time. The method then spawns another thread which waits for the global tweet queue to be updated, whereupon a tweet is consumed from the queue, processed for its emotion, and divided into 5 chunks (to respect the 64-character limit enforced by getMachineBuffer). The handle, each chunk, and the emotion are then sent successively over Bluetooth to the robot. After sending the relevant information, the consumer waits at least 25 seconds before attempting to consume another tweet, which allows the robot to complete its reaction to the tweet that was just sent.
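The chunking step amounts to slicing the tweet into at most five 64-character pieces; the script also substitutes dashes for spaces so that the PIC's sscanf reads a whole chunk rather than stopping at the first space. A minimal sketch:

```python
CHUNK_SIZE = 64  # getMachineBuffer's per-transmission character limit

def chunk_tweet(tweet):
    # spaces are sent as dashes so sscanf on the PIC reads the whole chunk
    tweet = tweet.replace(" ", "-")
    return [tweet[i:i + CHUNK_SIZE] for i in range(0, len(tweet), CHUNK_SIZE)]

chunks = chunk_tweet("hello robot " * 12)  # 144 characters -> chunks of 64, 64, 16
print(len(chunks), [len(c) for c in chunks])
```

Since the robot always expects five chunks plus the handle and emotion, shorter tweets are padded out on the sending side.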

primary_pic.c handled receiving messages over Bluetooth from the Python script, sending the emotion to the secondary PIC, and displaying the tweet's contents on a TFT display. Both receiving over Bluetooth and transmitting to the secondary PIC were done over UART using the threaded methods GetMachineBuffer and PutSerialBuffer. For our system, we used a baud rate of 9600 bps and expected a carriage return as the termination character for each transmission.

Protothread serial is responsible for receiving data over Bluetooth, maintaining the receive state, and sending the emotion it receives to the secondary PIC32.

On start-up, the thread first clears PT_send_buffer, which is used by PutSerialBuffer to send the emotion to the secondary PIC32. This is critical to ensure that the buffer is not corrupted on start-up, which would prevent further communication between the PICs. Inside the main while loop, the thread spawns a GetMachineBuffer thread, which yields until data is received. Since receive_state is initially 0, the first received data is loaded into a global Twitter handle buffer; receive_state is then incremented and GetMachineBuffer is called again. This process repeats six more times to receive each tweet chunk and, finally, the emotion. Once the emotion is received, receive_state is reset to 0, the emotion is sent over the UART Tx line, the react_state variable is set to 0, and the thread yields until react_state equals 2.
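The receive sequence is effectively a seven-step state machine: the handle, then five chunks, then the emotion. A host-side Python sketch of the same logic (names here are illustrative; the PIC version lives in protothread serial):

```python
def make_receiver():
    state = {"n": 0, "handle": None, "chunks": [], "emotion": None}
    def receive(msg):
        # state 0: handle; states 1-5: tweet chunks; state 6: emotion
        if state["n"] == 0:
            state["handle"] = msg
        elif state["n"] <= 5:
            state["chunks"].append(msg.replace("-", " "))  # undo dash encoding
        else:
            state["emotion"] = msg
            state["n"] = -1  # reset so the next tweet starts over
        state["n"] += 1
        return state
    return receive

rx = make_receiver()
# handle, one real chunk, four padding chunks, then the emotion code
for msg in ["alice", "hi-there", "--", "--", "--", "--", "h"]:
    final = rx(msg)
print(final["handle"], final["emotion"])
```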

The execute thread handles the timing of the robot's reaction and printing the tweet to one of the TFT displays. The thread switches on react_state: when react_state is 0, indicating a reaction should commence, the thread starts timer23, calls print_tweet, and increments react_state. When react_state is 1, the execute thread does nothing. Once react_state equals 2, indicating the timer has expired, the thread clears the TFT display showing the tweet, sends a yield character over the UART Tx line to the secondary PIC32, and yields until react_state is not 2. Note that the careful placement of the yields in both threads is critical to ensuring that nothing is missed by GetMachineBuffer in protothread serial.

Since the timer is used to mediate how long the robot’s reaction should last, the ISR simply clears the interrupt flag and sets react_state to 2.

print_tweet handles printing the Twitter handle and tweet contents to the screen. To do this, the method appends each tweet chunk to a full_tweet buffer and then iterates through the buffer, printing 24 characters per line on the TFT display. The method then resets the tweet_chunk and tweet_full buffers.
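The 24-character wrapping can be sketched as a simple slicing loop; this is a Python illustration of the same arithmetic used in print_tweet:

```python
def wrap_tweet(text, width=24):
    # split the reassembled tweet into display lines of `width` characters
    return [text[i:i + width] for i in range(0, len(text), width)]

lines = wrap_tweet("the quick brown fox jumps over the lazy dog")
print(lines)
```

Each line then maps to a cursor position on the display, with the y-coordinate advancing by the line height for every 24 characters consumed.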

The main method sets up timer23, protothreads, and the TFT display, and schedules both threads. It is important to note that we chose timer23 because it is a 32-bit timer, which was necessary as we required the timer to count for 20 seconds.
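The required period register value follows directly from the timer arithmetic, assuming a 40 MHz peripheral bus clock (the value that makes the period constant in the appendix code, 0x002FAF08, come out to exactly 20 seconds):

```python
pb_clock = 40_000_000   # Hz, assumed peripheral bus clock
prescaler = 256         # T23_PS_1_256
seconds = 20            # desired reaction window
period = pb_clock // prescaler * seconds
print(hex(period))  # → 0x2faf08
```

At 156,250 ticks per second after the prescaler, 20 seconds needs 3,125,000 counts, which overflows a 16-bit timer and motivates the 32-bit timer23 pairing.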

The secondary PIC handled receiving emotions from the primary PIC over UART and subsequently actuating the motors and LEDs with PWM signals, as well as drawing faces on one of the TFT displays.

The ISR is responsible for regulating the PWM signals that control both the LEDs and the servos on the robot. We set the ISR to run at 50 Hz, the standard PWM frequency for hobby servos. Inside the ISR, we set the pulse widths for output compares 1, 2, 3, 4, and 5 depending on which emotion was received. Output compares 1, 2, and 3 control the lights, which can either flash or fade on and off, while output compares 4 and 5 control the motors. We also used counter variables to control the speed at which the lights fade or blink.
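The fade effect boils down to nudging a duty-cycle counter up or down on each 50 Hz tick and reversing direction at the ends of the ramp. A sketch of that counter logic (the step size and bounds here are illustrative, not taken from our ISR):

```python
def fade_steps(n_ticks, step=5, max_duty=100):
    # triangle-wave duty cycle: ramp up to max_duty, then back down
    duty, direction, out = 0, 1, []
    for _ in range(n_ticks):
        duty += direction * step
        if duty >= max_duty or duty <= 0:
            direction = -direction  # reverse at either end of the ramp
        out.append(duty)
    return out

print(fade_steps(8))  # → [5, 10, 15, 20, 25, 30, 35, 40]
```

A smaller step slows the fade; setting the duty cycle directly to 0 or max_duty instead of ramping produces the flashing patterns.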

Protothread face handles playing the Twitter notification sound and drawing faces on a TFT display. The thread switches on update_flag. If update_flag is 1, a DMA transfer is started to play the sound, one of seven faces is displayed depending on which emotion was received, and the thread yields until update_flag is 0. If update_flag is 0, the thread draws the resting face and yields until update_flag is 1. Similar to the yielding structure in primary_pic, this structure is designed to ensure that GetMachineBuffer never misses a transmission.

Protothread serial receives the emotion sent by primary_pic and sets a global update_flag variable to 1, unless the received character is a yield character, in which case it sets update_flag to 0.

The main method handles setup for DMA, output compares for PWM, timers, the tft_display, and protothreads.


Results

From a latency standpoint, our system performed very well. After a user sends a tweet, it takes the robot on average 6.89 s to react. The main bottleneck was the Twitter API, which took 6.25 s on average to retrieve a tweet. Once the tweet is received, the robot's reaction happens almost instantaneously from the user's perspective.

Our system was also designed to handle load at a reasonable scale. We implemented the Python script to buffer all incoming tweets and consume each tweet from the buffer sequentially. This let us control the speed at which tweets were sent to the robot while ensuring that no incoming tweets were dropped. For example, if two users tweeted at the robot at the same time, both tweets would be buffered in a thread-safe manner by the Python script and then sent sequentially, with enough time between transmissions for the robot to completely react to each received tweet.

User Experience

One of the first senses this interactive device appeals to is sound. When any tweet is received by the robot, it immediately emits a chirping sound to notify the user that someone has tweeted at the robot and it is going to react. The specific sound is the one made by the Twitter app when a tweet is received; we chose it because it is universally recognized and its meaning is easily understood.

Next, the robot appeals to the user visually in three ways. First, its face on the top display changes from its resting expression to a more indicative face depending on the emotion. The following faces were created to be associated with each emotion:

Simultaneously, the bottom display changes to show not only the tweet that the robot is reacting to, but also who sent it. This allows the user to read the tweet and further understand why the robot is reacting in a given way. Finally, the bar of lights on the robot lights up in different patterns, again according to the indicated emotion. The patterns themselves were chosen using our best judgement and are meant to represent interpretations of each emotion. For example, when a tweet is analyzed as angry, the robot lights up with flashing red lights.

The last component of the user experience appeals to the user's sense of motion: the robot turns in a complete circle whenever a tweet is received. This is meant to draw attention to the robot, as its movement is highly noticeable. Because the robot is meant to be a tabletop device, we made sure that it turns in a circle around its own center so as not to run into anything in its vicinity or fall off the table.

The combined reactions create a compelling user experience, appealing to multiple senses and bringing a new interactive, physical component to the once individualized world of social media.


Conclusions

The design met our fullest expectations: the final result was fully functional, appealed to multiple senses, and achieved the goals we initially set out to meet. With additional time, there are several ways we could improve the device. First, it would be valuable to find research that more appropriately connects emotions to colors; we could then build on our color bar to include scientifically supported color reactions instead of the patterns and colors we chose using our best judgement. Second, we could improve the sound component by investing in higher-quality speakers or building an amplifier to magnify the sound. Currently, the sound is just barely audible and would not get the attention of people in a crowded or noisy room. Third, we could make the face graphics more expressive and representative of the emotions we were trying to display; this was difficult to accomplish in the allotted time, as it would have required structs with complex graphic components. In a similar vein, we could also animate the faces to make them even more indicative of the displayed emotion, for example blinking while in the resting position or a tear falling when sad.

Regarding intellectual property, we have few concerns, as we only referenced publicly available APIs and did not take code from any public domain sources. Additionally, to our knowledge, there are no patent or trademark issues, as this robot has not been developed and patented by anyone else. Because this device is new, there is potential to patent it if we choose to pursue that. More likely, we will spend next semester writing up the project in hopes of getting it published in an Internet of Things magazine.

There are few safety concerns related to this project, as motion is very limited and occurs about the axis the robot is centered on. Even if the robot were to run into something, its light weight means it poses little risk. Moreover, all of the electrical connections are secured within the robot and do not pose an electrical hazard.

Regarding legal considerations, there are some valid concerns given that we are working with user data. However, we have mitigated these concerns by intentionally not storing any of the data displayed as tweets. Furthermore, the robot can only interact with publicly available tweets, thereby mitigating any concerns about accessing tweets from private or protected accounts.

Lastly, there are serious ethical considerations regarding the use of this device. Because the robot displays any tweet directed at its Twitter account, there is potential for the device to spread seriously detrimental hate speech. If this were to occur, it would be in direct violation of IEEE Code of Ethics Rule 8: “to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression.” Given more time, we could mitigate this risk by restricting access so that only followers of the account can tweet at the robot, and by filtering tweets with natural language processing before they are sent to the robot to be displayed.


# Import modules
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
from threading import Lock
from threading import Thread
import time
import serial
import paralleldots

EXECUTION_TIME = 25 #seconds
CHUNK_SIZE = 64 #character limit per transmission (getMachineBuffer)
PORT = 'COM12'
BAUD_RATE = 9600
TERM_CHAR = "\r"
BOT_HANDLE = "@BotCornell"

## Twitter API Keys
consumer_key = 'your consumer key'
consumer_secret = 'your consumer secret'
access_token = 'your access token'
access_token_secret = 'your access token secret'

## Parallel
paralleldots_key = 'your paralleldots key'

class StreamListener(StreamListener):
    """ A listener handles tweets that are received from the stream. """

    def __init__(self, batchedtweets, lock):
    	self.batchedtweets = batchedtweets
    	self.lock = lock

    def on_status(self, status):
    	try:
    		tweet = status.extended_tweet["full_text"]
    	except AttributeError:
    		tweet = status.text

    	with self.lock:
    		self.batchedtweets.append((status.user.screen_name, tweet))
    	return True

    def on_error(self, status_code):
        if status_code == 420: #rate-limited: disconnect the stream
            return False

class TwitterRobotPushManager:
	def __init__(self):
		self.batchedtweets = []
		self.lock = Lock()
		self.emotion_dict = {"Happy" : 'h', "Angry" : 'a', "Excited" : 'e', "Sad" : 's', "Sarcasm" : 'u', "Fear" : 'f', "Bored" : 'b'}
		self.btport = serial.Serial(PORT, BAUD_RATE)

	def getemotion(self, tweet):
		emotion= (paralleldots.emotion(tweet)['emotion'])['emotion']
		return self.emotion_dict[emotion]

	def sendtweetdata(self, handle, tweet, emotion):
		tweet = tweet.replace(" ", "-")
		print("@"+handle+" says:")
		tweet_chunks = [tweet[i:i+CHUNK_SIZE] for i in range(0, len(tweet), CHUNK_SIZE)]
		print ("TWEET CHUNKS")
		for i in range (len (tweet_chunks)):
			print (tweet_chunks[i])
		self.btport.write((handle + TERM_CHAR).encode())
		for i in range(0, len(tweet_chunks)):
			self.btport.write((tweet_chunks[i] + TERM_CHAR).encode())
		for i in range(len(tweet_chunks), 5):
			self.btport.write(("--" + TERM_CHAR).encode()) #pad so the PIC always receives 5 chunks
		self.btport.write((emotion + TERM_CHAR).encode())
		print ("sending tweet data")

	def processtweets(self):
		updateflag = False
		while (1):
			with self.lock:
				if (len(self.batchedtweets) > 0):
					handle, tweet = self.batchedtweets.pop(0)
					updateflag = True
			if (updateflag):
				updateflag = False
				emotion = self.getemotion(tweet)
				self.sendtweetdata(handle, tweet, emotion)
				time.sleep(EXECUTION_TIME) #give the robot time to finish reacting

	def tweetfilter(self, stream):
		stream.filter(track=[BOT_HANDLE]) #stream mentions of the robot in real time

	def start(self):
		listener = StreamListener(self.batchedtweets, self.lock)
		auth = OAuthHandler(consumer_key, consumer_secret)
		auth.set_access_token(access_token, access_token_secret)
		stream = Stream(auth, listener)
		Thread(target = self.tweetfilter, args = (stream,)).start()
		thread = Thread(target = self.processtweets)
		thread.start()

if __name__ == '__main__':
    botpushmanager = TwitterRobotPushManager()
    botpushmanager.start()



/*
 * File:        TFT, keypad, DAC, LED, PORT EXPANDER test
 *              With serial interface to PuTTY console
 * Authors:     Nikhil Dhawan, Ian Kranz, Sofya Calvin 
 * For use with Sean Carroll's Big Board
 * Target PIC:  PIC32MX250F128B
 */

// clock AND protoThreads configure!
// You MUST check this file!
#include "config_1_2_3.h"
// threading library
#include "pt_cornell_1_2_3.h"

// graphics libraries
// SPI channel 1 connections to TFT
#include "tft_master.h"
#include "tft_gfx.h"
#include <stdlib.h> // need for rand function
#include <math.h>   // need for sin function

// string buffer
char buffer[60];

// DDS constant
#define two32 4294967296.0 // 2^32 
#define Fs 100000

static struct pt pt_serial, pt_exec ;
// The following threads are necessary for UART control
static struct pt pt_input, pt_output, pt_DMA_output ;

// system 1 second interval tick
int sys_time_seconds ;
volatile int react_state = 2;
int receive_state = 0;
char handle[15]; //stores incoming tweet twitter handle
char tweet_blocks[5][64]; //5x64 array of tweet chunks
char tweet_full[320]; //stores entire tweet
char emotion[2]; //tweet emotion character plus null terminator

//== Timer 23 interrupt handler ===========================================
void __ISR(_TIMER_23_VECTOR, ipl2) Timer23Handler(void)
{
    mT23ClearIntFlag();
    react_state = 2; // the 20-second reaction window has expired
}

void print_tweet() {
    static char handle_line[25];
    static char line[25]; // 24 display characters plus null terminator
    tft_setCursor(10, 10);
    sprintf(handle_line,"@%s says:", handle);
    tft_writeString(handle_line);
    int i;
    int j;
    //load tweet full ("--" marks an empty padding chunk)
    for (i = 0; i <5; i++) {
        for (j = 0; j < 64 && (tweet_blocks[i][0] != '-' || tweet_blocks[i][1] != '-'); j++) {
            tweet_full[64*i+j] = tweet_blocks[i][j];
        }
    }
    //print tweet, 24 characters per line
    for (i = 0; i < 320; i++) {
        line[i%24] = tweet_full[i];
        if (30+i/24*15 > 180) break; // stop before running off the display
        if (i%24 == 23 || i == 319) {
            tft_setCursor(10, 30+i/24*15);
            tft_writeString(line);
            memset(line, 0, 24);
        }
    }
    //clear tweet chunks and tweet full
    for (i = 0; i < 5; i++) {
        memset(tweet_blocks[i], 0, 64);
    }
    memset(tweet_full, 0, 320);
}
// Predefined colors definitions (from tft_master.h)
//#define	ILI9340_BLACK   0x0000
//#define	ILI9340_BLUE    0x001F
//#define	ILI9340_RED     0xF800
//#define	ILI9340_GREEN   0x07E0
//#define ILI9340_CYAN    0x07FF
//#define ILI9340_MAGENTA 0xF81F
//#define ILI9340_YELLOW  0xFFE0
//#define ILI9340_WHITE   0xFFFF

// === thread structures ============================================
// thread control structs
// note that UART input and output are threads

static PT_THREAD(protothread_execute(struct pt *pt))
{
    PT_BEGIN(pt);
    while (1) {
        PT_terminate_char = '\r' ;
        PT_terminate_count = 0 ;
        PT_terminate_time = 0 ;
        // react_state: 0 = start react, 1 = react in progress, 2 = stop react
        // (an if/else chain rather than a switch: protothread macros must not
        // sit inside a nested switch statement)
        if (react_state == 0) { //start react
            WriteTimer23(0x00000000); //start timer
            print_tweet();
            react_state = 1; //transition to in-progress
        }
        else if (react_state == 2) { //stop react
            tft_fillRoundRect(0,0, 320, 240, 1, ILI9340_BLACK);// x,y,w,h,radius,color
            sprintf(PT_send_buffer,"%s#", "y"); //yield character for the secondary PIC
            PT_SPAWN(pt, &pt_output, PutSerialBuffer(&pt_output));
            PT_YIELD_UNTIL(pt, react_state != 2);
        }
        PT_YIELD(pt);
    }
    PT_END(pt);
} // execute thread

//=== Serial terminal thread =================================================

static PT_THREAD (protothread_serial(struct pt *pt))
{
      static int i;
      PT_BEGIN(pt);
      tft_setTextColor(ILI9340_WHITE);  tft_setTextSize(2);
      // clearing the send buffer at start-up is critical: a corrupted buffer
      // would block all further communication with the secondary PIC
      memset(PT_send_buffer, 0, max_chars);
      while(1) {
            PT_terminate_char = '\r' ;
            PT_terminate_count = 0 ;
            PT_terminate_time = 0 ;
            //Bluetooth receive: yield until a full '\r'-terminated message arrives
            PT_SPAWN(pt, &pt_input, PT_GetMachineBuffer(&pt_input) );
            if(PT_timeout==0) {
                // receive_state: 0 = handle, 1-5 = tweet chunks, 6 = emotion
                if (receive_state == 0) {
                    sscanf(PT_term_buffer, "%s", handle);
                    receive_state++;
                }
                else if (receive_state <= 5) {
                    sscanf(PT_term_buffer, "%s", tweet_blocks[receive_state-1]);
                    // dashes were substituted for spaces before transmission
                    for (i = 0; i < 64; i++) {
                        if (tweet_blocks[receive_state-1][i] == '-') {
                            tweet_blocks[receive_state-1][i] = ' ';
                        }
                    }
                    receive_state++;
                }
                else { // emotion received: forward it and start the reaction
                    sscanf(PT_term_buffer, "%s", emotion);
                    receive_state = 0;
                    sprintf(PT_send_buffer,"%s#", emotion);
                    PT_SPAWN(pt, &pt_output, PutSerialBuffer(&pt_output));
                    react_state = 0;
                    PT_YIELD_UNTIL(pt, react_state == 2);
                }
            }
            // never exit while
      } // END WHILE(1)
      PT_END(pt);
} // thread 3

// === Main  ======================================================
void main(void) {
  ANSELA = 0; ANSELB = 0; 

  // timer interrupt //////////////////////////
  // Set up timer23 on, interrupts, internal clock, prescalar 256
  // 0x002FAF08 counts = 20 seconds at a 40 MHz peripheral bus clock
  OpenTimer23(T23_ON | T23_SOURCE_INT | T23_PS_1_256, 0x002FAF08);
  // set up the timer interrupt with a priority of 2
  ConfigIntTimer23(T23_INT_ON | T23_INT_PRIOR_2);
  mT23ClearIntFlag(); // and clear the interrupt flag

  // === config threads ==========
  // turns OFF UART support and debugger pin, unless defines are set
  PT_setup();

  // === setup system wide interrupts  ========
  INTEnableSystemMultiVectoredInt();

  // init the threads
  PT_INIT(&pt_serial);
  PT_INIT(&pt_exec);

  // init the display
  // NOTE that this init assumes SPI channel 1 connections
  tft_init_hw();
  tft_begin();
  tft_fillScreen(ILI9340_BLACK);
  //240x320 vertical display
  tft_setRotation(1); // Use tft_setRotation(1) for 320x240

  // round-robin scheduler for threads
  while (1){
      PT_SCHEDULE(protothread_serial(&pt_serial));
      PT_SCHEDULE(protothread_execute(&pt_exec));
  } // scheduler loop
} // main

// === end  ======================================================

Circuit Schematic
