Climber's info on AVR programming - Interrupts

Many people new to the world of microcontrollers are often hesitant to try using interrupts.

Well, PHOOEY on that. I'm not a genius by any stretch of the imagination and if I can figure it out so can you. From now on all of my examples use interrupts so let's just get on with it.

What is an interrupt?

So, you want to know what an interrupt is? Time for - MR. ANALOGY!

Imagine you are sitting at home watching an Arnie movie on DVD. Your mother/significant other/whatever asks that you take out the garbage. You sigh, hit pause on the remote, take out the garbage, plop back down on the couch and hit play. The movie resumes right where you left off and Arnie continues to do his thing.

Congratulations, you now have a complete understanding of interrupts.

An interrupt is when the processor immediately stops what it is doing and heads on over to some other code segment when something happens. This "something" can be triggered externally or internally, such as when a timer counts down to zero or the analog to digital converter has a result for you. Each variety of AVR has a set of interrupts it understands and it is up to us to set up the registers and whatnot to enable these interrupts and to add code to handle them.

Interrupts and C

Because interrupts are so architecture dependent, C was not designed with provisions in the language to handle them. Each compiler has a different way of dealing with interrupts. This page describes how avr-gcc does it. Codevision and other compilers are likely to be a little different and you will need to dust off your compiler manuals if you want to start using interrupts.

An interrupt handler is a chunk of code in your program that is called ONLY when the interrupt occurs. That bit of code won't get called from within the "normal" code that starts with your main() function even though it can share the same data structures.

That last statement is very important. Variables that have a global scope are ACCESSIBLE from within an interrupt routine. This is something that I use a lot because you simply cannot pass arguments to an interrupt handler. If it ain't global or static the handler can't touch it.

The biggest hurdle to get over when trying to understand interrupts is the fact that when the interrupt occurs the processor will drop whatever it is doing to deal with it. The only guarantee we get is that any instruction currently being executed will complete before the interrupt handler is called. Now, that's a *machine* instruction. We mustn't forget that a single statement in C may compile into a number of machine instructions. If you are ever worried about a segment of code that needs to be executed atomically (i.e. all done or not at all) you might want to surround the code segment with cli() and sei(). By the way, these calls compile into a single AVR instruction each, so no code after the cli() will get interrupted once the cli() has run.

I'll be back.....

An important thing to keep in mind is that the facilities built into avr-libc with gcc will REMEMBER what the AVR was doing when an interrupt is handled. So, when an interrupt occurs everything that the processor was working on will be saved, the handler will get called, and when it returns the processor will continue doing whatever it was doing just before the interrupt arrived.

Say, for example, our AVR is running one of my robots. I typically have a while loop that never terminates in my main() function. This handles all of the stuff that it normally takes care of. If I want real-world timing (which is pretty much all of the time) I usually program one of the timers to interrupt the processor on a regular basis. So, how do I have the regular timing event get recognized by the code in the mainline? It's simple, really. I define a global variable that signals that a timer interrupt has occurred. The while loop in the mainline polls this variable and any time-dependent code will get run when it switches to true.

Now, one could argue that this method isn't really necessary and that I could, in fact, have all of the timer dependent code inside the interrupt handler itself. That is entirely true and many programmers choose to do it that way.

For me, I prefer to keep the interrupt handlers as small as possible. It's purely a programming philosophy choice. I've been programming interrupt-aware software for decades and I just find it easier to debug if the handlers are small. You may find that your forays into the world of interrupts take a different path. Note that this "interrupt occurred" variable should be declared using the keyword volatile. This tells the compiler that the variable may spontaneously change outside the scope of the code where it is checked. Without that, the optimizer may decide that if the variable isn't being changed by the code then its value is fixed and the test of its contents may get optimized out of existence. Problems that arise from forgetting this are very difficult to debug! Care to guess how I know?

What we need to make it work

Now, let's list the things we need to make an interrupt work:

1) global interrupts must be enabled
2) The particular interrupt you want must be enabled
3) There must be a chunk of code (the handler) that gets run when an interrupt occurs
4) There must be a facility to trigger the interrupt in the first place

Remember the Arnie dvd analogy from above? Let's use that as an example:

1) Global interrupts: well, if the basement door is locked and no one can even reach you then global interrupts are disabled. In this case, it is not. On the avr the cli() and sei() routines turn global interrupts off and on.

2) Enable particular interrupt: when I watch Arnie movies there are some events I will pause the movie on and some events I will not:

Cat meowing at the basement door wanting food. Disabled. Sorry cat but this is Arnie we are talking about.
Phone rings: that's usually, and very grudgingly, enabled.
Doorbell rings: I am expecting pizza so that interrupt is *SO* enabled. Oh yes.

3) Chunk of code or interrupt handler: Just like it is in the avr the type of interrupt defines how it is handled:

Phone rings: *BIG SIGH*. Hit pause. Stomp over to phone. Snap it up and growl "what?" into it (and hoping it isn't my mom). I hate telephones.
Pizza guy: Yell "WOOHOO!" Hit pause, scream up the stairs, rip open the door, step (sometimes trip) over the meowing cat, run to the door, give the pizza guy his money, grab a soda, say sorry to cat, lock her out of the basement, zip down the stairs and continue watching Arnie waste bad guys while scarfing my pizza. Life is good.

4) Facility to trigger the interrupt:

Phone: the damn phone is plugged in
Pizza: I called and ordered pizza earlier. Even with the interrupt enabled it will never get triggered if I don't call for pizza first.

This analogy should also illustrate the immense value and power interrupts bring to the table when programming the AVR. Without them we would have to constantly poll something to find out whether we want to do something about it. It's like pausing the Arnie movie every 30 seconds so I can go upstairs and see if the pizza guy has arrived. NFW.

First Example

Now, on to a simple example: the old flashing light program.

Instead of using a wait loop to do the timing, we'll use one of the timers to do it because, well, that's what they do. We will set it up so that once per second the LED will go through one complete on-off cycle. By default, the built-in timers tick at the CPU system clock rate. With an 8 MHz clock we need to scale things down a little. That's where the prescale comes in. We can apply a prescale factor to the system clock to generate the timer clock using the built-in facilities.

The prescale stuff for timer0/1 is described on page 72 of the mega8l pdf. Using a prescale is important because the timer has only 16 bits in it. That is, it can only count down from/up to 65535. If we divide 16 million (the max clock speed) by 65536 (counting 0) we get 244.140625. That would mean we could never define a timer period larger than 1/244th of a second at 16 MHz. The prescale simply runs the clock through a counter and divides the frequency by certain values (8, 64, 256 or 1024 for example). When we choose a different prescale by adjusting what we put into the control registers the AVR just taps the clock divider unit and routes the signal to the counter. Both timer 0 and timer 1 use the same prescale unit but, fortunately, we can choose different prescales for each.

So, back to where we were, trying to set up the timer. Dividing 8 million (our system clock) by a prescale of 256 gives us 31250. So, all we need to do then is program the counter to count up to half that (LED is on for 1/2 second then off for 1/2 second), or 15625. That's within the range of the 16 bit timer. We'll program the timer so that it increments TCNT1 31250 times a second and when it reaches 15625 it will call our interrupt routine, which will turn the LED on or off depending on its current state.

The counter runs independently of our code. So we don't need to worry about resetting anything each time it reaches the end of its cycle.

So, before looking at the example lets go over the interrupt names.

Available MEGA8 Interrupts

The avr-libc reference lists all the interrupts that the software supports. However, all but the largest processors have only a subset of those interrupts available. To get a list, I head on over to the iom8.h header file that comes with libc. If you don't know where that is on your computer, here is a copy of the part with the interrupts:

#define SIG_INTERRUPT0       _VECTOR(1)
#define SIG_INTERRUPT1       _VECTOR(2)
#define SIG_OUTPUT_COMPARE2  _VECTOR(3)
#define SIG_OVERFLOW2        _VECTOR(4)
#define SIG_INPUT_CAPTURE1   _VECTOR(5)
#define SIG_OUTPUT_COMPARE1A _VECTOR(6)
#define SIG_OUTPUT_COMPARE1B _VECTOR(7)
#define SIG_OVERFLOW1        _VECTOR(8)
#define SIG_OVERFLOW0        _VECTOR(9)
#define SIG_SPI              _VECTOR(10)
#define SIG_UART_RECV        _VECTOR(11)
#define SIG_UART_DATA        _VECTOR(12)
#define SIG_UART_TRANS       _VECTOR(13)
#define SIG_ADC              _VECTOR(14)
#define SIG_EEPROM_READY     _VECTOR(15)
#define SIG_COMPARATOR       _VECTOR(16)
#define SIG_2WIRE_SERIAL     _VECTOR(17)
#define SIG_SPM_READY        _VECTOR(18)

We don't need to worry about the VECTOR stuff, but the names of the interrupts right after the #define bit are a different story. Those are the names we give to our routines if we want to handle that type of interrupt. For example, if we wrote a chunk of code to do something when the EEPROM is ready we would label it SIG_EEPROM_READY. For this example, we will use SIG_OUTPUT_COMPARE1A. If enabled, an interrupt of this type will occur when the counter for timer 1 reaches a certain point. That "point" we care about depends on how we set up the counter itself.

Before we can use the interrupt we have to set up the timer. We need to define its parameters (prescale factor, what type of timer, etc) and enable the interrupts. By default, interrupts are not enabled. We have to specifically enable them; both within the configuration for the timer and globally.

The particular interrupt we are interested in using is SIG_OUTPUT_COMPARE1A. So, we need to make a routine that does something whenever that particular interrupt occurs. Here it is:

SIGNAL(SIG_OUTPUT_COMPARE1A)
{
  static uint8_t ledon;

  if (ledon) 
  {
    ledon = 0;
    cbi(PORTD, PD4);
  }
  else
  {
    ledon = 1;
    sbi(PORTD, PD4);
  }
}

For now, we'll use the SIGNAL macro. It's a macro, not a function, but we can treat it like one for our purposes here. We will make use of this macro once for each interrupt that we want to handle. When the program is compiled the linker will deal with putting the interrupt handler in the right place. That's one of the things I love about working in C. The compiler is our trusty minion and takes care of the tedious (and INCREDIBLY BORING) bits.

How it works is pretty simple. The handler just checks to see if the LED is on by checking the "on" status variable. If it is, it turns it off, and vice versa. The ledon variable must remain in existence and keep its last assigned value for this to work. That's why I defined it as static. I could also have defined it near the top of the program as a global variable. You'll also note I didn't initialize the variable. ANSI C (which avr-gcc generally adheres to) guarantees that static variables contain 0 when the program starts.

Now, I know what you are thinking. What happens to the interrupt handler if another interrupt arrives? Well, that depends on whether you used the INTERRUPT macro or the SIGNAL macro like we did above. They are the same except that the INTERRUPT macro can be interrupted while the SIGNAL macro can't, unless we add special code to change their behaviour. If an interrupt arrives while we are in the SIGNAL macro then it will wait until the macro ends. The ONLY priority that interrupts have within the AVR is if they arrive or are pending at the same time. In that case, the one with the lower vector table entry will come first, i.e. the same order that they appear in the interrupt table found in iom8.h or earlier in this web page.

Now let's go over setting up the 16 bit timer.

 
  TIMSK = _BV(OCIE1A);
  TCCR1B = _BV(CS12)        // 256 prescale
         | _BV(WGM12);      // CTC mode, TOP = OCR1A
  OCR1A = 15625;            // TOP: 1 Hz blink at 8 MHz with 256 prescale

TIMSK is the Timer Interrupt Mask Register. A "1" in certain bit positions tells the processor to enable interrupts on certain timer events. In this case we want an interrupt to happen on the "output compare A match." When the counter reaches a certain value an interrupt will occur. The value in question depends on how we set up the rest of the counter.

TCCR1A and TCCR1B are the two registers to set up the rest. You will see I am putting a "1" into CS12. The CSxx bits control the clock source that is used to change the counter. If you look on page 98 of the manual there is a little table that defines what the 3 bits (CS12, 11 and 10) do. In this case, the clock source is CLKio with a 256 prescaler. In other words the frequency of the square wave that increments the counter is 1/256th of the clock we use to drive the whole processor.

The WGM bits (Waveform Generation Mode) tell us HOW the timer is going to function. Note that the bits are spread across the two TCCR1A/B registers.

What we want our counter to do is simple. Start from 0, count up to what we have in the OCR1A register, call an interrupt and reset the counter back to 0. That means we want CTC mode where TOP is OCR1A. If you look at table 39 that means we only want WGM12 set. So, if we only need CS12 and WGM12 set then we don't even need to bother with TCCR1A at all.

So, here is the whole program.

// interrupt based light flashing program for atmel mega8l
// 
// Craig Limber, July 2004
//

#include <avr/interrupt.h>
#include <avr/io.h>
#include <avr/signal.h>

//-----------------------------------------------------------------------------
SIGNAL(SIG_OUTPUT_COMPARE1A)
{
  static uint8_t ledon;

  if (ledon)
  {
    ledon = 0;
    cbi(PORTD, PD4);
  }
  else
  {
    ledon = 1;
    sbi(PORTD, PD4);
  }
}

//-----------------------------------------------------------------------------
int main(void)
{
  DDRD = _BV(DDD4);     // enable output

  TIMSK = _BV(OCIE1A);
  TCCR1B = _BV(CS12)    // 256 prescale
         | _BV(WGM12);  // CTC mode, TOP = OCR1A
  OCR1A = 15625;        // count up to TOP   1hz with 8 meg system clock
  sei();
  while (1)
    asm volatile("nop" ::);  // we spin!  Could also put processor to sleep
}

Microsecond timing example

One of the things I had to tackle recently was inertial navigation with my minesweeper robot. On board is a gyro that outputs a voltage. When motionless it outputs 2.500 volts. When turning, the voltage is either higher or lower than 2.5 volts depending on whether it turns left or right, with the magnitude of the difference indicating how fast it is turning. That is so cool.

To calculate the current direction the robot multiplies the turn rate at sample time by the elapsed time since the last sample. This gives me how far it has turned in that time. With that I can add or subtract that from the last heading to get the new heading.

This method is, of course, prone to cumulative error and the robot will need to periodically reestablish its heading based on the environment around it. We need to make sure the robot stays on course between corrections by minimizing the error that accumulates.

One way to do that is ensure that the time base is correct.

What I did with my minesweeper is make all of my calculations work from microsecond timing. The finer the timing I use the finer the calculations are and the less error accumulation we get from round off/truncation. My training in numerical analysis is serving me well here.

It is not practical, however, to interrupt the processor every microsecond. With a 16 MHz clock that's only 16 cycles between interrupts. I don't think it's even possible for that to work. It's like being asked to take out the garbage and then getting asked the same thing before you even had time to get back to the couch. What we could do instead is have a timer interrupt every millisecond and add 1000 to the time with each interrupt. But then all we are really doing is millisecond timing so why even bother? Well, it's possible to get the best of both worlds: microsecond timing AND only handling an interrupt every millisecond.

This is where the counter itself comes in. If, for example, we set the system clock to 8 MHz, use a prescale of 8 to feed a 1 MHz clock to the counter, use a 16 bit timer and set TOP to 999 and enable the appropriate interrupt then our handler will get called every millisecond. What we do then is make our time equal to the number of interrupts since reset times 1000 plus TCNTx.

Chaos Theory for Lazy Slobs (like myself)

Why would we add TCNTx to the time? Well, I am never sure exactly how long it has been between when TCNTx actually reached 999 and when the code that calculates the time inside the interrupt handler is reached. Two things can vary this.

First, if I had temporarily disabled the interrupts or if another interrupt is being handled when the timer reaches TOP the interrupt will stay pending and it may be some time before the timer handler itself actually gets called. We must not forget that the processor can only do one thing at a time.

Second, it takes time for the processor to save state whenever an interrupt handler is going to go into action. Depending on where in the code the processor is at the moment this could take a varying number of cycles. This is where "chaos theory for lazy slobs" comes into play. Rather than try and enumerate all save times for each segment of code I will just treat this delta as random and deal with it by adding on TCNTx.

You see, when TCNTx reaches TOP the interrupt is flagged and the counter starts over from zero and keeps counting up even while we are inside the handler. Assuming it takes less than a millisecond for the processor to get around to handling the interrupt, we can find the exact time, microsecond by microsecond, AT ANY POINT WE WANT within the interrupt handler by adding on TCNTx.

What I did on my robot is sample the gyro's output within the handler and then find the exact time immediately after using the example above. I then have an idea of how fast the robot was turning at that instant. Once I know that and the exact elapsed time I can then calculate the change in heading. Being able to find the time precisely means I can completely eliminate its consideration as I go forward on improving the navigation system's accuracy.

Now you are probably wondering why I said we would multiply the number of interrupts by 1000 and then add TCNTx instead of adding 1000 and TCNTx each time. Compare these two interrupt handlers. This example is from a mega128 processor. Timer2 has been set up to interrupt every millisecond.

uint32_t numofinterrupts;  // total number of time interrupts since reset
uint64_t time;             // total number of microseconds since reset

SIGNAL(SIG_OUTPUT_COMPARE2)
{
  numofinterrupts++;
  time = (uint64_t)numofinterrupts * 1000 + TCNT2;
}
  

uint64_t time;      // total number of microseconds since reset

SIGNAL(SIG_OUTPUT_COMPARE2)
{
  time = time + 1000 + TCNT2;
}

The second code snippet seems so much simpler, so why don't we use it? If we did, the value of TCNT2 would accumulate as an error. The very first time the handler is run the time is correct. Afterwards the error gets larger and larger. The value of TCNT2 contains the delta we want at that instant. All other deltas must be ignored because the timer itself always keeps a pure millisecond time base.

Now the accuracy story does not end there. One thing I recently encountered was bizarre inaccuracies with my time when calculated inside an interrupt. I eventually figured out what was happening was that the timer itself reached top and ticked over while inside my interrupt routine before I calculated the time. This would mean that the other interrupt that increments the milliseconds is pending and won't get called until the current handler is done! See this page for more details on what a difference it could make.

BONUS OPTIMIZATION

I am always keen on keeping the code within an interrupt handler as small as possible. The multiplication in the first snippet above is very expensive. What I could do instead is adjust the timer so the top is 1023 and then do this:

SIGNAL(SIG_OUTPUT_COMPARE2)
{
  numofinterrupts++;
  time = ((uint64_t)numofinterrupts << 10) + TCNT2;
}

Shifting an integer left by 10 bits is the same as multiplying it by 1024 but it takes much less processing time. Watch the operator precedence, though: << binds more loosely than +, so the shift needs its own parentheses or the compiler will shift by 10 + TCNT2 instead. Of course this trick means that the handler is not called exactly every millisecond but if we are only using it for this one purpose then who cares?


to email Craig send to climber at shaw.ca (replace at with @ and remove spaces).
Return to Craig's Electronics page.
Return to Craig's main page.

Last Modified: May 5 2008 - Good catch Mike.