Everyone knows averages, where you sum N samples and divide by N to get the average.
Simple moving averages follow naturally: just take the last N samples to be your N samples. They're popular in the stock market for smoothing out daily noise so you can see long-term trends. Computer systems that loop over N things often want the average of those N things, which is a moving average of the last measurement of each thing.
Simple moving averages have problems, though. They might go up today even if a stock goes down today, because the sample falling off from N days ago counts just as much as the sample being added for today. Continuously maintaining a simple moving average is also a pain, because you need to store the last N samples so you can subtract out the oldest one when you add a new one.
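Maintaining one continuously looks something like this sketch (the class and names are mine):

```python
from collections import deque

class MovingAverage:
    """Simple moving average over the last n samples.

    Has to store all n samples so the oldest one can be
    subtracted out when a new sample arrives.
    """
    def __init__(self, n):
        self.n = n
        self.samples = deque()
        self.total = 0.0

    def add(self, x):
        self.samples.append(x)
        self.total += x
        if len(self.samples) > self.n:
            # Drop the oldest sample's contribution.
            self.total -= self.samples.popleft()

    def average(self):
        return self.total / len(self.samples)
```

The deque is exactly the storage cost complained about above: n samples kept around just to know what to subtract.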
Periodically taking a moving average gets around storing N samples: start an accumulator at 0, add each element as you see it, then when you get to the Nth element divide by N to get the moving average, then start over. This doesn't require storing N samples, but it only gives you a moving average every Nth sample.
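The periodic version is just an accumulator; a minimal sketch:

```python
def periodic_averages(samples, n):
    """Yield an average after every n samples, storing only a
    running sum and count instead of the last n samples."""
    total = 0.0
    count = 0
    for x in samples:
        total += x
        count += 1
        if count == n:
            yield total / n
            # Start over for the next batch of n.
            total = 0.0
            count = 0
```

Storage drops to two numbers, but as noted, you only get an answer once per n samples.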
Exponential moving averages (EMAs) are similar. They also let you see long term trends. An EMA keeps a single running value a and folds in each new sample x with a weight w: a' = a + w*(x - a). They have some advantages over simple moving averages: you store one number instead of the last N samples, a new sample always moves the average in its own direction, and the average is available after every sample, not just every Nth.
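The whole mechanism fits in a few lines (names are mine):

```python
class EMA:
    """Exponential moving average: one stored value, nudged
    toward each new sample x by a' = a + w*(x - a)."""
    def __init__(self, w, initial=0.0):
        self.w = w
        self.a = initial

    def add(self, x):
        self.a += self.w * (x - self.a)
        return self.a
```

With w = 1/N this behaves roughly like an N-sample moving average, but with none of the storage.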
I maintain software where I deal with a list of N things that I iterate over, measuring each thing as I go. I do it in 10-millisecond batches. I want the average of the last N measurements. I make use of it every 5 minutes (which probably does not match when the loop has covered all N things exactly once), and I don't want to store those N measurements. So I keep an approximate EMA instead. I sum the measurements for each batch (S is that sum). If the last batch covered M items, I adjust the average to a' = a + (M/N)*(S/M - a) at the end of the batch.
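A sketch of that batch update (class and names are mine):

```python
class BatchEMA:
    """Approximate EMA updated once per batch: after a batch of
    m measurements summing to s, move the average toward the
    batch mean s/m with weight m/n."""
    def __init__(self, n, initial=0.0):
        self.n = n
        self.a = initial

    def end_batch(self, s, m):
        # a' = a + (m/n) * (s/m - a)
        self.a += (m / self.n) * (s / m - self.a)
        return self.a
```

Each batch moves the average by a fraction m/n, so a full pass over all n things carries full weight, and a partial batch carries proportionally less.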
This is only an approximation. If the batch is all N things, this becomes a' = a + (N/N)*(S/N - a) = a + S/N - a = S/N, while a correct EMA would come out closer to a/e + (1 - 1/e)*(S/N). But since what I really wanted was an average of the latest measurement for all N elements (which is S/N), not an EMA, the error brings this approximate EMA closer to what I really wanted. The error would overshoot if your batch size were bigger than N, so don't do that: limit the batch size to N samples or 10 milliseconds, whichever comes first. This sort of average is always available, not just at the end of each loop. I typically use this for monitoring and controlling the system, so approximations are fine.
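The gap is easy to see numerically. With a full batch of N identical samples, a per-sample EMA with weight 1/N keeps about 1/e of the old average, while the batch update keeps none of it (numbers below are mine, chosen for illustration):

```python
import math

n = 100
a = 10.0          # old average
x = 20.0          # every sample in this batch
s = n * x         # batch sum

# Batch update with m = n collapses to the plain batch mean s/n.
batch = a + (n / n) * (s / n - a)

# Per-sample EMA with weight 1/n applied n times keeps
# (1 - 1/n)**n (about 1/e) of the old average.
per_sample = a
for _ in range(n):
    per_sample += (1 / n) * (x - per_sample)

# The a/e + (1 - 1/e)*(s/n) approximation from the text.
approx = a / math.e + (1 - 1 / math.e) * (s / n)
```

Here `batch` lands exactly on the batch mean of 20, while `per_sample` still remembers some of the old value of 10, matching the a/e formula closely.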
A while ago I was analyzing stock prices with EMAs. Each day would be a measurement, and I had a series of EMAs for each stock, using weights of 2^-i for each i in 0..10. The 1/512 weight was about equivalent to a yearly moving average, and 1/8 about equivalent to a weekly. It is nice for stock analysis that the stock going up today always makes all the EMAs rise for today, never the other direction like normal averages would do. Stocks don't go back forever, so you assume the distant past is all the same as the stock price on the stock's first day. You'd look for patterns like, for example, the 2^-2 average is up but the 2^-7 average is down. You could keep EMAs of trading volume too.
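A sketch of that series of EMAs (function and names are mine; the first day's price seeds every average, standing in for the distant past):

```python
def ema_series(prices, num_weights=11):
    """For each weight 2**-i, i in 0..num_weights-1, run an EMA
    over the daily prices, seeding each with the first price."""
    weights = [2.0 ** -i for i in range(num_weights)]
    averages = [prices[0]] * len(weights)  # distant past == first day
    history = []
    for p in prices:
        averages = [a + w * (p - a) for a, w in zip(averages, weights)]
        history.append(list(averages))
    return history
```

After an up day, every entry in the latest row is at or above the previous row, which is the property that makes these pleasant for stock analysis.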