Instructor (Brad Osgood): Is the screen fading? And yes, Happy Halloween to everybody. Only one noble soul here came dressed as a Viking. All right. I'm glad to see that sort of pioneer spirit is still with us, as ridiculous as it looks. All right. Okay. Let me call your attention to the information on the board. The midterm exam is today. There are three sessions: from 2:00 to 3:30 and from 4:00 to 5:30, both in building 380, room 380W, that's the basement of the math corner, and from 6:00 to 7:30 here. Okay. We will provide you with the exam, with a formula sheet, which you have already seen since it has been posted on the course website for some time, and blue books. All right. Any questions? It's open book, open notes. You can bring your stuff with you, but as I say, don't bring those stacks of signals and systems books as I've seen in the past. You're only going to waste your time. Any questions about anything? I'm somehow not surprised to see a thin crowd out there today.
It's a shame because, actually, today I'm gonna be talking about sampling and interpolation, which is my favorite topic in the course. It's really too bad that no one is going to be here to hear it. But I'll persevere somehow. All right. So no other questions about the exam or anything else on anybody's mind? All right. So let me remind you, again, the topic today, and we'll pick it up again next time, is sampling and interpolation. There's a lot to say and, as with many things, we can't say it all, so I want to get the main ideas across and the main phenomena that are associated with it. Our approach is gonna be an application of what we talked about last time with the Shah function, so I want to remind you of that. Last time we introduced this train of delta functions, sometimes also called the Dirac comb or the Dirac train or a pulse train or an impulse train or all sorts of things. We introduced the Shah function of spacing p, I'll put the variable in there, a sum of delta functions evenly spaced, p apart: the sum over k from minus infinity to infinity of delta(x - kp). All right. And it has several important properties that I will list.
So the picture is a bunch of evenly spaced delta functions all up and down the line. So it's usually indicated something like this: p, 2p and so on, and minus p, minus 2p and so on. All right. There are three very important properties. We introduced this in the context of an important physical problem, and quite an interesting physical problem, that of studying X-ray diffraction. All right. The mathematical properties that allowed us to analyze that problem so effectively are the same mathematical properties that we're gonna use today in a quite different setting, and I want to recall those for you. The three important properties: one is the sampling property. We used each one of these, but now I want to single them out. That is, if I take a function and multiply it by the Shah function, it samples the function at the points kp. So f(x) times the Shah function is the sum of f(kp) delta(x - kp), k going from minus infinity to infinity. I'll get out of the way in just a second here. All right. That sampling property of the Shah function follows from the sampling property of the delta functions. If you multiply the shifted delta by a function, it pulls out the value at the point times the delta function. You can read that well enough here: k times p. So that's the sampling property. Allied with that, sort of the flip side of that, is the periodizing property. And that has to do with convolution: if I convolve a function with the Shah function of spacing p, I get a periodic function of period p. That is, this is the sum of shifted versions of f, k going from minus infinity to infinity, f(x - kp). All right. So that gives you a periodic function of period p. I'm not worried here about questions of convergence and things like that, I'm just worried about the formal properties and how they work.
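The sampling and periodizing properties described above are easy to check numerically. Here is a minimal sketch (an editorial illustration, not part of the lecture; the Gaussian test function, the spacing p = 2, and the truncation limits are arbitrary choices):

```python
import numpy as np

p = 2.0                       # spacing of the Shah function (arbitrary choice)
f = lambda x: np.exp(-x**2)   # an arbitrary test function

# Sampling property: f(x) * Sha_p(x) = sum_k f(kp) delta(x - kp).
# The weights of the deltas are just the sample values f(kp):
ks = np.arange(-5, 6)
samples = f(ks * p)

# Periodizing property: (f convolved with Sha_p)(x) = sum_k f(x - kp),
# periodic with period p.
x = np.linspace(-1.0, 1.0, 201)
periodized = sum(f(x - k * p) for k in range(-50, 51))

# Check periodicity: shifting the argument by p changes nothing
# (up to the truncation of the sum).
shifted = sum(f(x + p - k * p) for k in range(-50, 51))
print(np.allclose(periodized, shifted, atol=1e-6))  # True
```

The delta train itself never appears in the code; its effect is either to pick out the sample values f(kp) or to sum the shifted copies f(x - kp), which is all either property needs.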
They are, in some sense, flip sides of each other, and we'll see that very strongly today, because convolution and multiplication are sort of swapped back and forth by taking the Fourier transform or the inverse Fourier transform. Okay. So in fact, they're actually sort of two sides of the same coin. The final property of the Shah function, the remarkable property that follows from this [inaudible] formula, the fact about the integers, is the Fourier transform property. That is, the Fourier transform of a Shah function of spacing p is a Shah function with reciprocal spacing 1/p, with an extra factor of 1/p out in front. Okay. That's a very deep fact. Let me just say, because I'm actually gonna use the corresponding formula for the inverse Fourier transform, the inverse Fourier transform formula is the same. I'll just write it down; I won't derive it separately. The inverse Fourier transform of the Shah function of spacing p is the same thing: 1/p times the Shah function of spacing 1/p. All right. These three properties were the basis for the mathematical analysis of X-ray diffraction we talked about last time, and they're gonna serve us also today in quite a different context. So here is the setup for the sampling and interpolation problem. In fact, let me call it the interpolation problem. Setup for the interpolation problem. It is not one problem, but rather a whole category of general problems which have been considered in different contexts, in different ways, since time immemorial. Okay. All right. The problem here, and what we're actually gonna solve in a quite remarkable way, is the exact interpolation of values of a function from a discrete set of measurements, or a discrete set of samples. So this is what we'll do: we'll be able to interpolate all values of a signal or a function from a discrete set of samples. All right.
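The Fourier transform property of the Shah function mentioned above has a finite, discrete analogue that can be verified directly: the DFT of a comb is again a comb with reciprocal spacing. A small editorial illustration, not from the lecture; the sizes N = 12 and d = 3 are arbitrary:

```python
import numpy as np

# A comb of length N with a tooth every d samples (d divides N):
N, d = 12, 3
comb = np.array([1.0 if n % d == 0 else 0.0 for n in range(N)])

# Its DFT is again a comb: nonzero only at multiples of N/d = 4,
# with value N/d = 4 there. This mirrors FT(Sha_p) = (1/p) Sha_{1/p}:
# spacing d in one domain becomes spacing N/d in the other.
C = np.fft.fft(comb)
print(np.round(C.real, 6))
```

The amplitude N/d playing the role of the 1/p factor is a consequence of the unnormalized DFT convention in `np.fft.fft`.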
We won't be able to do it in this generality; that is, we're gonna have to make certain assumptions that are gonna allow us to solve it. But even under fairly general assumptions, it's surprising that you can do this at all, and it would seem ridiculous to expect to be able to do it. To ask such a general question and to expect an answer for it is really maybe asking too much, but in fact, by employing the techniques that we developed, we can actually get an answer for this in fairly great generality, and that's extremely important practically. All right. Here's the setup. Here's the thing to keep in mind. Let me formulate this question again, fairly generally, and then we'll get to the assumptions that we have to make in order to be able to solve it in a reasonable way. All right. So imagine you have a process evolving in time, say. It doesn't have to be time, but that's the easiest way to think about it.
A process evolving in time. All right. And you make a set of measurements at equal time intervals, some fraction of a second. Equal spacing actually turns out to be important in this context. All right. So you have a bunch of measurements: t0, y0; t1, y1; t2, y2 and so on. All right. So you can think of this as a bunch of points, and I want to formulate the question really in two ways. So here's t0, t1, t2 and so on, and here are the corresponding points somehow in the plane, just plotting them, t3. Now, you might ask: fit a curve to those points, or, what is equivalent, interpolate the values at intermediate points based on the measurements that you make. There are really two equivalent ways of formulating it. So you might ask, one, fit a curve to the points, fit a curve to the data; or you might ask, interpolate values of the function, of the process, whatever it is, at intermediate points based on the sample values, based on the points you measure, based on the measurements. All right. Those are questions that are natural to ask, and you can imagine all sorts of contexts for this. And there are many ways of doing it. All right. As a matter of fact, if you're taking 263, you'd probably talk about least squares approximations to a set of data.
There you actually don't track the data, but you find somehow the line that best fits the data. But you can imagine drawing a curve like this; you can imagine doing it with straight lines. Straight line... I was going to say straight line interpolation, but I'm mixing the two aspects of the problem. But something like this. You could do this, or you can imagine fitting it with a polynomial or some other kind of curve, where you might go up, you might go down, you might go up again somehow, but you want to actually track the values exactly. Okay. But there again, there's not a unique solution for this. There are many ways of doing it, depending on the particular problem. Now, again, it's almost silly to prefer... well, you may have a reason for preferring one over the other, but you'd better have an extra argument to justify one choice over another. There are many ways of doing this. Many possibilities. All right. And the question is how you would choose one over the other, or choose any at all for that matter. Why would you expect to be able to do this? Well, you might want to refine, if it's possible, you might want to refine your options or narrow your options by making more measurements. All right. If possible, you might make more measurements to suggest a better fit or a better interpolation, a more accurate interpolation. Let me make sure you know what I mean here when I say interpolation, all right, and what you're actually getting. You write down an explicit function or set of functions that tracks this data, and what I mean by interpolation is that you can find the value at intermediate points by plugging into your formula. All right. So if you write down a formula for the straight line that goes from here to here, that means you can find every point on that straight line. You have a formula for that.
Likewise for the straight line from here to here and from here to here, or if you have a polynomial that somehow does this, or maybe does piecewise things and goes like that; then you can find the value at any intermediate point by plugging into that formula. That's what I mean by interpolation. All right. So all values are computable by knowing only these discrete values, and it's an approximation. You don't know whether it's doing a good job or not doing a good job, but you're making a guess, and you're making what you hope is a reasonable guess. All right. But that's why I'm saying curve fitting and interpolation are equivalent sorts of things. One is a geometric way of looking at it, by drawing a curve; the other is an analytic way of formulating it, by actually trying to write down a formula for the function that's doing the interpolation, and then plugging into that formula and seeing how well it matches. All right. So again, what I mean by this is maybe you can make more measurements and then compare those measurements to your formula. All right. See if it's working. Now, maybe you can do that, maybe you can't. What is the enemy in this, or what is causing uncertainty? Well, again, at this level of generality there's more uncertainty than there is certainty. I mean, you know, you have a discrete set of values; who knows what the function is doing in between? But don't stop at saying that. Try to say something a little bit more positive. That is, the thing that is causing you difficulty, the uncertainty in the values, in the extreme case, is the oscillations: how fast the function is changing from one value to the other. The more rapidly the function might be changing, the less certainty you have about interpolating the values in between from the measured values. All right.
So the more bends the function takes, the more rapid bends or the more corners a function has, the more uncertainty you have in your interpolation, in your curve fitting or your interpolation, same thing. The problem here, the uncertainty in interpolation, is that the more rapid the bends are, the more uncertain you are in your interpolation; the more jagged somehow the process is between sample points, the less certain you are about how to interpolate between the sample points. All right. So you somehow want to quantify the jaggedness or the bends in the process, in the function. Now, we are pretty sophisticated in questions like that. We want to regulate it, maybe even get rid of it, but at least regulate it, understand it, quantify it somehow. We want to regulate, let me put it this way, how rapidly the function, the signal, is bending or oscillating. I don't quite know the right way of saying it or the right word for this, but I hope you know what I mean. Oscillating, all right, between sample values. That's gonna be an extra assumption, all right, between sample measurements, between sample values. All right. Now, this is gonna take the form of an assumption, but the question is, what should the nature of the assumption be? All right. Now, as I say, we've actually gotten quite sophisticated in this sort of thing. For us, we think in terms of not just the function given in the time domain, but we also think of the function in the frequency domain. We think in terms of its Fourier transform. The Fourier transform, in representing the function in terms of its frequency components, tells us something about how fast the function is oscillating.
If it has high frequencies... high frequencies in the Fourier transform are associated with rapid oscillations, low frequencies with smaller, less rapid oscillations. And so if we want to make an assumption that's gonna eliminate rapid oscillations, we might make that assumption in the frequency domain; that is, it should be an assumption on the Fourier transform. So this is governed by... this is a spectral property, so it's governed by the Fourier transform. All right. That is, an assumption on how rapidly the function is oscillating is an assumption on the Fourier transform. That's one way of saying this a little bit more precisely. All right. Now, go right to the simplest assumption you can make along these lines. It takes a little experience to know this is the simplest assumption, but the idea is, one way to think about this is, you want to eliminate all frequencies beyond a certain point. All right. So that's one possibility. You're trying to analyze this. Maybe this one is a good idea. Maybe there are other ones that are good ideas. I'll give you a clue: this turns out to be a good idea. One possibility is to eliminate, that is, to assume away, all frequencies beyond a certain point, by an assumption. All right. Now, we've seen that many times. If this is what you want to get to, then make that a definition. Concentrate your attention, for the interpolation problem, on functions which satisfy a property like this. We are ready for an important definition. The enemy, the problem, is rapid oscillation between those measured values. Our approach to that problem, to resolve that uncertainty, is to say: I'm gonna regulate how rapidly the function is allowed to oscillate.
And I'm gonna phrase that as an assumption on the Fourier transform of the function, all right, because an assumption on the Fourier transform is an assumption on the frequency components. If you eliminate high frequencies, you feel like you're eliminating rapid oscillations, and I'm gonna state that as a definition. So we say a function f(t) is band-limited if its Fourier transform is identically zero outside some band of frequencies. That is, the Fourier transform F(s) is identically zero for |s| greater than or equal to p/2. I'm gonna write it like that; you'll see why in a little bit. For some p. And then the smallest such p, the smallest p for which this is true, is called the bandwidth. All right. So the picture is: the Fourier transform is identically zero outside a finite interval. And I'm working with real functions here, so the Fourier transform is symmetric; that's why I have absolute values. You can give the definition more generally, but this is the most natural setting for it. So here's the picture. Here's zero, here's p/2, here's minus p/2, and whatever the Fourier transform does in between (and it can't do that exactly, because it's supposed to be symmetric), it's zero outside of there. All right. This is the Fourier transform. All right. So that is supposed to capture the idea that you are limiting the size of the frequencies; you are limiting, in the time domain, the oscillation of the function by making this assumption in the frequency domain. Okay. Now, what I want to do is show you that for band-limited signals you can solve the interpolation problem exactly. It's remarkable. For band-limited signals, you can solve the interpolation problem exactly. You get a formula for f(t), for all t, in terms of values of f at a discrete set of points, let me just call them the t_k, the sample values. All right. You can fit that curve exactly.
That is to say, if you know the process comes from a function which is band-limited, you can write down a formula for the function. All right.
Now, in the notes, there's a different sort of discussion of this. I'm gonna go right for the kill. All right. I'm gonna show you how this works, and it's nothing short of remarkable. It's almost obscene the way this thing works, I think. It's the most remarkable argument in the whole subject, practically, and as I say, it's one of my favorite arguments because it's just obscene and it makes me feel cheap and dirty. It's great. The notes have a discussion of this, but it goes a little bit farther, in terms of trigonometric functions: why you expect something like this to occur, why you might expect something like this to occur in different circumstances. I'm not gonna go through that. We'll circle back and talk about some of these things, but for right now, I want to go, as I say, right for the kill. I'm gonna show you how this works. I'm gonna give you the argument. And it involves exactly the periodizing property of the Shah function, and the sampling property of the Shah function, and the Fourier transform property of the Shah function. Those three properties come into this in a very essential way. All right. Again, here's the picture in the frequency domain. I'm just gonna do this. All right. And there's no way of saying it other than I'm going right for the kill. Just enjoy the ride. All right. Enjoy the experience, because it's amazing to see this thing unfold. All right. So here's the picture in the frequency domain again. The picture is like this. I'm assuming the signal is band-limited, so that means the Fourier transform is only non-zero between minus p/2 and p/2. It might have some zeros in here, okay; I'm not saying it doesn't have any zeros in between, but I'm saying that for sure it's identically zero outside this interval. All right. Now, I'm gonna use the Shah function with spacing p to periodize this, all right, by convolution. That is, I look at... let me draw the picture for you.
I look at the Fourier transform of f convolved with a Shah function of spacing p. All right.
So what is the picture? The picture is: here's the Fourier transform, from minus p/2 to p/2. I'm gonna make it a little bit more condensed here. Minus p/2 to p/2. All right. Here's the Shah function with spacing p. There's a delta function there. This is p/2. The next delta function is at p. The delta function over here is at minus p, and so on and so on. All right. Here's zero. All right. Now, what happens when you convolve this with the Shah function? It shifts it by p and adds it all up. All right. So the picture of the convolution of the Fourier transform with the Shah function looks something like this. Here's zero, here's minus p/2, p/2; it shifts the whole thing over to p, down to minus p, and so on. So you get just a bunch of repeating patterns of the thing. But there's no overlap, because the Fourier transform is zero outside the interval from minus p/2 to p/2, so if you shift it by p, there is no overlap when you convolve. All right. Now you say: brilliant, you have done something and you have undone something. This takes a PhD? Yeah.
Student: [Inaudible]
Instructor (Brad Osgood): Where? No, this should not be convolution, it should be multiplication. I'm multiplying by a function which is one on the interval from minus p/2 to p/2 and zero outside that interval. So that gets back the original Fourier transform. All right. The original Fourier transform is here, and it's only there. All right. It's only there. When you convolve, you get a bunch of repeating patterns. You add them all up. But then you cut those off. All right. Cut those off. And that leaves you with this, which is the original Fourier transform. This is the whole ballgame. All right. The most important equation here is exactly this equation. I'll write it down again: the Fourier transform of f is the cutoff of the periodized Fourier transform.
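The key identity here, periodize the spectrum and then cut off to recover it, can also be seen numerically. A sketch, not from the lecture; the triangle-shaped spectrum and the grid are arbitrary choices:

```python
import numpy as np

p = 1.0
s = np.linspace(-2.5, 2.5, 1001)

# A Fourier transform supported in |s| <= p/2: a triangle, say.
tri = lambda u: np.where(np.abs(u) <= p / 2, 1 - np.abs(u) / (p / 2), 0.0)
G = tri(s)

# Periodize: convolving with Sha_p means summing the shifts G(s - kp).
periodized = sum(tri(s - k * p) for k in range(-3, 4))

# Cut off with the rect of width p. Because the shifted copies do not
# overlap (the support has width p and the shifts are by p), this
# recovers G exactly.
rect = np.where(np.abs(s) <= p / 2, 1.0, 0.0)
recovered = rect * periodized
print(np.allclose(recovered, G))  # True
```

The no-overlap condition is exactly the band-limited assumption; if the support of G were wider than p, the shifted copies would pile up and the cutoff would not return G.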
All right. And you say, great, you've done something, you've undone something. You know, you got a PhD for this? This is why they call you Professor? Now, take the inverse Fourier transform. It looks like you haven't done anything, but actually you've done something extremely significant, because taking the inverse Fourier transform swaps multiplication and convolution. This is a picture in the frequency domain. What is happening in the time domain? So take the inverse Fourier transform. Well, on the left-hand side, the inverse Fourier transform of the Fourier transform is just f. Right? That's fine. Taking the inverse Fourier transform of the Fourier transform gives you back the original function. Right. The inverse Fourier transform of the right-hand side, again, it swaps... let's do this a little bit at a time here. It swaps convolution and multiplication. That is to say, it takes multiplication back to convolution. So it's the inverse Fourier transform of the rectangle function, convolved with the inverse Fourier transform of this. Follow along.
Enjoy the ride. Enjoy the ride. All right. All right. Let's look at this part here. Well, I'll write it out a little bit, one step at a time. The inverse Fourier transform of the rect function is a scaled sinc function: the inverse Fourier transform of the rect function of width p is p sinc(pt). If you don't believe me, figure it out yourself. All right. The inverse Fourier transform of this, again, is gonna swap convolution and multiplication. The inverse Fourier transform of the Fourier transform of f convolved with the Shah function is gonna be the inverse Fourier transform of the Fourier transform of f (what the heck, too many F's) times the inverse Fourier transform of the Shah function. Okay. I don't know if I stated the convolution theorem this way; I'm sure it's in the notes. We stated the convolution theorem in terms of the Fourier transform; the same thing holds in terms of the inverse Fourier transform. Okay. Swap convolution and multiplication. All right. Now, once again, the inverse Fourier transform of the Fourier transform is just the function. The inverse Fourier transform of the Shah function of spacing p is 1/p times a Shah function of spacing 1/p. So this is f(t) times 1/p times the Shah function of spacing 1/p. I guess I'm using t as my variable here. Okay. Now, I could combine all the formulas at once here, but let me not do that. Let me take this one step further. Now, remember, we're gonna use the sampling property of the Shah function. We used the periodizing property of the Shah function in the frequency domain, when we periodized the Fourier transform. Now, in the time domain, I'm gonna use the sampling property of the Shah function.
All right. If I multiply f(t) by this... now remember, I'll take this out one more step before I forget what it is. One over p, that's a constant that just comes out in front of the whole thing. It's 1/p times f(t) times the sum of deltas, from minus infinity to infinity, spaced 1/p apart: delta(t - k/p). All right. By the sampling property, that's 1/p times the sum of f(k/p) delta(t - k/p). So the whole thing is this sinc function, p times sinc(pt), convolved with this sum of deltas. All right. So I'll take this out one more step. The p cancels with the 1/p, so this is the sum, k from minus infinity to infinity (these are constants), of f(k/p) times the scaled sinc function convolved with delta(t - k/p). All right. And now you use the shifting property of the delta function and write this as the sum of the sample values, and it is the sum from k equals minus infinity to infinity of f(k/p) times sinc(p(t - k/p)). Which is actually the way I prefer to write it, but many people write it like this, multiplying through by p: the sum from k equals minus infinity to infinity of f(k/p) sinc(pt - k). We're done. We're done. We found a formula for the function at all values of t in terms of its sample values. I'll write it down. This was a long chain of equalities here. We have shown that f(t) is equal to the sum from minus infinity to infinity of f(k/p) sinc(p(t - k/p)). We have interpolated all of the values of the function, at all values of t, in terms of sample values at evenly spaced points. All right. So my t_k's, in the way I originally formulated the problem, are 1/p, 2/p, 3/p, minus 1/p, minus 2/p, minus 3/p and so on and so on. If you know all these discrete values, then you can interpolate all the values of the function in terms of those. I think it's the most remarkable formula in the whole subject.
So again, the assumption was... I want to go through this one more time to make sure you have it. This is called the sampling theorem, the sampling formula, sometimes called the Shannon sampling formula, sometimes called the Whittaker sampling formula. It's associated with a lot of names. Unfortunately, my name is not associated with this formula, as you can tell how much I love it. All right. So again, the assumption is that the Fourier transform of f is zero for |s| greater than or equal to p/2. Then you have this formula for the function: f(t) is the sum from minus infinity to infinity of f(k/p) sinc(p(t - k/p)). All right.
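The sampling formula can be tested numerically. An editorial sketch, not part of the lecture: the test signal f(t) = sinc(t)^2 has as its Fourier transform the triangle function supported on [-1, 1], so it is band-limited with p = 2, and the formula should reconstruct it from the samples f(k/2) alone.

```python
import numpy as np

# Sampling formula: for f band-limited with bandwidth p,
#   f(t) = sum_k f(k/p) sinc(p t - k),
# where np.sinc(x) = sin(pi x)/(pi x).
p = 2.0
f = lambda t: np.sinc(t) ** 2     # band-limited with p = 2

t = np.linspace(-3, 3, 301)       # points at which to interpolate
ks = np.arange(-500, 501)         # truncate the infinite sum

reconstructed = sum(f(k / p) * np.sinc(p * t - k) for k in ks)

print(np.max(np.abs(reconstructed - f(t))))  # tiny truncation error
```

Truncating the sum at |k| = 500 leaves only a small tail error for this signal (the terms decay like 1/k^3), so the reconstruction agrees with f(t) to many digits at every point of the grid, not just at the samples.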
Now, for me, the sampling formula is identical with the proof of the sampling formula. All right. And that's important because, I think next time, I don't think I'm gonna squeeze it in this time, we'll talk about when things go wrong as well as when things go right. But for me, I never think of this formula alone, in isolation, without the miracle equation that makes it work. All right. So this is all depending on the fact that if you periodize the Fourier transform and then cut off, you get the Fourier transform back: the Fourier transform of f is the rectangle function of width p times the Fourier transform of f convolved with the Shah function of spacing p. All right. It followed, although I tried to stretch it out for dramatic purposes, it follows immediately, lightning fast, just very mechanically, by applying the inverse Fourier transform to this formula. This is what's essential. All right. And so for me, and I'm serious about this, the sampling formula is identical with the proof of the sampling formula. All right. To understand one formula is to understand the other. They are the same. Okay. They are the same. It's a consequence of the Fourier transform swapping convolution and multiplication, and writing this down. It's a simple but exceedingly cunning idea. Why do something like that? You know, it seems like you're not gonna do anything. You're periodizing and then cutting off, and you get back the original function. It's like proving one is equal to one. What is the big deal? You know. But the fact is that the reason why something nontrivial happens is because of this remarkable fact: the Fourier transform, going back and forth between the two domains, swaps convolution and multiplication. You have a combination of both of them, in fact.
You have convolution and multiplication in the frequency domain; you take the inverse Fourier transform, and you have convolution and multiplication in the time domain, but the roles are swapped. All right. That's why periodizing in the frequency domain turned into sampling in the time domain. All right. Because of this formula, and because of the way the Fourier transform swaps convolution and multiplication. All right. I'll tell you what. Since it's been my habit to keep people late in class, I think instead, today, we'll get out early. I want to say more about the consequences of this remarkable formula next time, including, sad to say, when things go wrong. But when things go wrong, they can all be explained by when this formula is not correct, when something is not right with this formula. All right. Okay. That's it for today. I'll see everybody later at various times and...
[End of Audio]
Duration: 43 minutes