UNDERGRADUATE RESEARCH PROJECT

University of Arizona Department of Mathematics

by

Diane Kohl

Faculty Advisor: Cinnamon Hillyard

Bounding Sound Wave Errors

This semester, with the help of Professor Cinnamon Hillyard, I investigated the errors introduced by numerical schemes in approximating waves. Our goal was to find a maximum bound on the errors.

I began my research by learning about three-dimensional optimization. We started by looking at the optimization chapter in the vector calculus textbook. The chapter began with local extrema: maximums and minimums. I learned the importance of critical points and the use of gradients in finding them. The next section went on to global extrema in an unconstrained optimization setting. Again, gradients came in handy. The last section in the book dealt with constrained optimization and Lagrange multipliers. This took a little while to process, but gradients were important here, too. Constrained optimization was important to the sound-wave problem we were going to tackle because of the number of variables and the constraints we would place on them. In this case, the critical points, gradients, and boundaries were the important things to look at. During this time, I also learned how to use the Maple program to assist me in my work and make some of the longer, harder problems easier to handle.
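As a small illustration of the Lagrange-multiplier idea described above (a toy example of my own, not one from the project), consider minimizing f(x, y) = x² + y² subject to x + y = 1. The condition grad f = λ grad g gives 2x = λ and 2y = λ, so x = y, and the constraint forces x = y = 1/2:

```python
# Toy constrained-optimization example (hypothetical, not from the project):
# minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# The Lagrange condition grad f = lam * grad g gives 2x = lam and 2y = lam,
# so x = y, and the constraint then forces the critical point x = y = 1/2.

def f(x, y):
    return x * x + y * y

x_star = y_star = 0.5  # critical point from the Lagrange conditions

# Sanity check: points (x + t, y - t) stay on the constraint line, and
# every such nearby point gives a larger f, so (1/2, 1/2) is the
# constrained minimum.
for t in (-0.1, -0.01, 0.01, 0.1):
    assert f(x_star + t, y_star - t) > f(x_star, y_star)

print(f(x_star, y_star))  # minimum value: 0.5
```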

Next, we investigated Euler’s method, its errors, and the Taylor series approximation with regard to the problem we were looking at. The problem is finding some type of formula, or a way, to discretize sound waves. In the physical realm, sound waves are continuous and keep moving along. When a sound wave is looked at numerically and graphically, it creates a step-like graph. I attempted to make a graph to show the steps, but as can be seen, even the Microsoft Excel program drew a line between the steps to connect them. After only one numerical approximation, a spike jumps up, as seen in the second graph. The following is a table with the points I used to create the graphs; the first graph is the attempt to show the steps, and the two following graphs show how the spike begins to occur. This "spike" is due to the numerical approximation and is not part of the true solution. The true physical solution just moves the wave along without changing its shape.

Graph 1             Graph 2             Graph 3
initial condition   after 1 iteration   after 2 iterations

1                   1                   1
1                   1                   1
1                   1                   1.046875
0                   1.125               0.859375
0                   0.375               0.046875
0                   0                   -0.04688
0                   0                   0
0                   0                   0

[Graph 1: the initial step condition.]

[Graph 2: the spike after one iteration.]

We attempted to find a global maximum for the "spike" that occurs after the "step" of the sound wave; however, our optimization techniques gave us only zero, which was the global minimum.

[Graph 3: after two iterations.]

So on we went to direct iteration, to see if we could find a pattern in our errors. At first, to get a handle on the iteration, I tried it by hand, using the formula we had,

u(new)[j] = u(old)[j] - (lambda/2)*(u(old)[j+1] - u(old)[j-1]) + (lambda^2/2)*(u(old)[j+1] - 2*u(old)[j] + u(old)[j-1]),

starting with a list of ten ones and ten zeros, setting lambda to 1/2, and letting j be the position of the number in the new list being created. Each iteration the errors changed a little and the list got a little bigger. I went on to use Maple to do a larger number of iterations, but at first ended up with a bunch of unwieldy fractions. Then, using several of the commands, I turned the fractions into decimals and found the maximum for each iteration, continuing until I had done 200 of them. We tried to see where these maximums were heading and whether there was a value they were approaching. The numbers kept slowly going up, so the growth appeared to be logarithmic rather than linear. We then tried to match our list of maximums through regression: we tried linear regression, then matching to an exponential function, and finally had some luck with a logarithmic regression. The graphs presented show the data and their resemblance to a log function given by the formula y = a + b*ln(x), with a = 1.12531366 and b = 0.0180988807.
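The hand iteration can be reproduced in a few lines. The following is a Python sketch standing in for the Maple computation (assumptions: the list starts as ten ones followed by ten zeros, lambda = 1/2, and the two endpoints are simply held fixed at each step):

```python
# Sketch of one pass of the update formula above (Python stand-in for the
# Maple computation; fixed endpoints are an assumption).
lam = 0.5

def step(u, lam):
    """Apply the scheme to the interior points; endpoints are held fixed."""
    new = u[:]
    for j in range(1, len(u) - 1):
        new[j] = (u[j]
                  - lam / 2 * (u[j + 1] - u[j - 1])
                  + lam**2 / 2 * (u[j + 1] - 2 * u[j] + u[j - 1]))
    return new

u = [1.0] * 10 + [0.0] * 10   # discrete step: ten ones, then ten zeros
u = step(u, lam)
print(max(u))  # spike after one iteration: 1.125
```

After a single pass the maximum is already 1.125, the spike value recorded for the first iteration.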

[Graph: the actual data we found doing the iterations.]

N     actual error   predicted error
1     1.125          1.125313
25    1.18455        1.183571
50    1.198993       1.196116
70    1.2061         1.202206
100   1.20411        1.208661
125   1.208695       1.212700
150   1.219314       1.215999
175   1.22045        1.218789
200   1.21997        1.221206

The table shows both the data we found and the data generated by the formula. The first graph gives the observed data alone; the second graph shows both data sets and how well they match up. So in the end we found that there is no maximum, but the errors do not grow out of hand either: they creep slowly upward in a form similar to a logarithmic function.
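The logarithmic fit can be checked directly: fitting y = a + b*ln(N) is just ordinary least-squares linear regression in the variable x = ln(N). A short Python sketch (the project itself used other software) applied to the actual-error column of the table above recovers coefficients very close to those reported:

```python
import math

# (N, actual max error) pairs taken from the table above.
data = [(1, 1.125), (25, 1.18455), (50, 1.198993), (70, 1.2061),
        (100, 1.20411), (125, 1.208695), (150, 1.219314),
        (175, 1.22045), (200, 1.21997)]

# Least-squares fit of y = a + b*ln(N): linear regression in x = ln(N).
xs = [math.log(n) for n, _ in data]
ys = [y for _, y in data]
m = len(data)
xbar = sum(xs) / m
ybar = sum(ys) / m
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar
print(a, b)  # close to the reported a = 1.12531366, b = 0.0180988807
```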

[Graph: actual and predicted error together.]

So if we take the formula that fits the curve, we can closely predict the values that will occur after a given number of iterations. The chart below gives a small example of the values obtained using the logarithmic formula we found.

N      predicted error
1      1.125313
100    1.208661
200    1.221206
300    1.228546
400    1.233752
500    1.237791
600    1.241091
700    1.243881
800    1.246298
900    1.248429
1000   1.250336
1100   1.252061
1200   1.253636
1300   1.255085
1400   1.256426
1500   1.257675
1600   1.258843
1700   1.259940
1800   1.260975
1900   1.261953
2000   1.262881

Conclusion: We began by hoping to find a global maximum error for our numerical scheme. We discovered that this global maximum most likely does not exist. However, we did find a formula that gives a good estimate of how fast the error grows.