Vector Maxwell Simulations

Summer 2003


This summer I have worked on several pieces of software that expand upon and complement the Vector Maxwell simulators being developed under Professor M. Brio at the ACMS. The simulators are used at the ACMS to model electromagnetic energy over space and time, and they play an important role in the research being done on new optical devices. Since I am much more proficient in computer science than in math or physics, my focus has been on the software engineering side.


There are essentially four different simulators: three of them are closely related, and one has been largely built from scratch. The first three were designed by Aramis Zakharian and written purely in C; there is a 2D version, a 3D version, and an AMR (adaptive mesh refinement) version. Recently the 2D version was equipped with a Python interface that allows a simulation to be set up in a simple Python script, so the simulation variables can be changed without recompiling the whole thing. The script itself is simple and uncluttered, making it possible for someone with no programming experience to set up and run a simulation with only the simulator's user manual in hand.
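To give a feel for the scripted setup, here is a minimal sketch of the kind of values such a script specifies. The parameter names below are hypothetical, not the simulator's actual interface; the point is simply that everything is plain Python read at startup, so nothing has to be recompiled when a value changes.

    # Illustrative setup parameters -- the names are hypothetical, not the
    # simulator's real interface.  A real script assigns values like these
    # and hands them to the C code at startup.
    params = {
        "nx": 400, "ny": 400,       # number of cells in each direction
        "dx": 0.05, "dy": 0.05,     # cell size
        "wavelength": 1.55,         # source wavelength
        "n_steps": 5000,            # number of time steps to run
        "dump_every": 50,           # write the fields to disk every 50 steps
    }

    for name, value in params.items():
        print(name, "=", value)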


The fourth simulator was written by Colm Dineen in C++ and Fortran using the Chombo framework (designed at Lawrence Berkeley National Laboratory). This version also uses AMR, but it is unique in that it allows for parallel processing. Both AMR and parallel processing are there because of the amount of time it can take to run a detailed simulation.


These simulators define a 3D space as a matrix of cells over which the calculations are performed. The size (and therefore number) of the cells determines the precision of the simulation. With a huge number of cells a run can take days or weeks to process; with too few cells we get an inaccurate portrayal of what really happens. This was the motivation for adaptive mesh refinement, or AMR. Essentially, AMR lets us use large cells in areas of the domain that are of little interest, and smaller cells in the areas where we really need precision.
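As a rough back-of-the-envelope illustration (assuming a refinement ratio of 2 between levels, which is a common choice but not necessarily what our grids use), each additional level halves the cell size, but only inside the refined patch:

    # Cell size at each refinement level, assuming a ratio of 2 between levels.
    coarse_dx = 0.1                  # coarse cell size (arbitrary units)
    refinement_ratio = 2
    for level in range(3):
        dx = coarse_dx / refinement_ratio ** level
        print("level %d: dx = %.4f" % (level, dx))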


My first project was to create a library of C++ functions that draw basic geometric shapes into the matrix of cells for Colm Dineen's Chombo-based simulator. These functions let the user specify just the location, size, and material of an object, which is then drawn into the appropriate matrices at each level of mesh refinement. I later expanded on these geometric primitives and wrote functions that can automatically draw triangular and square lattice structures. The library is designed to simplify the setup of a simulation and to eventually be linked to an interface that will eliminate the compile step.
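The sketch below shows the basic idea in a toy 2D form, assuming a simple "inside test" against each cell center. The real library is C++, works in 3D, and fills the Chombo data structures at every refinement level, so the names and details here are only illustrative.

    import numpy as np

    nx, ny = 64, 64
    dx = 0.1
    material = np.zeros((nx, ny), dtype=int)      # 0 = background material

    def draw_cylinder(grid, cx, cy, radius, mat_id):
        # Mark every cell whose center lies inside the circle with mat_id.
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                x, y = (i + 0.5) * dx, (j + 0.5) * dx
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    grid[i, j] = mat_id

    # A small square lattice of cylinders, roughly like the structures pictured below.
    for row in range(4):
        for col in range(4):
            draw_cylinder(material, 1.0 + 1.5 * row, 1.0 + 1.5 * col,
                          radius=0.4, mat_id=1)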



This lattice structure was created using my geometry library. The middle row is intentionally defective (slightly elongated), and the central cylinder is also defective (smaller than the others). There is a level of mesh refinement around the center of the structure.


My second project was to create a post-processing filter that reads in the binary files dumped by the three C-based simulators and writes out an HDF5 file (Hierarchical Data Format version 5). HDF5 is the default output format of the Chombo-based simulator, and Chombo also provides a program called ChomboVis. ChomboVis is a fairly sophisticated 3D viewing tool that lets us select which fields and levels of mesh refinement to view. Previously the binary files were viewed with IDL, which could only display a 2D slice of the domain for one component and one level of mesh refinement.
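The core of the conversion is quite small. The sketch below shows its general shape, assuming a single field stored as a flat array of doubles; the actual dump layout, field names, and the structure ChomboVis expects are more involved than this.

    import numpy as np
    import h5py

    # Assumed grid dimensions and dump layout -- illustration only.
    nx, ny, nz = 100, 100, 50
    data = np.fromfile("dump_0001.bin", dtype=np.float64).reshape(nx, ny, nz)

    with h5py.File("dump_0001.h5", "w") as f:
        f.create_dataset("Ex", data=data)          # one field, one refinement level
        f["Ex"].attrs["time_step"] = 1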






This is a close-up of the same lattice structure being viewed with ChomboVis. You can see both levels of mesh refinement at the same time in full 3D.


The second project has proved useful to the researchers, so I did some work to make it a more complete and polished application. I added command-line options that let the user select the range of time steps to convert and the components to extract (the various fields or the geometry layout), along with several other conveniences.
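The options look roughly like the sketch below, written here with Python's standard argparse module as a stand-in; the real flag names and behavior differ.

    import argparse

    parser = argparse.ArgumentParser(description="Convert simulator dumps to HDF5")
    parser.add_argument("--start", type=int, default=0,
                        help="first time step to convert")
    parser.add_argument("--end", type=int, default=None,
                        help="last time step to convert")
    parser.add_argument("--fields", nargs="+", default=["Ex", "Ey", "Ez"],
                        help="components to extract (fields or geometry layout)")
    args = parser.parse_args()
    print(args)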


I have already learned a great deal, but at the same time I feel as though I am just getting started. There is still a great deal of work to be done on these simulators as they grow into polished standalone applications, and we already have several projects in mind for next semester.


The first thing I will work on is a universal Python interface. This will allow us to take a simulation initialization script written in Python and use it with any of the simulators. It will involve rewriting the C interface functions in a more intelligent manner and then creating a Python interface to the C++ functions using SWIG (the Simplified Wrapper and Interface Generator). This will make switching between the different simulators as simple as possible and eliminate the need to recompile every time the simulation is changed. A related side project would be the creation of a "simulation loop" of sorts: the idea is to let the user run the simulation in a finite loop, with the Python script altering the simulation parameters between runs (e.g. changing the gap size on a ring resonator).
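A minimal sketch of what that loop could look like follows; run_simulation is just a stand-in for the call into the simulator through the Python interface, and the gap-size parameter is the ring-resonator example mentioned above.

    # Hypothetical "simulation loop": rerun the simulation while sweeping one parameter.
    def run_simulation(params):
        print("running with gap size", params["gap"])   # placeholder for the real call

    params = {"gap": 0.10}
    for run in range(5):
        run_simulation(params)
        params["gap"] += 0.02      # widen the ring resonator's gap between runs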


Time permitting, there are some other small projects that will need attention once the interface is completed. Primarily these involve expanding the Chombo-based simulator to include various observers (which monitor the energy over a defined region and perform some post-processing), and doing some work on shared-memory multiprocessing. We would also like a set of functions, built on linear algebra, for rotating, scaling, and otherwise transforming the geometric primitives and observers, along the lines of the sketch below.
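The transforms themselves are nothing more exotic than applying standard rotation and scaling matrices to the points that define a primitive. The function name and point layout here are my own, not an existing part of the library.

    import numpy as np

    def rotate_z(points, angle_rad):
        # Rotate an (N, 3) array of points about the z axis.
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        return points @ R.T

    square = np.array([[ 1.0,  0.0, 0.0],
                       [ 0.0,  1.0, 0.0],
                       [-1.0,  0.0, 0.0],
                       [ 0.0, -1.0, 0.0]])
    print(rotate_z(square, np.pi / 4))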