This page contains the latest distribution of the complete mloc package. The archive unpacks into a directory structure containing all necessary executables, source code, compilation instructions, and data files. The distribution is served as a zipped archive (24 MB). The /mloc_distribution/ directory structure contained in the archive may be installed anywhere in the user’s file system, but it is strongly recommended that no directory or file names be changed unless and until the user has a thorough understanding of the mloc ecosystem.
The current version of mloc is v10.5.0 with a release date of June 25, 2020.
The full version history of mloc development is contained in the text file “mloc_version_history.txt” in the directory mloc_directory/mloc_src/. The entries since the last version update are reproduced here, in reverse chronological order, along with notes about changes to other elements of the distribution.
2020/6/24: I added logic in mloc.f90 to handle the case of a missing mloc.conf file more gracefully, and added a mechanism to help ensure that new users edit the sample file before running mloc. There’s a new keyword “SAMPLE” in the first line of the sample file, with a reminder to edit the other keywords. mloc won’t run unless that line is deleted.
2020/4/22: Got a segmentation fault with a cluster that had many events with stations nearly on top of them. I think the problem was in the section of subroutine delaz (mloclib_geog.f90) that handles zero epicentral distance: delta was being set to exactly 0. whenever the distance fell below about 50 m. Setting the distance to a very small but non-zero number fixed the problem.
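The idea behind this fix can be illustrated outside of Fortran. Below is a minimal Python sketch of an epicentral-distance routine in the spirit of delaz, with the non-zero floor applied; the function name, the spherical-law-of-cosines formula, and the floor value are illustrative assumptions, not code taken from mloc.

```python
import math

def delaz(lat1, lon1, lat2, lon2, min_delta=1.0e-4):
    """Epicentral distance in degrees between two points on a sphere.

    min_delta is a small non-zero floor (in degrees) that stands in for
    the ~50 m cutoff described above; returning exactly 0. for coincident
    station/epicenter pairs is what triggered the downstream failure.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Spherical law of cosines; clamp the cosine into [-1, 1] to guard
    # against floating-point overshoot before calling acos.
    cosd = (math.sin(phi1) * math.sin(phi2)
            + math.cos(phi1) * math.cos(phi2) * math.cos(dlon))
    delta = math.degrees(math.acos(max(-1.0, min(1.0, cosd))))
    # Never return exactly zero: clamp to a tiny positive distance instead.
    return max(delta, min_delta)
```

For coincident points this returns min_delta rather than 0., so code that divides by (or takes logs of) the distance stays well-behaved.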
2020/4/10: In the Iwaki cluster I ran into a bug in mlocout_ttsprd.f90, where the number of P readings exceeded the hard-wired limit of 30,000. This didn’t break anything but produced a large number of warnings. I set the limit of those arrays equal to the parameter that sets the limit on the number of readings used (ntmax1).
2019/12/18: I changed the functioning of the RADF command so that reading (and using) the agency and deployment fields to resolve station code conflicts is restricted to specified station codes rather than being universal. In practice there are seldom more than a couple of cases in any given cluster where agency and deployment codes are needed, and making sure that the entire dataset had correct agency and deployment specifications (matching whatever was in the station files) when they were only needed for a couple of readings from a single station was a nightmare. The code now uses the agency and deployment fields along with the station code for every comparison, but in most cases those fields are simply blank. RADF just specifies the station codes for which those fields, in both station files and phase readings, will actually be read.
2019/12/5 (v10.5.0): Major changes in plotting, related to the adoption of GMT6, which has a nice new option for plotting topography using the “@earth_relief_rru” remote datasets in grdcut. I was also running into trouble with the old .grd files under GMT6: some of the GLOBE tiles were ill-constituted and now fail GMT6’s stricter requirements. Rather than try to fix my old .grd files, I decided to abandon GLOBE and GINA as options and use ETOPO1 (the “01m” option from the GMT server) as the sole option for dem1. The new system also supports high-resolution DEMs from SRTM, but I have not yet experimented with that. ETOPO1 has somewhat less resolution than GLOBE, but it still works well for basic plotting, and it will be easier to support since I no longer need to serve the data file. The argument to the command “dem1” is now simply “on” or “off”. The current plotting code appears to still work in GMT5 (with one exception), so GMT5 is still permitted. The exception is the call to grdcut, for which there is a special branch for GMT5 that references the old ETOPO1 file in /tables/gmt/dem/ETOPO.

I also fixed a long-time annoyance with the .kml file. The symbols displayed correctly only if the .kml file remained in its original working directory, because that is where the image files in the tables/kml directory could be found. Now each cluster directory acquires a directory called “_kml” containing the necessary images, and the .kml file refers to those, so it displays correctly as long as the .kml file is in the same folder as the _kml directory.